
Installing ArangoDB

This section explains how to install and configure ArangoDB in a multi-node setup. It covers preparing the environment, setting up the data disks, installing ArangoDB, configuring SSL certificates for secure communication, enabling JWT authentication, and backing up data.

Prerequisites

The following requirements must be fulfilled to install ArangoDB:

  1. Node requirements:

    • 3 nodes for ArangoDB.
    • CPU and memory must meet system requirements.
    • Independent data disks with sufficient disk IOPS.
  2. System requirements:

    • OS: CentOS 7.8 (Minimal).
    • SELinux: Disabled.
    • Firewall: Disabled.
  3. Environment preparation:

    • CPU: 12 cores per node.
    • Memory: 32 GB per node.
    • Data disk: 50 GB per node.
  4. Data disk preparation:

    • Prepare data disk: fdisk /dev/sdb
    • Enter the following inputs sequentially (press ENTER twice to accept the default first and last sectors; the final w writes the new partition table to disk):

      n
      p
      1
      ENTER
      ENTER
      t
      8e
      w

    • Create LVM partition and mount:

    pvcreate /dev/sdb1
    vgcreate vg0 /dev/sdb1
    lvcreate -l 100%FREE -n lv0 vg0
    mkfs.xfs /dev/vg0/lv0 
    mkdir -p /data
    mount -t xfs /dev/vg0/lv0 /data
    echo '/dev/vg0/lv0 /data xfs defaults 0 0' >>/etc/fstab
    
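The interactive fdisk dialogue above can also be scripted. The sketch below only assembles the answer sequence and prints it for inspection; in production you would pipe it into fdisk (blank lines stand in for the two ENTER presses, and the final w writes the new partition table, which destroys any existing data on the disk):

```shell
# Hedged sketch: the fdisk answers as a single string instead of manual typing.
# In production: printf '%s' "$dialogue" | fdisk /dev/sdb   (destructive!)
dialogue='n
p
1


t
8e
w
'
printf '%s' "$dialogue"   # here we only print the dialogue for inspection
```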

To install ArangoDB, follow the steps:

  1. Install ArangoDB using the following commands.

    cd /etc/yum.repos.d/
    curl -OL https://download.arangodb.com/arangodb38/RPM/arangodb.repo
    yum -y install arangodb3-3.8.0-1.0
    

    For more information, refer to ArangoDB Distributions.

  2. Generate a certificate and secret on the server to establish a secure connection for ArangoDB.

    openssl genpkey -out arangodb.tmp.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -aes-128-cbc # enter a password for the private key
    openssl rsa -in arangodb.tmp.key -out arangodb.key # enter the private key's password to strip it from the key
    openssl req -new -key arangodb.key -out arangodb.csr -subj "/C=CN/ST=Sichuan/L=Chengdu/O=<organization>/OU=<organizational-unit>/CN=<common-name>.local" -reqexts v3_req -config <(cat /etc/pki/tls/openssl.cnf <(printf "[v3_req]\nsubjectAltName=IP:<ip-address>, IP:<ip-address>, IP:<ip-address>, DNS:*.rancherbox.local, DNS:arangodb.rancherbox.local"))
    openssl x509 -req -days 3650 -in arangodb.csr -signkey arangodb.key -out arangodb.crt -extensions v3_req -extfile <(cat /etc/pki/tls/openssl.cnf <(printf "[v3_req]\nsubjectAltName=IP:<ip-address>, IP:<ip-address>, IP:<ip-address>, DNS:*.rancherbox.local, DNS:arangodb.rancherbox.local"))
    cat arangodb.crt arangodb.key > server.pem # put it into /opt/arangodb/server.pem
    mkdir -p /opt/arangodb
    cp server.pem /opt/arangodb/
    

    Note: Ensure /opt/arangodb/server.pem exists on all nodes.

    To review the compatible certificate types for ArangoDB, refer to the ArangoDB SSL Documentation.
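Before copying server.pem to the other nodes, it is worth confirming that the subjectAltName survived signing. The following self-contained sketch reproduces the SAN layout on a throwaway certificate in a temporary directory (it assumes OpenSSL 1.1.1+ for -addext; all names and IPs here are placeholders, not your real values):

```shell
# Generate a throwaway self-signed certificate with an example SAN, then print
# the SAN block to verify the extension was carried into the signed certificate.
set -e
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout test.key -out test.crt \
  -subj "/C=CN/ST=Sichuan/L=Chengdu/O=example/CN=arangodb.example.local" \
  -addext "subjectAltName=IP:10.0.0.1,DNS:*.example.local,DNS:arangodb.example.local"
openssl x509 -in test.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Run the same x509 inspection against your real arangodb.crt to check the production SAN entries.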

  3. Import the certificate file on the client (for example, in the mdsp-init container) using the following command.

    Info: You can skip this step if you are not working directly within that container or environment.

    keytool -importcert -alias "newkey_for_arangodb" -file <certificate-file-name>.pem -keystore <truststore-name> -storepass <truststore-password> -noprompt 
    

    Note: If you need to define your truststore path or password, ensure you add the truststore/keystore location and password to the Kubernetes secrets. This information is required in the code to establish a secure connection to ArangoDB.

  4. Create the JWT secret using the following command, or copy a custom-created JWT secret to all 3 VMs:

    arangodb create jwt-secret --secret=jwtSecret
    
    • Create the necessary directory on each VM: mkdir -p /opt/arangodb
    • Copy the JWT secret to the directory on each VM: cp jwtSecret /opt/arangodb/

    Note: Ensure that the jwtSecret file exists on each node under the path /opt/arangodb/jwtSecret.
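If the arangodb starter binary is not yet available where you generate the secret, a plain random string works just as well, since the JWT secret is simply a shared key. A hedged alternative using only OpenSSL (the temporary directory stands in for /opt/arangodb):

```shell
# Generate a 64-character hex secret and lock down its permissions; anyone who
# can read this file can mint superuser tokens for the cluster.
set -e
cd "$(mktemp -d)"                  # stand-in for /opt/arangodb
openssl rand -hex 32 > jwtSecret
chmod 600 jwtSecret
wc -c < jwtSecret                  # 64 hex characters plus a trailing newline
```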

  5. Disable the default ArangoDB service to prevent the automatic startup of arangodb3, then create a custom service for manual startup. First, disable the default service: systemctl disable arangodb3

    • Create a custom start file for ArangoDB under /etc/systemd/system/arangodb.service with the following content:
    [Unit]
    Description=ArangoDB database server manual
    After=sysinit.target sockets.target timers.target paths.target slices.target network.target syslog.target
    [Service]
    # we could use another type for more reliable reporting
    Type=simple
    PermissionsStartOnly=true
    User=arangodb
    Group=arangodb
    # system limits
    LimitNOFILE=131072
    LimitNPROC=131072
    TasksMax=131072
    PIDFile=/var/run/arangodb/arangod.pid
    Environment=GLIBCXX_FORCE_NEW=1
    ExecStartPre=/usr/bin/install -g arangodb -o arangodb -d /var/tmp/arangodb
    ExecStartPre=/usr/bin/install -g arangodb -o arangodb -d /var/run/arangodb
    ExecStartPre=/bin/chown -R arangodb:arangodb /opt/arangodb/jwtSecret
    ExecStartPre=/bin/chown -R arangodb:arangodb /opt/arangodb/server.pem
    ExecStartPre=/bin/chown -R arangodb:arangodb /data
    ExecStart=/usr/bin/arangodb --ssl.keyfile=/opt/arangodb/server.pem --auth.jwt-secret=/opt/arangodb/jwtSecret --starter.data-dir=/data --starter.join <ip-address>,<ip-address>,<ip-address>
    TimeoutStopSec=3600
    TimeoutSec=3600
    Restart=on-failure
    RestartSec=5
    [Install]
    WantedBy=multi-user.target
    
    • Reload the systemd configuration, start ArangoDB manually and enable it to start on boot:
    systemctl daemon-reload
    systemctl start arangodb
    systemctl enable arangodb
    
  6. Add a password to ArangoDB for authentication purposes. Run the following command on any VM to update the password for a specific user.
    Replace <Private-IP-of-any-VM>, <username>, and <password> with the appropriate values:

    arangosh --server.endpoint ssl://<Private-IP-of-any-VM>:8529 --server.password "" --javascript.execute-string 'require("org/arangodb/users").update("<username>", "<password>");'
    
  7. Set up an external endpoint for the cluster (optional):

    To create an external endpoint for the ArangoDB cluster, use the --cluster.advertised-endpoint option in the ArangoDB starter command.

    For more information, refer to the ArangoDB Cluster Options documentation.

  8. To access the ArangoDB cluster using the default root user, run the following command.

    arangosh --server.endpoint ssl://<Private-IP-of-any-VM>:8529 --server.password "<password>"
    

    Alternatively, use the public IP or the custom external endpoint.

    https://<Public-IP-of-any-VM>:8529
    https://<custom-cluster-external-endpoint>:8529
    
  9. Configure ArangoDB on Kubernetes: extract the certificate from /opt/arangodb/server.pem and include it in the Kubernetes config map as arangodb.cert.

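The extraction relies on openssl x509 reading only the first PEM block of its input, so feeding it the combined server.pem yields just the certificate without the private key. A self-contained sketch on a throwaway bundle (the config map name arangodb-config is an assumption):

```shell
# Build a throwaway cert+key bundle shaped like /opt/arangodb/server.pem, then
# extract only the certificate part for the Kubernetes config map.
set -e
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem -subj "/CN=arangodb.example.local"
cat cert.pem key.pem > server.pem        # same layout as /opt/arangodb/server.pem
openssl x509 -in server.pem -out arangodb.cert
grep -c 'BEGIN CERTIFICATE' arangodb.cert   # prints 1: the key was stripped
# then, assuming the config map name:
# kubectl create configmap arangodb-config --from-file=arangodb.cert
```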

Additional operations

If any node in the system crashes, a new node can be brought up by using the following command:

arangodb --ssl.keyfile=<path_to_certificate_file> --starter.data-dir=<path_to_data_dir> --auth.jwt-secret=<path_to_jwtSecret> --cluster.start-agent=true --args.all.cluster.default-replication-factor=3 --args.agents.agency.disaster-recovery-id=<Agent-id> --starter.join <Private-IP-1>,<Private-IP-2>,<Private-IP-3>

Note

Keep a record of the agent IDs created during the initial cluster setup to reuse them in case of failure.

Backup and restore instructions

Backup Data: To create a backup of the data, run the following command on all three nodes while the ArangoDB cluster is up and running.

arangodump --server.endpoint ssl://<Private-IP-of-VM>:8529 --server.username "<username>" --all-databases true --output-directory "<output-directory>" --include-system-collections true --overwrite true

For more information, refer to the ArangoDB Backup documentation.
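For recurring backups it can help to wrap arangodump so each run lands in a date-stamped directory with a simple retention sweep. The sketch below is a hypothetical wrapper, not an ArangoDB convention: the backup root, the dump-* naming, and the 7-dump retention are all assumptions, and the real arangodump call is left as a comment.

```shell
# Hypothetical backup wrapper: date-stamped destination plus retention sweep.
set -e
backup_root="$(mktemp -d)"               # in production e.g. /data/backups
dest="$backup_root/dump-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$dest"
# the real command would run here, for example:
# arangodump --server.endpoint ssl://<Private-IP-of-VM>:8529 \
#   --all-databases true --output-directory "$dest" --overwrite true
ls -1dt "$backup_root"/dump-* | tail -n +8 | xargs -r rm -rf   # keep 7 newest
ls "$backup_root"
```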

Restore Data: Bring all nodes online and run the following command on all three nodes.

arangorestore --server.username <username> --all-databases true --create-database true --create-collection true --import-data true --include-system-collections true --input-directory "<input-directory>"

For more information, refer to ArangoDB Restore documentation.

User Management: To create and manage new users, refer to the ArangoDB User Management documentation.

Reference

Known Issue

If a cluster with 3 nodes (1 leader, 2 followers) and a replication factor of 3 experiences a node failure, the cluster cannot create new collections, since the number of active nodes is below the replication factor. Ensure that the cluster maintains 3 operational nodes to avoid disruptions.


Last update: January 31, 2025