

Please contact your account manager to enable High Availability Control Panel for your cloud.



This guide describes the steps required for a fresh installation of OnApp together with the setup of a High Availability cluster, which requires additional configuration beyond the standard installation procedure.

There are two scenarios for High Availability configuration:

  • regular deployment - the configuration consists of three Control Panel servers
  • advanced deployment - the configuration consists of several Control Panel servers and several data servers. Testing has been performed with a cloud configuration of three Control Panel servers and three data servers.

The general workflow is the following:

  • Preparation
  • Read the Technical Details
  • Installation



Preparation



Important Notes



  • If you are a High Availability customer, it is recommended that you contact support for help with the procedure described below. Be aware that if the configuration below is performed incorrectly, it may cause damage to your cloud.
  • Do not use the Control Panel server as the backup/template server. The High Availability configuration requires a separate backup server.
  • It is highly recommended that you do not deploy the Zabbix server on the CP server.

For the steps which start transactions, wait for each step to complete before proceeding to the next one. You can monitor the process by viewing the logs, which appear in the course of the configuration:

tail -f /var/log/{{lsyncd,lsyncd-status,pacemaker,cluster/corosync,haproxy/*}.log,messages}

It might also be useful to look into the transaction logs even if they are marked as Completed, since issues can still occur. If the GUI is not available at some step, you can look into /log/transactions/... to find the reason for the failure.

Currently, onapp-database-dump.sh is run on all nodes. This behavior can be overridden by commenting out or removing the corresponding line from the following file:

/etc/crontab
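
For example, the entry on a node where you want to skip the dump could be commented out as follows (a hypothetical sketch only - the exact path and schedule in your /etc/crontab may differ; the default schedule comes from the DB_DUMP_CRON variable shown later in this guide):

# (hypothetical example) 40 * * * * root /usr/local/sbin/onapp-database-dump.sh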

The database dump is stored locally by default. However, OnApp also allows storing backups on a remote host. To store backups on a remote host, edit the DB_DUMP_SERVER="" variable in the following file:

/onapp/onapp-cp.conf
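
A minimal sketch of the relevant variables in /onapp/onapp-cp.conf, assuming a hypothetical backup host at 10.0.51.200 (the remaining DB_DUMP_* defaults appear in the configuration listing further below):

DB_DUMP_SERVER="10.0.51.200"
DB_DUMP_USER="root"
DB_DUMP_SERVER_ROOT="/onapp/backups"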

List of Default Ports



Below you will find the list of ports that should be open on servers if you want to use them as nodes in a High Availability configuration. 

RabbitMQ

  • 5672 - AMQP connection
  • 15672 - Management UI
  • 25672 - Cluster communication
  • 4369 - Erlang port mapper daemon (EPMD)

Percona Cluster

  • 3306 - Database connection
  • 4567 - Data synchronization
  • 9200 - Cluster check
  • 4444 - State Transfer (default port)
  • 4568 - Incremental State Transfer

Redis

  • 6379 - Redis connection
  • 26379 - Cluster membership (Sentinel)

httpd

  • 10080 - HTTP
  • 10443 - HTTPS

Corosync

  • 5405 - Cluster communication
  • 5406 - Cluster communication

HAProxy

  • 5772 - Load balanced AMQP connection
  • 3406 - Load balanced Database connection
  • 6479 - Load balanced Redis connection
  • 80 - Load balanced HTTP connection
  • 443 - Load balanced HTTPS connection
  • 5005 - HAProxy status (no authorization; must be filtered in production)

VNC Proxy

  • 30000-30099 - VNC connections

Lsyncd/csync2

  • 30865 - Data synchronization
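
The exact firewall setup depends on your environment; below is a minimal sketch of opening a few of these ports with iptables between cluster nodes, assuming a 10.0.51.0/24 management network as in the /etc/hosts example further below (adjust addresses, add the remaining ports from the list above, or translate to equivalent firewalld rules as needed):

bash# iptables -A INPUT -s 10.0.51.0/24 -p udp --dport 5405:5406 -j ACCEPT   # Corosync cluster communication
bash# iptables -A INPUT -s 10.0.51.0/24 -p tcp --dport 4567 -j ACCEPT        # Percona data synchronization
bash# iptables -A INPUT -s 10.0.51.0/24 -p tcp --dport 30865 -j ACCEPT       # Lsyncd/csync2 data synchronization
bash# service iptables save                                                  # persists the rules on systems using the iptables service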

CP Node Installation




Follow the steps below to perform an installation of a Control Panel node:

  1. Install three CP servers.
  2. The node on which you start configuring the clusters becomes the master node; the other two nodes are considered slaves. Once the deployment is completed, these terms become obsolete.
  3. Public IP addresses that are intended to serve the GUI should be whitelisted for the OnApp repository and added to the OnApp license. You need three public IP addresses for your nodes and another public IP address to be used as the virtual IP for the Load Balancer.
  4. The High Availability feature should be enabled in the license.

The master node should be installed and configured first. Install slave nodes after you have installed the master.


To use an external MySQL server/cluster, the server/cluster should run version 5.1 - 5.6.

Hardware Requirements

The Control Panel servers in the configuration must comply with the hardware requirements. Below are the minimum hardware requirements for servers in the High Availability configuration:

  • Processor: 2 x 8 Core CPUs, for example, Xeon e5-2640 v3

  • Memory: 16GB RAM
  • Disks: 2 x 400GB SSD
  • RAID Configuration: RAID 1
  • Network Adapters: Quad port 1Gbps NIC

Installation

To install CP node for HA:

  1. Update your server:

    bash# yum update
  2. Download OnApp YUM repository file:

    # rpm -Uvh http://rpm.repo.onapp.com/repo/onapp-repo.noarch.rpm
  3. Install OnApp Control Panel installer package:

    bash# yum install onapp-cp-install
  4.  Set the custom Control Panel configuration. It is important to set the custom values before the installer script runs.

     Edit the /onapp/onapp-cp.conf file to set Control Panel custom values

    Template server URL

    TEMPLATE_SERVER_URL='http://templates-manager.onapp.com';

    # IPs (separated with commas) list for the snmp to trap

    SNMP_TRAP_IPS=

    # OnApp Control Panel custom version

    ONAPP_VERSION=""

    # OnApp MySQL/MariaDB connection data (database.yml)

    ONAPP_CONN_WAIT_TIMEOUT=15
    ONAPP_CONN_POOL=30
    ONAPP_CONN_RECONNECT='true'
    ONAPP_CONN_ENCODING='utf8'
    ONAPP_CONN_SOCKET='/var/lib/mysql/mysql.sock'

    # MySQL/MariaDB server configuration data (in case of local server)

    MYSQL_WAIT_TIMEOUT=604800
    MYSQL_MAX_CONNECTIONS=500
    MYSQL_PORT=3306

    # Use MariaDB instead of MySQL as OnApp database server (Deprecated parameter. If you set any values for this parameter, they will not take effect)

    WITH_MARIADB=0

    # Configure the database server relative amount of available RAM (Deprecated parameter. If you set any values for this parameter, they will not take effect)

    TUNE_DB_SERVER=0

    # The number of C data structures that can be allocated before triggering the garbage collector. It defaults to 8 million

    RUBY_GC_MALLOC_LIMIT=16000000

    # sysctl.conf net.core.somaxconn value

    NET_CORE_SOMAXCONN=2048

    # The root of OnApp database dump directory (on the Control Panel box)

    ONAPP_DB_DUMP_ROOT=""

    # Remote server's (to store database dumps) IP, user, path, openssh connection options and number of dumps to keep

    DB_DUMP_SERVER=""
    DB_DUMP_USER="root"
    DB_DUMP_SERVER_ROOT="/onapp/backups"
    DB_DUMP_SERVER_SSH_OPT="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o PasswordAuthentication=no"
    KEEP_DUMPS=168
    DB_DUMP_CRON='40 * * * *'

    # Enable monit - tool for managing and monitoring Unix systems

    ENABLE_MONIT=1

    # If enabled (the 1 value is set) - install (if local box) and configures RabbitMQ Server (messaging system) for the vCloud support. (Deprecated parameter. If you set any values for this parameter, they will not take effect)


    ENABLE_RABBITMQ=1


    # Rotate transactions' log files created more than TRANS_LOGS_ROTATE_TIME day(s) ago


    TRANS_LOGS_ROTATE_TIME=30


    # Maximum allowed for uploading file size in bytes, from 0 (meaning unlimited) to 2147483647 (2GB). Default is 1GB


    MAX_UPLOAD_SIZE=1073741824


    # Timeout before ping Redis Server to check if it is started. Default is 5 sec.

    REDIS_PING_TIMEOUT=5


    # OnApp Control Panel SSL certificates (please do not change if you aren't familiar with SSL certificates)
    # * The data below to generate self-signed PEM-encoded X.509 certificate

    SSL_CERT_COUNTRY_NAME=UK
    SSL_CERT_ORGANIZATION_NAME='OnApp Limited'
    SSL_CERT_ORGANIZATION_ALUNITNAME='OnApp Cloud'
    SSL_CERT_COMMON_NAME=`hostname --fqdn 2>/dev/null`


    #   SSLCertificateFile, SSLCertificateKeyFile Apache directives' values
    #   ssl_certificate, ssl_certificate_key Nginx directives' values

    SSLCERTIFICATEFILE=/etc/pki/tls/certs/ca.crt
    SSLCERTIFICATECSRFILE=/etc/pki/tls/private/ca.csr
    SSLCERTIFICATEKEYFILE=/etc/pki/tls/private/ca.key


    # * PEM-encoded CA Certificate (if custom one exists)
    #   SSLCACertificateFile, SSLCertificateChainFile Apache directives' values
    #   ssl_client_certificate Nginx directives' values

    SSLCACERTIFICATEFILE=
    SSLCERTIFICATECHAINFILE=


     

    #   SSLCipherSuite, SSLProtocol Apache directives' values
    #   ssl_ciphers, ssl_protocols Nginx directives' values

    SSLCIPHERSUITE=
    SSLPROTOCOL=


     



  5. Run the Control Panel installer on new CPs:
     

    • run the following for regular deployment (skip for advanced):

      bash# /onapp/onapp-cp-install/onapp-cp-install.sh --ha-install --percona-cluster
    • run the following for advanced configuration scheme (that is the data servers reside separately from the Control Panel servers):

      bash# /onapp/onapp-cp-install/onapp-cp-install.sh --ha-install --percona
     The full list of Control Panel installer options:



    Usage:

    /onapp/onapp-cp-install/onapp-cp-install.sh [-c CONFIG_FILE] [--mariadb | --community | --percona | --percona-cluster] [-m MYSQL_HOST] [--mysql-port=MYSQL_PORT] [--mysql-sock[=MYSQL_SOCK] [-p MYSQL_PASSWD] [-d MYSQL_DB] [-u MYSQL_USER] [-U ADMIN_LOGIN] [-P ADMIN_PASSWD] [-F ADMIN_FIRSTNAME] [-L ADMIN_LASTNAME] [-E ADMIN_EMAIL] [-v ONAPP_VERSION] [-i SNMP_TRAP_IPS] [--redis-host=REDIS_HOST] [--redis-bind[=REDIS_BIND] [--redis-passwd[=REDIS_PASSWD] [--redis-port=REDIS_PORT] [--redis-sock[=REDIS_SOCK] [--rbthost RBT_HOST] [--vcdlogin VCD_LOGIN] [--vcdpasswd VCD_PASSWD] [--vcdvhost VCD_VHOST] [--rbtlogin RBT_LOGIN] [--rbtpasswd RBT_PASSWD] [-a] [-y] [-D] [-t] [--noservices] [--ha-install] [--rake=RAKE_TASKS] [-h]


    Where:


    Database server options. The default database SQL server is MySQL Server. Please use one of the following options to install locally:

    --mariadb - MariaDB Server
    --community - MySQL Community Server
    --percona - Percona Server
    --percona-cluster - Percona Cluster

    MYSQL_* options are useful if MySQL is already installed and configured:

    -m MYSQL_HOST - MySQL host. Default is 'localhost'.
    --mysql-port=MYSQL_PORT - TCP port where MySQL Server serves connections. Default value is 3306 for the local installation.
    --mysql-sock[=MYSQL_SOCK] - Unix socket on which MySQL Server serves connections. Default value is /var/lib/mysql/mysql.sock. Used for a local server only. The socket is unset if the option's argument isn't specified.
    -p MYSQL_PASSWD - MySQL password. A random one is generated if not specified.
    -d MYSQL_DB - OnApp MySQL database name. Default is 'onapp'.
    -u MYSQL_USER - MySQL user. Default is 'root'.

    REDIS_* options are useful if Redis Server is already installed and configured:

    --redis-host=REDIS_HOST - IP address/FQDN where Redis Server runs. It is used by the Control Panel to connect to the Redis Server. The Redis Server will be installed and configured on the current box if localhost/127.0.0.1 or the box's public IP address (listed in SNMP_TRAP_IPS) is specified. Default value is 127.0.0.1. If Redis is local, it will also serve on the unix socket 'PORT' (unless --redis-sock without an argument is specified).
    --redis-bind[=REDIS_BIND] - The IP address for the Redis Server to serve connections (to listen on). The option isn't mandatory.
    --redis-port=REDIS_PORT - Redis Server listen port. Defaults are: 0 for a local server, 6379 for a remote server.
    --redis-passwd[=REDIS_PASSWD] - Redis Server password for authentication. A random password is generated if the option's argument isn't specified. By default, no password is used for local Redis.
    --redis-sock[=REDIS_PATH] - Path to the Redis Server's socket. Used for a local server only. Default is /tmp/redis.sock. The socket is unset if the option's argument isn't specified.

    ADMIN_* options are used to configure OnApp Control Panel administrator data. Please note that these options are for NEW INSTALLS only and not for upgrades:

    -P ADMIN_PASSWD - CP administrator password
    -F ADMIN_FIRSTNAME - CP administrator first name
    -L ADMIN_LASTNAME - CP administrator last name
    -E ADMIN_EMAIL - CP administrator e-mail

    --rbthost RBT_HOST - IP address/FQDN where RabbitMQ Server runs. RabbitMQ will be installed and configured on the current box if localhost/127.0.0.1 or the box's public IP address (listed in SNMP_TRAP_IPS) is specified. Default value is 127.0.0.1.

    VCD_* options are useful if vCloud/RabbitMQ are already installed and configured:

    --vcdlogin VCD_LOGIN - RabbitMQ/vCloud user. Default value is 'rbtvcd'.
    --vcdpasswd VCD_PASSWD - RabbitMQ/vCloud user password. A random password is generated if not specified.
    --vcdvhost VCD_VHOST - RabbitMQ/vCloud vhost. Default value is '/'.

    RBT_* options are used to configure the RabbitMQ manager account (for a local RabbitMQ server):

    --rbtlogin RBT_LOGIN - RabbitMQ manager login. Default value is 'rbtmgr'.
    --rbtpasswd RBT_PASSWD - RabbitMQ manager password. A random password is generated if not specified.

    --ha-install - Proceed with Control Panel and High Availability components installation.
    --rake RAKE_TASKS - List of OnApp Control Panel rake tasks (separated with spaces) to run at the very end of the install or upgrade.
    -v ONAPP_VERSION - Install a custom OnApp CP version.
    -i SNMP_TRAP_IPS - IP addresses separated with commas for snmp to trap.
    -y - Update OS packages (except those provided by OnApp) on the box with 'yum update'.
    -a - Do not be interactive. Proceed with automatic installation. Please note that this will continue the OnApp Control Panel install/upgrade even if there is a transaction currently running.
    -t - Add Base Templates to the database and download them. For new installs only.
    --noservices - Do not start the OnApp services: monit, onapp and httpd. Please note that crond and all OnApp cron tasks remain running. They can be disabled by stopping the crond service manually at your own risk.
    -D - Do not make a database dump, and make sure it is disabled in cron and not running at the moment.
    -c CONFIG_FILE - Custom installer configuration file. Otherwise, the preinstalled one is used.
    -h - Print this info.
  6. Install the OnApp license to activate the Control Panel. Enter a valid license key via the Web UI (you will be prompted to do so). Your default OnApp login is admin/changeme. The password can be changed via the Users and Groups menu in the Control Panel. Do not change the password on the slave nodes.

    Once you have entered a license, it can take up to 15 minutes to activate. You can perform the next steps while the license is being configured. The license should be common for all your nodes, and all your IPs should be included in this license.
  7. Ensure that the host has a hostname properly set up, including a short hostname.
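     A quick way to verify (a sketch; the name returned by the hostname command must match the host names you later add under Settings > HA Clusters > Hosts):

    bash# hostname --fqdn
    bash# hostname -s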

  8. After the installation, it is recommended to increase the soft and hard limits for open files.
    Open the /etc/security/limits.conf file:

    vi /etc/security/limits.conf

    Change the parameters to at least the following values:

    • root soft nofile 2048
    • root hard nofile 4096
    • onapp soft nofile 2048
    • onapp hard nofile 4096

    For heavy loaded cloud deployments, the limits should be increased.
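
    To confirm the new limits after the affected users log in again, a quick sketch:

    bash# su - onapp -c 'ulimit -Sn; ulimit -Hn'    # prints the soft and hard open-file limits for the onapp user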

  9. After you have installed the Control Panel server, configure your Cloud Settings. See Configure Cloud for details. 

Mutual Accessibility Provisioning




  1. Provide mutual access among all the hosts using their hostnames - either via DNS resolution or via /etc/hosts entries as shown below (use local network addresses for this, preferably on the management interface).
    • run the following for regular deployment (skip for advanced):

      bash# cat /etc/hosts
      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      10.0.51.125	host1
      10.0.51.126	host2
      10.0.51.127 host3
    • run the following for advanced configuration scheme (that is the data servers reside separately from the Control Panel servers):

      bash# cat /etc/hosts
      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      10.0.1.125	host1
      10.0.1.126	host2
      10.0.1.127 host3
      10.0.1.128 host4
      10.0.1.129 host5
      10.0.1.130 host6 

    2. Install keys for root and onapp user access via SSH. If you have skipped installing onapp-store previously, ensure you have SSH keys for root; otherwise, generate them with ssh-keygen.

    • run the following for regular deployment (skip for advanced):

      bash# for cphost in host1 host2 host3; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$cphost;done
      bash# su onapp
      bash# for cphost in host1 host2 host3; do ssh-copy-id -i ~/.ssh/id_rsa.pub onapp@$cphost;done
    • run the following for advanced configuration scheme (that is the data servers reside separately from the Control Panel servers)

      bash# for cphost in host1 host2 host3  host4  host5  host6; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$cphost;done
      bash# su onapp
      bash# for cphost in host1 host2 host3  host4  host5  host6; do ssh-copy-id -i ~/.ssh/id_rsa.pub onapp@$cphost;done
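
    To verify that key-based access works from the node you are on, a quick check such as the following can be used (a sketch for the regular three-node layout; extend the host list for the advanced scheme):

      bash# for cphost in host1 host2 host3; do ssh -o BatchMode=yes root@$cphost hostname; done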

Configuration in CP


Log in to the master Control Panel  and configure hosts, Redis, clusters and communication.


Hosts



Add relevant hosts corresponding to physical infrastructure:

  1. Go to your Control Panel Settings menu.
  2. Click the HA Clusters > Hosts tab. 
  3. Click the New Host button or click the "+" button.
  4. On the screen that appears, fill in the hostname. This must be exactly the same name that the command hostname returns on CP hosts.
  5. Click Submit.


Clusters



Configure the clusters in the system. In the High Availability configuration two clusters, Daemon and User Interface, are already present and you need to edit them. Sequentially edit the Daemon and User Interface clusters and add all three nodes to them.

IP addresses for the UI and Daemon clusters should be in the same network, where Database/Redis/RMQ reside.


 To edit a cluster: 

  1. Go to your Control Panel's Settings menu.
  2. Click the HA Clusters icon > Clusters tab. 
  3. Click the Actions button next to the cluster you want to edit, then click Edit.
  4. On the screen that appears, change the following parameters:
    • Virtual IP - fill in the IP address.
    • Net mask - indicate the net mask

      Do not use a format such as 255.255.255.0 in the Net mask field. Instead, indicate the network prefix (valid values are from 0 to 32).

    • Ports - indicate ports. Ports can be left blank.
  5. Click Update.

To add a node to a cluster: 

  1. Go to your Control Panel's Settings menu.
  2. Click the HA Clusters icon > Clusters tab. 
  3. Click the label of the cluster to which you want to add a node
  4. The page that loads shows the list of nodes in the cluster. Click the Add Node button.
  5. Fill in the details of the new node:
    • Host - select the host with which the new node is to be associated from the drop-down list
    • IP address - fill in the physical IP address of the node
    • Interface - fill in the network interface where the IP address is set.
    • Priority - set the priority for the node. Set the priority to 100 for slave nodes and to a larger value for the master node. The node with the highest priority will take over the virtual IP address when a component of the cluster fails.
  6. Click Submit.

You need to create Load Balancer, Database, Redis and RabbitMQ clusters. You also need to add nodes to your clusters.

If you intend to use CloudBoot Compute Resources, add a CloudBoot cluster; in this case, you also need to set Static config target and CP server CloudBoot target at Control Panel > Settings > Configuration > System tab. These parameters should contain the same IP address that will be used as the virtual IP address for the CloudBoot cluster.

To add a cluster:

  1. Go to your Control Panel's Settings menu.
  2. Click the HA Clusters icon > Clusters tab. 
  3. Choose one of the optional clusters and click the appropriate button: Add Load Balancer, Add Database, Add Redis or Add Message Queue.
  4. Fill in required information:
    • Virtual IP - the virtual IP address of the cluster. This IP address should be unique
    • Net mask - mask of the network
    • Ports - cluster ports
  5.  Click Submit to add the cluster.
  • Virtual IP for the Load Balancer cluster must be a public front end IP. This will be the new IP address for your CP after the High Availability configuration process is completed.

  • Virtual IP for Database, Redis, RabbitMQ has to be in a data network (LAN) and can be the same (one for three clusters). 
  • Virtual IP for Cloudboot has to be the same as configured in main Settings, Cloudboot section. 
  • The Load Balancer cluster must be added first, then you will be able to add Database, Redis and Message Queue. 
  • You can leave the virtual IP field empty for the Daemon, Database, Message Queue and Redis clusters since it has no effect on the current HA implementation.

Load Balancer Cluster Options



It is possible to customize frontend/backend ports for the Load Balancer cluster using options.

To set options for the Load Balancer cluster:

  1. Go to Control Panel > Settings > HA Clusters > Clusters tab.
  2. Click the Actions button next to the Load Balancer cluster and select Options.
  3. On the page that loads click Add Option.
  4. Set the variable and its value and click Submit.

The following list of options is relevant to the Load Balancer cluster:

  • frontend_http_port  
  • frontend_https_port
  • frontend_db_port    
  • frontend_redis_port
  • frontend_rabbitmq_port 
  • backend_http_port   
  • backend_https_port  
  • backend_db_port     
  • backend_redis_port  
  • backend_mq_port     

If you do not customize any port values and the Load Balancer cluster hosts overlap with at least one of the hosts from other clusters, OnApp automatically sets the following values:

  • frontend port = (DEFAULT_PORT + 100)
  • backend port = DEFAULT_PORT

The default values are the following:

  • DEFAULT_DB_PORT = 3306
  • DEFAULT_REDIS_PORT = 6379
  • DEFAULT_RABBITMQ_PORT = 5672
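
For example, if the Load Balancer hosts overlap with the Database cluster hosts and no port options are customized, database traffic is load balanced on frontend_db_port = 3306 + 100 = 3406, while the Percona nodes keep serving on backend_db_port = 3306. This matches the load balanced database connection port 3406 in the list of default ports above.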


Communication



Configure relevant communication channels. Two channels in different networks are recommended.  

Here you can select the way the hosts communicate with each other - Multicast or Unicast - by pressing the corresponding button: Switch to Unicast or Switch to Multicast.

To add a communication ring:

  1. Go to your Control Panel Settings menu.
  2. Click the HA Clusters icon > Communication tab. 
  3. Click the Add New Ring button or click the "+" button.
  4. Fill in the following parameters:
    • Network - the multicast network used by the hosts to communicate with each other
    • Multicast IP Address - the multicast IP address
    • Multicast Port - the multicast port 
    • TTL - time to live (only for the multicast configuration)
    • Members - the IP address of the hosts in the configuration. Fill in the IP address of the hosts separated by a comma (only for the unicast configuration)
  5. Click Submit.
  6. At Settings > HA Clusters > Communication click Apply to save the changes. This will re-generate corosync configuration files and reload service.

Switching to unicast mode is recommended for the sake of network performance.

If you are going to set up more than one communication ring, ensure that the multicast address/port pairs differ: either the rings have different multicast addresses, or their ports differ by more than 1. For example, if the addresses are the same, ports 5005 and 5006 would not work for two communication rings; you need to set at least 5007 for the second one.

Please note that you are required to add the correct IP address when configuring multicast. Adding an incorrect IP address will affect the multicast configuration.

The maximum number of communication rings corresponds to the number of available NICs on hosts. For example, if all hosts have two NICs, you can configure a maximum of two communication rings.

High Availability Initialization



  1. Go to Settings > HA Clusters > General and review the modified configuration.

  2. Go to Control Panel > Sysadmin > Control Panel Maintenance and click Enable. This prevents any customer from performing any activity unless they have permissions for Sysadmin tools.

  3. After validating configuration click Enable at Settings > HA Clusters > General.
  4. After High Availability initialization, go to the Control Panel > Settings > Configuration tab on the master node and synchronize settings by clicking the Save Configuration button.
  • The process is shown in the activity logs.
  • OnApp will deploy reverse proxy servers on all Control Panels.
  • On all Control Panels the http and https ports will be changed from 80 and 443 to 10080 and 10443 respectively by default. This behavior can be overridden by setting additional options for the Load Balancer cluster: http_port, https_port.
  • The OnApp interface will automatically be switched into maintenance mode.
  • OnApp will automatically dump your current database, replace the MySQL servers on all Control Panels with a multi-master database cluster, and apply the latest dump.
  • OnApp will clusterize RabbitMQ and Redis.
  • On all Control Panels, the database, Redis and RabbitMQ connections will be automatically reconfigured to connect to the corresponding virtual IPs.
  • All requests to the UI, database, Redis and RabbitMQ servers will be load balanced.
  • The OnApp interface application will be restarted on the new ports.
  • The OnApp interface application will be available via the Load Balancer virtual IP address on ports 80 and 443.

After you initialize High Availability, you can monitor the configuration process by running the following command:

crm_mon -r

Activate Clusters


After High Availability is initialized you need to activate the clusters you have configured.


User Interface Cluster



  1. Go to Control Panel > Settings > HA Clusters > Clusters tab.
  2. Activate the UI cluster by clicking the Actions button next to the required cluster and selecting Recreate.
  3. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  4. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.


CloudBoot Cluster



Omit these steps if you are not using CloudBoot.

High Availability should be initialized before installing OnApp store.

Make sure that the public keys from all nodes are stored in /onapp/configuration/keys:

# ll /onapp/configuration/keys
  1. Prepare the system for OnApp store installation by running the following commands:

    #  crm resource stop lsyncd-cluster
    #  crm configure property maintenance-mode=true 


  2. Install OnApp store:

    # yum install onapp-store-install -y
    # /onapp/onapp-store-install/onapp-store-install.sh
  3. Enable the system to monitor resources and start synchronization:

    #  crm configure property maintenance-mode=false  
    #  crm resource start lsyncd-cluster


  4. Switch on the Enable CloudBoot option in Control Panel > Settings > Configuration.
  5. Add IPs for the CloudBoot compute resources you will create at Control Panel > Settings > Compute Resources > CloudBoot IPs tab. You can also add these IP addresses later.
  6. At Control Panel > Settings > HA Clusters > Clusters tab, activate the CloudBoot cluster by clicking the Actions button next to the required cluster and selecting Recreate
  7. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  8. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.


Load Balancer Cluster




  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Load Balancer cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab. Once all the actions succeed, you can access the virtual IP on port 5005 to see the cluster's status.

    From now on, use port 10080 on the master CP to access the GUI until the last step.

Database Cluster Activation for Regular High Availability Deployment



  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Database cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.
  4. Run the following command:

    crm resource unmanage onapp-db
  5. Run the following command on the slave nodes:

    service mysql stop
  6. Increase max_connections and innodb_buffer_pool_size in my.cnf on all three Control Panels.

    /etc/my.cnf
    
    max_connections=2000
    innodb_buffer_pool_size=3GB

    Where:

    max_connections - Max allowed connections

    innodb_buffer_pool_size - the buffer pool is where data and indexes are cached: having it as large as possible will ensure you use memory and not disks for most read operations. Typical values are 2-4GB (8GB RAM), 10-15GB (32GB RAM), 20-25GB (128GB RAM)

  7. Run the following command on the master node:

    service mysql restart-bootstrap
  8. Run the following command on the slave nodes sequentially so that the slave nodes restart one by one:

    service mysql restart

    Wait until the following message appears on the screen:

    *.... SUCCESS!*

    The following message should appear in /var/log/mysqld.log:

    *WSREP: Member 2.0 (onapp_db_xxxxxxx) synced with group.*
  9. On all Control Panel servers, edit the /onapp/interface/config/database.yml file and set the port parameter to the same value as the frontend_mysql_port parameter in Dashboard > Settings > HA Clusters > Clusters > Load Balancer cluster > Options:

    host: 127.0.0.1
    port: 3406
    #socket: '/var/lib/mysql/mysql.sock'
  10. Restart the onapp httpd services on all Control Panel servers:

    # service onapp restart && service httpd restart
  11. Run the following command on the master node:

    #crm resource manage onapp-db

Database Cluster Activation for Advanced High Availability Deployment



If you are installing the advanced version of the High Availability configuration, that is, the database servers reside separately from the Control Panel servers, the database cluster activation procedure is the following:

Testing has been performed with a cloud configuration of three Control Panel servers and three data servers.

  1. Check that you do not have MySQL installed on the data nodes:

    # rpm -qa| grep mysql

    If MySQL packages exist, you need to delete them all, otherwise the database cluster activation will fail. 

  2. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Database cluster by clicking the Actions button next to the required cluster and selecting Recreate
  3. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  4. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.
  5. Unmanage database nodes by running the following command:

    #   crm resource unmanage onapp-db;
  6. On any two database nodes stop MySQL service by running the following command:

    # service mysql stop
  7. Increase max_connections and innodb_buffer_pool_size in my.cnf on all three Control Panels.

    /etc/my.cnf
    
    max_connections=2000
    innodb_buffer_pool_size=3GB

    Where:

    max_connections - Max allowed connections

    innodb_buffer_pool_size - the buffer pool is where data and indexes are cached: having it as large as possible will ensure you use memory and not disks for most read operations. Typical values are 2-4GB (8GB RAM), 10-15GB (32GB RAM), 20-25GB (128GB RAM)

  8. Run the following command on the third database node:

    # service mysql restart-bootstrap 
  9. Wait until the following message appears on the screen:

    Bootstrapping PXC (Percona XtraDB Cluster)
    Starting MySQL (Percona XtraDB Cluster)...... SUCCESS!
  10. Run the following command sequentially on the database nodes on which you operated in step 6 to restart the database:

    #  service mysql restart
  11. Wait until the following message appears on the screen:

    *.... SUCCESS!*
  12. Check the /var/log/mysqld.log file, the following message should appear there:

    *WSREP: Member 2.0 (onapp_db_xxxxxxx) synced with group.*
  13. Create a database dump on the master Control Panel node:

    # mysqldump -u root -p<password> onapp > onapp_db_dump.sql
  14. Load the created database dump into the database master node:

    # mysql -h <DB_master_IP> -u root -p<password> onapp < onapp_db_dump.sql
  15. On all Control Panel servers, edit the /onapp/interface/config/database.yml file and set the port parameter to the same value as the frontend_mysql_port parameter in Dashboard > Settings > HA Clusters > Clusters > Load Balancer cluster > Options:

    host: 127.0.0.1
    port: 3406
    #socket: '/var/lib/mysql/mysql.sock'
  16. Restart the onapp httpd services on all Control Panel servers:

    # service onapp restart && service httpd restart
  17. Run the following command on the master node:

    crm resource manage onapp-db; crm resource manage onapp-frontend-httpd-group-cluster

Message Queue Cluster



  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Message Queue cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.
  4. Edit /onapp/interface/config/on_app.yml and add the rabbitmq_port option with the same value as frontend_rabbitmq_port in Control Panel > Settings > HA Clusters > Clusters > Load Balancer cluster > Options.
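
    A minimal sketch of the added line in /onapp/interface/config/on_app.yml, assuming the default load balanced AMQP port 5772 from the list of default ports above (verify the actual frontend_rabbitmq_port value in your Load Balancer cluster options):

    rabbitmq_port: 5772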


Redis Cluster




  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Redis cluster by clicking the Actions button next to the required cluster and selecting Recreate.

  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.

  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.

  4. On the master node, edit the /onapp/interface/config/redis.yml file: for the :password: field, set the same value as the requirepass parameter in /etc/redis.conf, and set the port. Alternatively, you can take these values from the GUI - the master_auth variable at Control Panel > Settings > HA Clusters > Clusters > Redis cluster > Options and frontend_redis_port at Control Panel > Settings > HA Clusters > Clusters > Load Balancer cluster > Options.

  5. Remove the following line from redis.yml:

     

    :path: "/var/run/redis/redis.sock"


  6. Edit the redis.yml file so that it looks the following way:

    production:
      :host: 127.0.0.1
      :port:  <frontend_redis_port>
      :password: <password>
  7. Run the following commands on all nodes:

    # crm resource restart onapp-redis-group-cluster
    # service onapp restart && service httpd restart

Daemon Cluster



  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Daemon cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.

    • Now you can re-check the logs for errors, check Admin Tools in the GUI (to make sure that all services are running), disable Maintenance mode and let users use the virtual IP to access the CP.
    • The "Manage_X_Cluster" log outputs have to be checked as well; after any cluster gets activated, the relevant resource should appear in crm_mon.
  4. Run the following command on the master node:

    # crm resource manage onapp-frontend-httpd-group-cluster

Configure Backup Servers and Compute Resources


Take the following steps:

  • Add all management CP IP addresses:
  1. Go to your Control Panel's Settings menu, and click the Configuration icon.
  2. Click the System tab and go to SNMP Trap Settings, where you should indicate:
  • Snmptrap addresses - a set of IPv4 management network IP(s) from the CP server separated by commas. These IP addresses will be used for communication between the Control Panel and compute resources.
  • Snmptrap port - port used for snmptrap. This must be greater than 1024.

    We recommend that you do not change the default value.
    In case you change the port value on your OnApp CP, the corresponding change of the VM_STATUS_SNMP_PORT port should be made for all compute resources in the /etc/onapp.conf file.
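
    For example, if you changed the Snmptrap port to a hypothetical value of 5162, the corresponding line in /etc/onapp.conf on every compute resource would be (a sketch, assuming the usual KEY=VALUE format of that file):

    VM_STATUS_SNMP_PORT=5162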
