

If you want to enable High Availability for your cloud, please contact your account manager.

This section describes the procedure for setting up a regular High Availability configuration for those who already have OnApp installed. Setting up a High Availability cluster requires additional configuration beyond the standard installation procedure. The general workflow is the following:

Preparation

Read the Technical Details

Make the necessary Preliminary Configurations 

Installation



During the installation process, the existing CP is considered the master node and the two other nodes are slave nodes. Once the deployment is completed, these terms become obsolete.

Public IP addresses that are intended to serve the GUI should be whitelisted for the OnApp repo and added to the OnApp license. You need three public IP addresses for your nodes and another public IP address to be used as the virtual IP for the Load Balancer.

The High Availability feature should be enabled in the license.


  • If you are a High Availability customer, it is recommended that you contact support for help with the procedure described below. Be aware that if the configuration below is performed incorrectly, it may damage your cloud.
  • Do not use the Control Panel server as the backup/template server. The High Availability configuration requires a separate backup server.
  • It is highly recommended that you do not deploy the Zabbix server on the CP server.
  • This procedure may potentially cause database failure. Make sure to back up your database before starting.
  • This procedure will cause temporary service downtime. Please notify your customers. You may also want to switch the CP to maintenance mode.

Currently, onapp-database-dump.sh runs on all nodes. You can override this behavior by commenting out or removing the corresponding line in the following file:

/etc/crontab

The database dump is stored locally by default. However, OnApp also allows storing backups on a remote host. To store backups on a remote host, edit the DB_DUMP_SERVER="" variable in the following file:

/onapp/onapp-cp.conf
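For example, below is a minimal sketch of the relevant /onapp/onapp-cp.conf fragment. The hostname backup.example.com is a placeholder; the other values match the defaults shown in the configuration listing later in this section.

    # /onapp/onapp-cp.conf (fragment)
    DB_DUMP_SERVER="backup.example.com"     # remote host that will store the database dumps (placeholder)
    DB_DUMP_USER="root"                     # SSH user on the remote host
    DB_DUMP_SERVER_ROOT="/onapp/backups"    # directory on the remote host where dumps are kept
    KEEP_DUMPS=168                          # number of dumps to keep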


Preliminary Configurations 


 You need to perform the following steps prior to the High Availability configuration:

  • Upgrade the existing CP to OnApp version 5.0.
  • Move all the OnApp related processes on the master node (DB, Redis, RabbitMQ server, DHCPD, ftpd, NFS server) so that they are local and running on the same host; otherwise, issues will arise during the deployment.
  • Migrate the database on the master to Percona.

  • Install the tools to configure Control Panels' High Availability:

    # yum install onapp-cp-ha


  • Run the installer on the existing CP:

    # /onapp/onapp-ha/onapp-cp-ha.sh

List of Default Ports


Below you will find the list of ports that should be open on servers if you want to use them as nodes in a High Availability configuration. A firewall sketch follows the list.

RabbitMQ
  5672 - AMQP connection
  15672 - Management UI
  25672 - Cluster communication
  4369 - Erlang port mapper daemon (EPMD)
Percona Cluster
  3306 - Database connection
  4567 - Data synchronization
  9200 - Cluster check
  4444 - Port for State Transfer (default)
  4568 - Port for Incremental State Transfer
Redis
  6379 - Redis connection
  26379 - Cluster membership (Sentinel)
httpd
  10080 - HTTP
  10443 - HTTPS
Corosync
  5405 - Cluster communication
  5406 - Cluster communication
HAProxy
  5772 - Load balanced AMQP connection
  3406 - Load balanced Database connection
  6479 - Load balanced Redis connection
  80 - Load balanced HTTP connection
  443 - Load balanced HTTPS connection
  5005 - HAProxy status (no authorization, must be filtered on production)
VNC Proxy
  30000-30099 - VNC connections
Lsyncd/csync2
  30865 - Data synchronization
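If a host firewall is enabled on the nodes, these ports need to be opened between them. Below is a minimal sketch assuming iptables on CentOS and the 10.0.51.0/24 management network used in the /etc/hosts example later in this document; the subnet, protocols and tooling are assumptions, so adjust them to your environment and repeat for the remaining ports in the list above.

    bash# iptables -I INPUT -p tcp -s 10.0.51.0/24 --dport 3306 -j ACCEPT    # Percona: database connection
    bash# iptables -I INPUT -p tcp -s 10.0.51.0/24 --dport 4567 -j ACCEPT    # Percona: data synchronization
    bash# iptables -I INPUT -p tcp -s 10.0.51.0/24 --dport 5672 -j ACCEPT    # RabbitMQ: AMQP connection
    bash# iptables -I INPUT -p udp -s 10.0.51.0/24 --dport 5405 -j ACCEPT    # Corosync: cluster communication
    bash# iptables -I INPUT -p tcp -s 10.0.51.0/24 --dport 30865 -j ACCEPT   # Lsyncd/csync2: data synchronization
    bash# service iptables save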

Installation of two New Nodes 



You need to install two new nodes in addition to the existing master node.

Hardware Requirements

The Control Panel servers in the High Availability configuration must comply with the following minimum hardware requirements:

  • Processor: 2 x 8 Core CPUs, for example, Xeon e5-2640 v3

  • Memory: 16GB RAM
  • Disks: 2 x 400GB SSD
  • RAID Configuration: RAID 1
  • Network Adapters: Quad port 1 Gbit NIC

Installation

To install a CP node for HA:

  1. Update your server:

    bash# yum update
  2. Download OnApp YUM repository file:

    # rpm -Uvh http://rpm.repo.onapp.com/repo/onapp-repo.noarch.rpm
  3. Install OnApp Control Panel installer package:

    bash# yum install onapp-cp-install
  4.  Set the custom Control Panel configuration. It is important to set the custom values before the installer script runs.

     Edit the /onapp/onapp-cp.conf file to set Control Panel custom values

    Template server URL

    TEMPLATE_SERVER_URL='http://templates-manager.onapp.com';

    # IPs (separated with coma) list for the snmp to trap

    SNMP_TRAP_IPS=

    # OnApp Control Panel custom version

    ONAPP_VERSION=""

    # OnApp MySQL/MariaDB connection data (database.yml)

    ONAPP_CONN_WAIT_TIMEOUT=15
    ONAPP_CONN_POOL=30
    ONAPP_CONN_RECONNECT='true'
    ONAPP_CONN_ENCODING='utf8'
    ONAPP_CONN_SOCKET='/var/lib/mysql/mysql.sock'

    # MySQL/MariaDB server configuration data (in case of local server)

    MYSQL_WAIT_TIMEOUT=604800
    MYSQL_MAX_CONNECTIONS=500
    MYSQL_PORT=3306

    # Use MariaDB instead of MySQL as OnApp database server (Deprecated parameter. If you set any values for this parameter, they will not take effect)

    WITH_MARIADB=0

    # Configure the database server relative amount of available RAM (Deprecated parameter. If you set any values for this parameter, they will not take effect)

    TUNE_DB_SERVER=0

    # The number of C data structures that can be allocated before triggering the garbage collector. It defaults to 8 million

    RUBY_GC_MALLOC_LIMIT=16000000

    # sysctl.conf net.core.somaxconn value

    NET_CORE_SOMAXCONN=2048

    # The root of OnApp database dump directory (on the Control Panel box)

    ONAPP_DB_DUMP_ROOT=""

    # Remote server's (to store database dumps) IP, user, path, openssh connection options ans number of dumps to keep

    DB_DUMP_SERVER=""
    DB_DUMP_USER="root"
    DB_DUMP_SERVER_ROOT="/onapp/backups"
    DB_DUMP_SERVER_SSH_OPT="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o PasswordAuthentication=no"
    KEEP_DUMPS=168
    DB_DUMP_CRON='40 * * * *'

    # Enable monit - tool for managing and monitoring Unix systems

    ENABLE_MONIT=1

    # If enabled (the 1 value is set) - install (if local box) and configures RabbitMQ Server (messaging system) for the vCloud support. (Deprecated parameter. If you set any values for this parameter, they will not take effect)

     

    ENABLE_RABBITMQ=1

     

    # Rotate transactions' log files created more than TRANS_LOGS_ROTATE_TIME day(s) ago

     

    TRANS_LOGS_ROTATE_TIME=30

     

    # Maximum allowed for uploading file size in bytes, from 0 (meaning unlimited) to 2147483647 (2GB). Default is 1GB

     

    MAX_UPLOAD_SIZE=1073741824

     

    # Timeout before ping Redis Server to check if it is started. Default is 5 sec.

    REDIS_PING_TIMEOUT=5

     

    # OnApp Control Panel SSL certificates (please do not change if you aren't familar with SSL certificates)
    # * The data below to generate self-signed PEM-encoded X.509 certificate

    SSL_CERT_COUNTRY_NAME=UK
    SSL_CERT_ORGANIZATION_NAME='OnApp Limited'
    SSL_CERT_ORGANIZATION_ALUNITNAME='OnApp Cloud'
    SSL_CERT_COMMON_NAME=`hostname --fqdn 2>/dev/null`

     

    #   SSLCertificateFile, SSLCertificateKeyFile Apache directives' values
    #   ssl_certificate, ssl_certificate_key Nginx directives' values

    SSLCERTIFICATEFILE=/etc/pki/tls/certs/ca.crt
    SSLCERTIFICATECSRFILE=/etc/pki/tls/private/ca.csr
    SSLCERTIFICATEKEYFILE=/etc/pki/tls/private/ca.key

     

    # * PEM-encoded CA Certificate (if custom one exists)
    #   SSLCACertificateFile, SSLCertificateChainFile Apache directives' values
    #   ssl_client_certificate Nginx directives' values

    SSLCACERTIFICATEFILE=
    SSLCERTIFICATECHAINFILE=


     

    #   SSLCipherSuite, SSLProtocol Apache directives' values
    #   ssl_ciphers, ssl_protocols Nginx directives' values

    SSLCIPHERSUITE=
    SSLPROTOCOL=


     

     

     

  5. Run the Control Panel installer on new CPs:

    bash# /onapp/onapp-cp-install/onapp-cp-install.sh --ha-install --percona-cluster
     The full list of Control Panel installer options:

     

     

    Usage:

    /onapp/onapp-cp-install/onapp-cp-install.sh -h

    Usage: /onapp/onapp-cp-install/onapp-cp-install.sh [-c CONFIG_FILE] [--mariadb | --percona | --percona-cluster] [-m MYSQL_HOST] [-p MYSQL_PASSWD] [-d MYSQL_DB] [-u MYSQL_USER] [-U ADMIN_LOGIN] [-P ADMIN_PASSWD] [-F ADMIN_FIRSTNAME] [-L ADMIN_LASTNAME] [-E ADMIN_EMAIL] [-v ONAPP_VERSION] [-i SNMP_TRAP_IPS] [--redis-host=REDIS_HOST] [--redis-passwd[=REDIS_PASSWD]] [--redis-port=REDIS_PORT] [--redis-sock=REDIS_PATH] [--rbthost RBT_HOST] [--vcdlogin VCD_LOGIN] [--vcdpasswd VCD_PASSWD] [--vcdvhost VCD_VHOST] [--rbtlogin RBT_LOGIN] [--rbtpasswd RBT_PASSWD] [-a] [-y] [-D] [-t] [--noservices] [-h]

     

    Where:

     
    Database server options: The default database SQL server is MySQL Server. Please use one of the following options to install LOCALLY.
    --mariadb                 MariaDB Server
    --percona                 Percona Server
    --percona-cluster         Percona Cluster

    MYSQL_*                   Options are useful if MySQL is already installed and configured.
    -m MYSQL_HOST             MySQL host. Default is 'localhost'
    -p MYSQL_PASSWD           MySQL password. A random one is generated if not specified.
    -d MYSQL_DB               OnApp MySQL database name. Default is 'onapp'
    -u MYSQL_USER             MySQL user

    REDIS_*                   Options are useful if Redis Server is already installed and configured.
    --redis-host=REDIS_HOST   IP address/FQDN where Redis Server runs.
                              The Redis Server will be installed and configured on the current box if localhost/127.0.0.1 or the box's public IP address (listed in SNMP_TRAP_IPS) is specified.
                              If Redis is local, it will also serve on the unix socket '/tmp/redis.sock'.
                              Default value is 127.0.0.1.
    --redis-port=REDIS_PORT   Redis Server listen port.
                              Defaults are:
                              0 - if local server
                              6379 - if remote server
    --redis-passwd[=REDIS_PASSWD]
                              Redis Server password to authenticate.
                              A random password is generated if the option's argument isn't specified.
                              By default no password is used for local Redis.
    --redis-sock=REDIS_PATH   Path to the Redis Server's socket. Used for a local server only.
                              Default is /tmp/redis.sock

    ADMIN_*                   Options are used to configure OnApp Control Panel administrator data.
                              Please note that these options are for NEW INSTALLS only and not for upgrades.
    -P ADMIN_PASSWD           CP administrator password
    -F ADMIN_FIRSTNAME        CP administrator first name
    -L ADMIN_LASTNAME         CP administrator last name
    -E ADMIN_EMAIL            CP administrator e-mail

    --rbthost RBT_HOST        IP address/FQDN where RabbitMQ Server runs. RabbitMQ will be installed and configured on the current box if localhost/127.0.0.1 or the box's public IP address (listed in SNMP_TRAP_IPS) is specified. Default value is 127.0.0.1.
    VCD_*                     Options are useful if vCloud/RabbitMQ are already installed and configured.
    --vcdlogin VCD_LOGIN      RabbitMQ/vCloud user. Default value is 'rbtvcd'.
    --vcdpasswd VCD_PASSWD    RabbitMQ/vCloud user password. A random password is generated if none is specified.
    --vcdvhost VCD_VHOST      RabbitMQ/vCloud vhost. Default value is '/'
    RBT_*                     Options are used to configure the RabbitMQ manager account (for a local RabbitMQ server).
    --rbtlogin RBT_LOGIN      RabbitMQ manager login. Default value is 'rbtmgr'.
    --rbtpasswd RBT_PASSWD    RabbitMQ manager password. A random password is generated if none is specified.

    -v ONAPP_VERSION          Install custom OnApp CP version
    -i SNMP_TRAP_IPS          IP addresses separated with a comma for snmp to trap
    -c CONFIG_FILE            Custom installer configuration file. Otherwise, the preinstalled one is used.
    -y                        Update OS packages (except those provided by OnApp) on the box with 'yum update'.
    -a                        Do not be interactive. Proceed with automatic installation. Please note, this will continue the OnApp Control Panel install/upgrade even if there is a transaction currently running.
    -t                        Add to the database and download Base Templates. For new installs only. If this option is not used, only the following mandatory System Templates will be added by default during a fresh install: OnApp CDN Appliance; Load Balancer Virtual Appliance; Application Server Appliance.
    --noservices              Do not start OnApp services: monit, onapp and httpd.
                              Please note, crond and all OnApp's cron tasks remain running. They can be disabled by stopping the crond service manually at your own risk.
    -D                        Do not make a database dump, and make sure it is disabled in cron and not running at the moment.
    -h                        Print this info
  6. Install the OnApp license to activate the Control Panel. Enter a valid license key via the Web UI (you will be prompted to do so). Your default OnApp login is admin/changeme. The password can be changed via the Control Panel's Users and Groups menu. Do not change the password on the slave nodes.

    Once you have entered a license, it can take up to 15 minutes to activate. You can perform the next steps while the license is being configured. The license should be common for all your nodes, and all your IPs should be included in it.
  7. Ensure that each host has a hostname properly set up, including the short hostname.

  8. After the installation, it is recommended that you increase the soft and hard limits for open files.
    Open the /etc/security/limits.conf file:

    vim  /etc/security/limits.conf

    Change the following parameters to at least the following:

    root soft nofile 2048
    root hard nofile 4096
    onapp soft nofile 2048
    onapp hard nofile 4096

    For heavily loaded cloud deployments, the limits should be increased further.

  9. Install CloudBoot dependencies:

    This step is optional: perform it only if you use Integrated Storage; otherwise, skip it.

    bash#> yum install onapp-store-install
    bash#> /onapp/onapp-store-install/onapp-store-install.sh
  10. After you have installed the Control Panel server, configure your Cloud Settings. See Configure Cloud for details. 



Mutual Accessibility Provisioning



  1. Provide mutual access among all the hosts using their hostnames, either via DNS resolution or via the /etc/hosts file as shown below (use local network addresses for this, preferably on the management interface):

    bash# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.0.51.125	host1
    10.0.51.126	host2
    10.0.51.127 host3
  2. Install keys for root and onapp user access via SSH. If you skipped installing onapp-store previously, ensure you have SSH keys for root; otherwise, generate them with ssh-keygen (see the sketch after these commands).

    bash# for cphost in host1 host2 host3; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$cphost;done
    bash# su onapp
    bash# for cphost in host1 host2 host3; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$cphost;done
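If a key pair is missing for either user, the sketch below generates one non-interactively with an empty passphrase (adjust to your security policy); run it once as root and once as the onapp user (after su onapp) before repeating the ssh-copy-id loop above.

    bash# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa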

Configuration in CP

Log in to the master Control Panel and configure hosts, Redis, clusters and communication.


Hosts



Add relevant hosts corresponding to physical infrastructure:

  1. Go to your Control Panel Settings menu.
  2. Click the HA Clusters > Hosts tab. 
  3. Click the New Host button or click the "+" button.
  4. On the screen that appears, fill in the hostname. This must be exactly the same name that the hostname command returns on the CP host (see the check after this list).
  5. Click Submit.
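For example, you can confirm the value to enter by running the command on each node (host1 here is the hypothetical hostname from the /etc/hosts example above):

    bash# hostname
    host1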


Clusters



Configure the clusters in the system. In the High Availability configuration two clusters, Daemon and User Interface, are already present and you need to edit them. Sequentially edit the Daemon and User Interface clusters and add all three nodes to them.

IP addresses for the UI and Daemon clusters should be in the same network, where Database/Redis/RMQ reside.


 To edit a cluster: 

  1. Go to your Control Panel's Settings menu.
  2. Click the HA Clusters icon > Clusters tab. 
  3. Click the Actions button next to the cluster you want to edit, then click Edit.
  4. On the screen that appears, change the following parameters:
    • Virtual IP - fill in the IP address.
    • Net mask - indicate the net mask

      Do not use a format such as 255.255.255.0 in the Net mask field. Instead, indicate the network prefix (valid values are from 0 to 32); for example, 255.255.255.0 corresponds to prefix 24.

    • Ports - indicate ports. Ports can be left blank.
  5. Click Update.

To add a node to a cluster: 

  1. Go to your Control Panel's Settings menu.
  2. Click the HA Clusters icon > Clusters tab. 
  3. Click the label of the cluster to which you want to add a node
  4. The page that loads shows the list of nodes in the cluster. Click the Add Node button.
  5. Fill in the details of the new node:
    • Host - select the host with which the new node is to be associated from the drop-down list
    • IP address - fill in the physical IP address of the node
    • Interface - fill in the network interface where the IP address is set.
    • Priority - set the priority for the node. Set the priority to 100 for slave nodes and to a larger value for the master node. The node with the highest priority will take over the virtual IP address when a component of the cluster fails.
  6. Click Submit.

You need to create the Load Balancer, Database, Redis and RabbitMQ clusters. You also need to add nodes to your clusters.

If you intend to use CloudBoot compute resources, also add a CloudBoot cluster. In this case, you need to set Static config target and CP server CloudBoot target at the Control Panel > Settings > Configuration > System tab. These parameters should contain the same IP address that will be used as the virtual IP address for the CloudBoot cluster.

To add a cluster:

  1. Go to your Control Panel's Settings menu.
  2. Click the HA Clusters icon > Clusters tab. 
  3. Choose one of the optional clusters and click the appropriate button: Add Load Balancer, Add Database, Add Redis or Add Message Queue.
  4. Fill in required information:
    • Virtual IP - the virtual IP address of the cluster. This IP address should be unique
    • Net mask - mask of the network
    • Ports - cluster ports
  5.  Click Submit to add the cluster.
  • Virtual IP for the Load Balancer cluster must be a public front end IP. This will be the new IP address for your CP after the High Availability configuration process is completed.

  • Virtual IP for Database, Redis, RabbitMQ has to be in a data network (LAN) and can be the same (one for three clusters). 
  • Virtual IP for Cloudboot has to be the same as configured in main Settings, Cloudboot section. 
  • The Load Balancer cluster must be added first, then you will be able to add Database, Redis and Message Queue. 
  • You can leave the virtual IP field empty for the Daemon, Database, Message Queue and Redis clusters, since it has no effect on the current HA implementation.

Load Balancer Cluster Options



It is possible to customize frontend/backend ports for the Load Balancer cluster using options.

To set options for the Load Balancer cluster:

  1. Go to Control Panel > Settings > HA Clusters > Clusters tab.
  2. Click the Actions button next to the Load Balancer cluster and select Options.
  3. On the page that loads click Add Option.
  4. Set the variable and its value and click Submit.

The following list of options is relevant to the Load Balancer cluster:

  • frontend_http_port  
  • frontend_https_port
  • frontend_db_port    
  • frontend_redis_port
  • frontend_rabbitmq_port   
  • backend_http_port   
  • backend_https_port  
  • backend_db_port     
  • backend_redis_port  
  • backend_mq_port     

If you do not customize any port values and the Load Balancer cluster hosts overlap with at least one of the hosts from other clusters, OnApp automatically sets the following values:

  • frontend port = (DEFAULT_PORT + 100)
  • backend port = DEFAULT_PORT
The default values are the following (see the example after this list):
  • DEFAULT_DB_PORT = 3306
  • DEFAULT_REDIS_PORT = 6379
  • DEFAULT_RABBITMQ_PORT = 5672


Communication



Configure relevant communication channels. Two channels in different networks are recommended.  

Here you can select the way the hosts communicate with each other - Multicast or Unicast - by pressing the corresponding button: Switch to Unicast or Switch to Multicast. This will regenerate the corosync configuration files and reload the service.

To add a communication ring:

  1. Go to your Control Panel Settings menu.
  2. Click the HA Clusters icon > Communication tab. 
  3. Click the Add New Ring button or click the "+" button.
  4. Fill in the following parameters:
    • Network - the multicast network used by the hosts to communicate with each other
    • Multicast IP Address - the multicast IP address
    • Multicast Port - the multicast port 
    • TTL - time to live (only for the multicast configuration)
    • Members - the IP addresses of the hosts in the configuration, separated by a comma (only for the unicast configuration)
  5. Click Submit.
  6. At Settings > HA Clusters > Communication click Apply to save the changes.

Switching to unicast mode is recommended for better network performance.

If you are going to set up more than one communication ring, ensure that the multicast address/port pairs differ, that is, either the multicast addresses are different or the ports differ by more than 1. For example, if the addresses are the same, ports 5005 and 5006 would not work for two communication rings; you need to set at least 5007 for the latter one.

Please note that you are required to add the correct IP address when configuring multicast. Adding an incorrect IP address will affect the multicast configuration.

The maximum number of communication rings corresponds to the number of available NICs on hosts. For example, if all hosts have two NICs, you can configure a maximum of two communication rings.

High Availability Initialization



  1. Go to Settings > HA Clusters > General and review the modified configuration.

  2. Go to Control Panel > Sysadmin > Control Panel Maintenance and click Enable. This prevents any customer from performing any activity unless they have permissions for Sysadmin tools.
  3. Stop the monit service on all three Control panels by issuing the following command:

    service monit stop
  4.  Disable monit in autostart on all CP nodes by issuing the following command:

    chkconfig monit off
  5. Set ENABLE_MONIT to 0 in the /onapp/onapp-cp.conf file.

  6. After validating configuration click Enable at Settings > HA Clusters > General.
  7. After High Availability initialization, go to the Control Panel > Settings > Configuration tab on the master node and synchronize the settings by clicking the Save Configuration button.
  • The process is shown in the activity logs.
  • OnApp will deploy reverse proxy servers on all Control Panels.
  • On all Control Panels, the http and https ports will be changed from 80 and 443 to 10080 and 10443 respectively by default. This behavior can be overridden if you set additional options for the Load Balancer cluster: http_port, https_port.
  • The OnApp interface will automatically be switched into maintenance mode.
  • OnApp will automatically dump your current database, replace the MySQL servers on all Control Panels with a multi-master database cluster and apply the last dump.
  • OnApp will clusterize RabbitMQ and Redis.
  • The database, Redis and RabbitMQ connections on all Control Panels will automatically be reconfigured to connect to the corresponding virtual IPs.
  • All requests to the UI, database, Redis and RabbitMQ servers will be load balanced.
  • The OnApp interface application will be restarted on the new ports.
  • The OnApp interface application will be available via the Load Balancer virtual IP address on ports 80 and 443.

After you initialize High Availability, you can monitor the configuration process with the following command:

crm_mon -r

Clusters Activation


After High Availability is initialized you need to activate the clusters you have configured.

User Interface Cluster



  1. Go to Control Panel > Settings > HA Clusters > Clusters tab.
  2. Activate the UI cluster by clicking the Actions button next to the required cluster and selecting Recreate.
  3. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  4. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.

CloudBoot Cluster



Omit these steps if you are not using CloudBoot.

High Availability should be initialized before installing OnApp store.

Make sure that the public keys from all nodes are stored in /onapp/configuration/keys:

# ll /onapp/configuration/keys
  1. Prepare the system for OnApp store installation by running the following commands:

    #  crm resource stop lsyncd-cluster
    #  crm configure property maintenance-mode=true
  2. Install OnApp store:

    # yum install onapp-store-install -y
    # /onapp/onapp-store-install/onapp-store-install.sh
  3. Enable the system to monitor resources and start synchronization:

    #  crm configure property maintenance-mode=false
    #  crm resource start lsyncd-cluster
  4. Switch on the Enable CloudBoot option in Control Panel > Settings > Configuration.
  5. Add IPs for the CloudBoot compute resources you will create at Control Panel > Settings > Compute Resources > CloudBoot IPs tab. You can also add these IP addresses later.
  6. At Control Panel > Settings > HA Clusters > Clusters tab, activate the CloudBoot cluster by clicking the Actions button next to the required cluster and selecting Recreate
  7. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  8. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.

  9. Run the following script to update the CloudBoot components if you already have them.

    rake pxe:update

Load Balancer Cluster



  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Load Balancer cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at ControlPanel > Settings > HA Clusters > Clusters tab. Once all the actions succeed you can access virtual IP on port 5005 to see the cluster's status. 

    From now on, use port 10080 on the master CP to access the GUI until the last step.

Database Cluster



  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Database cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.
  4. Run the following command:

    crm resource unmanage onapp-db
  5. Run the following command on the slave nodes:

    service mysql stop
  6. Increase max_connections and innodb_buffer_pool_size in my.cnf on all three Control Panels.

    /etc/my.cnf
    
    max_connections=2000
    innodb_buffer_pool_size=3G

    Where:

    max_connections - Max allowed connections

    innodb_buffer_pool_size - the buffer pool is where data and indexes are cached: having it as large as possible will ensure you use memory and not disks for most read operations. Typical values are 2-4GB (8GB RAM), 10-15GB (32GB RAM), 20-25GB (128GB RAM)

  7. Run the following command on the master node:

    service mysql restart-bootstrap
  8. Run the following command on the slave nodes sequentially so that the slave nodes restart one by one:

    service mysql restart

    Wait until the following message appears on the screen:

    *.... SUCCESS!*

    The following message should appear in /var/log/mysqld.log:

    *WSREP: Member 2.0 (onapp_db_xxxxxxx) synced with group.*
  9. On all Control Panel servers, edit the /onapp/interface/config/database.yml file and set the port parameter to the same value as the frontend_db_port parameter in Dashboard > Settings > HA Clusters > Clusters > Load Balancer cluster > Options:

    host: 127.0.0.1
    port: 3406
    #socket: '/var/lib/mysql/mysql.sock'
  10. Restart the onapp and httpd services on all Control Panel servers:

     
    # service onapp restart && service httpd restart
  11. Run the following command on the master node:

    crm resource manage onapp-db


Message Queue Cluster



  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Message Queue cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.

  4. Edit /onapp/interface/config/on_app.yml and add the rabbitmq_port option with the same value as frontend_rabbitmq_port at Control Panel > Settings > HA Clusters > Clusters > Load Balancer cluster > Options, as in the sketch below.
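A minimal sketch of the resulting on_app.yml fragment, assuming the default load-balanced AMQP port 5772 from the list of default ports above (use the actual frontend_rabbitmq_port value from your Load Balancer cluster options):

    rabbitmq_port: 5772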


Redis Cluster




  • At Control Panel > Settings > HA Clusters > Clusters tab, activate the Redis cluster by clicking the Actions button next to the required cluster and selecting Recreate.

  • Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.

  • Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.

  • On the master node, edit the /onapp/interface/config/redis.yml file: set the :password: field to the same value as the requirepass parameter in /etc/redis.conf and set the port (you can also take these values from the GUI - the master_auth variable at Control Panel > Settings > HA Clusters > Clusters > Redis cluster > Options and frontend_redis_port at Control Panel > Settings > HA Clusters > Clusters > Load Balancer cluster > Options).

  • Remove the following line from redis.yml:

    :path: "/var/run/redis/redis.sock"


  • Edit the redis.yml file so that it looks the following way:

    production:
      :host: 127.0.0.1
      :port:  <frontend_redis_port>
      :password: <password>
  • Run the following commands on all nodes:

    # crm resource restart onapp-redis-group-cluster
    # service onapp restart && service httpd restart

Daemon Cluster



  1. At Control Panel > Settings > HA Clusters > Clusters tab, activate the Daemon cluster by clicking the Actions button next to the required cluster and selecting Recreate
  2. Save changes by clicking the Apply Changes button at Control Panel > Settings > HA Clusters.
  3. Check that the status of the activated cluster has changed to Stable. The clusters' statuses are displayed at Control Panel > Settings > HA Clusters > Clusters tab.
  4. Now you can re-check the logs for errors, check Admin Tools in the GUI (to be sure that all services are running), disable Maintenance mode and let users use the virtual IP to access the CP.
  5. Run the following command on the master node:

    # crm resource manage onapp-frontend-httpd-group-cluster

Check the "Manage_X_Cluster" log outputs as well; after any cluster is activated, the relevant resource should appear in crm_mon.

Backup servers and Compute Resources Configuration


Since each CP needs to have SSH access to the compute resources in the cloud, you have to update the authorized_keys file by running the following command on each Control Panel server (except the master):

bash# ssh-copy-id -i /home/onapp/.ssh/id_rsa.pub root@<HV_HOST_IP>


Do this for each compute resource and backup server.


Run the following command to add all IP addresses to the HOST variable in /etc/onapp.conf and to reconfigure all needed services:

# /onapp/onapp-hv-install/onapp-hv-config.sh -h 10.0.24.63,10.0.24.64,10.0.24.65