RabbitMQ Configuration for Accelerator

Below you can find instructions on how to configure RabbitMQ for CDN Accelerator. 

Compute Resources and Control Panel must use the same rabbitmq-server. For instructions on how to install RabbitMQ server, refer to the RabbitMQ Server Installation document.

Upgrade Control Panel Server



To upgrade your Control Panel server:

  1. Download and install the latest OnApp YUM repository file:

    #> rpm -Uvh http://rpm.repo.onapp.com/repo/onapp-repo.noarch.rpm
  2. Upgrade OnApp Control Panel installer package:

    #> yum update onapp-cp-install


  3. Update your server OS components (if required):

    # /onapp/onapp-cp-install/onapp-cp-install.sh -y
  4. Custom Control Panel configuration. Custom values must be set before the installer script runs.

     Edit the /onapp/onapp-cp.conf file to set Control Panel custom values

    Template server URL

    TEMPLATE_SERVER_URL='http://templates-manager.onapp.com';

    # List of IPs (separated with commas) for SNMP to trap

    SNMP_TRAP_IPS=

    # OnApp Control Panel custom version

    ONAPP_VERSION=""

    # OnApp MySQL/MariaDB connection data (database.yml)

    ONAPP_CONN_WAIT_TIMEOUT=15
    ONAPP_CONN_POOL=30
    ONAPP_CONN_RECONNECT='true'
    ONAPP_CONN_ENCODING='utf8'
    ONAPP_CONN_SOCKET='/var/lib/mysql/mysql.sock'

    # MySQL/MariaDB server configuration data (in case of local server)

    MYSQL_WAIT_TIMEOUT=604800
    MYSQL_MAX_CONNECTIONS=500
    MYSQL_PORT=3306

    # Use MariaDB instead of MySQL as OnApp database server

    WITH_MARIADB=0

    # Configure the database server relative amount of available RAM

    TUNE_DB_SERVER=0

    # The number of C data structures that can be allocated before triggering the garbage collector. It defaults to 8 million

    RUBY_GC_MALLOC_LIMIT=16000000

    # sysctl.conf net.core.somaxconn value

    NET_CORE_SOMAXCONN=2048

    # The root of OnApp database dump directory (on the Control Panel box)

    ONAPP_DB_DUMP_ROOT=""

    # Remote server (to store database dumps): IP, user, path, OpenSSH connection options, and number of dumps to keep

    DB_DUMP_SERVER=""
    DB_DUMP_USER="root"
    DB_DUMP_SERVER_ROOT="/onapp/backups"
    DB_DUMP_SERVER_SSH_OPT="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o PasswordAuthentication=no"
    KEEP_DUMPS=168
    DB_DUMP_CRON='40 * * * *'

    # Enable monit - tool for managing and monitoring Unix systems

    ENABLE_MONIT=1


    # vi /onapp/onapp-cp.conf
  5. Run Control Panel installer:

    #> /onapp/onapp-cp-install/onapp-cp-install.sh

    See the installer options below for details.

     The full list of Control Panel installer options:

    Usage:

    /onapp/onapp-cp-install/onapp-cp-install.sh [-c CONFIG_FILE] [-m MYSQL_HOST] [-p MYSQL_PASSWD] [-d MYSQL_DB] [-u MYSQL_USER] [-U ADMIN_LOGIN] [-P ADMIN_PASSWD] [-F ADMIN_FIRSTNAME] [-L ADMIN_LASTNAME] [-E ADMIN_EMAIL] [-v ONAPP_VERSION] [-i SNMP_TRAP_IPS] [--redis-host=REDIS_HOST] [--redis-passwd[=REDIS_PASSWD]] [--redis-port=REDIS_PORT] [--redis-sock=REDIS_PATH] [-a] [-y] [-D] [-h]


    Where:


    MYSQL_* options are useful if MySQL is already installed and configured.
    -m MYSQL_HOST       MySQL host
    -p MYSQL_PASSWD     MySQL password
    -d MYSQL_DB         OnApp MySQL database name
    -u MYSQL_USER       MySQL user


    REDIS_* options are useful if Redis Server is already installed and configured.
    --redis-host=REDIS_HOST        IP address/FQDN where Redis Server runs.
                                   The Redis Server will be installed and configured on the current box if localhost/127.0.0.1 or the box's public IP address (listed in SNMP_TRAP_IPS) is specified.
                                   A local Redis Server is also served on the unix socket '/tmp/redis.sock'.
                                   Default value is 127.0.0.1.
    --redis-port=REDIS_PORT        Redis Server listen port.
                                   Defaults are:
                                   0 - if local server
                                   6379 - if remote server
    --redis-passwd[=REDIS_PASSWD]  Redis Server password for authentication.
                                   A random password is generated if the option's argument isn't specified.
                                   By default, no password is used for a local Redis Server.
    --redis-sock=REDIS_PATH        Path to the Redis Server's socket. Used for a local server only.
                                   Default is /tmp/redis.sock.


    ADMIN_* options are used to configure the OnApp Control Panel administrator data.
    Please note that these options are for NEW INSTALLS only, not for upgrades.
    -U ADMIN_LOGIN       CP administrator login
    -P ADMIN_PASSWD      CP administrator password
    -F ADMIN_FIRSTNAME   CP administrator first name
    -L ADMIN_LASTNAME    CP administrator last name
    -E ADMIN_EMAIL       CP administrator e-mail


    -v ONAPP_VERSION     Install custom OnApp CP version
    -i SNMP_TRAP_IPS     IP addresses, separated with commas, for SNMP to trap
    -c CONFIG_FILE       Custom installer configuration file. Otherwise, the preinstalled one is used.
    -y                   Update OS packages (except those provided by OnApp) on the box with 'yum update'.
    -a                   Do not be interactive; proceed with automatic installation.
    -D                   Do not make a database dump, and make sure it is disabled in the cron and not currently running.
    -h                   Print this info

    You may wish to reboot your Control Panel server to take advantage of a new kernel, if one is installed. This is not required immediately as part of the upgrade process.


    Perform the following steps if you plan to deploy Accelerator. Otherwise, skip them.


  6. Run the following command:

    cd /onapp/interface
  7. If you plan to configure an Accelerator, run the following commands:

    • For all compute resources:

      rake hypervisor:messaging:configure
    • For certain compute resources only:

      rake hypervisor:messaging:configure['11.0.50.111 11.0.50.112']

      To perform the configuration for a number of compute resources, separate their IPs with a space.


    The command above runs on compute resources that are online. If some compute resources are offline, you should run the command again when they are online.

    The rabbitmq_host parameter in the on_app.yml file should contain the real IP address of the server with RabbitMQ installed. The rabbitmq_host parameter should not be set to 'localhost' or '127.0.0.1'.

    The server with RabbitMQ installed should be available from the compute resources.


    To configure the Accelerator manually, perform the following steps:

    Specify the user name and password for rabbitmq-server:

    rabbitmqctl add_user username 'userpass'


    Set permissions for this user:

    rabbitmqctl set_permissions -p '/' username ".*" ".*" ".*"
  8. Restart OnApp service:


    This step can be omitted if you are only configuring the RabbitMQ server and do not intend to update the Control Panel server.

    service onapp restart
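The manual RabbitMQ setup in step 7 can be wrapped in a small review script. This is a non-authoritative sketch: check_rabbitmq_host and print_rabbitmq_setup are hypothetical helper names (not part of OnApp tooling), and the loopback rule mirrors the note above that rabbitmq_host must not be 'localhost' or '127.0.0.1'.

```shell
#!/bin/sh
# Sketch only: helper names are hypothetical, not part of OnApp tooling.

# Reject loopback values for rabbitmq_host, per the note above.
check_rabbitmq_host() {
  case "$1" in
    localhost|127.0.0.1) echo "invalid"; return 1 ;;
    *) echo "ok" ;;
  esac
}

# Print (rather than run) the rabbitmqctl commands from the manual steps,
# so they can be reviewed before being executed on the RabbitMQ host.
print_rabbitmq_setup() {
  printf "rabbitmqctl add_user %s '%s'\n" "$1" "$2"
  printf "rabbitmqctl set_permissions -p '/' %s \".*\" \".*\" \".*\"\n" "$1"
}

check_rabbitmq_host "10.0.50.4"    # prints: ok
print_rabbitmq_setup accelerator-example 'userpass'
```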


Upgrade Static Compute Resources




  1. First, upgrade your static compute resources.
  2. Make sure your compute resource is visible and online in the Control Panel.

  3. Download the OnApp repository:

    #> rpm -Uvh http://rpm.repo.onapp.com/repo/onapp-repo.noarch.rpm
  4. Depending on the compute resource type run the following:

    • For XEN compute resources


      /onapp/onapp-hv-install/onapp-hv-xen-install.sh -y
      /onapp/onapp-hv-install/onapp-hv-xen-install.sh
    • For KVM compute resources
      /onapp/onapp-hv-install/onapp-hv-kvm-install.sh -y
      /onapp/onapp-hv-install/onapp-hv-kvm-install.sh

  5. Reboot static compute resources.

    If you have the latest OnApp update installed, there is no need to reboot compute resources. In this case just run the service onapp-messaging start command.

    If you do not have the /home/mq/onapp/messaging/credentials.yml file on your compute resources, run the following command on the CP server:

    • For all compute resources:

      rake hypervisor:messaging:configure
    • For certain compute resources only:

      rake hypervisor:messaging:configure['11.0.50.111 11.0.50.112']

      To perform the configuration for a number of compute resources, separate their IPs with a space.

To configure the Accelerator manually, perform the following steps:

  1. Copy file:

    cp /home/mq/onapp/messaging/credentials{_example,}.yml
  2. Open vi /home/mq/onapp/messaging/credentials.yml and check the following details:

    ---
    host: 10.0.50.4  # RABBITMQ SERVER IP/FQDN
    port: 5672  	# RABBITMQ CONNECTION PORT(default: 5672)
    vhost: '/'  	
    user: accelerator-example # RABBITMQ USER NAME
    password: 'e{y31?s8l' #RABBITMQ ACCESS PASSWORD
    queue: 'hv-10.0.50.102' # hv-[IP Address of Compute Resource]
    exchange:
      name: 'acceleration'
      type: 'direct'
      durable: True 
  3. Change owner:

    chown -R mq:mq /home/mq
  4. Run the following commands:

    service onapp-messaging start
    monit monitor onapp-messaging
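Before starting onapp-messaging, it can be worth checking that the edited credentials.yml contains every key shown in the example above. This is a hedged sketch: validate_credentials is a hypothetical helper, not OnApp tooling, and it only checks key presence, not values.

```shell
#!/bin/sh
# Sketch only: validate_credentials is a hypothetical helper. It checks that
# the top-level keys from the example credentials.yml above are all present.
validate_credentials() {
  for key in host port vhost user password queue; do
    grep -q "^${key}:" "$1" || { echo "missing: ${key}"; return 1; }
  done
  echo "ok"
}

# usage:
#   validate_credentials /home/mq/onapp/messaging/credentials.yml
```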

Upgrade CloudBoot Compute Resources



Follow the procedure below to upgrade the CloudBoot packages:
 

  1. Upgrade CloudBoot Packages. Upgrade the repo:

    #> rpm -Uvh http://rpm.repo.onapp.com/repo/onapp-repo.noarch.rpm
    
  2. Upgrade the packages:

    #> yum update onapp-store-install
  3. Run the script:

    #> /onapp/onapp-store-install/onapp-store-install.sh

    When run in the interactive mode, enter the required information.



Depending on the infrastructure, scale, and needs of your cloud, we suggest the following methods of upgrading CloudBoot compute resources:

Simple Reboot. This method is technically the simplest and ensures that all tools are updated. However, it results in some limited downtime (its duration depends on how many virtual servers are running on each compute resource).
Migrate and Reboot. This method involves migrating all virtual servers off each CloudBoot compute resource in turn. Each compute resource can then be safely rebooted, picking up the upgraded Integrated Storage and CloudBoot packages. Virtual servers that do not support hot migration will have to be stopped.
Live Upgrade. This method upgrades the Integrated Storage components but does not upgrade the CloudBoot image.

If you have applied any custom configuration to your CloudBoot servers, it is recommended to recheck that this customization does not break the new CloudBoot image version. To do so, reboot a compute resource and run the Storage Health Check and Network Health Check. Make sure that the vdisks hosted on a compute resource are redundant and healthy before rebooting a CloudBoot compute resource.

Simple Reboot

Follow the below procedure to upgrade the CloudBoot compute resources with reboot:
 

1. Upgrade CloudBoot Packages.
2. When the CloudBoot packages upgrade is complete, stop all virtual servers which reside on the CloudBoot compute resources.

3. Reboot all CloudBoot compute resources.

Once the compute resources are booted, the upgrade is complete. Before starting all virtual servers, please ensure that the diagnostics page does not report any issues. If there is an issue, click the Repair button to resolve it, then continue with starting the virtual servers.


Note that virtual servers cannot be stopped simultaneously, but must be stopped in sequence. This can result in considerable downtime if there are a large number of virtual servers.



Migrate and Reboot

Use this procedure if you prefer to migrate all virtual servers to another compute resource and conduct an overall upgrade of your CloudBoot and Integrated Storage. Virtual servers that do not support hot migration will have to be stopped.

Once you have upgraded the CloudBoot packages, you have to reboot your CloudBoot compute resources to update them.

To do so:

  1.  Run the following command from the Control Panel server terminal to display the list of compute resources with their IP addresses. Make a note of the list of IPs:

    CP_host#> liveUpdate listHVs 

    This command will also show whether compute resources are eligible for live upgrade.

    If the command liveUpdate is not available then it may be located in the sbin directory instead (cd /usr/local/sbin).

  2. Run the following command for every compute resource:

    CP_host#> liveUpdate updateToolstack <HV IP Addr> 

    Once all the toolstacks are updated run the following command for every compute resource: 

    CP_host#> liveUpdate refreshControllers <HV IP Addr>

    Wait several minutes for all degraded disks to reach a synchronized state. The synchronization takes approximately three minutes for each compute resource.

    After each controller restart, check for any issues on the backup server (or on one Compute resource from each zone):

    1. Log on via SSH to the backup server (or Compute resource).
    2. Run getdegradednodes from the SSH console.
    3. Run getdegradedvdisks from the SSH console.

  3. Migrate all the virtual servers from the CloudBoot compute resource to another compute resource. Follow the instructions described in the Migrate Virtual Server section of the Admin guide to migrate virtual servers.

  4. After that, go to your Control Panel Settings menu.

  5. Click the Appliances icon.

  6. Click the label of the CloudBoot compute resource you have migrated all VSs from.

  7. On the compute resource details screen, click the Actions button, then click Reboot Compute resource.

    Rebooting a compute resource assigned to a data store with a single replica (single-replica compute resource) or degraded virtual disks may result in data loss.


  8. A new screen will open asking for confirmation (via two check boxes) before reboot:

    • Stop all virtual servers that cannot be migrated to another compute resource? Check this box if you want VSs that cannot be migrated to be powered off. When a compute resource is scheduled for a reboot, OnApp will first attempt to hot migrate all VSs it hosts. If hot migration is not possible for a VS, OnApp will attempt to cold migrate that VS. With this box checked, if cold migration fails, the VS will be stopped so the reboot may proceed. If you don't check this box, OnApp will attempt to hot and then cold migrate all VSs hosted by the compute resource being rebooted – but will stop the migration process if any VS cannot be migrated.
    • Are you sure you want to reboot this compute resource? A simple confirmation to confirm that you want the compute resource to reboot.

      Before the reboot, please ensure that all vdisks are fully synced and redundant. If some of them are not fully synced, the virtual server that owns a degraded (or non-redundant) vdisk can lose access to it. This can manifest as I/O errors during writes to or reads from the vdisk inside the virtual server.

  9. When you're certain you want to proceed with the reboot, click the Reboot button.

  10. Once the compute resource is booted, repair the disks that were degraded during the reboot.

    1. Make sure no disks are out of sync. To do so, check the Diagnostics page in CP at Dashboard > Integrated Storage > Compute zone label > Diagnostics. Alternatively, log in to a compute resource and run the command below:

      HV_host#> getdegradedvdisks
    2. Repair all the degraded disks before proceeding to the upgrade process. To do so, log in to your CP and go to the Integrated Storage > Compute zone label > Diagnostics page. Alternatively, run one of the following commands:

      HV_host#> onappstore repair uuid=
      HV_host#> parallelrepairvdisks
  11. Repeat these steps for all CloudBoot compute resources in your cloud.
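The "wait several minutes for all degraded disks" guidance in step 2 can be automated with a bounded polling loop. This is a hedged sketch: wait_for_empty is a hypothetical helper, and getdegradedvdisks is the storage CLI referenced in the steps above (assumed to print nothing when no vdisks are degraded).

```shell
#!/bin/sh
# Sketch only: wait_for_empty is a hypothetical helper. It polls a command
# until its output is empty (e.g. getdegradedvdisks reporting no degraded
# vdisks), giving up after a bounded number of attempts.
wait_for_empty() {
  cmd="$1"; tries="${2:-20}"; delay="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ -z "$($cmd)" ] && { echo "synced"; return 0; }
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timeout"; return 1
}

# usage on a compute resource, after refreshControllers:
#   wait_for_empty getdegradedvdisks 20 30
```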


Live Upgrade


Live Upgrade is only applicable if your cloud is running the latest 4.3 CloudBoot RPM.

  • Live Upgrade with passthrough is currently unsupported. Passthrough to storage means that a network interface is added to the Storage Controller Server without a bond, and the Storage Controller Server has complete control over this interface.
  • Power off all Windows virtual machines and virtual backup servers before starting the live upgrade.

  • If your current Storage package is 4.0, Windows virtual servers can remain running.

  • During the CloudBoot compute resource live upgrade, only the control stack for managing integrated storage is upgraded. Other changes come into effect after the compute resource is next rebooted. Due to this, hot migration may fail between a compute resource that has already been rebooted and one that has not.
  • Do not make any changes to the cloud during the upgrade process!
  • Any offline CloudBoot compute resources should be removed from the CP server before running the live upgrade, as the scripts expect to be able to reach all compute resources during these steps.
  • Please, consult OnApp IS Upgrade Paths to learn the minimum Integrated Storage version required for the current update to be performed in LiveUpgrade mode.


Use this procedure to upgrade without rebooting your servers:

  1. Make sure no disks are out of sync. To do so, check the Diagnostics page in CP at Dashboard > Integrated Storage > Compute zone label > Diagnostics. Alternatively, log in to a compute resource and run the command below:

    HV_host#> getdegradedvdisks
  2. Repair all the degraded disks before proceeding to the upgrade process. To do so, log in to your CP and go to the Integrated Storage > Compute zone label > Diagnostics page. Alternatively, run one of the following commands:

    HV_host#> onappstore repair uuid=
    HV_host#> parallelrepairvdisks
  3. Run the following command from the CP server to stop the OnApp service:

    CP_host#> service onapp stop
  4. Stop the Apache server:

    CP_host#> service httpd stop
  5. Make sure to update CloudBoot packages before proceeding to the following steps.

  6. Run the following command from the Control Panel server terminal to display the list of compute resources with their IP addresses. Make a note of the list of IPs:

    CP_host#> liveUpdate listHVs 

    This command will also show whether compute resources are eligible for live upgrade.

    If the command liveUpdate is not available then it may be located in the sbin directory instead (cd /usr/local/sbin).

  7.  Run the following command for every compute resource:

    CP_host#> liveUpdate updateToolstack <HV IP Addr> 

    Once all the toolstacks are updated run the following command for every compute resource: 

    CP_host#> liveUpdate refreshControllers <HV IP Addr>

    Wait several minutes for all degraded disks to reach a synchronized state. The synchronization takes approximately three minutes for each compute resource.

    After each controller restart, check for any issues on the backup server (or on one Compute resource from each zone):

    1. Log on via SSH to the backup server (or Compute resource).
    2. Run getdegradednodes from the SSH console.
    3. Run getdegradedvdisks from the SSH console.

  8. Restart the storage controllers. This step can be performed later, at a more suitable time.
    Run the following command for each compute resource in turn:

       CP_host#> liveUpdate restartControllers <HV IP Addr> 

    Please make sure you restart all controllers and do not leave your cloud in a partially updated state for too long. Note that while operating in the live-updated state (i.e. with the toolstacks updated but before you have performed the controller restart) you cannot use disk hot plug.

    After each controller restart, check for any issues on the backup server (or on one compute resource from each zone):
    1. Log on via SSH to the backup server (or compute resource).
    2. Run getdegradednodes from the SSH console.
    3. Run getdegradedvdisks from the SSH console.
    If there are any issues seen please rectify them before continuing with the next controller restart.

  9. Make sure that the package versions are upgraded by running the following command on each compute resource:

    HV_host#> cat /onappstore/package-version.txt | grep Source
  10. Start the Apache server:

    CP_host#> service httpd start
  11. Start the OnApp service:

    CP_host#> service onapp start
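The package-version check in step 9 can be complemented by comparing version strings, to confirm the upgrade actually moved the version forward. This is a hedged sketch: version_ge is a hypothetical helper (not OnApp tooling) and relies on GNU `sort -V` for version ordering.

```shell
#!/bin/sh
# Sketch only: version_ge is a hypothetical helper. Returns success (0) when
# the first dotted version is greater than or equal to the second.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

# usage (version numbers here are illustrative, not real releases):
#   version_ge 4.3.1 4.3.0 && echo "upgraded"
```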

Configuration for Accelerator

Perform the following steps for your CloudBoot compute resources if you plan to deploy Accelerator. These steps are to be performed on each of the compute resources.

  1. Run the following command on the CP server:

    • For all compute resources:

      rake hypervisor:messaging:configure
    • For certain compute resources only:

      rake hypervisor:messaging:configure['11.0.50.111 11.0.50.112']

      To perform the configuration for a number of compute resources, separate their IPs with a space.

  2. The command above should be run after every reboot. However, you can avoid having to run it repeatedly by writing the following information (with your own parameters) to /home/mq/onapp/messaging/credentials.yml:

     echo "---
    host: 10.0.50.4  # RABBITMQ SERVER IP/FQDN
    port: 5672      # RABBITMQ CONNECTION PORT(default: 5672)
    vhost: '/'
    user: accelerator-example # RABBITMQ USER NAME
    password: 'e{y31?s8l' #RABBITMQ ACCESS PASSWORD
    queue: 'hv-10.0.50.102' # hv-[IP Address of Compute Resource]
    exchange:
      name: 'acceleration'
      type: 'direct'
      durable: True" > /home/mq/onapp/messaging/credentials.yml
    chown -R mq:mq /home/mq
    service onapp-messaging restart

To configure the Accelerator manually, perform the following steps:

  1. Copy file:

    cp /home/mq/onapp/messaging/credentials{_example,}.yml
  2. Open vi /home/mq/onapp/messaging/credentials.yml and check the following details:

    ---
    host: 10.0.50.4  # RABBITMQ SERVER IP/FQDN
    port: 5672  	# RABBITMQ CONNECTION PORT(default: 5672)
    vhost: '/'  	
    user: accelerator-example # RABBITMQ USER NAME
    password: 'e{y31?s8l' #RABBITMQ ACCESS PASSWORD
    queue: 'hv-10.0.50.102' # hv-[IP Address of Compute Resource]
    exchange:
      name: 'acceleration'
      type: 'direct'
      durable: True 
  3. Change owner:

    chown -R mq:mq /home/mq
  4. Run the following:

    service onapp-messaging start

    Note that steps 1-4 above should be performed after every reboot of a CloudBoot compute resource. Alternatively, you can run the following commands (with your own parameters):

    cp /home/mq/onapp/messaging/credentials{_example,}.yml
    echo "---
    host: 10.0.50.4  # RABBITMQ SERVER IP/FQDN
    port: 5672      # RABBITMQ CONNECTION PORT(default: 5672)
    vhost: '/'
    user: accelerator-example # RABBITMQ USER NAME
    password: 'e{y31?s8l' #RABBITMQ ACCESS PASSWORD
    queue: 'hv-10.0.50.102' # hv-[IP Address of Compute Resource]
    exchange:
      name: 'acceleration'
      type: 'direct'
      durable: True" > /home/mq/onapp/messaging/credentials.yml
    chown -R mq:mq /home/mq
    service onapp-messaging restart
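The cp command in step 1 relies on bash brace expansion: the single pattern expands to the source (credentials_example.yml) and destination (credentials.yml) paths. Echoing it through bash shows the expansion:

```shell
# Demonstrating the brace expansion used by the cp command above (bash-only;
# POSIX sh does not expand braces).
bash -c 'echo cp /home/mq/onapp/messaging/credentials{_example,}.yml'
# prints: cp /home/mq/onapp/messaging/credentials_example.yml /home/mq/onapp/messaging/credentials.yml
```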