OnApp 4.3.0-35 Update

Release Notes

  * Improved billing statistics recalculation for VS.
  * Fixed the issue with incorrect billing for acceleration. (Affects: 4.3)
  * Fixed the incorrect names of network types on the Edit Org Network page. (Affects: 4.3)
  * Fixed the issue when processing of the Control Panel settings failed under load. (Affects: 4.2, 4.3)
  * Fixed the issue when rake vm:generate_hourly_stats did not work properly if the execution period was too long. (Affects: 3.5, 4.0, 4.1, 4.2, 4.3)
  * Fixed the issue with adding a new CloudBoot compute resource. (Affects: 4.3)
  * Moved "save_vm_billing_stats" to a separate queue. (Affects: 4.3)
  * Fixed the error which occurred during the onapp service restart. (Affects: 4.3)
  * Fixed the issue when a lock timeout caused unpredictable system behaviour. (Affects: 4.2, 4.3)

Patch Installation

The instructions below apply only if you are already running OnApp 4.3.0. If you are upgrading from OnApp 4.2.x, run the full upgrade procedure instead; for more information, refer to the Get Started guide.

To apply the patch to a Control Panel running version 4.3.0:

  1. Upgrade OnApp Control Panel installer package:

    # yum update onapp-cp-install
  2. Run the Control Panel installer:

    # /onapp/onapp-cp-install/onapp-cp-install.sh
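The two steps above can be wrapped in a small guard script that refuses to run the patch commands unless the installed version is actually 4.3.0.x. This is an illustrative sketch, not part of the official procedure: the package name "onapp-cp-install" comes from the steps above, but the rpm-based version query is an assumption about the Control Panel host.

```shell
#!/bin/sh
# Hedged sketch: only report the patch as safe when the installed
# Control Panel package version starts with the required prefix.
required_prefix="4.3.0"

version_matches() {
  # Succeeds when the installed version ($2) starts with the prefix ($1).
  case "$2" in
    "$1"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Assumption: an RPM-based Control Panel host where the installer package
# can be queried with rpm; falls back to "unknown" otherwise.
current=$(rpm -q --qf '%{VERSION}' onapp-cp-install 2>/dev/null || echo "unknown")

if version_matches "$required_prefix" "$current"; then
  echo "Version $current detected; safe to run the two patch commands above."
else
  echo "Version '$current' is not ${required_prefix}.x; run the full upgrade procedure instead."
fi
```

Running the real `yum update` and installer from such a script is possible, but keeping the sketch read-only avoids accidental upgrades on the wrong version.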

Patch Installation for Clouds with High Availability

If you are a High Availability customer, we recommend contacting support for help with the procedure described below. Be aware that performing the configuration below incorrectly may damage your cloud.

To apply the patch for clouds with High Availability enabled:
  1. Switch the cloud to maintenance mode at Control Panel > Sysadmin Tools > Control Panel Maintenance.

  2. Upgrade OnApp Control Panel installer package:

    # yum update onapp-cp-install
  3. Update your server OS components (if required):

    # /onapp/onapp-cp-install/onapp-cp-install.sh -y
  4. Run the Control Panel installer:

    # /onapp/onapp-cp-install/onapp-cp-install.sh

  5. Switch all nodes back online one by one by running the following command:

    # crm configure property maintenance-mode=false
  6. Enable file synchronization on all nodes by running the following command on one of the nodes:

    # crm resource start lsyncd-cluster
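When bringing nodes back one by one, it helps to wait until the cluster status actually reflects the change before moving on. The polling helper below is a generic sketch; the `crm_mon -1` status command and the "Online" pattern in the usage comment are assumptions about the Pacemaker tooling on the cluster, not part of the documented procedure.

```shell
#!/bin/sh
# Hedged sketch: poll a status command until its output matches a pattern,
# or give up after a fixed number of attempts.
wait_for_pattern() {
  # Usage: wait_for_pattern <attempts> <pattern> <command...>
  attempts="$1"; shift
  pattern="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" 2>/dev/null | grep -q "$pattern"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example usage (assumed Pacemaker status command and pattern):
#   wait_for_pattern 30 "Online" crm_mon -1
```

The helper returns 0 as soon as the pattern appears, so a wrapper loop over the nodes can safely proceed to the next one.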