• Live Upgrade with passthrough is currently unsupported.
  • Power off all Windows virtual machines and virtual backup servers before starting the live upgrade.
  • During the CloudBoot hypervisor live upgrade, only the control stack for managing integrated storage is upgraded. Other changes come into effect only after the hypervisor is rebooted. Because of this, hot migration may fail between a hypervisor that has already been rebooted and a hypervisor that still requires a reboot to pick up the latest changes.
  • Do not make any changes to the cloud during the upgrade process!
  • Remove any offline CloudBoot hypervisors from the Control Panel server before running the live upgrade, as the scripts expect to be able to communicate with all hypervisors during these steps.
  • CloudBoot hypervisors must be installed and running as CloudBoot hypervisors.
  1. Make sure no disks are out of sync. To do so, log in to a hypervisor and run the following commands:

    bash#> cd /usr/pythoncontroller/
    bash#> ./getdegradedvdisks
  2. Repair all degraded disks before proceeding with the upgrade:

    bash#> ./repairvdisks
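    To confirm the repair completed, you can re-run the check from step 1 and repeat ./repairvdisks until no degraded vdisks are reported:

    bash#> ./getdegradedvdisks   # expect no degraded vdisks before continuing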
  3. Run the following command from the CP server to stop the OnApp service:

    service onapp stop
  4. Stop the Apache server:

    service httpd stop
  5. Download and install the latest OnApp YUM repository file:

    bash#> rpm -Uvh http://rpm.repo.onapp.com/repo/onapp-repo-3.2.noarch.rpm
  6. Install the latest CloudBoot dependencies:

    bash#> yum update onapp-store-install
    bash#> /onapp/onapp-store-install/onapp-store-install.sh
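    If you want to confirm that the repository and installer packages are in place before continuing, standard RPM queries can be used (the package names are taken from the commands above):

    bash#> rpm -q onapp-repo
    bash#> rpm -q onapp-store-install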
  7. Run the following command from the Control Panel server terminal to display the list of hypervisors with their IP addresses:

    liveUpdate listHVs
    

    This command will also show whether hypervisors are eligible for live upgrade.

    If the liveUpdate command is not available, it may be located in /usr/local/sbin instead.
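    In that case, either call it by its full path or add that directory to your PATH for the current session; this is a generic shell workaround rather than an OnApp-specific step:

    /usr/local/sbin/liveUpdate listHVs
    export PATH=$PATH:/usr/local/sbin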

  8. Ensure the line beginning filter = in /etc/lvm/lvm.conf on each Hypervisor has the following syntax:

    filter = [ "r|/dev/nbd|","r|/dev/mapper|","r|/dev/dm-|" ]
  9. If a change was required, run lvmdiskscan on each hypervisor to apply it.
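    A quick way to verify the filter line before running lvmdiskscan is a plain grep (ignore any commented-out example lines in the output):

    grep -n 'filter' /etc/lvm/lvm.conf
    lvmdiskscan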

  10. Run the following commands from the Control Panel server terminal for each hypervisor:

    liveUpdate updateToolstack <HV IP Addr>
    

    The synchronization will take approximately three minutes for each hypervisor.
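    If you have many hypervisors, the same command can be wrapped in a simple shell loop on the Control Panel server; the IP addresses below are placeholders and should be replaced with those reported by liveUpdate listHVs:

    for HV in 192.168.10.201 192.168.10.202; do liveUpdate updateToolstack $HV; done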

     

  11. Run the following command for every hypervisor in turn:

    liveUpdate restartControllers <HV IP Addr>
    

    At this stage, an error message about degraded disks may be displayed. VDisks should still be unpaused, but may be degraded. Check the number of degraded disks by repeating step 1 above after restarting the controller.

    Also check for any nodes in a state other than ACTIVE by running 'onappstore nodes' from the Backup Server.
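    For example, run the following on the Backup Server; the second line is only a convenience filter and assumes ACTIVE appears in the state column of the output:

    onappstore nodes
    onappstore nodes | grep -v ACTIVE   # any remaining node lines need attention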



  12. Make sure that the package versions are upgraded by running the following command on each HV:

    cat /onappstore/package-version.txt | grep Source
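    If you prefer to collect the versions from the Control Panel server in a single pass, a small SSH loop can be used; the IP addresses are placeholders and this assumes key-based root SSH access to the hypervisors:

    for HV in 192.168.10.201 192.168.10.202; do echo "== $HV =="; ssh root@$HV 'grep Source /onappstore/package-version.txt'; done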
  13. Check that the storage controllers have been started cleanly by running the following command on each HV:

    ifconfig onappstoresan

    Then log in to the storage nodes and check their uptime:

    uptime
  14. Check that the disk hotplug slots came up fine on each hypervisor:

    /usr/pythoncontroller/diskhotplug list
  15. Start the Apache server:

    service httpd start
  16. Start the OnApp service:

    service onapp start
Please contact support if hypervisors are displayed as offline or report I/O errors during the upgrade.


If you do not have a dedicated backup server, you must mount your Template and Backup repositories on the hypervisor for VS provisioning and backups to work. For example, to share them from your Control Panel server:

Add to /etc/exports on the Control Panel server:

/onapp/templates 192.168.10.0/24(rw,no_root_squash)

/onapp/backups 192.168.10.0/24(rw,no_root_squash)
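After editing /etc/exports, re-export the shares so the new entries take effect; this assumes the NFS server is already installed and running on the Control Panel server:

exportfs -ra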

Add the following commands to the Custom Config of the hypervisor and run them manually on the command line (in this example the repositories are mounted from 192.168.10.101):

mkdir -p /onapp/backups && mount -t nfs 192.168.10.101:/onapp/backups /onapp/backups

mkdir -p /onapp/templates && mount -t nfs 192.168.10.101:/onapp/templates /onapp/templates
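To confirm that both repositories are mounted on the hypervisor, a standard check such as the following is sufficient:

mount | grep /onapp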


 
