OnApp Storage is a distributed block storage system that uses existing commodity cloud hardware to present a reliable, scalable storage system as an alternative to traditional SANs. This section provides a general overview of the OnApp storage architecture.
At the lowest level, the disk drives are visible to back-end instances that handle network communication with the front end, either locally or via the OnApp Control Panel.
A virtual disk, or vDisk, is part of a data store. Each vDisk replica has an individual handler that connects it to the front end; the back end also handles access to the storage drives. Once a vDisk has been successfully created, it becomes available through the device mapper as a block-based drive. Figure 1 shows the integrated storage architecture map, with more implementation details shown in Figure 2.
Figure 1. Integrated Storage architecture
Figure 2. Detailed view of the Integrated Storage architecture
Drives that are connected to the compute resource are displayed in the OnApp Management user interface, a web console that manages the OnApp Cloud Platform and Storage system.
After the back end reports the storage drives, they are displayed in the OnApp user interface as shown in Figure 3 and Figure 4.
Figure 3. OnApp Control Panel management interface
Figure 4. Integrated Storage user interface
OnApp Storage uses multiple front ends (two or more compute resources) that communicate via back ends to avoid a single point of failure. As long as there is an active back end with access to a replica, the data can be accessed. If a compute resource that contains a replica fails, the failed replica becomes out of date as soon as data writes are performed, and the vDisk becomes degraded. To fix a degraded disk, you need to manually perform the disk repair operation, as described in the Repair VS Disks Assigned to Integrated Storage Data Store section. During the disk repair, the disk volume is rebuilt from the remaining healthy replicas. However, if the disk drive has completely failed and cannot be repaired, it can be forgotten via the UI. It can then be replaced with a new drive after the rebalancing operation.
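The replica lifecycle described above (healthy, degraded after a stale or failed replica, healthy again after repair) can be sketched as a small state model. This is an illustrative Python sketch only, not OnApp's implementation; the names `Replica`, `vdisk_state`, and `repair` are invented for this example:

```python
# Conceptual model of vDisk replica health; not OnApp's actual code.
from dataclasses import dataclass

@dataclass
class Replica:
    backend: str        # compute resource hosting this replica
    version: int        # last write generation applied to this replica
    online: bool = True

def vdisk_state(replicas, current_version):
    """A vDisk is healthy only if every replica is online and up to date."""
    usable = [r for r in replicas if r.online and r.version == current_version]
    if not usable:
        return "offline"      # no up-to-date replica reachable
    if len(usable) < len(replicas):
        return "degraded"     # a stale or failed replica needs repair
    return "healthy"

def repair(replicas, current_version):
    """Repair brings stale but online replicas up to date from a healthy one."""
    if not any(r.online and r.version == current_version for r in replicas):
        raise RuntimeError("no healthy replica to repair from")
    for r in replicas:
        if r.online:
            r.version = current_version
```

In this model, a backend failure followed by continued writes leaves the returning replica stale, so the vDisk reports degraded until a repair is run — mirroring the manual repair workflow described above.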
The OnApp Storage system tracks data location. Having detected where the application virtual server is running, the Storage system attempts to keep and use a replica on the back-end system that is local to that server. This optimizes data placement, reduces network traffic, and improves performance. If the virtual server is migrated to another location, the Storage system detects the change and migrates the data to the new VS location.
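The locality preference described above amounts to a simple selection rule: read from a replica on the same host as the virtual server when one exists, otherwise fall back to a remote replica. A minimal sketch, with the hypothetical function name `pick_replica_host`:

```python
def pick_replica_host(replica_hosts, vs_host):
    """Prefer a replica co-located with the virtual server.

    replica_hosts: hosts holding an up-to-date replica of the vDisk.
    vs_host: the compute resource where the virtual server runs.
    """
    if vs_host in replica_hosts:
        return vs_host            # local read path: no network hop
    return replica_hosts[0]       # otherwise read over the storage network
```

After a VS migration, re-running the same rule against the new host is what motivates moving a replica to that host, as the document describes.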
Virtual server live migration is available on Xen and KVM compute resources. Follow the links below to view the list of templates that support live migration:
Storage migration is fully supported across the data store to drives on any compute resource within the same zone.
The OnApp Storage architecture has been designed to use existing cloud hardware. Many different types of storage drives can be connected to compute resource servers. The Storage system classifies drive performance as low, medium, or high. For example, most Solid State Drives (SSDs) will be classified as high performance, while standard Hard Disk Drives (HDDs) can be classified as either low or medium performance. The performance metrics are calculated when the storage is activated, by measuring the drives' read and write behavior. You can also set disk performance manually in the OnApp user interface.
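The tiering idea can be illustrated with a toy classifier. The function name and thresholds below are invented purely for illustration; OnApp derives the real classification from measurements taken at storage activation:

```python
def classify_tier(read_mbps, write_mbps, iops):
    """Map measured drive behavior to a performance tier.

    Thresholds are illustrative assumptions, not OnApp's real cut-offs:
    very high IOPS suggests an SSD (high tier); decent sequential
    throughput maps to medium; everything else to low.
    """
    if iops >= 10_000:
        return "high"
    if read_mbps >= 100 and write_mbps >= 80:
        return "medium"
    return "low"
```

A manual override in the UI, as the document mentions, would simply replace the measured classification with an administrator-chosen tier.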
Different drives are then detected and enabled through a multicast channel local to a single Control Panel and divided by compute resource zones, as shown in Figure 5. The division into compute resource zones helps to separate the storage channels for different types of underlying compute resources (Xen, KVM, etc.).
Figure 5. Integrated Storage system available across multiple compute resources
The OnApp Storage system uses the CloudBoot compute resource bootstrap method together with a centralized management system, the OnApp Control Panel, as shown in Figure 6. This means that different compute resources can be provisioned rapidly through a template system; the compute resources are provisioned when the storage is activated. After that, OnApp Storage is available to all virtual servers across the Control Panel.
Figure 6. CloudBoot and the Control Panel view
OnApp Storage uses an internal /16 private address range within the 10.0.0.0/8 range. By default, we assign 10.200.0.0/16 for integrated SAN operations. To attach and access an existing SAN, it should use an address within the 10.0.0.0/8 range that does not conflict with the integrated SAN subnet. Note, however, that it can use the same physical subnet/Ethernet NICs on the compute resources, though with the obvious performance impact of aggregating data flows over the same physical interface.
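The addressing constraint above can be checked mechanically: a candidate SAN subnet must fall inside 10.0.0.0/8 and must not overlap the default 10.200.0.0/16 integrated storage range. A small sketch using Python's standard `ipaddress` module (the helper name `san_subnet_ok` is hypothetical):

```python
import ipaddress

STORAGE_NET = ipaddress.ip_network("10.200.0.0/16")  # default integrated SAN range
PRIVATE_10 = ipaddress.ip_network("10.0.0.0/8")

def san_subnet_ok(candidate: str) -> bool:
    """True if the candidate subnet sits inside 10.0.0.0/8
    and does not overlap the integrated storage /16."""
    net = ipaddress.ip_network(candidate)
    return net.subnet_of(PRIVATE_10) and not net.overlaps(STORAGE_NET)
```

For example, 10.201.0.0/16 would be acceptable for an external SAN, while anything inside 10.200.0.0/16 would conflict with the default integrated storage range.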