Integrated Storage is a distributed block storage system that uses existing commodity cloud hardware to provide storage for virtual servers.

The main components of Integrated Storage are the following:

  •     Device mapper
  •     Network block device
  •     Custom API service
  •     Custom distributed DB

Integrated Storage combines these components to deliver data written by a virtual server to hardware storage devices and to return data read from those devices back to the virtual server.

Using the custom dm-mirror-sync kernel module, the device mapper splits each write or read request according to the number of mirrors and stripes of a vDisk. The dm-mirror-sync kernel module ensures the redundancy of the disk by storing several copies of it.
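
The sketch below illustrates the kind of splitting described above: one write is cut into stripe-sized chunks, and each chunk is duplicated to every mirror of its stripe member. The function name, the flat node numbering, and the 4 KiB chunk size are assumptions made for illustration; the real splitting is performed in-kernel by dm-mirror-sync.

    # Conceptual sketch only: shows how a single write can be striped across
    # back-end nodes and fanned out to mirrors. Names, node numbering, and the
    # 4 KiB chunk size are illustrative assumptions, not the dm-mirror-sync
    # implementation.

    STRIPE_SIZE = 4096  # bytes per stripe chunk (assumed for illustration)

    def split_request(offset, data, stripes, mirrors):
        """Return a list of (node_index, node_offset, chunk) tuples.

        Each chunk goes to one stripe member and is duplicated to every
        mirror of that member, so the vDisk survives the loss of a copy.
        """
        ios = []
        for i in range(0, len(data), STRIPE_SIZE):
            chunk = data[i:i + STRIPE_SIZE]
            chunk_no = (offset + i) // STRIPE_SIZE
            stripe = chunk_no % stripes              # which stripe member
            node_offset = (chunk_no // stripes) * STRIPE_SIZE
            for mirror in range(mirrors):            # duplicate to each mirror
                node = stripe * mirrors + mirror     # flat node numbering
                ios.append((node, node_offset, chunk))
        return ios

    # Example: on a 2-stripe, 2-mirror vDisk, one 8 KiB write becomes four I/Os.
    print(split_request(0, b"x" * 8192, stripes=2, mirrors=2))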

NBD (Network Block Device), an open-source protocol, transfers data from the device mapper over the network to NBD servers hosted on IS controllers. IS controllers are lightweight virtual servers that consist of BusyBox and a minimal set of binaries optimized for I/O.

Hardware storage devices are passed through to the IS controller virtual servers, which ensures smooth and uninterrupted hot plugging of new disks.

Thus, the virtual server is entirely independent of the physical disk, provided that the virtual server and the physical disk are in the same compute zone.


Figure 1. Integrated Storage Architecture

Configuration

Virtual servers, device mappers, and NBD servers are configured through a single configuration file, /onappstore/onappstore.conf, which is located on all compute resources and backup servers with Integrated Storage enabled.
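
As a minimal sketch, the snippet below reads such a configuration file, assuming a plain KEY=value layout with # comments; the actual keys and format of /onappstore/onappstore.conf may differ and are not shown here.

    # Minimal sketch: parse /onappstore/onappstore.conf, assuming a plain
    # KEY=value layout with '#' comments. The real keys and format may differ;
    # nothing here is taken from the actual file.

    def load_conf(path="/onappstore/onappstore.conf"):
        conf = {}
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                conf[key.strip()] = value.strip()
        return conf

    # Example: print the parsed settings on a compute resource or backup server.
    if __name__ == "__main__":
        for key, value in sorted(load_conf().items()):
            print(f"{key} = {value}")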

The /etc/init.d/SANController service manages the virtual servers that act as Integrated Storage controllers, while the /etc/init.d/storageAPI service runs the API.

After the /etc/init.d/storageAPI service starts up, the API listens on port 8080, and Integrated Storage can be configured through it.
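
For illustration, the same kind of call as the curl example in the API section below can be made from Python once the service is up; the host address and data store UUID are placeholder values taken from that example, and a JSON response body is assumed.

    # Query the Integrated Storage API on port 8080, mirroring the curl example
    # in the API section. Host and UUID are placeholders; a JSON response body
    # is assumed.
    import json
    import urllib.request

    host = "192.168.1.2"        # compute resource or backup server
    uuid = "n3qlhk1vypdr5s"     # data store UUID (example value)

    with urllib.request.urlopen(f"http://{host}:8080/is/Datastore/{uuid}") as resp:
        print(json.loads(resp.read()))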


Abstractions

Compute zones can be used to create different tiers of service, with data stores and networks attached to them. The combination of compute resources, data stores, and network groups can be used to create private clouds for customers.

Abstractions are used to configure IS. There are two types of nodes:

  • Front-end nodes - compute resources and backup servers
  • Back-end nodes - hardware disks

Data stores are groups of back-end nodes that work as a single space for virtual disk creation and can be configured to use multiple stripes or mirrors.

To create a data store, the back-end nodes must be within the same compute zone, and the servers must be connected to the same SAN network.

Virtual disks are created within a data store that has back-end nodes as members. Data can be written to or read from a virtual disk only when the vDisk is online. vDisks can be brought online on a compute resource and attached to a virtual server, or on a backup server to take a backup.
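
The sketch below models these abstractions as plain Python classes to make the relationships explicit: a data store groups back-end nodes with a stripe and mirror count, and a vDisk belongs to one data store and accepts I/O only while online. All class and field names are illustrative assumptions, not identifiers used by Integrated Storage.

    # Illustrative data model for the abstractions above; class and field names
    # are assumptions for the sketch, not names used by Integrated Storage.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class BackendNode:          # a hardware disk exposed by an IS controller
        uuid: str
        compute_zone: str

    @dataclass
    class DataStore:            # a group of back-end nodes acting as one space
        uuid: str
        nodes: List[BackendNode]
        stripes: int = 1
        mirrors: int = 2

    @dataclass
    class VDisk:
        uuid: str
        datastore: DataStore
        online: bool = False    # I/O is only possible while the vDisk is online

        def write(self, offset: int, data: bytes) -> None:
            if not self.online:
                raise RuntimeError("vDisk must be brought online before I/O")
            # ... the device mapper would split the request across
            # self.datastore.nodes according to stripes and mirrors ...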


Figure 2. Applied Abstraction


All compute resources, backup servers, and IS controllers within a compute zone share the same distributed database (DDB). For more information on the distributed database, refer to the Distributed Data Base page.


Network 

Integrated Storage operates over a SAN network. Compute resources and backup servers are managed from the Control Panel (CP) through a management network.


Figure 3. Network


API

  •     The API is built on a Python web library and a set of scripts.
  •     The API supports GET, PUT, POST, and DELETE requests in JSON format.
  •     The API redirects a request to the appropriate node within the compute zone according to its UUID.
  •     API request example:

    curl 192.168.1.2:8080/is/Datastore/n3qlhk1vypdr5s

The following diagram shows how an API call is redirected. The Control Panel queries a backup server to get information on a node by its ID. The backup server looks up /onappstore/DB, finds the IP address of the compute resource where the node is located, and redirects the request to that compute resource. The compute resource responds, and the backup server sends the response back to the CP.
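
A rough sketch of this flow, with the local database reduced to an in-memory lookup table, might look as follows; the table layout and helper names are assumptions made for illustration, not the actual storageAPI code.

    # Sketch of the redirection flow in Figure 4. The lookup table stands in
    # for /onappstore/DB; its layout and the helper name are assumptions for
    # illustration, not the actual storageAPI implementation.
    import urllib.request

    # node UUID -> IP of the compute resource hosting the node (assumed layout)
    NODE_LOCATIONS = {
        "n3qlhk1vypdr5s": "192.168.1.10",
    }

    def handle_request(path: str) -> bytes:
        """Resolve the node UUID in the request path and forward the call."""
        node_uuid = path.rstrip("/").split("/")[-1]
        target_ip = NODE_LOCATIONS[node_uuid]       # lookup in the local DB
        url = f"http://{target_ip}:8080{path}"      # forward to the compute resource
        with urllib.request.urlopen(url) as resp:
            return resp.read()                      # response goes back to CP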


Figure 4. API Call Redirection

For more information on benchmarks, refer to the Performance Benchmarks page.