OnApp storage nodes are self-managing, self-discovering, and self-contained hot-pluggable units. Each storage node manages its own content in the most efficient way possible, without loss of performance, using a thin-provisioning layer for storage space efficiency and overcommit. This ensures that data is stored optimally across the whole environment while maintaining data replication and drive resiliency properties. There is no centralized management system to fail, and each node can make decisions about data synchronization and load balancing without depending on a central controller.

The list of storage nodes can be found in the Storage > Nodes menu.

NBD paths

The number of NBD paths available for your virtual disks depends on the amount of RAM available to the storage node. You can use the following formula to calculate the maximum number of NBD paths, N:
N = (Controller memory size - 128) ÷ 4, where:
Controller memory size = the memory assigned to the storage controller (1024 MB by default). You can estimate the required memory as DB size (128 MB by default) + 10 MB × the number of vDisk parts on the controller.
128 = the amount of system memory (in MB) reserved for the storage controller
4 = the amount of memory (in MB) needed per NBD server

So by default:
(1024 - 128) = 896 MB available for NBD servers
896 ÷ 4 = 224 NBD paths available
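The default calculation above can be sketched as a small helper. This is an illustrative function, not part of OnApp itself; the parameter names are our own:

```python
def max_nbd_paths(controller_memory_mb=1024, reserved_mb=128, mb_per_nbd_server=4):
    """N = (controller memory - reserved system memory) / memory per NBD server."""
    return (controller_memory_mb - reserved_mb) // mb_per_nbd_server

# Default 1024 MB controller: (1024 - 128) / 4 = 224 NBD paths
print(max_nbd_paths())  # 224
```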

Each stripe in a datastore needs 1 NBD path, so the total number of vDisks, D, you can have is given by:
D = N ÷ (S × R)
where: S = the number of stripes in the datastore
R = the number of replicas
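Continuing the example, the vDisk limit follows directly from the formula. A minimal sketch (the function name and sample stripe/replica counts are our own illustration):

```python
def max_vdisks(nbd_paths, stripes, replicas):
    """D = N / (S * R): each stripe of each replica consumes one NBD path."""
    return nbd_paths // (stripes * replicas)

# With the default 224 NBD paths, 2 stripes, and 2 replicas:
print(max_vdisks(224, 2, 2))  # 56
```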

Free space per storage node

To calculate free space per storage node, we use the following two formulas:

a) Formula for the overcommit scheme: physical_disk_size + (physical_disk_size × overcommit percentage) - allocated_space

b) Formula without overcommit: physical_disk_size - allocated_space

For example:

The physical disk mounted to the controller has a size of 558.6 GB.

Allocated space: 659 GB

Overcommit: 20%

So: 558.6 + (558.6 × 20%) - 659 = 11.32, or approximately 11 GB of free space.

The same calculation applies to the second formula, but without the overcommit value.
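Both formulas can be expressed in one helper. This is an illustrative sketch, not OnApp code; passing no overcommit percentage applies the second formula:

```python
def free_space_gb(physical_gb, allocated_gb, overcommit_pct=None):
    """Free space per storage node, with or without an overcommit percentage."""
    capacity = physical_gb
    if overcommit_pct is not None:
        # Overcommit scheme: add the overcommit percentage of the physical size
        capacity += physical_gb * overcommit_pct / 100.0
    return capacity - allocated_gb

# The example above: 558.6 GB disk, 659 GB allocated, 20% overcommit
print(round(free_space_gb(558.6, 659, overcommit_pct=20), 2))  # 11.32
```

Note that without overcommit the same inputs give a negative result (558.6 - 659), i.e. the node is already over-allocated.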

  • Enabling overcommit and running out of physical space is a dangerous condition and should always be avoided. It is strongly recommended that you create data stores with overcommit set to none for production purposes.
  • These calculations do not apply when overcommit is unlimited. They are valid only when overcommit is set to a specific value (for example, 20% or 50%).