This section is part of the OnApp preparation guide.

Configure Networks > Configure Storage > Configure Servers
Centralized Storage (SAN)
Primary storage is critical to your cloud, and your SAN will have a huge impact on the performance of the whole platform.
OnApp gives you a lot of flexibility in your primary storage technology. It supports anything capable of presenting a block device to compute resources. This could be, for example, Fibre Channel, a SCSI or SAS HBA, iSCSI, ATAoE, or an InfiniBand HCA, since all of these present a block device directly. OnApp does not support services such as NFS for primary storage, because they present a filesystem rather than a block device.
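As a quick sanity check, you can confirm from a compute resource that the storage is actually presented as a block device rather than a filesystem export. This is an illustrative sketch; device names and transports will differ in your environment:

```
# Block devices presented to this host: TYPE should be "disk"
lsblk -d -o NAME,TYPE,SIZE,TRAN

# An NFS export, by contrast, appears only as a mounted filesystem,
# which OnApp cannot use for primary storage:
mount -t nfs
```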
Beyond the type of block device, there are three main things to consider in your SAN design: the host, fabric and storage components. You need to think about each very carefully and pay particular attention to performance, stability and throughput when planning your SAN.
You will need to think about redundancy, and whether you need to design a fault tolerant switching mesh to coincide with your multipath configurations at the host and SAN ends.
You should also think about future growth: as you add more compute resources and SANs to the cloud you will need to be able to grow the physical connectivity without downtime on the Storage Network.
Host Components - Compute Resource Connectivity to the Storage Network
You will need to make sure that your Ethernet or HBA drivers are stable in the release you are deploying. We recommend that you test this thoroughly before handing over to OnApp to deploy your cloud on your infrastructure.
You will also need to think about the throughput, and whether the connectivity on compute resources will be suitable for the virtual servers they'll be running. A bottleneck here will cause major performance issues.
Consider adding multiple HBAs or NICs if you plan to run a redundant switching mesh (see the fabric section below) as bonding or multipath will be required, unless the redundancy is built into the physical switch chassis (failover backplanes for example).
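Once bonding or multipathing is configured, a couple of standard Linux checks will confirm that the redundant paths are actually active. A minimal sketch, with illustrative interface and device names:

```
# Bond status: mode and the state of each slave interface
cat /proc/net/bonding/bond0

# Multipath status: each LUN should show one active path per fabric;
# a single path means the redundant mesh is not protecting you
multipath -ll
```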
Storage Components - SAN Chassis, Controllers and Disk Trays
You need to take into consideration the amount of storage required and the physical capacity you have available to achieve it. This will give you a good idea of the size of disks you will be adding to the array and the RAID level you will choose.
As a general rule, more spindles in the array will give you better performance: you should avoid using a small number of large disks, or you will start to see I/O bottlenecks as you make increasing use of the storage in future.
You should also think about the physical storage hardware, and whether you'll be using SATA, SAS or SSD. Again, this will have a great impact on the I/O capabilities of the array.
It's also a good idea to consider RAID levels carefully and look into the advantages and disadvantages of each. We recommend RAID10: although you will lose 50% of raw capacity, you will see good performance for both reads and writes, which is important for primary storage, and RAID10 also gives you much better redundancy on the array.
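As a rough illustration of the RAID10 trade-off, the sketch below estimates usable capacity and IOPS for a hypothetical array. The per-disk IOPS figure is an assumption (in the right ballpark for 15k SAS spindles), not a measured value:

```
# Back-of-the-envelope RAID10 sizing; all values are illustrative
DISKS=12          # spindles in the array
DISK_SIZE_GB=900  # raw size per disk
DISK_IOPS=180     # assumed IOPS per 15k SAS spindle

USABLE_GB=$(( DISKS * DISK_SIZE_GB / 2 ))   # mirroring halves raw capacity
READ_IOPS=$(( DISKS * DISK_IOPS ))          # reads are served by all spindles
WRITE_IOPS=$(( DISKS * DISK_IOPS / 2 ))     # each write lands on both mirrors
echo "usable=${USABLE_GB}GB read=${READ_IOPS} write=${WRITE_IOPS}"
```

More spindles raise both IOPS figures linearly, which is why many small disks usually beat a few large ones for primary storage.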
Controller caching is another issue to consider. You should always aim to have both read and write caching. If you are looking at write caching you should also look at battery backups for the write cache. Some controllers also support SSD caching which can be a great advantage.
As with the host components, you should also take your HBA and Ethernet connectivity into consideration, to ensure you have both the redundancy and throughput required for your cloud infrastructure.
Integrated Storage (OnApp Storage)
OnApp Storage is a distributed block storage system that allows you to build a highly scalable and resilient SAN using local disks in compute resources. With OnApp Storage you create a virtual data store that spans multiple physical drives in compute resources, with RAID-like replication and striping across drives. The SAN is fully integrated into the compute resource platform, and the platform is completely decentralized. There is no single point of failure: for example, if a compute resource fails, the SAN reorganizes itself and automatically recovers the data.
The following setup is recommended for an integrated storage implementation:
- Integrated Storage can group together any number of drives across any compute resource. We strongly recommend a minimum of 2 drives per compute resource to enable redundant datastore configurations.
- SSD drives are recommended for best performance
- At least 1 dedicated NIC assigned per compute resource for the storage network (SAN)
- Multiple NICs bonded or 10GBit/s Ethernet (recommended)
- MTU on storage NICs: 9000 (recommended)
- IGMP snooping must be disabled on the storage switch for the storage network

Enabling jumbo frames (MTU > 1500, up to a maximum of 9000) requires NIC and switch support. Ensure that your network infrastructure supports jumbo frames and that they are enabled on any switches in the path; otherwise, leave the storage NICs at the default MTU of 1500. The MTU must also be identical on all storage NICs across compute resources, including backup servers (see the verification sketch after these notes).
Bonded NICs for the management/boot interface are not yet supported (they will be introduced in a future release).
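To verify end-to-end jumbo frame support before settling on MTU 9000, one common check is a non-fragmentable ping at the maximum payload size. This is a sketch; the interface name and peer address are examples:

```
# Raise the MTU on the storage NIC (interface name is an example)
ip link set dev eth2 mtu 9000
ip link show eth2 | grep -o 'mtu [0-9]*'

# 8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header.
# -M do forbids fragmentation, so this fails if any hop lacks jumbo support.
ping -M do -s 8972 -c 3 10.0.10.2
```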
SolidFire Integration
You can perform the following operations with SolidFire:
- Utilize SolidFire SAN in the OnApp cloud.
- Allocate dedicated LUNs from the SF cluster per virtual server disk when creating a VS. (A LUN is created for each VS disk, with a separate LUN for the swap disk.)
- Manage SolidFire LUNs automatically via API.
- Create virtual servers without the swap disk.
- Implement backups / snapshots using the SF CloneVolume method.
There is a disk dependency between OnApp and SolidFire: when a new disk is created on the OnApp side, a new LUN is created automatically on the SF side using the CreateVolume API call. LUNs on the SolidFire side are managed automatically via the SolidFire API.
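For illustration only, the sketch below shows roughly what such a call looks like when issued directly against the SolidFire JSON-RPC endpoint. OnApp makes this call for you; the endpoint path, API version, credentials and parameter values are assumptions based on SolidFire's published Element API:

```
# Create a 10 GiB volume (totalSize is in bytes); values are placeholders
curl -k -u admin:password https://192.168.1.10/json-rpc/8.0 \
  -H 'Content-Type: application/json' \
  -d '{"method": "CreateVolume",
       "params": {"name": "vs-disk-123",
                  "accountID": 1,
                  "totalSize": 10737418240},
       "id": 1}'
```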
Because a SolidFire data store has two interfaces, OnApp and SolidFire, you have to specify two IP addresses when creating a SolidFire data store.
To be able to use an SF volume, you have to enable export to the device in question (a compute resource or a data store). To do that, send the account username and initiator password to the iscsi_ip address. Once authorized, you will be able to use the device.
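A minimal sketch of that authorization step using the standard open-iscsi tools, assuming CHAP authentication; the target IQN, portal address and credentials are placeholders:

```
TGT="iqn.2010-01.com.solidfire:example"
PORTAL="10.0.20.5:3260"   # the iscsi_ip address

iscsiadm -m discovery -t sendtargets -p $PORTAL
iscsiadm -m node -T $TGT -p $PORTAL --op update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T $TGT -p $PORTAL --op update -n node.session.auth.username -v accountname
iscsiadm -m node -T $TGT -p $PORTAL --op update -n node.session.auth.password -v initiatorsecret
iscsiadm -m node -T $TGT -p $PORTAL --login
```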
The following options are not available under SolidFire:
- It is not possible to migrate SolidFire disks, as SF virtualizes the storage layer.
- SolidFire does not support live disk resize. To resize a disk, shut down the virtual server first and use the CloneVolume functionality to increase the disk size (see the sketch below). After the resize operation completes, the original volume is replaced with the new one and deleted, and the VS is then booted.
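For reference, a hedged sketch of the CloneVolume call involved in that resize flow; as with CreateVolume above, OnApp drives this automatically, and the values are placeholders:

```
# Clone volume 42 into a larger volume (newSize is in bytes)
curl -k -u admin:password https://192.168.1.10/json-rpc/8.0 \
  -H 'Content-Type: application/json' \
  -d '{"method": "CloneVolume",
       "params": {"volumeID": 42,
                  "name": "vs-disk-123-resized",
                  "newSize": 21474836480},
       "id": 1}'
```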
Now proceed to configuring servers.