Types of cloud storage – Compare block storage types

Published on April 6, 2018

In this article, let's look at some of the recent innovations in cloud block storage: the device types associated with compute (EC2) instance types, and the financial and performance impact of the various choices. Block storage devices are what computing instances attach as volumes, and there are two storage classifications based on whether the media has moving parts:

  • Hard Disk Drives (mechanical moving parts)
    • st1 (Throughput Optimized)
    • sc1 (Cold HDD)
  • Solid State Drives (electronic – no moving parts)
    • gp2 (General purpose SSD)
    • io1 (Provisioned IOPS)

SSD vs HDD types

Many of the EC2 compute types have specific requirements for the startup volume, including a minimum size; typically it must be in the solid state drive (gp2) category. Additional drives, however, can be any of these four types. How do the four types compare?

Each of these storage types supports volumes from 0.5 GB up to 16 TB, with varying prices and throughput capabilities.
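As a quick illustrative comparison, here is a minimal Python sketch of the four types. The ceiling numbers are approximate published figures from around the time of this article and are assumptions for illustration, not current limits; always check the AWS documentation.

```python
# Rough comparison table for the four EBS volume types discussed above.
# max_iops and max_mb_s are approximate, era-specific assumptions.
VOLUME_TYPES = {
    "gp2": {"media": "SSD", "max_iops": 10_000, "max_mb_s": 160, "strength": "general purpose"},
    "io1": {"media": "SSD", "max_iops": 32_000, "max_mb_s": 500, "strength": "latency-sensitive OLTP"},
    "st1": {"media": "HDD", "max_iops": 500,    "max_mb_s": 500, "strength": "large sequential throughput"},
    "sc1": {"media": "HDD", "max_iops": 250,    "max_mb_s": 250, "strength": "cold, infrequent access"},
}

def best_for_random_iops():
    """Pick the type with the highest small-random-I/O ceiling."""
    return max(VOLUME_TYPES, key=lambda t: VOLUME_TYPES[t]["max_iops"])

print(best_for_random_iops())  # io1
```

The point of the table is the shape of the trade-off, not the exact numbers: SSD types win on random I/O, HDD types compete only on sequential throughput and price.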

[Figure: cost-savings quadrants for the cloud storage types]

Did you know that, barring the EC2 startup-volume type and minimum-size requirements, you can alter a drive's type and size?

Until recently, beyond the minimum size for these storage types, one had to plan how much contiguous space a volume should have, and those volume sizes were immutable. Storage added later, before this innovation, did not appear as contiguous space on Windows, and to a lesser extent on the Unix side.

Add in the RAID 5/10 variations on the HDD types, the more prevalent configurations, which were used mainly for databases depending on whether the workload was OLTP, DSS, or data warehouse.

[Figure: RAID 5 layout]

Source: Wikipedia – https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/RAID_5.svg/1000px-RAID_5.svg.png


[Figure: RAID 10 layout]

Source: Wikipedia – https://upload.wikimedia.org/wikipedia/commons/0/0d/RAID_100.png

A measure called IOPS (input/output operations per second), popularized early in this century, started out as a comparative performance measure across HDD and SSD types. Previously the standard was megabytes per second, and this was replaced with a count of operations over a fixed time.

This measure, however, is not a natural way to compare drive types, because the number of units transferred depends partly on the block size of a device. There are two partitioning schemes at AWS: MBR (a 32-bit addressing scheme) and GPT (a 64-bit addressing scheme). Windows boot partitions can only be MBR-partitioned (with a maximum size of 2 TB), while Linux boot partitions can be either, without the 2 TB limitation.
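The 2 TB MBR ceiling follows directly from the arithmetic of 32-bit addressing; a one-line sketch:

```python
# MBR stores logical block addresses in 32 bits, so it can index at most
# 2**32 sectors; with the traditional 512-byte sector, that caps a
# partition at 2 TiB.
SECTOR_BYTES = 512
mbr_max_bytes = (2 ** 32) * SECTOR_BYTES
print(mbr_max_bytes / 2 ** 40)  # 2.0 (TiB)
```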

The block sizes of these storage partitioning schemes range from 4 KB up to 64 KB, and the block size impacts the IOPS number. SSD volumes handle small random I/O much more efficiently than HDD volumes. Imagine the data actually needed is a 4 KB unit, but the allocation unit is 64 KB: you have inflated the IOPS number with useless data without delivering anything extra to the application. IOPS by itself is a meaningless metric. Without clarity on latency, read vs. write percentage, and I/O size (to name a few), an IOPS number is useless.
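To make that inflation concrete, here is a small sketch (the request rate and sizes are made-up numbers) showing how a 64 KB allocation unit multiplies the bytes moved without adding useful data:

```python
# An application that truly needs 4 KB per request, running on a volume
# formatted with a 64 KB allocation unit: each request still counts as
# one I/O operation, so the IOPS figure looks identical while 16x the
# bytes travel for the same useful payload.
def bytes_per_second(iops, block_kb):
    """Raw bytes moved per second for a given op rate and block size."""
    return iops * block_kb * 1024

needed_kb, formatted_kb, iops = 4, 64, 1000
useful = bytes_per_second(iops, needed_kb)     # bytes the app actually wanted
moved = bytes_per_second(iops, formatted_kb)   # bytes actually transferred
print(moved // useful)  # 16 -- same IOPS number, 16x the waste
```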

CloudWatch makes available additional metrics that give application owners better insight into the right choice of volume type, block-size patterns, and disk contention:

  • BurstBalance
  • VolumeReadBytes
  • VolumeWriteBytes
  • VolumeReadOps
  • VolumeWriteOps
  • VolumeQueueLength
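As a hedged illustration (the counter values below are invented), the byte and operation counters above can be combined into an average I/O size per operation, a derived number that helps judge whether your workload is small random I/O or large sequential transfers:

```python
# Average I/O size over a CloudWatch reporting period, derived from the
# Volume*Bytes and Volume*Ops counters. The sample totals are made up
# purely for illustration.
def avg_io_kb(volume_bytes, volume_ops):
    """Average I/O size in KB; 0.0 when no operations occurred."""
    return (volume_bytes / volume_ops) / 1024 if volume_ops else 0.0

read_bytes, read_ops = 52_428_800, 3_200     # hypothetical 5-minute totals
write_bytes, write_ops = 4_194_304, 1_024

print(avg_io_kb(read_bytes, read_ops))    # 16.0 -> larger sequential reads
print(avg_io_kb(write_bytes, write_ops))  # 4.0  -> small random writes
```

A read path averaging 16 KB and a write path averaging 4 KB, as in this invented sample, would suggest very different volume-type choices for each.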

More on these metrics can be found in the AWS CloudWatch documentation.

Staying on the theme of innovation, the relevant feature recently enabled is the ability to change a drive's type, from hard disk to solid state, between gp2, st1, and io1 (barring certain EC2 restrictions on the boot drive and some operating-system considerations). Not only can you change types, you can also extend a drive's capacity when needed. A 100 GB volume can be altered to 200 GB when necessary, versus the prior practice of adding a secondary volume or using a backup-and-restore approach to move the data onto a larger storage unit.

Changing Block Storage Types in the Cloud

Your current volume type can be altered by selecting the volume and clicking on Modify Volume:

[Screenshot: AWS console Actions menu]

The target volume type and its size can then be altered:

[Screenshot: Modify Volume dialog]

Once you click Modify, you get a confirmation message like:

[Screenshot: EBS volume modification confirmation]

After extending a volume, the file system must also be extended: on Windows use the Disk Management utility, and on Linux use growpart followed by resize2fs (or xfs_growfs for XFS).
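For those scripting rather than clicking through the console, the same modification can be sketched with the AWS SDK for Python (boto3). The volume ID below is a made-up placeholder, and the helper is an illustrative wrapper of my own, not part of the SDK; it only builds and validates the request, so it can be checked without an AWS account. The live call is shown commented out.

```python
# Illustrative wrapper around the arguments ec2.modify_volume() accepts.
VALID_TYPES = {"gp2", "io1", "st1", "sc1"}

def build_modify_request(volume_id, size_gib=None, volume_type=None):
    """Assemble keyword arguments for a ModifyVolume API call."""
    if volume_type is not None and volume_type not in VALID_TYPES:
        raise ValueError(f"unknown volume type: {volume_type}")
    params = {"VolumeId": volume_id}
    if size_gib is not None:
        params["Size"] = size_gib          # must be >= the current size
    if volume_type is not None:
        params["VolumeType"] = volume_type
    return params

req = build_modify_request("vol-0123456789abcdef0", size_gib=200, volume_type="gp2")
print(req)
# import boto3
# ec2 = boto3.client("ec2")
# ec2.modify_volume(**req)  # returns a VolumeModification you can poll
```

Remember that, exactly as with the console flow, the file system still needs to be grown inside the operating system afterwards.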

The IOPS value on its own is a poor indicator of real value. IOPS is a marketing metric that storage manufacturers used, and it has made its way to the cloud: one vendor claims 8,000 IOPS while another claims 1.4 million. Without knowing the block size, the relevance of the data retrieved, and the interface capabilities, it is not a useful metric on its own. Instead, look at the number of bytes or read/write operations actually occurring. Is your application read-intense or write-intense? Does it use the right block size for the nature of its requests? Are you over-allocating disk storage, or using a type more expensive than what you need now? Changing from a mechanical drive to a solid-state type in the cloud is a rather quick operation!

Gone are the days of forecasting the extent of data you will likely grow to in three years, then plan for storage growth by having the desired end state storage device now. The cloud changes things and what was once a three-year process, is getting to be real-time. Cloud capacity planning requires a new thought process!

The same theory applies to the computing category. You can change an instance type from a C/M/R category to a T, or turn on T2 unlimited in near real time.

Feel free to test your selected storage types against our AI engine, which analyses these cloud storage metrics, so you know what capacity you are using and whether you have savings opportunities. Are you underutilizing, or is there overutilization? If the stats are right, $62 billion is wasted on overprovisioning in the cloud.

Download the whitepaper that explains why cloud capacity planning needs a different thought process from traditional on-premise infrastructure.
