Myths about SSDs: 5 Major Misconceptions about SSDs

Solid State Drives (SSDs) are no longer the newest technology, and many users are already familiar with them. In enterprise storage, SSDs can now far outperform hard disk drives (HDDs) in performance, ease of management, and affordability. Yet despite their popularity in the mainstream storage and data center market, misconceptions about their performance, cost, and use persist.

IT professionals and storage administrators increasingly prefer SSDs over hard drives. Once all the myths about SSDs have been dispelled, it's clear that SSDs are a real innovation in computer management, and that they can greatly improve data center efficiency.

Myth 1: SSDs lack capacity

The supposedly lower capacity of SSDs compared to HDDs has long been held against them, even though SSDs now match or exceed many HDDs. 2.5-inch SSDs with a capacity of 32 TB are already available on the market, and IT professionals can expect 50 TB models or larger in the near future. Hard drives, meanwhile, top out at around 16 TB, while SSDs are physically smaller, consume less power, and perform consistently better. The truth is that SSD capacity is no longer an issue in data centers.

Myth 2: SSDs are too expensive

Another common complaint about SSDs is their high cost. SSD prices have in fact fallen rapidly over the past few years. The transition to a new manufacturing technology, Intel 3D NAND, briefly caused prices to rise and then level off, but that transition is now behind us and SSD prices can be expected to fall again.

And yet, there is still a price gap between SSDs and HDDs. Servers with SSDs have the advantage of doing more work and doing it faster, so the price difference is more than offset by the benefits they provide. Furthermore, thanks to efficient data compression, SSDs can be significantly cheaper per usable terabyte of disk space than HDDs.
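
To make the "cheaper per usable terabyte" argument concrete, the short sketch below shows how inline compression changes the effective cost per terabyte. The price, capacity, and 2.5:1 compression ratio are placeholder assumptions for illustration, not figures from this article.

def cost_per_tb(price_usd, capacity_tb):
    # Dollars per terabyte of storage.
    return price_usd / capacity_tb

# Hypothetical 8 TB SSD priced at $1,200, with an assumed 2.5:1 compression ratio.
raw = cost_per_tb(1200, 8)              # price per raw terabyte
effective = cost_per_tb(1200, 8 * 2.5)  # price per usable terabyte after compression
print(f"raw: ${raw:.0f}/TB, effective: ${effective:.0f}/TB")  # raw: $150/TB, effective: $60/TB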

Additionally, the price gap between HDDs and SSDs that is so often discussed in the industry usually compares inexpensive SATA hard drives with enterprise-class SSDs. SATA-attached SSDs already cost nearly half as much as SAS HDDs, and at comparable capacities they are significantly faster and offer a number of other advantages over HDDs.

NVMe (Non-Volatile Memory Express) drives are also on the expensive side, but at equivalent capacities their cost is rapidly approaching that of SATA SSDs.

Myth 3: SSDs are short-lived

Although the myth that SSDs wear out faster has some basis in fact, today's SSDs are built to last for years, thanks to improved electronics, better signal processing, and smarter fault detection and correction.

In addition, keep in mind that SSDs are offered for both light and heavy write workloads, rated by the amount of data (MB or TB) that can be written per day across the entire drive. Models designed to handle heavy write volumes carry more spare flash capacity, which in turn increases their cost.

Some hard drive specifications also include a rating for how much data can be written per day, and SSD ratings generally do not differ much from those figures. Nor is any HDD immune to exceeding its guaranteed workload and failing quickly. The only conclusion is that SSDs are just as reliable as HDDs, and much faster.
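
To make the "data written per day" rating concrete, here is a minimal sketch of the usual conversion from a drive-writes-per-day (DWPD) figure to total terabytes written (TBW). The capacity, DWPD value, and warranty period are illustrative assumptions.

def tbw_from_dwpd(dwpd, capacity_tb, warranty_years):
    # Total terabytes written (TBW) implied by a drive-writes-per-day (DWPD) rating.
    return dwpd * capacity_tb * warranty_years * 365

# Hypothetical example: a 3.84 TB SSD rated for 1 full drive write per day over a 5-year warranty.
print(tbw_from_dwpd(dwpd=1, capacity_tb=3.84, warranty_years=5))  # about 7,000 TB written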

Myth 4: SSDs can simply replace HDDs in disk arrays

This is another perceived SSD problem that can scare off IT professionals. SSDs today are so fast that a disk array controller can keep up with only a handful of them. Arrays were designed around the I/O performance of hard disk drives, whose random read/write operations are roughly 1,000 times slower and whose sequential read/write operations are roughly 100 times slower.

Array controllers were designed to merge many slow data streams from hard drives into a pair of relatively fast Fibre Channel links, and that design becomes a serious bottleneck with SSDs. One solution is to use storage systems built around SSDs from the start; another is to move to 100 Gigabit Ethernet links for the storage backbone.

A similar problem arises inside servers, because legacy SAS and SATA interfaces cannot keep up with the drives' speed. NVMe is much faster and can also reduce system costs by cutting interrupt overhead and simplifying queue management. Today, IT professionals and storage administrators are adopting NVMe as a way to share drives across a cluster of servers, speeding up hyperconverged infrastructure (HCI) systems.

Myth 5: The state of SSDs is difficult to monitor

Previously, one of the problems with SSDs was a phenomenon known as Write Amplification (WA). Flash memory does not allow data blocks to be simply overwritten; they must be erased before they can be reused. The difficulty is that flash memory achieves its speed by reading, writing, and erasing data in large blocks, usually around 2 MB in size. This means that everything in an erase block must first be read, then modified, then the block must be erased, and only then can the data be written back.
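
The read-modify-erase-write cycle described above is usually summarized as a write amplification factor: bytes physically written to flash divided by bytes the host asked to write. The sketch below shows a worst-case back-of-the-envelope calculation, assuming a 4 KB update inside a 2 MB erase block.

def write_amplification(host_bytes, flash_bytes):
    # Write amplification factor: physical flash writes divided by host writes.
    return flash_bytes / host_bytes

# Worst case: the host updates 4 KB, but the drive has to rewrite an entire 2 MB erase block.
print(write_amplification(host_bytes=4 * 1024, flash_bytes=2 * 1024 * 1024))  # 512.0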

Write amplification significantly slows down writes, even when a fast memory buffer accepts the data as soon as the server sends it. The best way around this is to erase blocks ahead of time using the TRIM command, which has been supported by default in Windows since Windows 7.

To enable this feature, you first need to determine whether the TRIM command is available in your OS and whether support for it is currently disabled.
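
On Windows, one way to check is the built-in fsutil utility ("fsutil behavior query DisableDeleteNotify" reports 0 when TRIM is enabled and 1 when it is disabled). The snippet below is a minimal sketch that simply calls it from Python; it assumes a Windows system and usually requires administrator rights.

# Windows-only sketch: query the DisableDeleteNotify setting via the built-in fsutil tool.
# 0 means TRIM is enabled, 1 means it is disabled. Other operating systems use different
# tools (for example, lsblk --discard on Linux).
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip() or result.stderr.strip())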

Using the TRIM command is an important factor in maintaining high SSD performance over the drive's lifetime: with blocks erased in advance, writing to an SSD that already holds data is just as fast as writing to an empty drive. Combined with better electronics, signal processing, and smarter failure detection, this means SSDs now last longer than ever before.

One more nuance: defragmenting an SSD is not recommended. Defragmentation generally has a negative impact on I/O performance and, at the very least, can shorten the drive's lifespan. The reason is simple: unlike hard drives, SSDs deliberately spread data blocks across the entire flash during writes and can access any block without delay. It is also worth remembering that the fast, efficient data compression used by SSDs lets servers perform better and can increase effective capacity by roughly a factor of five.

Another advantage of using these drives in networked storage is that when compression and decompression are performed on the server, the data load on the network drops by a factor of five. Combined with the large number of extra I/O cycles SSDs can deliver, this adds up to substantial savings.
