Macrium’s view on the recent SMR disk controversy

Posted Apr 28 in Macrium Software

In recent weeks there has been significant controversy caused by reports that a number of big hardware vendors are quietly shipping SMR disks in their NAS (Network Attached Storage) products.

The reasons for this are not particularly surprising. By using SMR, vendors can, as this Ars Technica article does a good job of explaining, “eke out higher storage densities, netting more TB capacity on the same number of platters — or fewer platters, for the same amount of TB.” This comes at the cost of performance, as well as a number of compatibility issues.

While the companies involved have responded to criticisms from both users and sections of the tech press, we thought the controversy would be a good opportunity to take a closer look at current trends and issues across the spinning disk market, including SMR, TRIM, and RAID.

There’s undoubtedly a lot of technical complexity at play here, which makes it particularly hard to determine whether the companies alleged to have been using SMR were acting with malicious intent or merely failed to anticipate the issues their users would face. However, because the recording technology can have a significant impact on performance and system resilience, understanding the differences between SMR and conventional disks is essential. For this reason, we believe manufacturers should have been clear from the start which devices utilise SMR.

What is SMR (Shingled Magnetic Recording)?

Innovation in magnetic disk technology is always moving towards higher data density and lower cost per GB. SMR continues this trend. However, unlike other recent incremental improvements, it has a significant impact on device performance.

A traditional disk with standard, non-overlapping tracks is typically called a CMR (Conventional Magnetic Recording) disk, or sometimes a PMR (Perpendicular Magnetic Recording) disk. Each CMR track (and each sector within a track) can be written without overwriting neighbouring tracks; the ‘cost’ of an update is only the time to slew the disk head and for the sector to rotate under it.
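As a rough worked example, the average positioning cost on a 7200 rpm CMR disk is an average seek plus half a rotation. The seek time below is a typical assumption rather than a figure for any specific drive:

```python
# Rough cost of a single in-place update on a conventional (CMR) disk:
# an average seek plus half a rotation. The seek figure is an assumed
# typical value, not a measurement of any particular drive.
rpm = 7200
avg_seek_ms = 9.0                      # assumed average seek time
half_rotation_ms = 60_000 / rpm / 2    # ~4.2 ms at 7200 rpm

print(f"Average positioning cost: ~{avg_seek_ms + half_rotation_ms:.1f} ms")
```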

SMR exploits the property that a track can be read at a narrower width than the minimum width at which it can be written. By overlapping tracks like roof shingles, the achievable data density can be significantly increased. This increase in density should in principle reduce the cost per GB and, where it allows the number of platters to be reduced, deliver a small power saving per GB as well.
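As a very rough illustration, the density gain from shingling is approximately the ratio of the written track width to the readable track width. The widths below are made-up numbers for illustration, not taken from any datasheet:

```python
# Illustrative only: hypothetical track widths, not figures from any drive.
written_track_um = 0.075    # minimum width the write head can lay down (microns)
readable_track_um = 0.050   # width the read head actually needs (microns)

# Overlapping tracks so only the readable width of each remains exposed
# increases tracks-per-inch by roughly this ratio.
density_gain = written_track_um / readable_track_um
print(f"Approximate areal density gain from shingling: {density_gain:.2f}x")
```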

There is a cost, however: to update data, all the overlapping tracks that follow it must be re-written. The primary mitigation is to divide the shingled areas into zones, which limits the re-write to the extent of a single zone rather than the entire disk surface.

To update even a single byte on the disk, the entire zone must be read, updated in memory, and then written back out to disk. This is known as write amplification. Flash media is similarly affected, although to a lesser extent: due to its structure, erasure must be applied to a whole erase block before it can be rewritten.
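This read-modify-write cost can be put into numbers with a minimal sketch. The 256 MB zone size is a typical figure rather than a property of any particular drive, and the function is purely illustrative:

```python
# A minimal sketch of the read-modify-write cycle a device-managed SMR disk
# performs for a small in-place update. Zone and request sizes are assumptions.
ZONE_SIZE = 256 * 1024 * 1024   # assume a 256 MB zone

def write_amplification(bytes_updated: int, zone_size: int = ZONE_SIZE) -> float:
    """Ratio of bytes physically written to bytes the host asked to update."""
    # The whole zone is read, patched in memory, then written back in full.
    return zone_size / bytes_updated

# Updating a single 4 KB block inside a zone:
print(write_amplification(4 * 1024))   # -> 65536.0, i.e. 65,536x amplification
```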

Further mitigations

There are a number of other ways to mitigate the cost of these zone re-writes.

  1. An area of the disk is reserved for CMR tracks. Incoming data is initially written here, avoiding the re-write overhead, and the disk firmware subsequently moves it into the SMR zones when the disk is idle. If the disk experiences sustained heavy writes, the CMR area fills up and there is a sudden drop in performance as the firmware is forced to destage data to the SMR zones.
  2. TRIM. This is an additional SATA/NVMe command (Unmap for SCSI), typically used by file systems to indicate to the underlying storage that a region is no longer in use. This lets the disk firmware avoid the read/copy/write pattern for that region when it is next written. It was originally introduced for flash media but is also useful for SMR disks. Both mitigations are sketched in the toy model after this list.
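Both behaviours can be captured in a toy model. Everything below (the cache size, the three speed tiers, the idle flush) is invented purely for illustration; real firmware is far more sophisticated, and this class is not a real device interface:

```python
# A toy model of a device-managed SMR drive: a CMR staging area that absorbs
# bursts of writes, and TRIM marking zones whose old contents need not be
# read back before rewriting. All sizes and behaviours are invented.

class ToySmrDrive:
    def __init__(self, cmr_cache_zones: int = 4):
        self.cmr_cache_zones = cmr_cache_zones   # capacity of the CMR staging area
        self.cache_used = 0                      # zones' worth of staged data
        self.trimmed_zones = set()               # zones the host has trimmed

    def trim(self, zone: int) -> None:
        # Host tells the drive this zone's contents are no longer needed.
        self.trimmed_zones.add(zone)

    def write(self, zone: int) -> str:
        if self.cache_used < self.cmr_cache_zones:
            # Fast path: land the write in the CMR staging area.
            self.cache_used += 1
            return "fast: staged in CMR cache"
        if zone in self.trimmed_zones:
            # No old data to read and merge; the zone can simply be rewritten.
            self.trimmed_zones.discard(zone)
            return "medium: sequential rewrite of a trimmed zone"
        # Cache full and the zone still holds live data: read-modify-write.
        return "slow: read-modify-write of a live SMR zone"

    def idle_flush(self) -> None:
        # When idle, firmware destages the CMR cache into SMR zones.
        self.cache_used = 0

drive = ToySmrDrive()
for i in range(6):
    print(drive.write(zone=i))   # first four writes are fast, then the cliff
```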

SMR implementations

SMR is managed in two fundamental ways, plus a hybrid of the two. The difference is important: host-managed SMR disks are specialist devices that potentially yield the highest performance, with the constraint that they are not general-purpose devices and cannot be used as a replacement for a standard CMR disk.

Device-managed SMR: The re-writing of SMR zones is transparent to the operating system and handled entirely by the disk firmware. These devices can be a drop-in replacement for a CMR device, and it is this type of device that has been sold as a standard disk.

Host-managed SMR: Management is delegated to the attached computer. There is an additional set of management commands to enable this, known as Zoned Access Commands (or Zoned Block Commands if you speak SCSI), which let the host query zones and gain access to them. All writes must be sequential from the start of a zone. These devices are targeted solely at enterprise applications.

Host-aware SMR: A hybrid of the above, retaining compatibility while still enabling optimisations for SMR-aware file systems and applications.
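The discipline a host-managed drive imposes can be sketched in a few lines. The zone size, class name and error handling below are purely illustrative; a real host issues zoned commands to the drive rather than calling a Python class:

```python
# A minimal sketch of the host-managed constraint: each zone has a write
# pointer, and writes must be sequential from the start of the zone.
ZONE_SIZE = 256 * 1024 * 1024   # assume a 256 MB zone

class Zone:
    def __init__(self, start_offset: int):
        self.start = start_offset
        self.write_pointer = start_offset    # next writable byte offset

    def write(self, offset: int, length: int) -> None:
        if offset != self.write_pointer:
            # Non-sequential writes are rejected; the host must append at the
            # write pointer or reset the whole zone and start again.
            raise ValueError("write must start at the zone's write pointer")
        self.write_pointer += length

    def reset(self) -> None:
        # Equivalent in spirit to a zone reset: discard contents, rewind pointer.
        self.write_pointer = self.start

zone = Zone(start_offset=0)
zone.write(0, 4096)        # fine: sequential from the start of the zone
zone.write(4096, 8192)     # fine: appends at the write pointer
# zone.write(0, 4096)      # would raise: in-place overwrites are not allowed
```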

Performance characteristics of SMR disks

The performance of device-managed SMR disks is usually acceptable in scenarios where they are lightly loaded on write operations or where write operations are sequential in nature. They work well, for example, for backup or archival storage where the key figure of merit is cost per GB, or, in some cases, power usage.

When should you avoid SMR disks?

SMR disks should be avoided if periods of poor performance under sustained random write access cannot be tolerated.

Specifically, they should never be used in RAID arrays, because the sustained writes involved in a rebuild can multiply the rebuild time, sometimes to the point of causing the rebuild to fail.
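A back-of-the-envelope calculation shows the scale of the problem. The throughput figures below are assumptions chosen for illustration, not measurements of any particular drive:

```python
# Rough rebuild times for a single 8 TB member disk, using assumed sustained
# write rates. Both throughput figures are illustrative assumptions.
disk_bytes = 8 * 10**12

cmr_sustained_mb_s = 180    # assumed steady sequential write rate
smr_degraded_mb_s = 20      # assumed rate once the drive's CMR cache is exhausted

def rebuild_hours(throughput_mb_s: float) -> float:
    return disk_bytes / (throughput_mb_s * 10**6) / 3600

print(f"CMR rebuild: ~{rebuild_hours(cmr_sustained_mb_s):.0f} h")   # ~12 h
print(f"SMR rebuild: ~{rebuild_hours(smr_degraded_mb_s):.0f} h")    # ~111 h
```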

Host-managed SMR disks are a different proposition. When you control the entire I/O stack, for many applications you can design your write patterns to match those where SMR works best. This means you can avoid any performance penalty and make significant power and cost savings.

This case study from Dropbox illustrates this really well [4].

Are there any file system tuning options for SMR devices?

For conventional file systems, the answer is no. Unlike SSDs, where the page size is typically 4K, SMR zones are much larger, typically 256MB. Simply setting the file system allocation unit (for NTFS, the cluster size) to match the zone size would lead to very space-inefficient storage. In contrast, the page size of SSDs maps nicely onto the default cluster size of NTFS.
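A rough slack-space estimate shows why. The file count and cluster sizes below are assumptions chosen purely to illustrate the scale:

```python
# Rough slack-space estimate: on average, each file wastes about half a
# cluster. The file count and cluster sizes are illustrative assumptions.
files = 1_000_000

def wasted_gb(cluster_bytes: int) -> float:
    return files * (cluster_bytes / 2) / 10**9

print(f"4 KB clusters:   ~{wasted_gb(4 * 1024):.1f} GB wasted")        # ~2 GB
print(f"256 MB clusters: ~{wasted_gb(256 * 1024**2):.0f} GB wasted")   # ~134,000 GB
```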

Some special-purpose and experimental file systems that are not available on the Windows platform (e.g. F2FS and ext4-lazy) are being developed to accommodate SMR. Microsoft may introduce file system extensions if the SMR cost or size advantage becomes more compelling.

There is some evidence that ReFS v2 supports host-managed SMR devices, although, oddly, official documentation is almost non-existent [5, 6]. ReFS on device-managed SMR will not take advantage of this support.

Should SMR disks always be avoided?

Outside data centres and other highly specialised applications, we suggest that SMR disks should be avoided where there is no significant price, size or power advantage. If the price differential starts to open up, backup storage is a perfect application: Macrium backups are written out almost entirely sequentially, which is a performance advantage even on conventional disks.

We believe that disk manufacturers were wrong to place device-managed SMR disks in their standard ranges. Though they have a place in the storage roadmap, and the firmware can make them a drop-in replacement in some cases, for many applications they will perform very poorly. Fortunately, the manufacturers have now been forced to be clearer in their specifications. Despite this, we recommend that you take extra care when purchasing disks, especially for RAID or heavily loaded random-access applications.

The future of magnetic storage

The ongoing incremental reduction in the cost of flash storage means that the vast majority of local computer storage is now flash based. Magnetic disks are increasingly being banished to NAS devices, and ultimately will only be found in storage-centric data centres where cost per GB will always be the defining parameter. The increasing prevalence of SMR on large disks will accelerate this trend, as the complexity of getting reasonable performance and the extended RAID rebuild times will only be acceptable for data centre applications [7].

[1] https://blocksandfiles.com/2020/04/23/western-digital-blog-wd-red-nas-smr-drives-overuse/

[2] https://news.ycombinator.com/item?id=22939319

[3] https://www.usenix.org/system/files/login/articles/login_summer17_03_aghayev.pdf

[4] https://dropbox.tech/infrastructure/smr-what-we-learned-in-our-first-year

[5] https://www.snia.org/sites/default/files/SDC/2017/presentations/smr/Das_Rajsekhar_ReFS_Support_For_Shingled_Magnetic_Recording_Drives.pdf

[6] https://docs.google.com/document/d/1XioV6xLpTorRXfzRbx8UvINEkN4LN46INYMMEM7SLJk/edit?usp=sharing

[7] https://blocksandfiles.com/2020/02/07/hard-disks-disappear-small-data-centres/

