High-Speed Storage Interface Reduces System Overhead
Although solid-state disk drives (SSDs) with SATA interfaces fill part of the performance gap between traditional hard-disk drives (HDDs) and dynamic RAM (DRAM) main memory, the gap continues to grow as DRAM performance improves while native HDD performance has barely improved (see the graph).
Faster SATA interfaces have raised the data transfer rate to 6 Gbits/s and beyond for short bursts, but the SATA interface carries its own overhead: the drive must translate its native control and data stream into SATA signals, and then decode the SATA signals coming from the host. The same translation overhead exists on the host side, and it adds significantly to the overall data request and transfer operations.
However, by removing that overhead through an enhanced PCI Express interface that adds only a handful of storage-specific commands, the new Non-Volatile Memory Express (NVMe) interface can deliver much higher data rates by using multiple serial PCIe lanes. The new interface, NVMe 1.0, developed by an industry consortium of more than 80 members, was first published in early 2011. Prototypes of SSDs that employ the interface were demonstrated at the Flash Memory Summit and the Intel Developer Forum (IDF), held recently in Santa Clara and San Francisco, respectively. The storage-related commands added to the PCIe command set include ten administrative commands and three I/O commands.
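That mandatory command set is compact enough to list in full. The C enumeration below shows the ten admin and three I/O opcodes as assigned in the NVMe 1.0 specification (opcode values are quoted from that specification, not from this article):

```c
/* Admin command opcodes defined as mandatory in NVMe 1.0 (ten in total). */
enum nvme_admin_opcode {
    NVME_ADMIN_DELETE_SQ    = 0x00, /* Delete I/O Submission Queue */
    NVME_ADMIN_CREATE_SQ    = 0x01, /* Create I/O Submission Queue */
    NVME_ADMIN_GET_LOG_PAGE = 0x02,
    NVME_ADMIN_DELETE_CQ    = 0x04, /* Delete I/O Completion Queue */
    NVME_ADMIN_CREATE_CQ    = 0x05, /* Create I/O Completion Queue */
    NVME_ADMIN_IDENTIFY     = 0x06,
    NVME_ADMIN_ABORT        = 0x08,
    NVME_ADMIN_SET_FEATURES = 0x09,
    NVME_ADMIN_GET_FEATURES = 0x0A,
    NVME_ADMIN_ASYNC_EVENT  = 0x0C, /* Asynchronous Event Request */
};

/* The three mandatory I/O (NVM command set) opcodes. */
enum nvme_io_opcode {
    NVME_CMD_FLUSH = 0x00,
    NVME_CMD_WRITE = 0x01,
    NVME_CMD_READ  = 0x02,
};
```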
At the conferences, Integrated Device Technology demonstrated an NVMe enterprise flash-memory controller with native support for the PCIe Gen 3 interface. Additionally, IP-Maker and Teledyne LeCroy displayed an NVMe demonstration platform that paired IP-Maker’s NVMe core intellectual property running in an FPGA with Teledyne LeCroy’s T3-8 analyzer viewing the core’s activities. Other companies demonstrating prototypes and test systems included Agilent Technologies, Cadence Design Systems, Dell, EMC, IBM, Intel, LSI, Micron, NetApp, OCZ Technology, SanDisk, STEC, and Virident Systems.
The NVMe standard defines a command set optimized for storage, one that is scalable for the future while avoiding the need to burden the device with legacy support requirements. Existing applications and software infrastructure built on the SCSI architectural model can be supported by a translation document that defines a mapping between the SCSI and NVM Express specifications. That permits a seamless transition to NVM Express while preserving existing software infrastructure investments. This translation may be done as a layer within the NVMe driver.
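The core idea of such a translation layer can be sketched in a few lines of C. This is a hypothetical illustration, not code from the translation document itself: it maps three common SCSI opcodes (from the SCSI command set) onto the corresponding NVMe I/O opcodes.

```c
#include <stdint.h>

/* Hypothetical sketch of a SCSI-to-NVMe opcode translation, the kind of
 * mapping a translation layer inside the NVMe driver would perform.
 * SCSI opcodes are standard SBC values; NVMe opcodes are the mandatory
 * I/O commands from the 1.0 specification. Returns -1 if this sketch
 * does not translate the opcode. */
int scsi_to_nvme_opcode(uint8_t scsi_op)
{
    switch (scsi_op) {
    case 0x28: /* SCSI READ(10)              */ return 0x02; /* NVMe Read  */
    case 0x2A: /* SCSI WRITE(10)             */ return 0x01; /* NVMe Write */
    case 0x35: /* SCSI SYNCHRONIZE CACHE(10) */ return 0x00; /* NVMe Flush */
    default:                                     return -1;
    }
}
```

A full translation layer would also map parameters (logical block address, transfer length) and synthesize SCSI sense data from NVMe status codes; the opcode mapping above shows only the skeleton.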
Currently, the adoption of PCIe SSDs is hindered by the many different implementations and unique drivers provided by each SSD vendor, all of which the SSD OEM customer must validate. Each SSD vendor implements a different subset of features in a different way, leading to needless extra qualification effort by the OEM. The NVMe standard eliminates that tower of Babel by defining a common register programming interface, command set, and feature set. This permits standard drivers to be written for each operating system and enables interoperability between various SSD vendors, thus shrinking the OEM qualification cycles.
By using the latest Gen 3 version of the PCIe interface, designers can transfer 6 Gbytes/s over an eight-lane channel, versus only 6 Gbits/s using the latest SATA standard. Latency also drops by several microseconds, since the SATA overhead is eliminated and the PCIe interface can attach directly to the host CPU’s chipset. This can also lower system cost and power, since no external host-bus adapter is needed. The need for reduced latency is critical: Intel, for example, highlighted an issue that Amazon encounters, where every 100 ms of delay in loading a page costs Amazon 1% of its sales. In a comparison between the NVMe and SCSI/SAS storage interfaces, Intel claims the older SCSI/SAS interface has a latency of 6 microseconds, while the NVMe interface cuts that by more than 50%, to just 2.8 microseconds.
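The raw line-rate arithmetic behind that bandwidth comparison is easy to check. The sketch below computes the per-lane and eight-lane PCIe Gen 3 rates (8 GT/s per lane with 128b/130b line coding) against SATA 3.0 (6 Gb/s with 8b/10b coding); the article’s 6-Gbyte/s figure is the delivered rate after additional protocol overhead, a bit below the raw x8 number computed here.

```c
/* Line-rate arithmetic for the PCIe Gen 3 x8 vs. SATA 3.0 comparison.
 * These are raw coded rates; delivered throughput is lower once packet
 * and protocol overheads are subtracted. */

double pcie_gen3_lane_MBps(void)
{
    /* 8 GT/s per lane, 128b/130b coding, 8 bits per byte: ~985 MB/s. */
    return 8000.0 * (128.0 / 130.0) / 8.0;
}

double pcie_gen3_x8_MBps(void)
{
    /* Eight lanes: roughly 7.9 Gbytes/s raw. */
    return 8.0 * pcie_gen3_lane_MBps();
}

double sata3_MBps(void)
{
    /* 6 Gb/s, 8b/10b coding: 600 MB/s. */
    return 6000.0 * (8.0 / 10.0) / 8.0;
}
```

Even against the raw SATA rate, the eight-lane Gen 3 link is better than an order of magnitude faster.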
NVMe is essentially a scalable host-controller interface designed for enterprise and client systems that leverages the PCI Express serial interface. The interface provides an optimized command-issue and completion path. It supports parallel operation through deep queues: up to 64K commands per I/O queue and up to 64K I/O queues. Additionally, the interface has many enterprise capabilities, such as end-to-end data protection (compatible with the T10 DIF and DIX standards), enhanced error reporting, and virtualization. An enhanced version, NVMe 1.1, which adds features to further improve enterprise and client system performance, is expected to be released later this year. For more about NVMe, go to www.nvmexpress.org.
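The queueing model at the heart of that parallelism can be illustrated with a simplified ring buffer. This is a toy model under stated assumptions, not driver code: the host posts commands at the submission queue’s tail (then rings a tail doorbell register), and the controller consumes from the head. Real NVMe queues hold 64-byte command entries, up to 64K deep, with a paired completion queue per submission queue.

```c
#include <stdint.h>

#define QDEPTH 8 /* tiny depth for illustration; the spec allows up to 64K */

/* Simplified model of an NVMe submission queue (command IDs stand in
 * for the real 64-byte command entries). */
struct sq {
    uint16_t cmds[QDEPTH];
    uint16_t head; /* advanced by the controller as it consumes */
    uint16_t tail; /* advanced by the host; mirrored to a doorbell */
};

/* Host side: enqueue a command; returns -1 if the ring is full. */
int sq_submit(struct sq *q, uint16_t cmd_id)
{
    uint16_t next = (uint16_t)((q->tail + 1) % QDEPTH);
    if (next == q->head)
        return -1;          /* queue full */
    q->cmds[q->tail] = cmd_id;
    q->tail = next;         /* host would now write the tail doorbell */
    return 0;
}

/* Controller side: dequeue the oldest command; returns -1 if empty. */
int sq_consume(struct sq *q, uint16_t *cmd_id)
{
    if (q->head == q->tail)
        return -1;          /* queue empty */
    *cmd_id = q->cmds[q->head];
    q->head = (uint16_t)((q->head + 1) % QDEPTH);
    return 0;
}
```

Because each of up to 64K queues has its own head and tail, many CPU cores can issue commands concurrently without contending for a single lock, which is the source of the parallelism the text describes.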
Chip Design Magazine