Understanding NVMe over Fabrics (NVMe-oF)


Overview

Solid-state storage, accessed through the non-volatile memory express (NVMe) protocol, has eliminated one of IT’s major bottlenecks. NVMe lets computers interface with storage significantly faster than spinning disks ever allowed.

NVMe enables high input/output operations per second (IOPS), low latency, and, most importantly, numerous parallel channels between storage and CPU. This results in much higher storage performance than traditional disk interfaces such as SAS and SATA.

NVMe over Fabrics (NVMe-oF) is, in many ways, the next evolution. Plain NVMe communicates over the server’s PCIe bus. That is fine for local storage, but businesses depend extensively on networked storage for economies of scale, redundancy, and simplicity of administration. NVMe over Fabrics extends NVMe to storage arrays reached over a network fabric such as Ethernet, Fibre Channel, or InfiniBand.

And the market is expanding quickly. IDC, an industry analyst firm, has forecast that NVMe storage will soon account for half of primary storage revenue, with NVMe-oF used in the majority of those systems.

What Exactly Is NVMe Over Fabrics?

NVMe over Fabrics (NVMe-oF) is a network protocol, similar in role to iSCSI, that allows a host to communicate with a storage device across a network (also known as a fabric). Early implementations were built on RDMA, and NVMe-oF supports any RDMA technology, including InfiniBand, RoCE, and iWARP; the specification has since added Fibre Channel and plain TCP as transports, so RDMA is no longer strictly required.

NVMe-oF offers substantially lower latency than iSCSI, often adding just a few microseconds to traverse the network. This narrows the gap between local and remote storage.
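The "few microseconds" claim can be put in perspective with a bit of arithmetic. The latency figures in this sketch are assumed placeholders for illustration, not measurements from any particular hardware:

```python
# Illustrative arithmetic only: the latency figures below are assumptions,
# not measurements from any specific device or fabric.
local_read_us = 80.0       # assumed latency of a local NVMe read (microseconds)
fabric_overhead_us = 10.0  # assumed added network traversal for NVMe-oF

remote_read_us = local_read_us + fabric_overhead_us
overhead_pct = fabric_overhead_us / local_read_us * 100

print(f"Remote read: {remote_read_us:.0f} us ({overhead_pct:.1f}% overhead)")
```

Under these assumed numbers, the fabric adds only a small fraction to total read latency, which is why remote NVMe storage can feel close to local.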

As with any network protocol, it may be used to access either a bare-bones enclosure of flash drives (a JBOF, "just a bunch of flash") or a feature-rich block storage system (a SAN). It can even front a storage system built from previous-generation (SATA) devices, although it is primarily associated with NVMe devices because of the combined performance advantages.

NVMe over Fabrics is still a developing technology, but it offers data centers unparalleled access to NVMe SSD storage.

How Are NVMe-oF Systems Configured?

Organizations transitioning to NVMe over Fabrics can choose among several major network transports: Fibre Channel, iWARP, RoCE (RDMA over Converged Ethernet), InfiniBand, and, most recently, TCP. NVMe-oF may eventually support additional protocols.

Although Fibre Channel is a popular option with 32Gbps throughput, some providers now promise speeds of up to 100Gbps for Ethernet-based systems.

However, NVMe-oF requires "bindings" to function. These link the transport protocol to the host and the storage array and provide management, authentication, and control capabilities.

According to one expert, "They are the glue that holds the NVMe communication language to the underlying fabric transport (whether it is Fibre Channel, InfiniBand, or various forms of Ethernet)." You can learn more about NVMe-oF on the SNIA blog.

The most recent specification, NVMe-oF 1.1, adds a TCP binding. This enables an NVMe-oF SAN to operate over a conventional Ethernet network.
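Whatever the transport, hosts and storage subsystems in an NVMe-oF fabric identify one another by NVMe Qualified Names (NQNs) during discovery and connection. The sketch below is a simplified syntax check, not a full validator from the specification:

```python
import re

# Minimal sketch of an NVMe Qualified Name (NQN) syntax check.
# The pattern is a simplification of the spec's documented format,
# "nqn.<yyyy-mm>.<reverse-domain>[:<identifier>]", not a full validator.
NQN_RE = re.compile(r"^nqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def looks_like_nqn(name: str) -> bool:
    # The NVMe specification caps NQN length at 223 bytes.
    return bool(NQN_RE.match(name)) and len(name.encode()) <= 223

# The well-known discovery-service NQN defined by the NVMe-oF spec:
print(looks_like_nqn("nqn.2014-08.org.nvmexpress.discovery"))  # True
```

A host presents its own NQN when connecting, and the target uses it to decide which namespaces that host may see.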

This standards-based approach has the benefit of providing purchasers with more options for NVMe-oF. They have the flexibility to combine servers, arrays, and fabrics. Furthermore, as SNIA’s Metz points out, non-standardized implementations are available on the market. These might function quite well, particularly in single-supplier systems.

Why Is NVMe-oF the Future of Data Storage? 

Before NVMe-oF, selecting a connection type for your storage area network (SAN) was essentially limited to three options:

  • Serial attached SCSI (SAS): A point-to-point serial protocol for transmitting SCSI commands over SAS connections. SAS employs expanders to fan out connections from host bus adapters (HBAs), with an edge expander addressing up to 128 devices. Defined link speeds are 3Gb/s, 6Gb/s, 12Gb/s, and 22.5Gb/s.
  • iSCSI: An internet protocol (IP)-based storage networking standard for sending SCSI (Small Computer Systems Interface) commands over a TCP/IP network. iSCSI connects to storage devices using ordinary Ethernet cabling and switches, traditionally running at 1Gb/s; high-bandwidth Ethernet adapters reach 10Gb/s and beyond.
  • Fibre Channel Protocol (FCP): A protocol for transporting SCSI commands over fiber-optic (or copper) links. Fibre Channel (FC) networks are termed fabrics because all of the network’s switches can behave as a single large switch. Over optical links, FC signals travel as light, avoiding the electromagnetic interference (EMI) problems of copper-based network technologies. FC speeds commonly range from 1 to 128 Gb/s.
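The speeds above are nominal line rates in gigabits per second; dividing by eight gives a rough byte-level ceiling, and real throughput is lower still once link encoding and protocol overhead are accounted for. A quick sketch (the specific interface generations chosen are examples):

```python
# Rough conversion of nominal line rates (Gbit/s) to MB/s ceilings.
# Ignores link encoding (8b/10b, 64b/66b, etc.) and protocol overhead,
# so real-world throughput is noticeably lower than these figures.
line_rates_gbps = {
    "SAS-3": 12,
    "iSCSI over 10GbE": 10,
    "32G Fibre Channel": 32,
}

for name, gbps in line_rates_gbps.items():
    ceiling_mb_s = gbps * 1000 / 8
    print(f"{name}: {gbps} Gb/s ~ {ceiling_mb_s:.0f} MB/s ceiling")
```

Keeping bits and bytes straight matters here: a "12Gb/s" SAS link tops out around 1.5GB/s of raw signaling, not 12GB/s.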

However, although these technologies worked well for hard disk drives (HDDs) and tape storage systems, they proved to be performance bottlenecks for solid-state drives (SSDs). SATA-attached SSDs, and even early PCIe SSDs using the legacy AHCI protocol, could not fully exploit the potential of flash memory.

Enter NVMe, a transfer protocol created expressly to link SSDs to the PCIe bus. Scaled up to the data center, it yielded the enterprise-ready all-flash array. Adding FC-style lossless, high-speed data transfer to NVMe via NVMe-oF was the natural next step in the all-flash array’s evolution.

NVMe-oF Benefits

NVMe over Fabrics offers several benefits to organizations that effectively use it, including decreased vendor lock-in, more networking flexibility, and faster data transfer rates.

Vendor and Network Flexibility

NVMe over Fabrics (NVMe-oF) provides enterprises with unprecedented flexibility by supporting many network protocols and storage systems. This adaptability reduces the possibility of vendor lock-in by allowing enterprises to choose their preferred combinations freely.

Companies may use NVMe-oF to arrange their installations to support a wide range of networking technologies and storage systems, assuring compatibility and flexibility to changing requirements. This flexibility enables firms to adjust their infrastructure based on unique performance, scalability, and cost constraints. Consequently, NVMe over Fabrics has emerged as a key option for businesses seeking flexible and future-proof storage solutions in dynamic IT settings.

Data Transfer Speeds

NVMe’s performance advantage over predecessors such as SAS and SATA is well established, thanks to its architecture purpose-built for flash storage. NVMe-oF extends this advantage across network fabrics, enabling the rapid data processing that mission-critical applications demand. Its high throughput and low latency make it excellent for demanding data center tasks, such as rapidly retrieving object-stored data.

Organizations may maximize the potential of their storage infrastructure by adopting NVMe-oF, guaranteeing flawless operation and responsiveness to the changing needs of contemporary computing environments.


NVMe’s substantial queue depth allows it to handle several concurrent requests effectively, making it ideal for enterprises operating memory-intensive applications. This parallel processing capability assures peak performance even with high workloads.
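The scale of that parallelism is easy to see in the spec-level maxima: the older AHCI interface used by SATA drives offers a single command queue 32 entries deep, while NVMe allows up to 64K I/O queues, each up to 64K commands deep. These are theoretical ceilings; real devices expose far fewer queues, but the gap is still enormous:

```python
# Back-of-the-envelope comparison of command-level parallelism.
# AHCI/SATA: one queue, 32 commands deep.
# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep
# (spec-level maxima; shipping devices expose far fewer queues).
ahci_outstanding = 1 * 32
nvme_outstanding = 65_535 * 65_536

print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
print(f"Ratio: ~{nvme_outstanding // ahci_outstanding:,}x")
```

That multi-queue design also maps naturally onto multi-core CPUs, since each core can own its own submission and completion queue pair without locking.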

NVMe over Fabrics builds on this parallelism by extending it across the whole network rather than limiting it to a single server or application. For enterprises with geographically distributed sites, NVMe-oF enables seamless data access and processing by harnessing the network’s collective capabilities to deliver high throughput and low latency for varied workloads across locations.

Challenges and Solutions of NVMe over Fabrics

Implementing NVMe over Fabrics (NVMe-oF) involves several obstacles, including network concerns and compatibility issues. However, various approaches have been developed to address them. Here are some of the main difficulties and their solutions:

Network Congestion and Bandwidth Management

NVMe-oF can generate heavy network traffic, potentially causing congestion and bandwidth contention, particularly in large-scale deployments.

Applying Quality of Service (QoS) techniques at the network level helps prioritize NVMe-oF traffic, ensuring that critical storage traffic receives adequate bandwidth and minimal latency. In addition, adopting high-speed network fabrics such as Ethernet with RDMA (e.g., RoCE or iWARP) or Fibre Channel relieves congestion by providing ample bandwidth for storage traffic.
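One common building block behind such QoS policies is the token bucket, which caps a traffic class to a sustained rate while permitting short bursts. The sketch below illustrates the idea only; in practice, QoS is enforced in switches and NICs rather than application code, and the rates are arbitrary:

```python
# Minimal token-bucket sketch of the rate-limiting idea behind QoS
# policies. Real QoS is enforced in switches/NICs; the numbers here
# are arbitrary illustrations.
class TokenBucket:
    def __init__(self, rate_mb_per_s: float, burst_mb: float):
        self.rate = rate_mb_per_s  # refill rate, MB per second
        self.capacity = burst_mb   # maximum burst size, MB
        self.tokens = burst_mb     # start with a full bucket
        self.last = 0.0            # timestamp of last refill (seconds)

    def allow(self, size_mb: float, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_mb:
            self.tokens -= size_mb
            return True
        return False

bucket = TokenBucket(rate_mb_per_s=100, burst_mb=10)
print(bucket.allow(8, now=0.0))  # True: fits within the burst allowance
print(bucket.allow(8, now=0.0))  # False: only 2 MB of tokens remain
print(bucket.allow(8, now=0.1))  # True: 0.1 s refills 10 MB (capped)
```

Assigning NVMe-oF traffic a generous bucket while capping bulk background transfers is, conceptually, how a network keeps storage latency low under contention.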

Latency and Performance Optimization

Providing low-latency access to distant NVMe storage devices is critical for preserving application performance and responsiveness.

Implementing RDMA (Remote Direct Memory Access) technologies such as RoCE or iWARP cuts the delay imposed by standard networking protocols. These technologies enable direct memory access between the host and the storage device without involving the CPU, reducing latency and improving performance.

Security Concerns

Sensitive data must be sent across a network with security safeguards to prevent unwanted access, interception, or alteration.

Encryption and authentication technologies such as IPsec (Internet Protocol Security) or TLS (Transport Layer Security) improve data security in transit. Furthermore, implementing network segmentation and access control measures limits access to NVMe-oF storage resources to authorized users and devices, improving overall security.
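As a flavor of what "TLS in transit" means in code, the sketch below builds a client-side TLS context of the kind that could protect an NVMe/TCP-style connection. The insistence on TLS 1.3 and required certificate verification are illustrative hardening choices, not quotations from the NVMe-oF specification:

```python
import ssl

# Sketch: a hardened client-side TLS context of the kind that could
# protect storage traffic in transit. The TLS 1.3 minimum and required
# peer verification are illustrative choices, not NVMe-oF spec text.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse older protocol versions
ctx.check_hostname = True                     # verify the peer's identity
ctx.verify_mode = ssl.CERT_REQUIRED           # reject unauthenticated peers

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)
```

Such a context would then be used to wrap the socket carrying storage traffic, so commands and data are encrypted and the endpoint is authenticated before any I/O flows.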

Interoperability and Compatibility

Interoperability across multiple NVMe-oF implementations, including hardware and software components, may be difficult due to specification differences and vendor-specific implementations.

Industry standards organizations, such as NVM Express, Inc., play an important role in establishing and maintaining NVMe-oF specifications and assuring compatibility across vendor offerings. Compliance testing and certification procedures assist in certifying interoperability and compatibility across NVMe-oF devices from different suppliers, allowing for easy integration into various settings.

Management and Monitoring

Managing and monitoring NVMe-oF systems, including setup, performance optimization, and troubleshooting, requires specialized tools and knowledge.

Using management frameworks and software-defined storage solutions that provide complete management and monitoring capabilities makes it easier to manage NVMe-oF environments. These tools give access to performance indicators, configuration settings, and system health, allowing for proactive management and quick resolution of problems.

By solving these issues with proper solutions and best practices, companies may efficiently deploy and exploit NVMe over Fabrics to reap the performance advantages of NVMe storage in distributed and scalable storage infrastructures.


To summarize, NVMe over Fabrics is more than a buzzword; it is a game changer in storage technology. By harnessing the power of NVMe SSDs and high-speed fabrics, enterprises can achieve unprecedented levels of performance, scalability, and efficiency in their data centers.

Whether you’re an IT expert looking for new storage solutions or a tech enthusiast interested in the newest advances, NVMe-oF is a technology to watch. With its many features, diversified use cases, and bright future, NVMe-oF is poised to transform the storage environment and usher in a new era of data storage excellence.
