NVIDIA, MCX654106A-ECAT, ConnectX-6 VPI Adapter Card, HDR100 EDR InfiniBand and 100GbE, Dual-Port QSFP56, Socket Direct 2x, PCIe 3.0 x16, Tall Brackets
Features:
- Up to HDR100/EDR InfiniBand or 100Gb/s Ethernet connectivity per port
- Maximum aggregate bandwidth of 200Gb/s (2x 100Gb/s)
- Up to 215 million messages/sec
- Sub-600ns latency
- Block-level XTS-AES mode hardware encryption
- FIPS-capable
- Advanced storage capabilities including block-level encryption and checksum offloads
- Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports
- Best-in-class packet pacing with sub-nanosecond accuracy
- PCIe Gen 3.0 and Gen 4.0 support
- RoHS compliant
- ODCC compatible
Benefits:
- Industry-leading throughput, low CPU utilization and high message rate
- Highest performance and most intelligent fabric for compute and storage infrastructures
- Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
- Host Chaining technology for economical rack design
- Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms
- Flexible programmable pipeline for network flows
- Efficient service chaining enablement
- Increased I/O consolidation efficiencies, reducing data center costs & complexity
ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing two ports of HDR100/EDR InfiniBand and 100GbE Ethernet connectivity, sub-600ns latency, and 215 million messages per second, ConnectX-6 VPI delivers the highest performance and most flexible solution in its class.
HPC Environments
ConnectX-6 supports the evolving co-design paradigm, which transforms the network into a distributed processor. With its In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further into the network, saving CPU cycles and increasing network efficiency. ConnectX-6 VPI utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies, delivering low latency and high performance. It enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.
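RDMA on this class of adapter is programmed through the standard libibverbs interface, whether the wire protocol is InfiniBand or RoCE. As a minimal, hedged sketch (not taken from this datasheet), the following C program enumerates the RDMA devices a host exposes and queries their capabilities; on a ConnectX-6 system these would typically appear as mlx5_* instances:

    /* Minimal sketch: enumerate RDMA devices with libibverbs.
     * Build with: cc rdma_list.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) { perror("ibv_get_device_list"); return 1; }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx)
                continue;
            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0)  /* 0 means success */
                printf("%s: max_qp=%d max_cq=%d\n",
                       ibv_get_device_name(list[i]), attr.max_qp, attr.max_cq);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }

A real RDMA application would go on to create a protection domain, queue pairs and completion queues on one of these devices; the same verbs code runs unchanged over InfiniBand or RoCE ports.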
Machine Learning and Big Data Environments
Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200Gb/s throughput, ConnectX-6 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require.
Security:
Block-level encryption offers a critical innovation in network security. ConnectX-6 hardware offloads IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. The adapter also includes a hardware Root-of-Trust (RoT) that uses HMAC keyed with a device-unique key, providing both secure boot and cloning protection.
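To make the offloaded operation concrete, here is a hedged software sketch of the same XTS-AES block transform using OpenSSL's EVP API; the keys and tweak are dummy values, and in practice the adapter performs this per-block cipher in hardware so the CPU never touches it:

    /* Sketch: AES-256-XTS in software with OpenSSL (cc xts.c -lcrypto).
     * Illustrates the block-level transform that ConnectX-6 offloads. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    int main(void)
    {
        unsigned char key[64];          /* XTS = two 256-bit AES keys    */
        unsigned char tweak[16] = {0};  /* per-block tweak, e.g. sector# */
        unsigned char in[512] = {0}, out[sizeof in + 16];
        int outl = 0, fin = 0;

        memset(key, 0x11, 32);          /* dummy key halves; OpenSSL     */
        memset(key + 32, 0x22, 32);     /* rejects identical halves      */

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak))
            return 1;
        EVP_EncryptUpdate(ctx, out, &outl, in, sizeof in);
        EVP_EncryptFinal_ex(ctx, out + outl, &fin);
        printf("ciphertext bytes: %d\n", outl + fin);
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }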
Socket Direct
NVIDIA Socket Direct technology improves the performance of multi-socket servers by enabling each CPU to access the network through its own dedicated PCIe interface. Data can then bypass the inter-processor bus (QPI/UPI) and the other CPU, improving latency, performance and CPU utilization. Socket Direct also enables GPUDirect RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card, and it enables Intel DDIO optimization on both sockets by creating a direct connection between each socket and the adapter card. The technology is implemented with a main card housing the ConnectX-6 and an auxiliary PCIe card carrying the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots connected by a 350 mm harness. The two PCIe x16 slots may also be connected to the same CPU; in that case, the main advantage of the technology is delivering 200Gb/s to servers that support only PCIe Gen3.
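One practical way to observe Socket Direct locality is to check which NUMA node each PCIe half of the card reports through sysfs. The sketch below is illustrative rather than from the datasheet: the device names mlx5_0/mlx5_1 are assumptions, while the sysfs path is the standard Linux location for an RDMA device's NUMA node:

    /* Sketch: print the NUMA node of each (assumed) mlx5 device half. */
    #include <stdio.h>

    static void print_numa_node(const char *ibdev)
    {
        char path[128];
        snprintf(path, sizeof path,
                 "/sys/class/infiniband/%s/device/numa_node", ibdev);
        FILE *f = fopen(path, "r");
        if (!f) { printf("%s: not present\n", ibdev); return; }
        int node;
        if (fscanf(f, "%d", &node) == 1)
            printf("%s: NUMA node %d\n", ibdev, node);
        fclose(f);
    }

    int main(void)
    {
        print_numa_node("mlx5_0");  /* main card, e.g. on socket 0      */
        print_numa_node("mlx5_1");  /* auxiliary card, e.g. on socket 1 */
        return 0;
    }

When the two halves report different NUMA nodes, each socket is reaching the fabric through its local PCIe interface, which is exactly the traffic pattern Socket Direct is designed to provide.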
Adapter Card Mechanical Drawing
Bracket Mechanical Drawings and Dimensions