NVIDIA, MCX653435A-EDAI, ConnectX-6 VPI Adapter Card for OCP3.0 with Host Management, HDR100 EDR InfiniBand and 100GbE Single-Port QSFP56, PCIe 3.0/4.0 x16, Internal Lock
Features:
- Up to HDR100 EDR InfiniBand and 100GbE Ethernet connectivity per port
- Max bandwidth of 200Gb/s
- Up to 215 million messages/sec
- Sub-600 ns latency
- Block-level XTS-AES mode hardware encryption
- FIPS capable
- Advanced storage capabilities including block-level encryption and checksum offloads
- Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports
- Best-in-class packet pacing with sub-nanosecond accuracy
- PCIe Gen 3.0 and Gen 4.0 support
- RoHS compliant
- ODCC compatible
Benefits:
- Industry-leading throughput, low CPU utilization and high message rate
- Highest performance and most intelligent fabric for compute and storage infrastructures
- Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
- Host Chaining technology for economical rack design
- Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms
- Flexible programmable pipeline for network flows
- Efficient service chaining enablement
- Increased I/O consolidation efficiencies, reducing data center costs & complexity
ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the ConnectX series of industry-leading network adapter cards. Providing HDR100, EDR InfiniBand and 100GbE Ethernet connectivity, sub-600 ns latency and up to 215 million messages per second, ConnectX-6 VPI cards enable the highest-performance and most flexible solution aimed at meeting the continually growing demands of data center applications. ConnectX-6 also offers a number of enhancements that further improve performance and scalability.
HPC Environments
ConnectX-6 VPI delivers the highest throughput and message rate in the industry. As the first adapter to deliver 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand, and 200Gb/s Ethernet speeds, ConnectX-6 VPI is the perfect product to lead HPC data centers toward Exascale levels of performance and scalability. ConnectX-6 supports the evolving co-design paradigm, which transforms the network into a distributed processor. With its In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 VPI utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE technologies, delivering low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.
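As a concrete illustration of the RDMA stack these adapters expose to software, the following minimal C sketch (an assumption for illustration, not part of the product documentation) uses the standard libibverbs API from rdma-core to enumerate RDMA-capable devices and report whether each port is running in InfiniBand or Ethernet mode:

```c
/* Minimal sketch: enumerate RDMA-capable devices (such as a ConnectX-6 VPI
 * port) and print basic port attributes via libibverbs.
 * Build (assumption): gcc query_ib.c -o query_ib -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {   /* port numbers start at 1 */
            printf("%s: state=%d link_layer=%s max_mtu=%d\n",
                   ibv_get_device_name(devs[i]),
                   port.state,
                   port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet"
                                                              : "InfiniBand",
                   port.max_mtu);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```

On a host with the adapter installed and rdma-core available, the same verbs interface is what applications use to create queue pairs and post the RDMA operations described above.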
Machine Learning and Big Data Environments
Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200Gb/s throughput, ConnectX-6 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require. ConnectX-6 utilizes RDMA technology to deliver low latency and high performance.
Security
ConnectX-6 offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys. By performing block-storage encryption in the adapter, ConnectX-6 excludes the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage devices, including byte-addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can support Federal Information Processing Standards (FIPS) compliance. ConnectX-6 also includes a hardware Root-of-Trust (RoT) that uses HMAC relying on a device-unique key. This provides both secure boot and cloning protection.
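For reference, the XTS-AES mode that ConnectX-6 offloads in hardware is the same block-cipher mode available in common crypto libraries. The following C sketch (an assumption for illustration, using OpenSSL's EVP API with hypothetical key and tweak values) shows the 64-byte dual-key format and the per-block tweak that the mode expects:

```c
/* Software reference for the AES-XTS mode that ConnectX-6 offloads in
 * hardware; a sketch with hypothetical key/tweak values only.
 * Build (assumption): gcc xts_demo.c -o xts_demo -lcrypto */
#include <stdio.h>
#include <openssl/evp.h>

int main(void)
{
    /* AES-256-XTS takes a 64-byte key: two concatenated 256-bit keys
     * (the halves must differ). */
    unsigned char key[64];
    for (int i = 0; i < 64; i++)
        key[i] = (unsigned char)(i + 1);

    /* The 16-byte tweak is typically derived from the block/sector number. */
    unsigned char tweak[16] = {0};
    unsigned char sector[512] = "example block payload";
    unsigned char enc[512 + 16];
    int out_len = 0, final_len = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (!ctx)
        return 1;

    /* Encrypt one 512-byte "sector" in XTS mode. */
    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak) != 1 ||
        EVP_EncryptUpdate(ctx, enc, &out_len, sector, sizeof(sector)) != 1 ||
        EVP_EncryptFinal_ex(ctx, enc + out_len, &final_len) != 1) {
        EVP_CIPHER_CTX_free(ctx);
        return 1;
    }

    printf("encrypted %d bytes\n", out_len + final_len);
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}
```

On ConnectX-6 this per-block encryption is performed in the adapter itself, so the CPU never runs the cipher loop shown above.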
Host Management includes NC-SI over MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.
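The PLDM traffic mentioned above is carried as MCTP messages. As an illustrative sketch based on the DMTF base specifications (DSP0240/DSP0241), with hypothetical field values rather than anything specific to this adapter, the following C snippet packs the MCTP message-type byte and the three-byte PLDM request header that precedes Monitoring and Control (DSP0248) or Firmware Update (DSP0267) commands:

```c
/* Illustrative sketch (assumption, based on DMTF DSP0240/DSP0241): pack the
 * MCTP message-type byte plus the 3-byte PLDM request header. */
#include <stdint.h>
#include <stdio.h>

#define MCTP_MSG_TYPE_PLDM  0x01  /* MCTP message type for PLDM          */
#define PLDM_TYPE_PLATFORM  0x02  /* Platform Monitoring and Control     */
#define PLDM_TYPE_FW_UPDATE 0x05  /* Firmware Update                     */

/* Pack a PLDM request header into buf[0..3]; returns bytes written. */
static int pack_pldm_request(uint8_t *buf, uint8_t instance_id,
                             uint8_t pldm_type, uint8_t command)
{
    buf[0] = MCTP_MSG_TYPE_PLDM;          /* IC=0, message type = PLDM    */
    buf[1] = 0x80 | (instance_id & 0x1F); /* Rq=1, D=0, 5-bit instance ID */
    buf[2] = pldm_type & 0x3F;            /* header version 0, PLDM type  */
    buf[3] = command;                     /* PLDM command code            */
    return 4;
}

int main(void)
{
    uint8_t buf[4];
    /* Hypothetical request: PLDM type 0x02 (DSP0248), command code 0x11. */
    int n = pack_pldm_request(buf, 0, PLDM_TYPE_PLATFORM, 0x11);
    for (int i = 0; i < n; i++)
        printf("%02X ", buf[i]);
    printf("\n");
    return 0;
}
```

In practice the BMC's management stack builds and parses these headers; the sketch only shows the framing that the NC-SI/MCTP/PLDM interfaces listed above share.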
Adapter Card Mechanical Drawing
Bracket Mechanical Drawings and Dimensions