Mellanox, MCX653436A-HDAI, ConnectX-6 VPI Adapter Card for OCP3.0 with Host Management, HDR InfiniBand and 200GbE, Dual-Port QSFP56, PCIe 4.0 x16, Internal Lock
Features:
- HDR, HDR100, and EDR InfiniBand or up to 200GbE Ethernet connectivity per port
- Max bandwidth of 200Gb/s
- Up to 215 million messages/sec
- Sub-600 ns latency
- Block-level XTS-AES mode hardware encryption
- FIPS capable
- Advanced storage capabilities including block-level encryption and checksum offloads
- Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports
- Best-in-class packet pacing with sub-nanosecond accuracy
- PCIe Gen 3.0 and Gen 4.0 support
- RoHS compliant
- ODCC compatible
Benefits:
- Industry-leading throughput, low CPU utilization and high message rate
- Highest performance and most intelligent fabric for compute and storage infrastructures
- Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
- Host Chaining technology for economical rack design
- Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms
- Flexible programmable pipeline for network flows
- Efficient service chaining enablement
- Increased I/O consolidation efficiencies, reducing data center costs & complexity
ConnectX-6 Virtual Protocol Interconnect (VPI) cards are a groundbreaking addition to the industry-leading ConnectX series of network adapter cards. Providing two ports of HDR InfiniBand and 200GbE Ethernet connectivity, sub-600 ns latency, and 215 million messages per second, ConnectX-6 VPI cards enable the highest-performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements that further improve performance and scalability.
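As an illustrative back-of-the-envelope check (this calculation is not part of the datasheet), the headline figures can be related: sustaining 215 million messages per second at a 200 Gb/s line rate implies an average of roughly 116 bytes on the wire per message, i.e., the peak message rate applies to small messages.

```python
# Illustrative arithmetic only: relates the datasheet's headline numbers
# (200 Gb/s line rate and 215 million messages per second).
line_rate_bps = 200e9   # 200 Gb/s aggregate line rate
msg_rate = 215e6        # 215 million messages per second

bits_per_msg = line_rate_bps / msg_rate
bytes_per_msg = bits_per_msg / 8
print(f"~{bytes_per_msg:.0f} bytes per message at wire rate")
```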
HPC Environments
As the first adapter to deliver 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand, and 200Gb/s Ethernet speeds, ConnectX-6 VPI is the perfect product to lead HPC data centers toward Exascale levels of performance and scalability. ConnectX-6 supports the evolving co-design paradigm, which transforms the network into a distributed processor. With its In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency. ConnectX-6 VPI utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies, delivering low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.
Machine Learning and Big Data Environments
Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. ConnectX-6 utilizes RDMA technology and end-to-end packet-level flow control to deliver the low latency and high performance these workloads demand.
Security
The ConnectX-6 block-level encryption offers a critical innovation in network security. Data in transit is encrypted as it is stored and decrypted as it is retrieved. ConnectX-6 hardware offloads IEEE AES-XTS encryption/decryption from the CPU, reducing latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys. By performing block-storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypting disks. This allows customers the freedom to choose their preferred storage devices, including byte-addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can support Federal Information Processing Standards (FIPS) compliance. ConnectX-6 also includes a hardware Root-of-Trust (RoT) that uses an HMAC keyed with a device-unique secret, providing both secure boot and cloning protection. Delivering best-in-class device and firmware protection, ConnectX-6 also provides secure debugging capabilities without the need for physical access.
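The Root-of-Trust mechanism above verifies firmware with an HMAC keyed by a device-unique secret. The following is a conceptual software sketch of that idea only; the adapter performs this in hardware, and the key handling, digest choice (SHA-256), and function names here are illustrative assumptions, not Mellanox's implementation.

```python
import hashlib
import hmac

def verify_firmware(image: bytes, tag: bytes, device_key: bytes) -> bool:
    """Conceptual sketch: accept a firmware image only if its HMAC,
    computed with a device-unique key, matches the stored tag."""
    expected = hmac.new(device_key, image, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, tag)

# Hypothetical example values, for illustration only.
device_key = b"device-unique-secret"
firmware = b"firmware image bytes"
stored_tag = hmac.new(device_key, firmware, hashlib.sha256).digest()

print(verify_firmware(firmware, stored_tag, device_key))               # genuine image
print(verify_firmware(firmware + b"tamper", stored_tag, device_key))   # modified image
```

Because the key never leaves the device, an image copied to another adapter cannot be re-tagged, which is what provides the cloning protection mentioned above.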
Host Management includes NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe as the Baseboard Management Controller (BMC) interface, as well as PLDM for Platform Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).
Adapter Cards Mechanical Drawing
Bracket Mechanical Drawings and Dimensions