Mellanox, MCX614106A-CCAT, ConnectX-6 EN Adapter Card, 100GbE Dual-Port QSFP56, Socket Direct 2x, PCIe 3.0 x16, Tall Brackets
Features with all optional accessories:
- Up to 200GbE connectivity per port
- Maximum bandwidth of 200Gb/s
- Up to 215 million messages/sec
- Sub-0.8 µsec latency
- Block-level XTS-AES mode hardware encryption
- Optional FIPS-compliant adapter card
- Supports both 50G (PAM4) and 25G (NRZ) SerDes-based ports
- Best-in-class packet pacing with sub-nanosecond accuracy
- PCIe Gen4/Gen3 with up to x32 lanes
- RoHS compliant
- ODCC compatible
Benefits:
- Most intelligent, highest performance fabric for computing and storage infrastructures
- Cutting-edge performance in virtualized HPC networks including Network Function Virtualization (NFV)
- Advanced storage capabilities including block-level encryption and checksum offloads
- Host Chaining technology for economical rack design
- Smart interconnect for x86, Power, Arm, GPU, and FPGA-based platforms
- Flexible programmable pipeline for network flows
- Enabler for efficient service chaining
- Efficient I/O consolidation, lowering data center costs and complexity
ConnectX-6 EN provides up to two ports of 200GbE connectivity, sub-0.8 µsec latency, and 215 million messages per second, enabling the highest-performance and most flexible solution for the most demanding data center applications. ConnectX-6 is a groundbreaking addition to the Mellanox ConnectX series of industry-leading adapter cards. In addition to all the existing innovative features of past versions, ConnectX-6 offers a number of enhancements that further improve performance and scalability, such as support for 200/100/50/40/25/10/1 GbE speeds and PCIe Gen 4.0. Moreover, ConnectX-6 Ethernet cards can connect up to 32 lanes of PCIe to achieve 200Gb/s of bandwidth, even on Gen 3.0 PCIe systems.
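The claim that 32 Gen 3.0 lanes are needed to reach 200Gb/s can be checked with back-of-the-envelope arithmetic (a sketch using the published PCIe per-lane rates and 128b/130b line encoding; the function name is illustrative):

```python
def pcie_goodput_gbps(lane_rate_gt: float, lanes: int) -> float:
    """Usable bandwidth in Gb/s for a PCIe link after 128b/130b
    line encoding (used by both Gen 3.0 and Gen 4.0)."""
    return lane_rate_gt * lanes * 128 / 130

# Gen 3.0 runs at 8 GT/s per lane, Gen 4.0 at 16 GT/s per lane.
gen3_x16 = pcie_goodput_gbps(8, 16)   # ~126 Gb/s -- short of 200GbE
gen3_x32 = pcie_goodput_gbps(8, 32)   # ~252 Gb/s -- covers 200GbE
gen4_x16 = pcie_goodput_gbps(16, 16)  # ~252 Gb/s -- Gen 4.0 needs only x16
```

This is why the Socket Direct configuration spreads the card across two x16 slots on Gen 3.0 systems, while a single x16 slot suffices on Gen 4.0.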
Security:
Block-level encryption offers a critical innovation in network security. The ConnectX-6 hardware offloads IEEE AES-XTS encryption/decryption from the CPU, reducing latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys. By performing block-storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypting disks. This gives customers the freedom to choose their preferred storage device, including byte-addressable and NVDIMM devices that traditionally do not provide encryption. ConnectX-6 can also support Federal Information Processing Standards (FIPS) compliance.
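To illustrate what the adapter computes in hardware, here is a minimal software sketch of XTS-AES block encryption using the Python `cryptography` package (the sector number and block size are hypothetical; the adapter performs the same transform inline, without CPU involvement):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                   # AES-256-XTS uses two concatenated 256-bit keys
sector = 7                             # hypothetical logical block number
tweak = sector.to_bytes(16, "little")  # per-block tweak ties ciphertext to its location

def xts_encrypt(key: bytes, tweak: bytes, data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(data) + enc.finalize()

def xts_decrypt(key: bytes, tweak: bytes, data: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(data) + dec.finalize()

block = os.urandom(4096)               # one 4 KiB storage block
restored = xts_decrypt(key, tweak, xts_encrypt(key, tweak, block))
```

Because the tweak changes per block, identical plaintext written to different sectors produces different ciphertext, which is what makes XTS mode suitable for block storage.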
Machine Learning and Big Data Environments:
Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200GbE throughput, ConnectX-6 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require. It utilizes RDMA technology to deliver low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.
Mellanox Socket Direct®:
Mellanox Socket Direct technology improves the performance of dual-socket servers by enabling each of their CPUs to access the network through a dedicated PCIe interface. As the connection from each CPU to the network bypasses the QPI (UPI) and the second CPU, Socket Direct reduces latency and CPU utilization. Moreover, each CPU handles only its own traffic (and not that of the second CPU), thus optimizing CPU utilization even further. Mellanox Socket Direct also enables GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Mellanox Socket Direct technology is enabled by the main card that houses the ConnectX-6 adapter card and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a 350 mm-long harness. The two PCIe x16 slots may also be connected to the same CPU; in this case, the main advantage of the technology lies in delivering 200GbE to servers with PCIe Gen3-only support.
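Socket Direct's benefit depends on knowing which NUMA node each PCIe interface sits on so applications can be pinned accordingly. On Linux, this is exposed through sysfs; the sketch below reads it (the sysfs layout is standard Linux ABI, but the interface name used in the example is hypothetical):

```python
from pathlib import Path

def numa_node_for_iface(iface: str, sysfs_root: str = "/sys/class/net") -> int:
    """Return the NUMA node of the PCIe device behind a network
    interface, or -1 if the kernel does not report one."""
    node_file = Path(sysfs_root) / iface / "device" / "numa_node"
    try:
        return int(node_file.read_text().strip())
    except (OSError, ValueError):
        return -1

# Example usage (interface name is hypothetical):
#   node = numa_node_for_iface("ens1f0")
# A process can then be bound to that node, e.g. with
# `numactl --cpunodebind=<node> --membind=<node> ./app`,
# so its traffic takes the local PCIe path.
```

With Socket Direct, the two halves of the card report different NUMA nodes, and pinning each workload to its local node is what avoids the inter-socket (QPI/UPI) hop.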
Mechanical Drawing
Bracket Mechanical Drawing
Auxiliary PCIe Connection Card Tall Bracket