NVIDIA, MCX515A-CCUT, ConnectX-5 EN Adapter Card, 100GbE Single-Port QSFP28, PCIe 3.0 x16, UEFI Enabled x86/ARM, Tall Bracket
Features:
- Tag matching and rendezvous offloads
- Adaptive routing on reliable transport
- Burst buffer offloads for background checkpointing
- NVMe over Fabric offloads
- Backend switch elimination by host chaining
- Embedded PCIe switch
- Enhanced vSwitch/vRouter offloads
- Flexible pipeline
- RoCE for overlay networks
- PCIe Gen 3.0 x16 host interface
- RoHS compliant
- ODCC compatible
Benefits:
- Up to 100Gb/s connectivity per port
- Industry-leading throughput, low latency, low CPU utilization, and high message rate
- Innovative rack design for storage and Machine Learning based on Host Chaining technology
- Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
- Advanced storage capabilities including NVMe over Fabric offloads
- Intelligent network adapter supporting flexible pipeline programmability
- Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
- Enabler for efficient service chaining capabilities
- Efficient I/O consolidation, lowering data center costs and complexity
ConnectX-5 EN adapter cards provide high-performance, flexible solutions with up to two ports of 100GbE connectivity and 750 ns latency; the MCX515A-CCUT model provides a single QSFP28 port. For storage workloads, ConnectX-5 delivers a range of innovative accelerations, such as Signature Handover (T10-DIF) in hardware, an embedded PCIe switch, and NVMe over Fabric target offloads. ConnectX-5 adapter cards also bring advanced Open vSwitch offloads to telecommunications and cloud data centers, driving extremely high packet rates and throughput with reduced CPU resource consumption and thus boosting data center infrastructure efficiency. ConnectX-5 cards also offer advanced NVIDIA Multi-Host® and NVIDIA Socket Direct® technologies.
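To give a concrete sense of how the Open vSwitch offloads are consumed on a Linux host, the sketch below shows the standard upstream knobs: putting the adapter's eSwitch into switchdev mode with devlink, enabling TC offload on the port, and turning on OVS's hw-offload option. The interface name and PCI address are illustrative assumptions; the exact procedure for a given driver release is covered in NVIDIA's documentation, not here.

```python
# Minimal sketch: enabling OVS hardware offload for a ConnectX-5 port on Linux.
# The interface name and PCI address are illustrative assumptions; verify the
# exact procedure for your driver release against NVIDIA's documentation.
import subprocess

IFACE = "enp1s0f0"          # assumed netdev name of the ConnectX-5 port
PCI_ADDR = "0000:01:00.0"   # assumed PCI address of the adapter

def run(*cmd):
    """Run a command and fail loudly on a non-zero exit code."""
    subprocess.run(cmd, check=True)

# Put the adapter's embedded switch (eSwitch) into switchdev mode so that
# OVS datapath flows can be represented as offloadable TC rules.
run("devlink", "dev", "eswitch", "set", f"pci/{PCI_ADDR}", "mode", "switchdev")

# Allow TC flower rules on this port to be pushed down to NIC hardware.
run("ethtool", "-K", IFACE, "hw-tc-offload", "on")

# Tell Open vSwitch to offload datapath flows via TC; an OVS service restart
# (service name varies by distribution) is normally required afterwards.
run("ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true")

print(f"Hardware offload requested for OVS flows on {IFACE}.")
```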
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing more virtual appliances, virtual machines (VMs), and tenants to co-exist on the same hardware. Supported vSwitch/vRouter offload functions include:
- Overlay Networks (e.g., VXLAN, NVGRE, MPLS, GENEVE, and NSH) header encapsulation & decapsulation.
- Stateless offloads of inner packets and packet header rewrite, enabling NAT functionality and more.
- Flexible and programmable parser and match-action tables, which enable hardware offloads for future protocols.
- SR-IOV technology, providing dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server; a host-side enablement sketch follows this list.
- Network Function Virtualization (NFV), enabling a VM to be used as a virtual appliance. Full datapath operation offloads, hairpin hardware capability, and service chaining enable data to be handled by the virtual appliance with minimal CPU utilization.
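As referenced in the SR-IOV item above, the following is a minimal sketch of how virtual functions are typically created on a Linux host through the kernel's standard sysfs interface. The interface name and VF count are illustrative assumptions, and SR-IOV is assumed to be already enabled in the adapter firmware and the server BIOS.

```python
# Minimal sketch: creating SR-IOV virtual functions (VFs) for a ConnectX-5 port
# through the standard Linux sysfs interface. The interface name and VF count
# are illustrative assumptions; SR-IOV must already be enabled in the adapter
# firmware and in the server BIOS.
IFACE = "enp1s0f0"   # assumed netdev name of the ConnectX-5 port
NUM_VFS = 4          # assumed number of VFs to expose to VMs

dev = f"/sys/class/net/{IFACE}/device"

# Check how many VFs the device reports it can support.
with open(f"{dev}/sriov_totalvfs") as f:
    total = int(f.read().strip())
if NUM_VFS > total:
    raise SystemExit(f"Requested {NUM_VFS} VFs, but the device supports at most {total}")

# Reset the VF count to 0 first; the kernel rejects direct changes between
# two non-zero values.
for value in ("0", str(NUM_VFS)):
    with open(f"{dev}/sriov_numvfs", "w") as f:
        f.write(value)

print(f"Enabled {NUM_VFS} VFs on {IFACE}; they can now be assigned to VMs.")
```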
Storage Environments
NVMe storage devices are gaining popularity because they offer very fast storage access. The evolving NVMe over Fabric (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, improving performance and reducing latency.
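For context, the sketch below sets up a plain Linux NVMe-oF target over RDMA through the kernel's nvmet configfs interface, which is the software path that the ConnectX-5 target offload accelerates. The NQN, device path, and address are illustrative assumptions, and enabling the hardware offload itself is driver- and firmware-specific, so it is not shown here.

```python
# Minimal sketch: exporting a local NVMe namespace over RDMA with the standard
# Linux nvmet configfs interface (the software path that the ConnectX-5 target
# offload accelerates). Assumes configfs is mounted and the nvmet and
# nvmet-rdma kernel modules are loaded. All names, paths, and addresses below
# are illustrative assumptions.
import os

NVMET = "/sys/kernel/config/nvmet"
SUBSYS = f"{NVMET}/subsystems/nqn.2014-08.org.example:testsubsys"  # assumed NQN
NS = f"{SUBSYS}/namespaces/1"
PORT = f"{NVMET}/ports/1"
DEVICE = "/dev/nvme0n1"   # assumed local NVMe namespace to export
ADDR = "192.168.1.10"     # assumed IP of a RoCE-capable ConnectX-5 port

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Create the subsystem and attach the backing namespace.
os.makedirs(NS, exist_ok=True)
write(f"{SUBSYS}/attr_allow_any_host", "1")   # demo only: accept any initiator
write(f"{NS}/device_path", DEVICE)
write(f"{NS}/enable", "1")

# Create an RDMA (RoCE) port on the standard NVMe-oF service ID.
os.makedirs(PORT, exist_ok=True)
write(f"{PORT}/addr_trtype", "rdma")
write(f"{PORT}/addr_adrfam", "ipv4")
write(f"{PORT}/addr_traddr", ADDR)
write(f"{PORT}/addr_trsvcid", "4420")

# Expose the subsystem through the port.
os.symlink(SUBSYS, f"{PORT}/subsystems/" + os.path.basename(SUBSYS))
print("NVMe-oF target exported over RDMA at", ADDR)
```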
The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. ConnectX-5 enables an innovative storage rack design, Host Chaining, which allows different servers to interconnect directly without involving the Top-of-Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center's total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.
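Host Chaining is a firmware-level setting; the sketch below wraps NVIDIA's mlxconfig utility (part of the MFT package) to enable it, assuming the adapter sits at the PCI address shown. The HOST_CHAINING_MODE parameter name and value follow NVIDIA's host-chaining documentation for ConnectX-5 but should be verified against the installed firmware release.

```python
# Minimal sketch: enabling Host Chaining in ConnectX-5 firmware using NVIDIA's
# mlxconfig utility (part of the MFT package). The PCI address is an
# assumption, and the HOST_CHAINING_MODE parameter/value should be verified
# against the installed firmware release; a reboot (or firmware reset) is
# required before the setting takes effect.
import subprocess

DEVICE = "0000:01:00.0"   # assumed PCI address of the ConnectX-5 adapter

# Inspect the current firmware configuration.
subprocess.run(["mlxconfig", "-d", DEVICE, "query"], check=True)

# Enable host chaining so traffic can hop server-to-server without a ToR
# switch ("-y" answers the confirmation prompt automatically).
subprocess.run(["mlxconfig", "-d", DEVICE, "-y", "set", "HOST_CHAINING_MODE=1"],
               check=True)

print("Host Chaining enabled; reboot the server to apply the firmware change.")
```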
Bracket Mechanical Drawing