# NVIDIA ConnectX InfiniBand adapters

<mark style="color:blue;">**NVIDIA ConnectX InfiniBand adapters**</mark> are high-performance networking solutions designed for demanding workloads in high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud infrastructures.

They provide ultra-fast, low-latency connectivity between servers, storage systems, and other devices in data centre environments.

<figure><img src="https://1839612753-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FpV8SlQaC976K9PPsjApL%2Fuploads%2FrlNoHt4S2zQf6j8r4Mlw%2Fimage.png?alt=media&#x26;token=c9b4d820-4d34-4724-aa1e-410bed6cc0f4" alt=""><figcaption><p>NVIDIA ConnectX-7 InfiniBand Adapter</p></figcaption></figure>

### <mark style="color:purple;">Technology and Features</mark>

<mark style="color:blue;">**InfiniBand:**</mark> ConnectX adapters <mark style="color:yellow;">**support the InfiniBand networking protocol**</mark>, which offers high bandwidth, low latency, and efficient computing through [<mark style="color:blue;">**RDMA (Remote Direct Memory Access)**</mark>](https://training.continuumlabs.ai/infrastructure/data-and-memory/remote-direct-memory-access-rdma) capabilities.

<mark style="color:blue;">**Bandwidth:**</mark> The latest generation, <mark style="color:blue;">**ConnectX-7**</mark>, supports data rates of up to <mark style="color:yellow;">400 Gbps</mark> (NDR), while the previous generation, ConnectX-6, supports up to 200 Gbps (HDR).
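These headline figures follow directly from the InfiniBand link structure: a standard adapter port aggregates four lanes, and each generation doubles the per-lane data rate. A minimal sketch of that arithmetic (the rates below are the nominal per-lane figures for each generation):

```python
# Nominal per-lane data rates (Gbps) for recent InfiniBand generations.
# A standard adapter port aggregates 4 lanes (a "4x" link).
PER_LANE_GBPS = {
    "EDR": 25,   # 4x = 100 Gbps
    "HDR": 50,   # 4x = 200 Gbps (ConnectX-6)
    "NDR": 100,  # 4x = 400 Gbps (ConnectX-7)
}

def port_rate_gbps(generation: str, lanes: int = 4) -> int:
    """Aggregate data rate of an InfiniBand port with the given lane count."""
    return PER_LANE_GBPS[generation] * lanes

print(port_rate_gbps("NDR"))  # 400
print(port_rate_gbps("HDR"))  # 200
```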

<mark style="color:blue;">**RDMA:**</mark> RDMA allows direct memory access between servers without involving the CPU, reducing latency and CPU overhead. This enables efficient data movement and higher application performance.

<mark style="color:blue;">**In-Network Computing:**</mark> ConnectX adapters feature NVIDIA In-Network Computing engines that offload and accelerate network processing tasks, freeing up CPU resources for application processing.

<mark style="color:blue;">**NVIDIA GPUDirect:**</mark> GPUDirect technology enables direct communication between GPUs and ConnectX adapters, minimising latency and maximising bandwidth for GPU-to-GPU communication.

<mark style="color:blue;">**NVIDIA SHARP:**</mark> The Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology accelerates collective operations, improving performance and scalability in large-scale deployments.
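The benefit of in-network aggregation can be illustrated with a simplified latency model (an illustrative sketch, not a performance claim): a host-based tree allreduce needs a number of communication steps that grows with the logarithm of the host count, whereas with switch-side aggregation each host sends its data once and receives the reduced result once, regardless of scale.

```python
import math

def tree_allreduce_steps(n_hosts: int) -> int:
    # Host-based binary-tree allreduce: reduce up the tree, then
    # broadcast the result back down -- 2 * log2(N) steps.
    return 2 * math.ceil(math.log2(n_hosts))

def sharp_allreduce_steps(n_hosts: int) -> int:
    # Simplified model of in-network aggregation: each host sends once and
    # receives once; the switch fabric performs the reduction, so the step
    # count no longer grows with the number of hosts.
    return 2

for n in (8, 64, 512):
    print(n, tree_allreduce_steps(n), sharp_allreduce_steps(n))
```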

<mark style="color:blue;">**Virtualisation:**</mark> ConnectX adapters support Single Root I/O Virtualisation (SR-IOV), enabling efficient sharing of adapter resources among multiple virtual machines while maintaining isolation and quality of service.
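On Linux, SR-IOV virtual functions are typically created through the standard PCI sysfs interface: writing a count to the device's `sriov_numvfs` file (as root) instantiates that many VFs. A minimal sketch of locating that knob; the PCI address used below is a hypothetical example:

```python
from pathlib import Path

def sriov_vf_path(pci_addr: str) -> Path:
    """sysfs file that controls the number of SR-IOV virtual functions
    for a PCI device (standard Linux PCI sysfs ABI)."""
    return Path(f"/sys/bus/pci/devices/{pci_addr}/sriov_numvfs")

# Writing a count to this file (as root) creates that many VFs, e.g.:
#   echo 4 > /sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs
print(sriov_vf_path("0000:3b:00.0"))
```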

### <mark style="color:purple;">Practical Applications</mark>

<mark style="color:blue;">**High-Performance Computing (HPC):**</mark> ConnectX adapters are widely used in HPC clusters for scientific simulations, data analysis, and other compute-intensive workloads that require high-speed, low-latency interconnects.

<mark style="color:blue;">**Machine Learning and AI:**</mark> The high bandwidth and low latency of ConnectX adapters make them suitable for training large-scale AI models and enabling fast data transfer between GPUs and storage systems.

<mark style="color:blue;">**Clustered Databases and Data Warehousing:**</mark> ConnectX adapters accelerate data access and enable efficient communication between nodes in clustered database environments, improving query performance and data processing speeds.

<mark style="color:blue;">**Accelerated Storage:**</mark> ConnectX adapters support various storage protocols like NVMe over Fabrics (NVMe-oF), enabling high-performance access to networked storage systems.

<mark style="color:blue;">**Financial Services:**</mark> Low-latency connectivity provided by ConnectX adapters is crucial for financial applications like high-frequency trading and real-time risk analysis.

In summary, NVIDIA ConnectX InfiniBand adapters are high-performance networking solutions that leverage InfiniBand technology, RDMA, and advanced acceleration features to provide fast, efficient, and scalable connectivity for demanding workloads in HPC, AI, and data centre environments.

They offer industry-leading performance, innovative features like In-Network Computing and GPUDirect, and come in various form factors to suit different system requirements.
