Lambda Hyperplane 8-H100
The Lambda Hyperplane 8-H100 is a high-performance computing system designed specifically for demanding AI workloads.
GPUs (Graphics Processing Units)
The system features 8 NVIDIA H100 SXM5 GPUs, each with 80GB of VRAM (Video Random Access Memory).
The H100 is NVIDIA's flagship Hopper-generation data-center GPU, delivering exceptional performance for AI training and inference.
The GPUs are interconnected using NVLink and NVSwitch technology, enabling high-speed communication and data transfer between the GPUs.
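As a quick sanity check, the sketch below (assuming a CUDA-enabled PyTorch build, as shipped with Lambda Stack) enumerates the GPUs and queries peer-to-peer access, the capability that NVLink/NVSwitch provides between every GPU pair in this system:

```python
# Minimal sketch: enumerate the H100s and check peer-to-peer (P2P) access.
# Assumes PyTorch built with CUDA support.
import torch

if torch.cuda.is_available():
    n = torch.cuda.device_count()  # expect 8 on a Hyperplane 8-H100
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
    if n > 1:
        # True when a direct interconnect path (e.g., NVLink) exists.
        print("GPU0 <-> GPU1 P2P:", torch.cuda.can_device_access_peer(0, 1))
```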
CPU (Central Processing Unit)
The system can be configured with either 2 AMD EPYC or 2 Intel Xeon processors.
AMD EPYC 9004 (Genoa) Series Processors offer up to 192 cores in total, while Intel Xeon 4th Gen (Sapphire Rapids) Scalable Processors provide up to 112 cores.
The CPUs handle general-purpose computing tasks and orchestrate the overall system operations.
AMD EPYC 9004 (Genoa) Series Processors
These processors offer up to 192 cores in total, spread across two CPU sockets.
Each core is capable of executing multiple threads simultaneously, enabling highly parallel processing.
The high core count allows for efficient handling of multi-threaded applications and workloads that can be parallelized.
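As an illustration, the sketch below uses only the Python standard library to fan a CPU-bound task out across every available core; simulate is a hypothetical stand-in for real work such as preprocessing a data shard:

```python
# Minimal sketch: process-level parallelism across all CPU cores.
import os
from concurrent.futures import ProcessPoolExecutor

def simulate(chunk_id: int) -> int:
    # Hypothetical CPU-bound task standing in for real work.
    return sum(i * i for i in range(1_000_000))

if __name__ == "__main__":
    n_workers = os.cpu_count()  # up to 192 on a dual-socket Genoa system
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(simulate, range(n_workers)))
    print(f"Completed {len(results)} tasks across {n_workers} workers")
```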
Intel Xeon 4th Gen (Sapphire Rapids) Scalable Processors
These processors provide up to 112 cores in total, also distributed across two CPU sockets.
Similar to AMD EPYC, each core can execute multiple threads concurrently, facilitating parallel processing.
The lower core count compared to AMD EPYC is still substantial and suitable for demanding workloads.
To put the power of these processors into perspective, let's compare them to a typical consumer-grade CPU:
A high-end consumer desktop CPU, such as an Intel Core i9 or AMD Ryzen 9, typically has 8 to 16 cores.
The AMD EPYC 9004 Series Processors in the Lambda Hyperplane 8-H100 offer up to 12 times as many cores as a high-end consumer CPU.
The Intel Xeon 4th Gen Scalable Processors provide up to 7 times as many cores as a high-end consumer CPU.
This significant increase in core count allows the Lambda Hyperplane 8-H100 to handle much more complex and demanding workloads, such as large-scale simulations, data analysis, and machine learning tasks, with improved performance and efficiency compared to consumer-grade CPUs.
Memory
The system supports 256 GB to 8 TB of DDR5 memory, with a minimum of 2 TB recommended for optimal performance.
DDR5 is the latest generation of high-speed memory, offering faster data transfer rates and improved power efficiency compared to previous generations.
This memory attaches to the CPUs through the system's 32 DIMM (Dual In-line Memory Module) slots and is shared between the two CPU sockets, so both processors can access the entire pool.
This shared-memory architecture enables efficient communication and data sharing between cores when collaborating on complex workloads.
With up to 8 TB of capacity, the system can hold memory-intensive applications and large datasets entirely in RAM, minimizing storage bottlenecks.
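One caveat: on a dual-socket machine the shared pool is physically split across NUMA (Non-Uniform Memory Access) nodes, one per CPU, and local accesses are faster than remote ones. The read-only sketch below (Linux-specific; the /sys paths are standard on Ubuntu and RHEL) lists each node's memory:

```python
# Minimal sketch: list NUMA nodes and the memory attached to each socket.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "meminfo")) as f:
        total = next(line for line in f if "MemTotal" in line)
    print(f"{os.path.basename(node)}: {total.split()[-2]} kB")
```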
Storage
The system provides various storage options:
OS storage: up to 2x 3.84 TB NVMe drives in onboard M.2 slots for fast boot and operating-system storage.
Extra storage: up to 16x 30.72 TB NVMe drives in hot-swap U.2 bays for high-capacity, high-speed data storage.
Storage Jargon
NVMe (Non-Volatile Memory Express): NVMe is a high-performance storage interface designed for solid-state drives (SSDs). It provides faster data transfer speeds and lower latency compared to traditional storage interfaces like SATA.
M.2: M.2 is a form factor for SSDs that allows for compact and high-speed storage devices. M.2 SSDs are typically installed directly onto the motherboard using dedicated M.2 slots.
U.2: U.2 is a form factor for SSDs that is designed for hot-swappable drive bays. U.2 SSDs are larger than M.2 SSDs and are commonly used in server and enterprise storage solutions.
In this system's specifications:
OS storage: The system provides up to 2x 3.84 TB NVMe onboard M.2 slots for installing SSDs to store the operating system and other critical files. These M.2 slots allow for fast boot times and quick access to frequently used data.
Extra storage: The system offers up to 16x 30.72 TB NVMe hot-swap U.2 bays for additional storage capacity. These U.2 bays enable the installation of large-capacity SSDs that can be easily swapped or replaced without shutting down the system.
Recommended Storage for AI, LLM Workloads, and Vector Databases
NVMe SSDs
NVMe SSDs provide high-speed data access and low latency, making them ideal for AI and LLM workloads that require fast data retrieval and storage.
They offer significantly faster read and write speeds compared to traditional hard disk drives (HDDs) or SATA SSDs.
NVMe SSDs are well-suited for storing and accessing large datasets, model checkpoints, and intermediate results during training and inference.
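As a hedged illustration, the sketch below times a checkpoint write with PyTorch; the /data/checkpoints mount point is hypothetical and stands in for wherever the NVMe array is mounted:

```python
# Minimal sketch: write a model checkpoint to NVMe storage and time it.
# The /data/checkpoints path is hypothetical.
import os
import time
import torch
import torch.nn as nn

os.makedirs("/data/checkpoints", exist_ok=True)
model = nn.Linear(4096, 4096)  # stand-in for a real model
start = time.perf_counter()
torch.save(model.state_dict(), "/data/checkpoints/model.pt")
print(f"Checkpoint written in {time.perf_counter() - start:.2f}s")
```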
High-Capacity SSDs
AI and LLM workloads often involve large datasets and require substantial storage capacity.
High-capacity SSDs, such as the 30.72 TB NVMe U.2 drives mentioned in the specifications, provide ample storage space for storing extensive datasets, pre-trained models, and generated outputs.
Having high-capacity SSDs allows for efficient data storage and retrieval without the need for frequent data migration or external storage solutions.
Parallel Storage
AI and LLM workloads can benefit from parallel storage architectures, where multiple SSDs are used simultaneously to improve data throughput and performance.
Configuring the storage system with multiple NVMe SSDs in a RAID (Redundant Array of Independent Disks) configuration can enhance read and write speeds and provide data redundancy.
Parallel storage enables faster data access and processing, which is crucial for training large models and performing data-intensive tasks.
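A minimal sketch of the idea: issuing reads from multiple threads keeps NVMe queue depths full; the /data/dataset shard layout is hypothetical:

```python
# Minimal sketch: read dataset shards in parallel to raise aggregate throughput.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def read_shard(path: Path) -> int:
    return len(path.read_bytes())

shards = sorted(Path("/data/dataset").glob("shard-*.bin"))  # hypothetical layout
with ThreadPoolExecutor(max_workers=16) as pool:
    sizes = list(pool.map(read_shard, shards))
print(f"Read {len(sizes)} shards, {sum(sizes) / 1024**3:.1f} GiB total")
```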
Scalable Storage
As AI and LLM workloads grow in size and complexity, having scalable storage options is important.
The ability to add or expand storage capacity using hot-swappable U.2 drives allows for easy scalability without disrupting ongoing workloads.
Scalable storage ensures that the system can accommodate increasing data storage requirements as the AI and LLM projects evolve.
When configuring storage for AI, LLM workloads, and vector databases, it's recommended to prioritize NVMe SSDs for their high performance and low latency.
Using a combination of onboard M.2 slots for the operating system and critical files, along with high-capacity U.2 drives for bulk data storage, provides a balanced and efficient storage setup.
Additionally, considering factors such as data throughput, parallel storage architectures, and scalability will help optimize storage performance and ensure that the storage system can keep pace with the demands of AI and LLM workloads.
Networking
The system includes a dual-port 10GbE RJ45 network interface by default, which provides high-speed general-purpose network connectivity.
For enhanced storage connectivity, the system can be configured with up to 1 NVIDIA ConnectX-7 InfiniBand/Ethernet 400 Gb/s PCIe NIC (dual-port).
For GPU clustering and high-speed interconnect between multiple systems, up to 8 NVIDIA ConnectX-7 InfiniBand 400 Gb/s PCIe NICs (single-port) can be added.
Networking Default: 1x Dual Port 10GbE RJ45 (modular)
This refers to the standard network connectivity provided by the system.
It includes one dual-port 10 Gigabit Ethernet (10GbE) network interface card (NIC) with RJ45 connectors.
"Dual Port" means the NIC has two independent Ethernet ports, allowing for connection to two separate networks or for link aggregation to increase bandwidth.
"10GbE" indicates that each port supports data transfer speeds of up to 10 gigabits per second (Gbps).
"RJ45" refers to the type of connector used for the Ethernet cables.
"Modular" suggests that the NIC can be replaced or upgraded if needed.
Storage: Up to 1x NVIDIA ConnectX-7 InfiniBand/Ethernet 400 Gb/s PCIe NIC (dual-port)
This option is specifically for high-speed storage connectivity.
It allows for the installation of up to one NVIDIA ConnectX-7 NIC.
The ConnectX-7 NIC supports both InfiniBand and Ethernet protocols.
InfiniBand is a high-performance, low-latency networking standard commonly used in high-performance computing (HPC) and storage systems.
The NIC provides data transfer speeds of up to 400 gigabits per second (Gbps) per port.
It connects to the system through a PCIe (Peripheral Component Interconnect Express) slot, which is a high-speed interface for connecting hardware components.
The "dual-port" configuration means the NIC has two independent ports, allowing for redundancy or increased bandwidth.
GPU clustering: Up to 8x NVIDIA ConnectX-7 InfiniBand 400 Gb/s PCIe NICs (single-port)
This option is designed for GPU clustering, which involves connecting multiple GPU-equipped systems together to form a high-performance computing cluster.
It allows for the installation of up to eight NVIDIA ConnectX-7 NICs.
Each NIC supports InfiniBand connectivity with data transfer speeds of up to 400 Gbps.
The NICs are connected to the system through PCIe slots.
"Single-port" means each NIC has one InfiniBand port.
By using multiple high-speed InfiniBand NICs, the system can establish low-latency, high-bandwidth connections with other nodes in the GPU cluster.
This enables efficient communication and data exchange between the GPUs across different systems, facilitating parallel processing and accelerating AI and HPC workloads.
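For a concrete picture, here is a minimal PyTorch sketch of multi-node initialization with the NCCL backend, which transparently uses the InfiniBand fabric when it is present. It assumes launch via torchrun, which sets the rank and rendezvous environment variables:

```python
# Minimal sketch: initialize a multi-node process group and run one collective.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")    # NCCL rides on InfiniBand if available
    local_rank = int(os.environ["LOCAL_RANK"]) # set by torchrun
    torch.cuda.set_device(local_rank)
    t = torch.ones(1, device="cuda")
    dist.all_reduce(t)                         # sums across every GPU in the cluster
    print(f"rank {dist.get_rank()}: all-reduce result = {t.item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like torchrun --nnodes=2 --nproc_per_node=8 script.py on each node, the all-reduce exercises every GPU and the interconnect between the systems.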
Options and Considerations:
The default 10GbE connectivity provides standard Ethernet networking capabilities for general-purpose network communication.
For high-speed storage, the NVIDIA ConnectX-7 NIC with InfiniBand/Ethernet support enables fast data transfer between the system and storage devices or storage networks.
When building a GPU cluster, the multiple NVIDIA ConnectX-7 InfiniBand NICs allow for high-bandwidth, low-latency interconnects between the nodes, enabling efficient distributed computing and parallel processing across the cluster.
The choice of network configuration depends on the specific requirements of the workload, such as the need for high-speed storage access or GPU clustering capabilities.
It's important to consider the compatibility of the network components with the rest of the system and the infrastructure, as well as the availability of switches, cables, and other network equipment that support the chosen protocols and speeds.
Proper network configuration and optimization are crucial for maximizing the performance and scalability of AI, machine learning, and HPC workloads.
By leveraging high-speed networking technologies like InfiniBand and Ethernet, along with advanced NICs like the NVIDIA ConnectX-7, the system can provide the necessary network connectivity and bandwidth to support demanding workloads and enable efficient data transfer and communication between components.
Software
The system comes pre-installed with either the Ubuntu or Red Hat Enterprise Linux (RHEL) operating system.
Lambda Stack, a software suite optimized for AI workloads, is included, comprising CUDA, cuDNN, TensorFlow, PyTorch, and Keras.
These software components provide the necessary libraries, frameworks, and tools for developing and running AI applications.
Lambda Stack
In addition to the operating system, the Lambda Hyperplane 8-H100 system comes with Lambda Stack, a comprehensive software suite optimized for AI workloads. Lambda Stack includes the following key components:
CUDA (Compute Unified Device Architecture):
CUDA is a parallel computing platform and programming model developed by NVIDIA.
It allows developers to harness the power of NVIDIA GPUs for general-purpose computing and accelerate computationally intensive tasks.
CUDA provides a set of libraries, tools, and compiler directives that enable efficient utilization of GPU resources.
It is essential for developing and running GPU-accelerated applications, particularly in the field of AI and machine learning.
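A minimal sketch (assuming the CUDA-enabled PyTorch that Lambda Stack ships) verifying that CUDA is usable from Python:

```python
# Minimal sketch: confirm CUDA is usable and run one GPU kernel.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA version (PyTorch build):", torch.version.cuda)
x = torch.randn(8192, 8192, device="cuda")
y = x @ x                    # matrix multiply executed by CUDA kernels
torch.cuda.synchronize()     # wait for the asynchronous kernel to finish
print("matmul done on", torch.cuda.get_device_name(0))
```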
cuDNN (CUDA Deep Neural Network library):
cuDNN is a GPU-accelerated library of primitives for deep neural networks.
It provides highly tuned implementations of standard deep learning operations, such as convolution, pooling, normalization, and activation functions.
cuDNN is designed to deliver high-performance and efficient execution of deep learning algorithms on NVIDIA GPUs.
It is widely used by deep learning frameworks to accelerate training and inference tasks.
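A short sketch of cuDNN at work underneath PyTorch, including its autotuner, which benchmarks convolution algorithms the first time each input shape is seen:

```python
# Minimal sketch: check the cuDNN build and run a convolution through it.
import torch

print("cuDNN enabled:", torch.backends.cudnn.enabled)
print("cuDNN version:", torch.backends.cudnn.version())
torch.backends.cudnn.benchmark = True  # autotune conv algorithms per input shape

conv = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda()
out = conv(torch.randn(32, 3, 224, 224, device="cuda"))  # dispatched to cuDNN
print("conv output shape:", tuple(out.shape))
```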
TensorFlow:
TensorFlow is an open-source machine learning framework developed by Google.
It provides a comprehensive ecosystem for building and deploying machine learning models, particularly deep neural networks.
TensorFlow offers a flexible and scalable architecture that allows for both high-level API usage and low-level customization.
It supports a wide range of hardware platforms, including CPUs, GPUs, and TPUs (Tensor Processing Units).
TensorFlow has a large community and extensive documentation, making it a popular choice for AI and machine learning projects.
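A minimal TensorFlow sketch that confirms GPU visibility and runs one operation, which TensorFlow places on a GPU automatically when one is available:

```python
# Minimal sketch: list visible GPUs and run a matrix multiply.
import tensorflow as tf

print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))
a = tf.random.normal((1024, 1024))
b = tf.matmul(a, a)  # placed on a GPU automatically if one is available
print("result shape:", b.shape)
```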
PyTorch:
PyTorch is an open-source machine learning library developed by Meta AI (formerly Facebook AI Research).
It emphasizes simplicity, flexibility, and dynamic computation graphs, making it intuitive and easy to use.
PyTorch provides a Pythonic interface and integrates seamlessly with the Python ecosystem.
It offers strong support for GPU acceleration and is known for its efficient memory usage and fast execution.
PyTorch is widely used in research and rapid prototyping of AI models.
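A minimal sketch of that eager, Pythonic style: one training step of a toy classifier on the GPU, with the computation graph built dynamically during the forward pass:

```python
# Minimal sketch: one training step of a toy model on the GPU.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(64, 784, device="cuda")
y = torch.randint(0, 10, (64,), device="cuda")

loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()  # gradients flow through the dynamically built graph
opt.step()
print("loss:", loss.item())
```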
Keras:
Keras is a high-level neural networks API written in Python.
It is designed to be user-friendly, modular, and extensible, allowing for quick and easy prototyping of deep learning models.
Keras provides a simple and intuitive interface for building and training neural networks.
It originally ran on top of multiple backends, such as TensorFlow or Theano, and is now tightly integrated with TensorFlow as tf.keras, abstracting away the low-level details.
Keras is popular among beginners and experienced practitioners alike due to its simplicity and ease of use.
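A minimal sketch of that high-level API, using the Keras bundled with TensorFlow: a small classifier is defined, compiled, and summarized in a few lines:

```python
# Minimal sketch: define and compile a small classifier with the Keras API.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```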
By providing a pre-installed operating system and the Lambda Stack software suite, the Lambda Hyperplane 8-H100 system offers a ready-to-use environment for AI and machine learning development.
The combination of Ubuntu or RHEL with CUDA, cuDNN, TensorFlow, PyTorch, and Keras enables developers to quickly start building and training AI models without the need for extensive setup and configuration.
The Lambda Hyperplane 8-H100 is designed to deliver exceptional performance for AI workloads, with the ability to deliver up to 32 petaFLOPS of FP8 performance. This means it can process vast amounts of data and perform complex calculations at an astonishing speed, enabling faster training and inference of AI models.
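For reference, this figure follows from the per-GPU specification: each H100 SXM delivers roughly 3,958 teraFLOPS of FP8 compute (with sparsity), and 8 × 3,958 TFLOPS ≈ 31.7 petaFLOPS, which rounds to the quoted 32 petaFLOPS.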
Moreover, the system can be expanded into a multi-node Lambda Echelon cluster by adding more servers interconnected with high-speed NVIDIA (formerly Mellanox) InfiniBand or Ethernet network interfaces. This scalability lets you accommodate growing computational needs as your AI projects evolve.
Lambda takes care of the system engineering, optimization, and support, providing you with a turnkey solution that is ready to use for your specific AI workloads. With enterprise-class support and optional on-site parts replacement services, you can focus on your research and development while Lambda handles the hardware and system administration aspects.
In summary, the Lambda Hyperplane 8-H100 is a powerful and highly optimized system designed to accelerate AI workloads, featuring cutting-edge hardware components, high-speed interconnects, and a software stack tailored for AI development and deployment. It offers scalability, performance, and support to meet the demanding requirements of today's AI projects.