The NVIDIA A10 Tensor Core GPU delivers a versatile platform for mainstream enterprise workloads such as AI inference, virtual workstations, graphics, and video. Built on the NVIDIA Ampere architecture, it combines second-generation RT Cores and third-generation Tensor Cores (with TF32 support) with an end-to-end software and hardware solution stack, so mainstream inference and visual computing applications can be addressed rapidly while making optimal use of GPU compute resources.
Scope of delivery:
• NVIDIA A10
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and HPC
to tackle the world’s toughest computing challenges.
As the engine of the NVIDIA data center platform, A100 can efficiently scale up to thousands of GPUs or, using new
Multi-Instance GPU (MIG) technology, can be partitioned into seven isolated GPU instances
to accelerate workloads of all sizes. A100’s third-generation Tensor Core technology now accelerates more levels of
precision for diverse workloads, speeding time to insight as well as time to market.
Scope of delivery:
• NVIDIA A100
• Power adapter cable: 8-pin male (graphics card) to 2x 8-pin female (power supply)
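The Multi-Instance GPU partitioning mentioned above is configured on the host before workloads are scheduled. As a minimal sketch, assuming an A100 with a MIG-capable driver and root privileges (profile IDs vary by GPU model and driver version), the standard nvidia-smi tool can be driven from Python roughly as follows:

    import subprocess

    def nvidia_smi(*args):
        """Run an nvidia-smi command and return its text output (raises on failure)."""
        return subprocess.run(["nvidia-smi", *args], check=True,
                              capture_output=True, text=True).stdout

    # Enable MIG mode on GPU 0 (root required; depending on the driver, a GPU
    # reset or reboot may be needed before the mode change takes effect).
    print(nvidia_smi("-i", "0", "-mig", "1"))

    # List the GPU instance profiles the driver offers on this card,
    # e.g. up to seven 1g slices on an A100.
    print(nvidia_smi("mig", "-lgip"))

    # Example only: create GPU instances (plus default compute instances, -C)
    # from a profile ID taken from the listing above. The ID "19" is a
    # placeholder and differs between GPUs and driver versions.
    # print(nvidia_smi("mig", "-cgi", "19,19", "-C"))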
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and HPC
to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center
platform, A100 can efficiently scale up to thousands of GPUs or, using new Multi-Instance GPU (MIG) technology, can
be partitioned into seven isolated GPU instances to accelerate workloads of all sizes.
A100’s third-generation Tensor Core technology now accelerates more levels of precision for diverse workloads,
speeding time to insight as well as time to market.
Scope of delivery:
• NVIDIA A100 80GB
• Power adapter cable: 8-pin male (graphics card) to 2x 8-pin female (power supply)
The NVIDIA A2 Tensor Core GPU provides entry-level inference with low power, a small footprint, and high performance
for NVIDIA AI at the edge. Featuring a low-profile PCIe Gen4 card and a low 40-60 watt (W) configurable thermal
design power (TDP) capability, the A2 brings adaptable inference acceleration to any server.
Scope of delivery:
• NVIDIA A2
• Low-profile (SFF) bracket, mounted
• Additional full-height (ATX) bracket
The NVIDIA A30 Tensor Core GPU delivers a versatile platform for mainstream enterprise workloads such as AI inference, training, and HPC. With TF32 and FP64 Tensor Core support, as well as an end-to-end software and hardware solution stack, the A30 ensures that mainstream AI training and HPC applications can be addressed rapidly. Multi-Instance GPU (MIG) ensures quality of service (QoS) with secure, hardware-partitioned, right-sized GPUs for diverse users across all of these workloads, optimally utilizing GPU compute resources.
Scope of delivery:
• NVIDIA A30
• Power adapter cable: 8-pin male (graphics card) to 2x 8-pin female (power supply)
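The TF32 support called out above is exercised through the regular CUDA libraries, and most frameworks expose a switch for it. A minimal sketch, assuming PyTorch is the framework in use (these are standard PyTorch flags, not anything specific to this product):

    import torch

    # Allow TF32 Tensor Core math for matmuls and cuDNN convolutions on
    # Ampere-class GPUs such as the A30. Inputs and outputs stay FP32; only
    # the internal multiplication precision drops to TF32.
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    if torch.cuda.is_available():
        a = torch.randn(4096, 4096, device="cuda")
        b = torch.randn(4096, 4096, device="cuda")
        c = a @ b  # runs on Tensor Cores in TF32 once the flags are set
        print(c.shape, c.dtype)  # torch.Size([4096, 4096]) torch.float32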
NVIDIA® A40 delivers the data center-based solution that designers, engineers, artists, and scientists need to meet today's challenges. Built on the NVIDIA Ampere architecture, the A40 combines the latest generation of RT Cores, Tensor Cores, and CUDA® Cores with 48GB of graphics memory for unprecedented graphics, rendering, compute, and AI performance. From powerful virtual workstations accessible from anywhere to dedicated render nodes, the A40 is built to tackle the most demanding visual computing workloads from the data center.
Scope of delivery:
• NVIDIA A40
• Power adapter cable: 8-pin male (graphics card) to 2x 8-pin female (power supply)
The NVIDIA L40 delivers unprecedented visual computing performance for the data center, providing next-generation graphics, compute, and AI capabilities for GPU-accelerated applications. Based on the Ada Lovelace GPU architecture, the L40 features third-generation RT Cores that enhance real-time ray-tracing capabilities, and fourth-generation Tensor Cores with support for the FP8 data format that deliver over a petaflop of inference performance. These new capabilities are combined with the latest generation of CUDA Cores and 48 GB of graphics memory to accelerate visual computing workloads, from high-performance virtual workstation instances to large-scale digital twins in NVIDIA Omniverse.
Scope of delivery:
• NVIDIA L40
The NVIDIA L40S delivers unprecedented visual computing performance for the data center, providing next-generation graphics, compute, and AI capabilities for GPU-accelerated applications. Based on the Ada Lovelace GPU architecture, the L40S features third-generation RT Cores that enhance real-time ray-tracing capabilities, and fourth-generation Tensor Cores with support for the FP8 data format that deliver over a petaflop of inference performance. These new capabilities are combined with the latest generation of CUDA Cores and 48 GB of graphics memory to accelerate visual computing workloads, from high-performance virtual workstation instances to large-scale digital twins in NVIDIA Omniverse.
Scope of delivery:
• NVIDIA L40S
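The FP8 Tensor Core support mentioned for the L40 and L40S is typically reached through NVIDIA's Transformer Engine library rather than hand-written kernels. A minimal sketch, assuming Transformer Engine and PyTorch are installed and an FP8-capable GPU is present; the layer sizes here are arbitrary placeholders:

    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common.recipe import DelayedScaling, Format

    # A single Transformer Engine linear layer whose matmuls can run in FP8
    # on Ada-class Tensor Cores (L40/L40S).
    layer = te.Linear(1024, 1024, bias=True).cuda()
    x = torch.randn(32, 1024, device="cuda")

    # FP8 execution is scoped by fp8_autocast; the recipe controls the
    # scaling-factor bookkeeping (delayed scaling, E4M3/E5M2 hybrid format).
    recipe = DelayedScaling(fp8_format=Format.HYBRID)
    with te.fp8_autocast(enabled=True, fp8_recipe=recipe):
        y = layer(x)

    print(y.shape)  # torch.Size([32, 1024])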