NVIDIA A100 Tensor Core GPUs
petaFLOPs of AI Performance
DGX A100: Modular All-in-one AI System
AI is the Future
Artificial intelligence (AI) augments human capabilities to underpin the future, whether that means accelerating medical research for faster diagnoses or raising product quality on a production line. Such advances in collaborative intelligence are only possible with equal advances and breakthroughs in IT architecture and systems.
Fully-optimised & Ready for AI
The NVIDIA DGX A100 is powered by the world’s fastest accelerator, built on the NVIDIA Ampere architecture. It is the modular all-in-one system for all AI workloads in a modern data centre cluster, supporting training, inference and analytics, and it delivers a major leap in AI performance with an unrivalled 5 petaFLOPS. The NVIDIA DGX™ A100 integrates eight NVIDIA A100 Tensor Core GPUs and includes a fully optimised software stack with end-to-end machine-learning acceleration for data analytics, training and inference.
3 Benefits of NVIDIA DGX A100
1. Enable Elastic & Scalable AI Capability
Having an NVIDIA DGX A100 provides a solid foundation for your organisation to scale AI processes and infrastructure. With Multi-Instance GPU (MIG), each A100 GPU can be partitioned into up to seven independent instances, for a total of 56 instances, providing multiple users with separate GPU resources. Thanks to this feature, workloads can run in parallel to maximise utilisation.
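As a rough sketch, MIG partitioning on a DGX A100 is typically driven through the standard `nvidia-smi` MIG interface. The profile ID used below (19, the smallest 1g profile on A100) is an assumption and varies by driver and GPU memory size, so verify it against the `-lgip` output on your system:

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles supported by this GPU and driver
nvidia-smi mig -lgip

# Create seven 1g GPU instances on GPU 0, each with its own
# compute instance (-C). Confirm the profile ID (19 here) against
# the -lgip output for your driver before running.
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting partitions
nvidia-smi mig -lgi
```

Each instance then appears as a separate device with its own MIG UUID, which individual users or containers can target, for example via `CUDA_VISIBLE_DEVICES`.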
2. Robust Security for Enterprise AI
The NVIDIA DGX A100 has one of the best security postures on the market, providing your organisation with best-in-class protection against cyber threats. It takes a multi-layered approach to securing all major hardware and software components, with physical security reinforced across the baseboard management controller (BMC), CPU board and self-encrypted drives.
3. Industry-leading Performance
The NVIDIA DGX A100 delivers unprecedented performance compared to previous NVIDIA products or the competition. Integrated within an NVIDIA DGX A100 are 8 NVIDIA A100 Tensor Core GPUs and fully optimised NVIDIA CUDA-X software. The A100 GPUs set a new industry benchmark with Tensor Float 32 (TF32), providing up to 20 times higher FLOPS for AI than the previous generation's FP32.
NVIDIA DGX A100 Specifications
- Performance: 5 petaFLOPS AI, 10 petaOPS INT8
- GPU Memory: 640 GB total
- CPU: Dual AMD Rome 7742, 128 cores total, 2.25 GHz (base), 3.4 GHz (max boost)
- System Memory: up to 2 terabytes (TB)
- Networking:
  - 8x Single-Port Mellanox ConnectX-6 VPI, 200 Gb/s HDR InfiniBand
  - 1x Dual-Port Mellanox ConnectX-6 VPI, 10/25/50/100/200 Gb/s Ethernet
Unrivalled AI Experience & Expertise
Robovision is a specialised AI-enabling company focused on the agri-food, manufacturing and healthcare sectors. Since 2017, we have helped original equipment (OE) integrators and commercial organisations design and implement strategic and ethical uses of AI. Robovision has a team of NVIDIA DGXperts who are here to provide guidance and expertise to help you accelerate time-to-value of your NVIDIA DGX A100.
All facts and figures are taken from official NVIDIA sources provided to Robovision as an “NVIDIA Preferred Partner”. NVIDIA, the NVIDIA logo, DGX, DGX A100, DGX POD, DGX Station, DGX SuperPOD, and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation. Features, pricing, availability, and specifications are all subject to change without notice by NVIDIA.
All content has been proofread and approved by NVIDIA. 12 April 2022