Optimize your machine learning and LLM workloads with HPE enterprise-grade dedicated servers powered by AMD Instinct GPUs.
Purpose-built for AI, machine learning, and large language model workloads, our AMD Instinct GPU servers deliver exceptional compute density, pairing HBM3 memory and the CDNA 3 architecture with Zen 4 CPU cores for demanding AI/HPC applications.
High-performance accelerators designed for exascale HPC and AI workloads
| Specification | MI210 | L40S | A100 | H100 |
|---|---|---|---|---|
| GPU Architecture | AMD CDNA 2 | NVIDIA Ada Lovelace | NVIDIA Ampere | NVIDIA Hopper |
| GPU Memory | 64GB HBM2e | 48GB GDDR6 | 80GB HBM2e | 80GB HBM3 |
| GPU Memory Bandwidth | 1638 GB/s | 864 GB/s | 1935 GB/s | 3352 GB/s |
| FP32 | 22.63 TFLOPS | 91.6 TFLOPS | 19.5 TFLOPS | 51 TFLOPS |
| TF32 Tensor Core | N/A | 366 TFLOPS | 312 TFLOPS | 756 TFLOPS |
| FP16/BF16 Tensor Core | 181 TFLOPS | 733 TFLOPS | 624 TFLOPS | 1513 TFLOPS |
| Power | Up to 300W | Up to 350W | Up to 400W | Up to 350W |
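As a rough sizing aid, the GPU memory figures in the table above can be used to estimate whether a model's weights fit on a single card. The sketch below is illustrative only: the 2-bytes-per-parameter figure assumes FP16/BF16 weights, and the 10% headroom allowance for runtime overhead is an assumption, not a vendor guideline.

```python
# Rough check: do a model's FP16/BF16 weights fit in a single GPU's memory?
# Memory capacities come from the spec table above; bytes-per-parameter and
# the headroom fraction are illustrative assumptions.

GPU_MEMORY_GB = {"MI210": 64, "L40S": 48, "A100": 80, "H100": 80}

def fits_on_gpu(params_billion: float, gpu: str,
                bytes_per_param: int = 2, headroom: float = 0.10) -> bool:
    """Return True if the weights alone fit, leaving `headroom` free."""
    weights_gb = params_billion * bytes_per_param  # 1e9 params * B/param / 1e9
    return weights_gb <= GPU_MEMORY_GB[gpu] * (1 - headroom)

# A 13B-parameter model in FP16 needs ~26 GB of weights:
print(fits_on_gpu(13, "MI210"))   # True: 26 GB fits in 64 GB HBM2e
print(fits_on_gpu(70, "MI210"))   # False: 140 GB of weights does not fit
```

Note that activations, KV cache, and framework overhead consume additional memory, so real deployments need more headroom than weights alone suggest.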
Common questions about deploying and managing your AMD Instinct GPU-accelerated servers for AI, HPC, and machine learning workloads.