About ASPIRE2A+

The ASPIRE2A+ supercomputer is powered by an NVIDIA DGX SuperPOD™ system consisting of 40 DGX H100 nodes. It is an AI powerhouse that features the groundbreaking NVIDIA H100 Tensor Core GPU. The system includes a high-performance parallel file system solution providing 2.5 PB of usable NVMe capacity for scratch storage and 27.5 PB of usable capacity for home storage on DDN Lustre storage, connected over an NVIDIA InfiniBand high-speed interconnect.

NVIDIA H100 Compute Node
  • Dual Intel® Xeon® Platinum 8480C Processors (112 cores)
  • 8 NVIDIA H100 GPUs
  • 2 TB System Memory
  • 30 TB NVMe drives

Lustre SSD-Based Storage Pools (NVIDIA InfiniBand)
  • Lustre – 2.5 PB
  • Scratch/Buffer Filesystem

Lustre HDD-Based Storage Pools
  • Lustre – 27.5 PB
  • Project/Home Filesystem

  • Each compute node has 4 dual-port InfiniBand NDR 400 Gb/s OSFP ports connecting into the InfiniBand NDR leaf switches.
  • The fabric uses a rail-optimized, full fat-tree topology, maximizing the InfiniBand network capability of the DGX H100.
  • InfiniBand offers point-to-point bidirectional serial links.
  • The DGX H100 systems are connected to the InfiniBand fabric to access DDN storage.
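As a back-of-the-envelope sketch, the port counts quoted above imply the following per-node compute fabric bandwidth (assuming all eight 400 Gb/s links on the four dual-port OSFP cages are used for compute traffic):

```python
# Per-node InfiniBand bandwidth implied by the port counts above.
osfp_ports = 4        # OSFP cages per node
links_per_port = 2    # each cage is dual-port
link_rate_gbps = 400  # NDR link rate in Gb/s

total_gbps = osfp_ports * links_per_port * link_rate_gbps
print(f"Per-node fabric bandwidth: {total_gbps} Gb/s "
      f"({total_gbps / 8:.0f} GB/s per direction)")
# → Per-node fabric bandwidth: 3200 Gb/s (400 GB/s per direction)
```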

The NSCC HPC system comprises the following:

ASPIRE2A+

  • The NVIDIA DGX SuperPOD™ system is an AI powerhouse that features the groundbreaking NVIDIA H100 Tensor Core GPU.
  • It consists of 40 DGX H100 compute nodes, and each node has eight NVIDIA H100 GPUs.
  • Each GPU card has 80 GB of memory (8 x 80 GB = 640 GB per node).
  • Each compute node has dual Intel® Xeon® Platinum 8480C processors with 112 cores in total.
  • Please refer to this link for more info – https://help.nscc.sg/wp-content/uploads/2024/06/ASPIRE2A-General-Quickstart-Guide-1.pdf
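The per-node figures above scale up to system totals with simple arithmetic (a sketch using only the numbers quoted in this list):

```python
# System-wide totals derived from the per-node figures above.
nodes = 40            # DGX H100 compute nodes
gpus_per_node = 8     # NVIDIA H100 GPUs per node
gpu_mem_gb = 80       # memory per GPU card in GB
cores_per_node = 112  # CPU cores per node

total_gpus = nodes * gpus_per_node          # 320 GPUs
total_gpu_mem_gb = total_gpus * gpu_mem_gb  # 25,600 GB of GPU memory
total_cores = nodes * cores_per_node        # 4,480 CPU cores

print(total_gpus, total_gpu_mem_gb, total_cores)
# → 320 25600 4480
```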

Other Features of ASPIRE2A+

  • All compute nodes are connected to the Lustre parallel file system.
  • Air-cooled racks for storage, login and compute nodes.
  • Altair Workload Manager/Scheduler
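Since the system uses the Altair (PBS Professional) workload manager, a GPU job is typically submitted via `qsub` with a script along these lines. This is a generic sketch only: the queue name, project code and exact resource-selection syntax below are placeholders, not ASPIRE2A+ defaults, so check the NSCC quick-start guide linked above for the site-specific values.

```shell
#!/bin/bash
# Sketch of a PBS job script for a single-GPU job.
# Queue and project names below are placeholders, not site defaults.
#PBS -N h100-test                # job name
#PBS -l select=1:ngpus=1         # one chunk with one GPU (syntax may differ per site)
#PBS -l walltime=01:00:00        # one-hour wall-clock limit
#PBS -q ai                       # placeholder queue name
#PBS -P my_project_code          # placeholder project/account code

cd "$PBS_O_WORKDIR"              # run from the directory the job was submitted from
nvidia-smi                       # confirm GPU visibility on the allocated node
```

Submission would then be `qsub job.sh`, and `qstat -u $USER` shows the job's state in the queue.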

Technical Specifications

Compute Node Architecture

DGX H100 SuperPOD System

GPU                   NVIDIA H100 Tensor Core GPU
Processor             Dual Intel® Xeon® Platinum 8480C Processors
Total cores           2 x 56 cores = 112 cores
Total memory          2 TB
GPUs                  8 x NVIDIA H100 GPUs per node
Interconnect          NVIDIA InfiniBand
Compute performance   32 petaFLOPS FP8
Storage capacity      8 x 3.84 TB NVMe drives

Choose Your HPC System
Before Proceeding: