
Software/Hardware Information


NSCC’s core computing capabilities deliver the performance and flexibility needed to support a multifaceted array of HPC applications. The computational components are balanced with high-speed storage subsystems and a low-latency, high-speed interconnect that delivers the highest levels of performance across a broad spectrum of applications.

The NSCC HPC system comprises the following:


  • Fujitsu PRIMERGY servers providing a total compute capacity of up to 1 PFlops, with 128 GB of memory per node (5.33 GB per core)
  • GPU compute capability with NVIDIA Tesla K40

AI System

  • 6 units of NVIDIA DGX-1 Deep Learning System, each with the following specifications:
    • 2 x Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
    • 512GB ECC memory
    • 8TB usable local SSD
    • 8 x NVIDIA V100 (16GB) GPUs
    • 4 x IB EDR NIC
  • Access to the AI System is via ASPIRE 1


Storage

  • DDN storage solution with high-speed, flexible management of data across three tiers
  • Integrated migration of Tier 0 data to the HSM storage area
  • GPFS & Lustre file systems
  • I/O bandwidth of up to 500 GB/s

InfiniBand Interconnection

  • Latest EDR interconnect technology for high-throughput, low-latency inter-node communication, with a full bisection bandwidth, non-blocking communication path between all nodes and storage

Other Features

  • Robust and complete HPC software stack with leading-edge management capabilities
  • Web-based front end for job submission, management, and results viewing
  • Remote extended RDMA network connections to the A*STAR, NUS, and NTU sites

Technical Specs

Compute Node Architecture

Processor                    Intel E5-2690 v3 (2.60 GHz, 12 cores)
Total cores                  31,392
Total memory                 229 TB
GPUs                         NVIDIA Tesla K40
Interconnect                 InfiniBand EDR
Compute performance          1 PFlops
Storage architecture         DDN Infinite Memory Engine
Storage capacity             13 PB
Tier 0 storage performance   500 GB/s
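The per-core memory figure quoted earlier follows directly from the node configuration; a minimal sketch of the arithmetic, assuming dual-socket nodes (implied by the 24 cores per node that 128 GB / 5.33 GB per core gives):

```python
# Reproduce the quoted 5.33 GB-per-core figure from the node specs above.
# Assumption: dual-socket nodes, i.e. 2 x 12-core Intel E5-2690 v3 per node.
node_memory_gb = 128                 # memory per compute node
cores_per_node = 2 * 12              # two 12-core CPUs per node (assumed)
per_core_gb = node_memory_gb / cores_per_node
print(f"{per_core_gb:.2f} GB per core")  # -> 5.33 GB per core
```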