Top Supercomputers - India

January 2021

The following is the ranking of systems by Rmax (LINPACK benchmark performance). For systems with a high Rpeak (theoretical peak performance) but no reported Rmax, see Rpeak-only Systems.
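The Rmax and Rpeak columns are related by the usual peak-performance arithmetic (cores x clock x FLOPs per cycle) and by the HPL efficiency Rmax/Rpeak. The short Python sketch below illustrates this with round numbers resembling entry 15 in the list; the 32 FLOPs-per-cycle figure is an assumption for AVX-512 CPUs with FMA and varies by processor, so it is illustrative only.

# Minimal sketch (not part of the list itself): how the Rpeak column relates to
# core counts and clock speeds, and how HPL efficiency (Rmax/Rpeak) is derived.
# The 32 double-precision FLOPs/cycle/core figure is an assumption and varies by CPU SKU.

def rpeak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 32) -> float:
    """Theoretical peak: cores x clock (GHz) x FLOPs per cycle, in TFlops."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

def hpl_efficiency(rmax: float, rpeak: float) -> float:
    """Fraction of theoretical peak delivered on the LINPACK (HPL) run."""
    return rmax / rpeak

# Round numbers resembling entry 15 below (256 dual-socket nodes of 20-core,
# 2.0 GHz CPUs, Rmax 437 TFlops):
peak = rpeak_tflops(cores=256 * 40, clock_ghz=2.0)   # ~655 TFlops
print(f"Rpeak ~ {peak:.0f} TFlops, HPL efficiency ~ {hpl_efficiency(437.0, peak):.2f}")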

 

Rank  Site  System  Cores/Processor Sockets/Nodes  Rmax (TFlops)  Rpeak (TFlops)
1 Centre for Development of Advanced Computing (C-DAC), Pune The supercomputer PARAM Siddhi-AI consists of 42 NVIDIA DGX A100 nodes, each dual-socket populated with AMD EPYC 7742 @ 2.25 GHz 64-core CPUs (128 cores/node), with Mellanox HDR InfiniBand.
OEM: ATOS under NSM initiative
41664/42/128  4619  5267.14
2 Indian Institute of Tropical Meteorology (IITM), Pune Cray XC-40 class system with 3315 CPU-only nodes (Intel Xeon Broadwell E5-2695 v4), running the Cray Linux Environment as OS and connected by the Cray Aries interconnect.
OEM: Cray Inc., Bidder: Cray Supercomputers India Pvt. Ltd.
119232/ /3312  3763.9  4006.19
3 National Centre for Medium Range Weather Forecasting (NCMRWF), Noida Cray XC-40 class system with 2322 CPU-only nodes (Intel Xeon Broadwell E5-2695 v4), running the Cray Linux Environment as OS and connected by the Cray Aries interconnect.
OEM: Cray Inc., Bidder: Cray Supercomputers India Pvt. Ltd.
83592/ /2322  2570.4  2808.7
4 Indian Institute of Technology (IIT), Kharagpur The supercomputer PARAM Shakti is based on a heterogeneous and hybrid configuration of Intel Xeon Skylake (6148, 20C, 2.4 GHz) processors and NVIDIA Tesla V100 GPUs. The system was designed and implemented by the HPC Technologies team, Centre for Development of Advanced Computing (C-DAC), with a total peak computing capacity of 1.66 PFLOPS (CPU+GPU). The system uses the Lustre parallel file system (primary storage of 1.5 PiB usable with 50 GB/s write throughput; archival storage of 500 TiB based on GPFS).
OEM: Atos India Pvt Ltd., Bidder: Atos India Pvt Ltd.
17280/2/432  935  1290.2
5 Sankhya Sutra Labs Ltd Cray CS500 (RUDRA) 256-node compute cluster based on AMD Rome, with AMD EPYC 7742 64-core processors (128 cores per node), 2 master nodes, 2 login nodes, and Mellanox HDR 100 interconnect.
OEM: HPE/Cray
32768/../256  922.48  1179.65
6 National Atmospheric Research Laboratory (NARL), Tirupati The supercomputer PARAM AMBAR is based on a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 with NVLink, with Intel OPA 100 Gbps 100% non-blocking interconnect. The system was designed and implemented by the HPC Technologies team, Centre for Development of Advanced Computing (C-DAC), with the C-DAC software stack.
OEM: Tyrone, Bidder: M/s.Netweb Technologies.
18816/2/196  919.61  1384.85
7 Supercomputer Education and Research Centre (SERC), Indian Institute of Science (IISc), Bangalore Cray XC-40 cluster (1468 CPU-only nodes with dual twelve-core Intel Xeon E5-2680 v3 @ 2.5 GHz; 48 Xeon Phi nodes with a single twelve-core Intel Xeon E5-2695 v2 @ 2.4 GHz plus Intel Xeon Phi 5120D; 44 GPU nodes with a single twelve-core Intel Xeon E5-2695 v2 @ 2.4 GHz plus NVIDIA K40 GPUs) w/ Cray Aries interconnect. HPL was run on only 1296 CPU-only nodes.
OEM: Cray Inc., Bidder: Cray Supercomputers India Pvt. Ltd.
36336C + 2880ICO + 126720G/
3028C + 48ICO + 44G/
1560C + 48ICO + 44G
Rmax: 901.51 (CPU-only)  Rpeak: 1244.00 (CPU-only)
8 Indian Institute of Technology (IIT), Kanpur PARAM SANGANAK is a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 with NVLink, with Mellanox HDR interconnect and the C-DAC HPC software stack, having a peak capacity of 1.6 PFlops. The system was designed and implemented by the HPC Technologies Group, C-DAC, at IIT Kanpur under the National Supercomputing Mission (NSM).
OEM: ATOS Under NSM initiative
13834/ /292  851.3  1350
9 Indian Institute of Tropical Meteorology, Pune IBM/Lenovo System X iDataPlex dx360 M4, Xeon E5-2670 8C 2.6 GHz, InfiniBand FDR
OEM: IBM/Lenovo, Bidder: IBM India Pvt. Ltd.
38016/ /  719.2  790.7
10 Indian Lattice Gauge Theory Initiative, Tata Institute of Fundamental Research (TIFR), Hyderabad Cray XC-30 cluster (Intel Xeon E5-2680 v2 @ 2.8 GHz ten-core CPU and 2688-core NVIDIA Kepler K20x GPU nodes) w/Aries Interconnect
OEM: Cray Inc., Bidder: Cray Supercomputers India Pvt. Ltd.
4760C + 1279488G/
476C + 476G/
476C + 476G
558.7 730.00
11 Indian Institute of Technology, Delhi HP Proliant XL230a Gen9 and XL250a Gen9 based cluster (Intel Xeon E5-2680v3 @ 2.5 GHz dual twelve-core CPU and dual 2880-core NVIDIA Kepler K40 GPU nodes) w/Infiniband
OEM: HP, Bidder: HP
10032C + 927360G/
836C + 322G/
418C + 161G
524.40 861.74
12 Indian Institute of Science Education and Research, Pune The PARAM Brahma DCLC-based HPC cluster has 179 Intel Xeon Platinum 8268 nodes (Bull Sequana CPU-only compute blades manufactured in India), constituting a total of 8592 CPU cores, with the C-DAC software stack, a 1 PiB storage system, and HDR100 InfiniBand interconnect. The system was designed and implemented by the HPC Technologies team, Centre for Development of Advanced Computing (C-DAC).
OEM: Atos India Pvt Ltd., Bidder: Atos India Pvt Ltd.
8592/2/162  472.8  721.16
13 Indian Institute of Technology (IIT BHU), Varanasi PARAM Shivay is the first supercomputer installed by C-DAC under the National Supercomputing Mission (NSM). It is based on a heterogeneous and hybrid configuration of the BullSequana X400 series with 192 CPU nodes, 20 high-memory nodes and 11 GPU nodes with 2x NVIDIA V100 accelerators, each node having 2x Intel Xeon Skylake 6148 20-core processors, with the C-DAC software stack. The primary network is Mellanox 100 Gbps EDR InfiniBand in a 100% fully non-blocking fat-tree architecture. Lustre-based primary storage of 750 TiB usable with 25 GB/s write throughput; archival storage of 250 TiB based on GPFS.
OEM: Atos India Pvt Ltd., Bidder: Atos India Pvt Ltd.
210/2/8400  456.9  645.12
14 Institute for Plasma Research (IPR), Gujarat The ANTYA HPC cluster consists of Acer servers with 236 CPU-only nodes, 22 GPU nodes, 2 high-memory nodes and 1 visualization node, running Red Hat Enterprise Linux and connected by a 100 Gbps Enhanced Data Rate (EDR) InfiniBand (IB) network. Total storage of 2 PB using the GPFS file system.
OEM: ACER, Bidder: Locuz Solutions Pvt Ltd
9440/2/236 + 22 GPU nodes
Rmax: 446.9 (CPU-only) / 181.3 (GPU-only)
Rpeak: 724.992 (CPU-only) / 274.384 (GPU-only)
15 Aeronautical Development Agency, DRDO, Bangalore 256-node cluster with dual Intel Skylake 20-core processors at 2.0 GHz w/ EDR InfiniBand
OEM: M/s Dell , Bidder: M/s Dell
259/2/20  437  655
16 Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bangalore PARAM YUKTI is a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 with NVLink, with Mellanox HDR interconnect and the C-DAC HPC software stack, having a peak computing capacity of 838 TFlops. The system was designed and implemented by the HPC Technologies Group, C-DAC, at JNCASR Bangalore under the National Supercomputing Mission (NSM).
OEM: Atos Under NSM initiative
6528/../..  405.6  641.43
17 Center for Development of Advanced Computing (C-DAC), Pune Param Yuva2 System (Intel Xeon E5-2670 (Sandy Bridge) @ 2.6 GHz dual octo-core CPU and Intel Xeon Phi 5110P dual 60-core co-processor nodes) w/Infiniband FDR
OEM: Intel, Bidder: Netweb Technologies
3536C + 26520 ICO/
442C + 442 ICO/
221C + 221 ICO
388.44 520.40
18 Indian Institute of Technology (IITB), Bombay Cray XC-50 class system with 202 regular CPU nodes (Intel Xeon Skylake 6148 @ 2.4 GHz), connected by the Cray Aries interconnect.
OEM: Cray Inc., Bidder: Cray Supercomputers India Pvt. Ltd.
8000/400/200  384.83  620.544
19 CSIR Fourth Paradigm Institute (CSIR-4PI), Bangalore HP Cluster Platform 3000 BL460c (Dual Intel Xeon 2.6 GHz eight core E5-2670 w/Infiniband FDR)
OEM: HP, Bidder: HCL Infosystems Ltd.
17408/2176/1088  334.38  362.09
20 National Centre For Medium Range Weather Forecasting, Noida IBM/Lenovo System X  iDataPlex DX360M4, Xeon E5-2670 8C 2.6 GHz, Infiniband FDR
OEM: IBM/Lenovo, Bidder: IBM India Pvt. Ltd.
16832/ /  318.4  350.1
21 Indian Institute of Technology, Kanpur Cluster Platform SL230s Gen8, Intel Xeon E5-2670v2 10C 2.5 GHz, Infiniband FDR.
OEM: HP, Bidder: HP
15360/1536/768  295.25  307.2
22 Inter-University Centre for Astronomy and Astrophysics (IUCAA) Twenty-four Apollo 2000 chassis, each with 4 ProLiant XL170r Gen10 compute nodes with dual CPUs (Intel Xeon Gold 6248 @ 2.50 GHz, 20 cores), running Scientific Linux 7.5 and connected by 50 G Ethernet.
OEM: HPE, Bidder: Concept Info. Tech. (I) Pvt. Ltd.
3840/ /  192.5  307.2
23 Vikram Sarabhai Space Centre (VSSC), Indian Space Research Organization (ISRO), Trivandrum HP and Wipro heterogeneous cluster (dual Intel Xeon E5530 quad-core and dual Intel Xeon E5645 hexa-core CPUs, and dual 448-core NVIDIA C2070 and dual 512-core M2090 GPUs) w/ 40 Gbps InfiniBand network
OEM: HP&Wipro, Bidder: HP&Wipro
3100C + 301824G/
640C + 640G/
320C + 320G
188.7 394.76
24 Oil and Natural Gas Corporation (ONGC), Dehra Dun Fujitsu cluster (Intel Xeon E5-2670v3 @ 2.3 GHz dual twelve-core processor nodes) w/Infiniband
OEM: Fujitsu, Bidder: Wipro
6240/520/260  182.09  229.63
25 MRPU, Chennai Supermicro servers (2 x Ivy Bridge 12C E5-2697 v2 @ 2.7 GHz)
OEM: Boston, Bidder: Locuz Solutions Pvt. Ltd.
9600/800/400  175.0  207.0
26 National Institute of Science Education and Research (NISER), Bhubaneswar An HPC cluster of 64 new compute nodes, 32 old compute nodes and 1 GPU node with 4 NVIDIA Tesla K40c cards, named KALINGA, for the School of Physical Sciences. It has 205 TB of storage with the Lustre parallel file system and is interconnected with a 216-port EDR 100 Gb/s InfiniBand Smart Director Switch populated with 108 ports.
OEM: Netweb Technologies India Pvt Ltd
3616/../..  161.42  249.37
27 Indian Institute of Technology (IIT), Jammu The supercomputer Agastya is based on a hybrid configuration of Intel Xeon Cascade Lake Gold 6248 20C, 2.5 GHz processors and NVIDIA Tesla V100 GPUs. The system was designed with a total peak computing capacity of 256 TFLOPS on CPU and 56 TFLOPS on GPU. The system uses the Lustre parallel file system with 795 TB usable storage and 30 GB/s write throughput. Internetworking is via a high-speed, low-latency Mellanox HDR100 100 Gbps InfiniBand architecture using a fat-tree topology configured as a 100% non-blocking network.
OEM: Netweb Technologies India Pvt Ltd
3200/../..
Rmax: 161 (CPU-only) / 46 (GPU-only)
Rpeak: 256 (CPU-only) / 56 (GPU-only)
28 Indian Institute of Science Education and Research (IISER), Thiruvananthapuram HPE Apollo k6000 chassis with 88 ProLiant XL230k Gen10 compute nodes, each with dual Intel Xeon Gold 6132 processors (2464 cores total at 2.6 GHz), 3 ProLiant DL380 Gen10 GPU nodes (NVIDIA P100, 4 cards total), Intel Omni-Path 100 Gb interconnect, and 500 TB Lustre FS DDN storage
OEM: HPE, Bidder: SS Information Systems Pvt Ltd
2464/2/88  141.3  205
29 Tata Consultancy Services Pvt. Ltd., Pune HP Cluster Platform 3000 BL460c (Dual Intel Xeon 3 GHz quad core E5365 (Clovertown) w/Infiniband 4X DDR)
OEM: HP, Bidder: HP
14400/3600/1800  132.80  172.60
30 Indian Institute of Technology, Guwahati Acer AW170hF3 cluster (Intel Xeon E5-2680 v3 dual twelve-core compute nodes)
OEM: Acer, Bidder: Locuz Solutions Pvt. Ltd.
3888/324/162  122.71  155.52
31  The Thematic Unit of Excellence on Computational Materials Science (TUE-CMS), Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore HPC Cluster (Intel Xeon E5-2630V4 @2.2 GHz dual twelve-core compute nodes) w/Infiniband
OEM: IBM/Lenovo, Bidder: HCL
3168C + 46080G + 492ICO/
264C + 16G + 8ICO/
132 C + 8G + 4ICO
96.89 150.83
32 Inter-University Centre for Astronomy and Astrophysics (IUCAA) The Pegasus supercomputer is 159.72 TF, built as a 60-node cluster using Intel Gold 6142 processors, with 1 PB of PFS storage at 15 Gbps, EDR interconnect, IBM Spectrum cluster manager and the PBS Pro job scheduler. The compute hardware is from Lenovo.
OEM: Lenovo, Bidder: Locuz Enterprise Solutions Ltd
1920  93.9  156
33 S.N. Bose National Centre for Basic Sciences, Kolkata Cray XC-50 air-cooled system with 48 dual-processor compute nodes (Intel Skylake Gold 6148 @ 2.4 GHz, 12 x 16 GB 2666 MHz memory per node), with 192 GB per-node memory in 44 nodes and 768 GB per node in 4 nodes, plus 8 additional service nodes and 2 dedicated boot nodes. The interconnect is the proprietary Cray Aries interconnect.
OEM: HPE/Cray
1920  91.96  147.45
34 Tata Consultancy Services, Bangalore HPC Cluster with (Intel Xeon E5-2680v3 @2.5 GHz dual twelve-core compute nodes) w/Infiniband
OEM: IBM/Lenovo, Bidder: IBM/Lenovo
2688/224/112  86.09  107.52
35 Inter-University Centre for Astronomy and Astrophysics (IUCAA) Two Apollo k6000 chassis, each with 24 ProLiant XL230k Gen10 compute nodes with dual CPUs (Intel Xeon Gold 6142 @ 2.60 GHz), running Scientific Linux 7.5 and connected by 10 G Ethernet.
OEM: HPE, Bidder: Concept Info. Tech. (I) Pvt. Ltd.
1536/ /  80.06  127.97
36 Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bangalore HPC cluster (Intel Xeon E5-2680v3 @ 2.5 GHz dual twelve-core processor nodes) w/Infiniband
OEM: IBM/Lenovo, Bidder: Wipro
2592/216/108  77.13  103.68
37 Indian Institute of Technology, Madras, Chennai IBM/Lenovo System X iDataPlex dx360 M4 Server Cluster (Intel(R) Xeon(R) CPU E5-2670 @ 2.6 GHz dual octo-core processor nodes w/Infiniband)
OEM: IBM/Lenovo, Bidder: SBA Info Solutions
4672/584/292  75.5  97.2
38 National Institute of Technology (NIT), Calicut 31 Lenovo SR630 nodes, each with dual Intel Xeon 6248 20C processors, and 2 Lenovo SR650 nodes, each with dual Intel Xeon 6248 20C processors and dual NVIDIA V100 32 GB GPUs; a 1320-core HPC cluster built on Lenovo servers.
OEM: Locuz Enterprise Solutions Ltd
1280/ / 69 102
39 Centre for Development of Advanced Computing (C-DAC), Pune 100 TF HPC system based on the A64FX ARM processor, with Mellanox HDR interconnect and the C-DAC HPC software stack. The system was designed and implemented by the HPC Technologies Group at C-DAC Pune under the National Supercomputing Mission (NSM).
OEM: HPE, Bidder: Concept Information Tech Pvt. Ltd
1824/ / 68.5 105
40 Aeronautical Development Agency, DRDO, Bangalore Fujitsu Cluster PRIMERGY CX250 S2 Cluster (Intel Xeon E5-2697v2 @ 2.7 GHz dual twelve-core compute nodes) w/ Mellanox FDR Infiniband
OEM: Fujitsu, Bidder: Tata Elxsi
3072/256/128  60.55  66.35
41 S.N. Bose National Centre for Basic Sciences, Kolkata Cray XE6 cluster (AMD Opteron 6300 Series (ABU DHABI) @2.4 GHz dual sixteen-core processor nodes w/Cray's Gemini primary interconnect)
OEM: Cray Inc. Bidder: Cray Supercomputers India Pvt. Ltd.
7808/488/244  60.00  74.95
42 Physical Research Lab (PRL), Ahmedabad IBM/Lenovo NeXtScale system (Intel Xeon E5-2670 v3 @ 2.355 GHz dual twelve-core CPU nodes; twenty nodes also contain dual 2880-core NVIDIA K40 GPUs) w/ 4X FDR InfiniBand primary interconnect
OEM: Lenovo/IBM, Bidder: TCS Ltd.
2328C + 115200G/
194C + 40G/
97C + 20G
Rmax: 55.60 (CPU-only) / 51.61 (GPU-only)
Rpeak: 68.00 (CPU-only) / 74.86 (GPU-only)
43 Indian Institute of Tropical Meteorology, Pune IBM cluster (IBM POWER6 575 nodes, each with sixteen dual-core 4.7 GHz processors, w/ InfiniBand 4X)
OEM: IBM Corp., Bidder: HCL Infosystems Ltd.
3744/480/117  45.84  66.18
44 Tata Institute of Fundamental Research (TIFR), Hyderabad HPC Cluster (Intel Xeon E5-2630V4 @2.2 GHz dual ten-core compute nodes) w/Infiniband
OEM: Supermicro, Bidder: Netweb Technologies
1360C + 46080G/
136C + 16G/
68C + 4 G
43.59 70.85
45 Variable Energy Cyclotron Centre (VECC), Kolkata 48 Dell PowerEdge FC630 server nodes, each with 2 x Intel Xeon E5-2680 v4 @ 2.4 GHz (35 MB cache, 9.60 GT/s QPI, Turbo, HT, 14C/28T, 120 W) and 128 GB memory.
OEM: DELL, Bidder: Micropoint Computers Private Limited
1344/48  43.1  51.9
46 Research Centre Imarat, Hyderabad HP cluster (Intel Xeon E5-2670 @ 2.6 GHz dual octo-core CPU nodes) w/Infiniband QDR
OEM: HP, Bidder: HP
2432/304/152  41.95  50.58
47 High Energy Materials Research Laboratory (HEMRL), Pune The ISSAC HPC cluster is a 53.76 TF cluster built with 20 CPU nodes and 4 GPU nodes using Xeon Gold 6230 processors, with 220 TB of Lustre-based PFS storage. The primary interconnect is InfiniBand.
OEM: HPE, Bidder: Concept IT (I) Pvt Ltd
800/2/20  39.52  53.76
48 Institute of Physics (IOP), Bhubaneswar Fujitsu PY CX400 M1 (20 nodes with dual E5-2680 v3 @ 2.5 GHz processors, 128 GB RAM and 2 Intel Xeon Phi 7120P co-processors each; 60 CPU nodes; 40 K80 GPUs)
OEM: Fujitsu, Bidder: Micropoint Computers Private Limited
1440/ /60 + 40 Xeon Phi + 40 GPU
Rmax: 41.79 (co-pro) / 38.33 / 39.06 (GPU-only)
Rpeak: 48.00 (co-pro) / 57.60 / 74.80 (GPU-only)
49 National Institute of Technology, Raipur Fujitsu PRIMERGY RX2530 M2 master node with 54 nodes (dual E5-2680 v4 @ 2.10 GHz, 12 cores per socket, 128 GB RAM), connected with Intel Omni-Path 100 Gbps
OEM: Fujitsu (OEM), Wizertech (SI).
1296/2/54  39.5  45.6
50 Centre for Modeling and Simulation, Pune University, Pune Fujitsu CX2550 cluster (Intel Xeon E5-2697 @ 2.6 GHz dual fourteen-core CPU nodes) w/Infiniband
OEM: Fujitsu, Bidder: Locuz Solutions Pvt. Ltd.
1344/96/48  39.29  55.91
51 Indian Institute of Technology (IIT), Goa 16 compute nodes with a 200 TB parallel file system with 5 Gbps write throughput, and a Mellanox 100 Gbps InfiniBand switch for primary communication
OEM: HPE, Bidder: S S Information Systems Pvt. Ltd.
720/ /16 39 57
52 National Institute of Science Education and Research (NISER), Bhubaneswar Hartree is an HPC cluster of 40 compute nodes and 2 master nodes in an HA configuration, with FDR 56 Gbps interconnect and 230 TB of storage on a Lustre parallel file system.
OEM: Supermicro SuperServer , Bidder: Netweb Technologies
1344/40  38.87  51.9
53 Center for Development of Advanced Computing (C-DAC), Pune PARAM cluster (Intel Xeon (Tigerton) 2.93 GHz quad-core quad-processor X73xx nodes w/Infiniband)
OEM: HP, Bidder: HP
4608/1152/288  38.1  53.63
54 Space Applications Centre (SAC), ISRO, Ahmedabad HP Proliant DL360Gen9/DL380Gen9 Servers (Intel Xeon E5-2699v3 @2.3 GHz dual eighteen-core processor nodes) w/Infiniband 4x FDR Interconnect
OEM: HP, Bidder: TCS Ltd.
1368 C + 244 ICO /
76 C + 4 ICO /
38 C+2 ICO
Rmax: 36.67 (CPU-only)  Rpeak: 50.34 (CPU-only)
55 Indian Institute of Science Education and Research, Bhopal IBM/Lenovo System X iDataPlex dx360 M4 Server Cluster (Intel(R) Xeon(R) CPU E5-2670 @ 2.6 GHz dual octo-core processor nodes w/Infiniband)
OEM: IBM/Lenovo, Bidder: Locuz Enterprise Solutions Ltd.
2112/264/132  35.36  43.92
56 Centre for Development of Advanced Computing (C-DAC), Pune The C-DAC HPC system is a 100 TF cluster built with 16 CPU-only nodes and 4 V100 GPU nodes using AMD EPYC Rome processors, with 240 TB of Lustre PFS at 5.7 Gbps and EDR interconnect.
OEM: HPE, Bidder: Concept IT (I) Pvt Ltd
1024/ /16 29.4 40.96


