Rank |
Site |
System |
Cores / Sockets per Node / Node Details |
Rmax (TFlops) |
Rpeak (TFlops) |
1 |
Centre for Development of Advanced Computing(C-DAC),Pune |
NVIDIA DGX A100 system consisting of 42 NVIDIA DGX A100 nodes, each dual-socket AMD EPYC 7742 @ 2.25 GHz with 64 cores per socket (128 CPU cores per node), with NVIDIA A100 GPUs and Mellanox HDR InfiniBand interconnect, installed under the NSM initiative. OEM:ATOS, Bidder: |
41664/42/42 NVIDIA DGX A100 nodes. |
4619 |
5267.14 |
2 |
Indian Institute of Tropical Meteorology(IITM),Pune |
Cray XC-40 class system with 3315 CPU-only nodes (Intel Xeon Broadwell E5-2695 v4 CPUs) running the Cray Linux environment as OS, connected by the Cray Aries interconnect. Total storage of 10.686 PB using the Lustre parallel file system (Sonexion) and a 30 PB archive tier. OEM:Cray, Bidder:Cray |
119232//Cray XC-40 class system with 3315 CPU-only nodes (Intel Xeon Broadwell E5-2695 v4). |
3763.9 |
4006.19 |
3 |
National Centre for Medium Range Weather Forecasting (NCMRWF),Noida |
Cray XC-40 class system with 2322 CPU-only nodes (Intel Xeon Broadwell E5-2695 v4 CPUs) running the Cray Linux environment as OS, connected by the Cray Aries interconnect. OEM:Cray, Bidder:Cray |
83592/2/Cray XC-40 class system with 2322 CPU-only nodes (Intel Xeon Broadwell E5-2695 v4). |
2570.4 |
2808.7 |
4 |
Indian Institute of Technology (IIT),Kharagpur |
The supercomputer PARAM Shakti is based on a heterogeneous and hybrid configuration of Intel Xeon Skylake (6148, 20C, 2.4 GHz) processors and NVIDIA Tesla V100 GPUs. The system was designed and implemented by the HPC Technologies team, Centre for Development of Advanced Computing (C-DAC), with a total peak computing capacity of 1.66 PFLOPS (CPU+GPU). The system uses the Lustre parallel file system (primary storage 1.5 PiB usable with 50 GB/s write throughput; archival storage 500 TiB based on GPFS). OEM:Atos, Bidder:Atos |
17280//432 nodes, a total of 17280 cores. |
935 |
1290.2 |
5 |
Sankhya Sutra Labs Ltd,Bangalore |
AMD Rome based cluster with 256 compute nodes, each with AMD EPYC 7742 64-core processors (128 cores per node), plus 2 master nodes and 2 login nodes, with Mellanox HDR 100 interconnect. OEM:HPE, Bidder:Cray |
32768//256 compute nodes |
922.48 |
1179.65 |
6 |
National Atmospheric Research Laboratory(NARL),Tirupati |
The supercomputer PARAM AMBAR is based on a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 GPUs with NVLink, connected by 100 Gbps Intel OPA in a 100% fully non-blocking configuration, with the C-DAC software stack. The system was designed and implemented by the HPC Technologies team, Centre for Development of Advanced Computing (C-DAC). OEM:Tyrone, Bidder:Netweb |
18816//196 compute nodes constituting a total of 18816 CPU cores . |
919.61 |
1384.85 |
7 |
Supercomputer Education and Research Centre (SERC), Indian Institute of Science (IISc),Bangalore |
The Cray XC-40 system installed at SERC, IISc: 1468 CPU-only nodes (dual twelve-core Intel Xeon E5-2680 v3 @ 2.5 GHz), 48 Xeon Phi nodes (single twelve-core Intel Xeon E5-2695 v2 @ 2.4 GHz + Intel Xeon Phi 5120D) and 44 GPU nodes (single twelve-core Intel Xeon E5-2695 v2 @ 2.4 GHz + NVIDIA K40 GPU), with Cray Aries interconnect. HPL was run on only 1296 CPU-only nodes. OEM:Cray, Bidder:Cray |
36336C + 2880ICO + 126720G/1 CPU + 1 GPU/CPU-only nodes: dual twelve-core Intel Xeon E5-2680 v3 (Haswell) at 2.5 GHz, 128 GB memory per node, with the Cray Linux environment as the OS. Xeon Phi nodes: one twelve-core Intel Xeon E5-2695 v2 (Ivy Bridge) CPU at 2.4 GHz, one Intel Xeon Phi 5120D containing 60 Xeon Phi cores, 64 GB memory per node. GPU nodes: one twelve-core Intel Xeon E5-2695 v2 (Ivy Bridge) CPU at 2.4 GHz, one NVIDIA K40 GPU containing 2880 GPU cores, 64 GB memory per node. |
901.51 |
1244 |
8 |
Indian Institute of Technology(IIT),Kanpur |
PARAM SANGANAK is a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 GPUs with NVLink, with a Mellanox HDR interconnect and the C-DAC HPC software stack, having a peak capacity of 1.6 Petaflops. The system was designed and implemented by the HPC Technologies Group, C-DAC, at IIT Kanpur under the National Supercomputing Mission (NSM). OEM:ATOS, Bidder:ATOS |
13834//292 |
851.3 |
1350 |
9 |
Indian Institute of Tropical Meteorology,Pune |
Aaditya HPC is built on 117 IBM P6 575 nodes, including 4 GPFS nodes and 2 login nodes. Each node is populated with 32 cores of IBM POWER6 CPUs running at 4.7 GHz, for a total of 3744 cores. Aaditya uses InfiniBand switches for interconnect and Ethernet switches for management, with a total of 200 TB of storage including online, nearline and archival tiers, plus GPFS and other management software. IBM/Lenovo System x iDataPlex DX360M4, Xeon E5-2670 8C 2.6 GHz, Infiniband FDR. OEM:IBM, Bidder:IBM |
38016/16/117 |
719.2 |
790.7 |
10 |
Indian Lattice Gauge Theory Initiative, Tata Institute of Fundamental Research (TIFR),Hyderabad |
Cray XC-30 LC with NVIDIA K20x GPUs consisting of 476 nodes. The CPUs are ten-core Intel Xeon E5-2680 v2 @ 2.8 GHz, and each node has one NVIDIA K20x GPU (476 GPUs in total). The nodes are interconnected by the proprietary Cray Aries interconnect. Total storage is 1.1 PB based on the Lustre parallel filesystem. OEM:Cray, Bidder:Cray |
4760C + 1279488G/1 CPU plus 1 GPU/Intel Xeon Processor E5-2680 v2 ten-core processor at 2.8 GHz with Cray Linux environment, 32 GB RAM. |
558.7 |
730 |
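The listed Rpeak for this hybrid entry can be cross-checked with a back-of-the-envelope peak calculation (a sketch only, assuming 8 double-precision FLOPs per cycle per Ivy Bridge core and roughly 1.31 TFLOPS double-precision peak per NVIDIA K20X, figures not stated in the listing):

$$ R_{\text{peak}} \approx \underbrace{4760 \times 2.8\,\text{GHz} \times 8}_{\text{CPU}\,\approx\,106.6\ \text{TFLOPS}} + \underbrace{476 \times 1.31\ \text{TFLOPS}}_{\text{GPU}\,\approx\,623.6\ \text{TFLOPS}} \approx 730\ \text{TFLOPS}. $$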
11 |
Indian Institute of Technology,Delhi |
HP ProLiant XL230a Gen9 servers (257 CPU nodes) and HP ProLiant XL250a Gen9 servers with K40m GPU cards (161 GPU nodes), 418 nodes in total with 322 K40m GPU cards, 1.5 PB total Lustre (DDN) storage, with InfiniBand interconnect. CPU nodes have dual twelve-core Intel Xeon E5-2680 v3 @ 2.5 GHz; GPU nodes add dual 2880-core NVIDIA Kepler K40 GPUs. OEM:HP, Bidder:HP |
10032C + 927360G/2/Dual Intel Xeon E5-2680v3 twelve-core processor at 2.5 GHz; dual K40m GPU cards on GPU nodes; 64 GB RAM, RedHat Enterprise Linux, 600 GB storage. |
524.4 |
861.74 |
12 |
Indian Institute of Science Education And Research,Pune |
The PARAM Brahma cluster has 162 compute nodes constituting a total of 7776 CPU cores, running 64-bit Linux (CentOS 7.6), with a 1 PiB DDN storage system and HDR100 InfiniBand interconnect. The cluster is built from 179 Intel Xeon Platinum 8268 nodes in all (Bull Sequana CPU-only compute blades manufactured in India, 8592 CPU cores) with the C-DAC software stack. The system was designed and implemented by the HPC Technologies team, Centre for Development of Advanced Computing (C-DAC). OEM:Atos, Bidder:Atos |
8592//162 compute nodes constituting a total of 7776 CPU cores . |
472.8 |
721.16 |
13 |
CDAC Bangalore,Bangalore |
PARAM UTKARSH is a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 GPUs with NVLink, with a Mellanox HDR interconnect and the C-DAC HPC software stack, having a peak capacity of 838 TeraFlops. The system was designed and implemented by the HPC Technologies Group, C-DAC, at Bangalore under the National Supercomputing Mission (NSM). OEM:ATOS, Bidder:ATOS |
7488//156 |
458 |
641 |
14 |
Indian Institute of Technology (IITBHU),Varanasi |
The first supercomputer installed by C-DAC under the National Supercomputing Mission (NSM) project. It is based on a heterogeneous and hybrid configuration of the Atos BullSequana X400 series with 192 CPU nodes, 20 high-memory nodes and 11 GPU nodes, each GPU node carrying 2x NVIDIA V100 16 GB PCIe accelerator cards (5120 CUDA cores each). Each node has 2x Intel Xeon Skylake 6148 20-core processors with CentOS Linux and the C-DAC software stack. The primary network is Mellanox 100 Gbps EDR InfiniBand in a 100% fully non-blocking fat-tree architecture. Lustre-based primary storage of 750 TiB usable with 25 GB/s write throughput; archival storage of 250 TiB based on GPFS. OEM:Atos, Bidder:Atos |
210//Atos BullSequana X series system with 192 CPU nodes, 20 high-memory nodes and 11 GPU nodes with 2x NVIDIA V100 16GB. |
456.9 |
645.12 |
15 |
Institute for Plasma Research (IPR),Ahmedabad |
The ANTYA HPC cluster consists of Acer servers with 236 CPU-only nodes, 22 GPU nodes, 2 high-memory nodes and 1 visualization node running Red Hat Enterprise Linux, connected by a 100 Gbps Enhanced Data Rate (EDR) InfiniBand (IB) network. Total storage of 2 PB using the GPFS file system. OEM:Acer, Bidder:Locuz |
9440//236 compute nodes each with 40 cores. |
446.9 |
724.992 |
16 |
Aeronautical Development Agency, DRDO,Bangalore |
256-node cluster with dual 20-core Intel Skylake processors at 2.0 GHz, with EDR InfiniBand. OEM:Dell, Bidder:Dell |
259//256 compute nodes each with 20 cores. |
437 |
655 |
17 |
National Agri-Food Biotechnology Institute (NABI),Mohali |
PARAM SMRITI is a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 GPUs with NVLink, with a Mellanox HDR interconnect and the C-DAC HPC software stack, having a peak capacity of 838 TeraFlops. The system was designed and implemented by the HPC Technologies Group, C-DAC, at NABI, Mohali under the National Supercomputing Mission (NSM). OEM:ATOS, Bidder:ATOS |
7488//156 |
429 |
641 |
18 |
Indian Institute of Technology (IIT),Hyderabad |
PARAM SEVA is a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 GPUs with NVLink, with a Mellanox HDR interconnect and the C-DAC HPC software stack, having a peak capacity of 838 TeraFlops. The system was designed and implemented by the HPC Technologies Group, C-DAC, at IIT Hyderabad under the National Supercomputing Mission (NSM). OEM:ATOS, Bidder:ATOS |
7488//156 |
428 |
641 |
19 |
Jawaharlal Nehru Center for Advanced Scientific Research(JNCASR),Bangalore |
PARAM YUKTI is a heterogeneous and hybrid configuration of Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 GPUs with NVLink, with a Mellanox HDR interconnect and the C-DAC HPC software stack, having a peak computing capacity of 838 TFlops. The system was designed and implemented by the HPC Technologies Group, C-DAC, at JNCASR Bangalore under the National Supercomputing Mission (NSM). OEM:ATOS, Bidder: |
6528// |
405.6 |
641.43 |
20 |
Centre for Development of Advanced Computing (C-DAC),Pune |
The PARAM Yuva II cluster has 221 Intel Xeon E5-2670 (Sandy Bridge) nodes that also contain Intel Xeon Phi 5110P (Knights Corner) co-processors, constituting a total of 3536 CPU cores and 26520 co-processor cores, running 64-bit Linux (CentOS 6.2), with FDR InfiniBand interconnect. Each node has dual octo-core 2.6 GHz Intel Xeon E5-2670 CPUs and dual 60-core Intel Xeon Phi 5110P co-processors. OEM:Intel, Bidder:Netweb |
3536C + 26520 ICO/2/Each node has dual octo-core 2.6 GHz Intel Xeon E5-2670 processors with dual 60-core Intel Xeon Phi 5110P co-processors. The CPU in each node has 32 KB L1 cache and 256KB L2 cache. Each node has 64 GB memory, and 146 GB storage. |
388.44 |
520.4 |
21 |
Indian Institute of Technology ( IITB),Bombay |
Cray XC-50 class system with 202 regular CPU nodes (Intel Xeon Skylake 6148 @ 2.4 GHz), 10 MAMU nodes, 4 high-memory nodes and 64 accelerator nodes (NVIDIA P100), running the Cray Linux environment as OS and connected by the Cray Aries interconnect. Total storage of 1200 TB using the Lustre parallel file system (ClusterStor 3000). OEM:Cray, Bidder:Cray |
8000//Cray XC-50 class system with 202 CPU-only nodes (Intel Xeon Skylake 6148). |
384.83 |
620.544 |
22 |
CSIR Fourth Paradigm Institute (CSIR-4PI),Bangalore |
CSIR C-MMACS 362 TF HPC cluster based on Cluster Platform BL460c Gen8, Intel Xeon E5-2670 8C 2.6 GHz, with FDR InfiniBand. OEM:HP, Bidder:HCL |
17408/2/Dual Intel Xeon E5-2670 eight core processor at 2.6 GHz with Linux OS, 65.5 GB RAM and 300 GB storage. |
334.38 |
362.09 |
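For a CPU-only cluster like this one, the listed Rpeak follows directly from cores × clock × FLOPs per cycle (a sketch, assuming 8 double-precision FLOPs per cycle per Sandy Bridge core, a figure not stated in the listing):

$$ R_{\text{peak}} = 17408 \times 2.6\,\text{GHz} \times 8\ \tfrac{\text{FLOPs}}{\text{cycle}} \approx 362.09\ \text{TFLOPS}, $$

which matches the tabulated Rpeak; the Rmax of 334.38 TFLOPS is the measured HPL result, about 92% of that peak.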
23 |
National Centre for Medium Range Weather Forecasting,Noida |
IBM System x iDataPlex DX360M4, Xeon E5-2670 8C 2.6 GHz, Infiniband FDR. OEM:IBM, Bidder:IBM |
16832// |
318.4 |
350.1 |
24 |
Indian Institute of Technology,Kanpur |
HPC cluster based on HP Cluster Platform SL230s Gen8 nodes in SL6500 chassis: 768 nodes, 15360 cores (Intel Xeon E5-2670 v2 10C 2.5 GHz), 98 TB total memory, RHEL 6.5 operating system, 500 TB total storage with a DDN-based Lustre file system, nodes connected by FDR InfiniBand interconnect. OEM:HP, Bidder:HP |
15360/2/Dual Intel Xeon E5-2670v2 ten-core processor at 2.5 GHz with Redhat Enterprise Linux 6.5 OS, 128 GB RAM. |
295.25 |
307.2 |
25 |
Inter-University Centre for Astronomy and Astrophysics (IUCAA),Pune |
Twenty-four Apollo 2000 chassis, each with 4 ProLiant XL170r Gen10 compute nodes with dual Intel Xeon Gold 6248 CPUs @ 2.50 GHz (20 cores each), running Scientific Linux 7.5 and connected by 50 G Ethernet. OEM:HPE, Bidder:Concept |
3840// |
192.5 |
307.2 |
26 |
Vikram Sarabhai Space Centre (VSSC), Indian Space Research Organization (ISRO),Thiruvananthapuram |
The SAGA cluster has HP SL390 G7 servers and NVIDIA GPUs. It is a heterogeneous system of 320 nodes with Intel Xeon E5530 and Intel Xeon E5645 CPUs and C2070 and M2090 GPUs: 153600 CUDA cores on NVIDIA Tesla M2090 GPGPUs and 1416 Intel Xeon Westmere-EP cores, along with 202 WIPRO servers of Intel Xeon Westmere and Nehalem cores, each with two NVIDIA C2070 GPGPUs. Nodes have dual quad-core Intel Xeon E5530 or dual hexa-core Intel Xeon E5645 CPUs, with dual 448-core NVIDIA C2070 or dual 512-core M2090 GPUs, on a 40 Gbps InfiniBand network. OEM:Wipro, Bidder:HP |
3100C + 301824G/3/The system has three kinds of nodes:
1. 185 nodes (WIPRO Z24XX(ii) model Servers) consist of dual quad-core Intel Xeon E5530 CPU @ 2.4 GHz, and dual C2070 GPUs.
2. 17 nodes (WIPRO Z24XX(ii) model Servers) consist of dual hexa-core Intel Xeon E5645 CPU @ 2.4 GHz, and dual C2070 GPUs.
3. 118 nodes (HP SL390 G7 servers) consist of dual hexa-core Intel Xeon E5645 CPU @ 2.4 GHz, and dual M2090 GPUs.
All nodes have 24 GB memory each. |
188.7 |
394.76 |
27 |
Oil and Natural Gas Corporation (ONGC),Dehradun |
Fujitsu cluster, 260 nodes, 6240 cores (dual twelve-core Intel Xeon E5-2670 v3 @ 2.3 GHz per node), RHEL 6.5 OS, with InfiniBand interconnect. OEM:Fujitsu, Bidder:Wipro |
6240/2/Dual Intel Xeon E5-2670v3 twelve-core processor at 2.3 GHz with RHEL-6.5 OS, 262 GB RAM. |
182.09 |
229.63 |
28 |
MRPU,Chennai |
Cluster of nodes each with 2x twelve-core Intel Xeon E5-2697 v2 (Ivy Bridge) @ 2.7 GHz. OEM:Boston, Bidder:Locuz |
9600//400 |
175 |
207 |
29 |
National Institute of Science Education and Research(NISER),Bhubaneswar |
An HPC cluster of 64 new compute nodes, 32 old compute nodes and 1 GPU node with 4 NVIDIA Tesla K40c cards, named KALINGA, for the School of Physical Sciences. It has 205 TB of storage with the Lustre parallel file system and is interconnected with a 216-port EDR 100 Gb/s InfiniBand Smart Director switch populated with 108 ports. OEM:Netweb, Bidder: |
3616//64 New Compute nodes, 32 old compute nodes and 1 GPU Node |
161.42 |
249.37 |
30 |
Indian Institute of Technology(IIT),Jammu |
The supercomputer Agastya is based on a hybrid configuration of Intel Xeon Cascade Lake Gold 6248 (20C, 2.5 GHz) processors and NVIDIA Tesla V100 GPUs. The system was designed with a total peak computing capacity of 256 TFLOPS on CPUs and 56 TFLOPS on GPUs. The system uses the Lustre parallel file system with 795 TB of usable storage and 30 GB/s write throughput. Internetworking is via high-speed, low-latency Mellanox HDR100 100 Gbps InfiniBand in a fat-tree topology configured as a 100% non-blocking network. OEM:Netweb, Bidder: |
3200// |
161 |
256 |
31 |
Indian Institute of Science Education and Research (IISER),Thiruvananthapuram |
HPE Apollo k6000 chassis with 88 ProLiant XL230k Gen10 compute nodes, each with dual Intel Xeon Gold 6132 processors (2464 cores in total at 2.6 GHz), 3 ProLiant DL380 Gen10 GPU nodes (NVIDIA P100, 4 cards total), Intel Omni-Path 100 Gb interconnect and 500 TB of Lustre DDN storage. OEM:HPE, Bidder:SS Information Systems Pvt Ltd |
2464//88 compute nodes, a total of 2464 cores. |
141.3 |
205 |
32 |
Tata Consultancy Services,Pune |
The EKA supercomputer is a Linux-based cluster that uses proven open-source cluster tools. EKA uses C7000 enclosures populated with 16 BL460c blades each, connected through a 4X DDR InfiniBand interconnect using efficient routing algorithms developed by CRL.
HP Cluster Platform 3000 BL460c: 1800 nodes, 14400 cores, 29.5 TB total memory, 80 TB total storage based on the HP SFS (Lustre-based) storage system with 5.2 GB/s throughput, RedHat Linux operating system. Dual quad-core Intel Xeon E5365 (Clovertown) @ 3 GHz with 4X DDR InfiniBand. OEM:HP, Bidder:HP |
14400/2/dual Intel Xeon quad core E5365(Clovertown) (3 GHz, 1333 MHz FSB) processor sockets, 16 GB RAM, 73 GB |
132.8 |
172.6 |
33 |
Indian Institute of Technology,Guwahati |
PARAM Ishan: 126 compute nodes with dual Intel Xeon E5-2680 v3 (12-core, 2.5 GHz); 16 compute nodes with dual Intel Xeon E5-2680 v3 (12-core, 2.5 GHz) and dual NVIDIA K40 cards; 16 compute nodes with Intel Xeon E5-2680 v3 (12-core, 2.5 GHz) and dual Xeon Phi 7120P cards; 500 TB PFS from DDN; Bright Cluster Manager, Slurm. The submitted HPL was run on all 162 nodes using CPU cores only. OS: CentOS 6.6; storage using Lustre PFS on DDN SFA-7700. OEM:Acer, Bidder:Locuz Enterprise Solutions Ltd |
3888//162 |
122.71 |
155.52 |
34 |
SETS Chennai,Chennai |
PARAM Spoorthi is a heterogeneous and hybrid configuration of IBM POWER9 CPUs and NVIDIA Tesla V100 GPUs with NVLink, with a Mellanox HDR interconnect, having a peak capacity of 100 TeraFlops with 200 TB of parallel storage. The system was designed and implemented by the HPC Technologies Group, C-DAC, under the National Supercomputing Mission (NSM). OEM:IBM, Bidder:Micropoint |
240//6 |
98.24 |
124.8 |
35 |
The Thematic Unit of Excellence on Computational Materials Science (TUE-CMS), Solid State and Structural Chemistry Unit, Indian Institute of Science,Bangalore |
The 140-node system consists of 113 nodes with 125 GiB RAM, 12 nodes with 503 GiB RAM, 8 nodes with NVIDIA K40x GPUs, 4 nodes with Intel 7120P Xeon Phi coprocessors, 2 master nodes, 1 login node, 400 TB of DDN storage and an Intel 12800 QDR InfiniBand switch. Intel Xeon E5-2630 v4 @ 2.2 GHz dual twelve-core compute nodes, with InfiniBand. OEM:IBM, Bidder:HCL |
3168C + 46080G + 492ICO/2/Each computational node has Dual Intel Xeon CPU E5-2670v3 twelve-core processor at 2.3 GHz with CentOS Linux 7.2, 128 GB RAM. Eight of the nodes have dual NVIDIA Kepler K40 GPUs and four of the nodes have dual Intel Xeon Phi coprocessors. |
96.89 |
150.83 |
36 |
Rudra PoC,Pune |
The Rudra PoC is built with indigenously developed Rudra servers. Each densely designed server is a 2U half-width form factor with dual-socket Intel Xeon Cascade Lake processors (6240R, 24C, 2.4 GHz), two NVIDIA A100 (40 GB) GPUs and a Mellanox HDR100 interconnect. The system was designed and implemented by the HPC Technologies Group, C-DAC, under the National Supercomputing Mission (NSM). This PoC system is deployed at C-DAC, Pune. OEM:C-DAC, Bidder:C-DAC |
912//4 |
94.63 |
126.88 |
37 |
Inter-University Centre for Astronomy and Astrophysics (IUCAA),Pune |
The Pegasus supercomputer is a 159.72 TF system built as a 60-node cluster using Intel Xeon Gold 6142 processors, with 1 PB of PFS storage at 15 Gbps performance, EDR interconnect, IBM Spectrum cluster manager and the PBS Pro job scheduler. The compute hardware is of Lenovo make. OEM:Lenovo, Bidder:Locuz |
1920// |
93.9 |
156 |
38 |
S.N. Bose National Centre for Basic Sciences,Kolkata |
Cray XC-50 air-cooled system with 48 dual-processor compute nodes, each with 2x Intel Skylake Gold 6148 2.4 GHz CPUs and 12x 16 GB 2666 MHz memory (192 GB per node in 44 nodes and 768 GB per node in 4 nodes), plus 2 dedicated boot nodes and 8 additional service nodes. The interconnect is the proprietary Cray Aries interconnect. OEM:HPC, Bidder:Cray |
1920// |
91.96 |
147.45 |
39 |
Tata Consultancy Services,Bangalore |
176 nodes, 2816 cores, 4.2 TB total node memory, Gluster parallel file system, with 88 TB of usable storage space. Dual twelve-core Intel Xeon E5-2680 v3 @ 2.5 GHz compute nodes with InfiniBand. OEM:IBM, Bidder:Lenovo |
2688/2/112 |
86.09 |
107.52 |
40 |
Inter-University Centre for Astronomy and Astrophysics (IUCAA),Pune |
Two Apollo k6000 chassis, each with 24 ProLiant XL230k Gen10 compute nodes with dual Intel Xeon Gold 6142 CPUs @ 2.60 GHz, running Scientific Linux 7.5 and connected by 10 G Ethernet. OEM:HPE, Bidder:Concept |
1536// |
80.06 |
127.97 |
41 |
Jawaharlal Nehru Centre for Advanced Scientific Research,Bangalore |
The HPC cluster at JNCASR contains 2 master nodes, 1 login node, 108 compute nodes, 1 spare node, 1 IML server, 3 GPU nodes, 2 Lustre MDS nodes and 2 Lustre OSS nodes, with 200 TB of usable PFS storage configured on a NetApp E5500 hardware-RAID storage array with dual controllers (each with 12 GB cache and dual-port IB interfaces) and 168x 2 TB 7.2K RPM NL-SAS disk drives. Compute nodes are dual twelve-core Intel Xeon E5-2680 v3 @ 2.5 GHz with InfiniBand. OEM:Lenovo/IBM, Bidder:Wipro |
2592/2/Dual Intel Xeon E5-2680v3 twelve-core processor at 2.5 GHz with CentOS 6.5, 98 GB RAM. |
77.13 |
103.68 |
42 |
Indian Institute of Technology, Madras,Chennai |
The Virgo cluster is an IBM System x iDataPlex cluster with 292 dx360 M4 servers (Intel Xeon E5-2670) as compute nodes, 4672 cores, SLES 11 SP2 OS, 19.13 TB of total memory, and FDR InfiniBand in a fully non-blocking fat tree. 150 TB of usable storage based on the IBM General Parallel File System over 2x IBM DS3524 storage systems. The cluster also has 2 master nodes and 4 storage nodes based on Intel Xeon E5-2670 processors with 65 GB of memory per node; the storage nodes use the IBM DS3524 dual-controller storage subsystem. Compute nodes are dual octo-core Intel Xeon E5-2670 @ 2.6 GHz with InfiniBand. OEM:IBM, Bidder:SBA |
4672/2/Each node is a dual octo-core Intel Xeon Processor E5-2670 CPU @ 2.6 GHz with 64 GB RAM. |
75.5 |
97.2 |
43 |
National Institute of Technology(NIT),Calicut |
31 Lenovo SR630 servers, each with dual Intel Xeon 6248 20C processors, and 2 Lenovo SR650 servers, each with dual Intel Xeon 6248 20C processors and dual NVIDIA V100 32 GB GPUs; a 1320-core HPC cluster built on Lenovo servers. OEM:Locuz, Bidder: |
1280// |
69 |
102 |
44 |
Indian Institute of Technology(IIT),Goa |
100 TF HPC system based on the A64FX ARM processor, with Mellanox HDR interconnect and the C-DAC HPC software stack; a Mellanox 100 Gbps InfiniBand switch is used for primary communication. The system was designed and implemented by the HPC Technologies Group at C-DAC Pune under the National Supercomputing Mission (NSM). OEM:HPE, Bidder:SS Information Systems Pvt Ltd |
720//16 |
68.5 |
105 |
45 |
Aeronautical Development Agency, DRDO,Bangalore |
Fujitsu PRIMERGY CX250 S2 cluster based on dual twelve-core Intel Xeon E5-2697 v2 processors at 2.7 GHz with FDR InfiniBand in a fully non-blocking fat tree. Total memory of 12.28 TB and total storage of 150 TB based on the IBRIX parallel file system, connected by Mellanox InfiniBand. OEM:Fujitsu, Bidder:Tata Elxsi |
3072/2/Dual Intel Xeon E5-2697 v2 twelve-core processor at 2.7 GHz with 96 GB RAM, 600 GB storage |
60.55 |
66.35 |
46 |
S.N. Bose National Centre for Basic Sciences,Kolkata |
SNB Cray XE6 cluster based on AMD Opteron 6300 series (Abu Dhabi) processors: 244 nodes, 7808 cores (dual sixteen-core @ 2.4 GHz per node), 8.39 TB total memory, Cray Linux Environment (CLE) Compute Node Linux operating system, 250 TB total usable Lustre storage, nodes connected by the Cray Gemini interconnect. OEM:Cray, Bidder:Cray |
7808/2/Based on AMD Opteron 6300 series (Abu Dhabi) processors @ 2.4 GHz with 32 GB RAM |
60 |
74.95 |
47 |
Physical Research Laboratory,Ahmedabad |
IBM/Lenovo NeXtScale M5-based servers, Xeon E5-2670 v3 12C 2.3 GHz, NVIDIA K40 GPUs, FDR InfiniBand interconnect with a 300 TB GPFS parallel file system. The system has 2328 CPU cores / 194 CPU sockets / 77 CPU-only compute nodes and 115200 GPU CUDA cores / 40 GPUs / 20 hybrid (CPU+GPU) compute nodes, with a 4X FDR InfiniBand primary interconnect. OEM:Lenovo, Bidder:TCS |
2328C + 115200G/2/2x Intel Xeon Processor E5-2670 v3 twelve-core processor at 2.355 GHz, 262 GB RAM, 30 MB L1 cache, 2 x 900 GB SAS 10k RPM, with RedHat Enterprise Linux. Twenty nodes contain 2 X Nvidia K40 GPU. |
55.6 |
68 |
48 |
Indian Institute of Tropical Meteorology,Pune |
Prithvi HPC is built on 117 IBM P6 575 nodes, including 4 GPFS nodes and 3 login nodes. Each node is populated with 32 cores of IBM POWER6 CPUs running at 4.7 GHz, for a total of 3744 cores.
Prithvi uses InfiniBand switches for interconnect and Ethernet switches for management.
15.34 TB total memory.
Total of 2800 TB of IBM DS storage including online, nearline and archival tiers; the cluster also has GPFS and other management software. IBM P6 575 nodes, each with sixteen dual-core 4.7 GHz processors, with 4X InfiniBand. OEM:IBM, Bidder:HCL |
3744/2/IBM P 575 node with 32 cores of Power 6 processors running at 4.7 GHz. Each node has 4 MCM modules in four sockets, thus totaling 16 modules/sockets per node. Each socket houses a dual core processor, constituting 32 cores per node. AIX operating system. 131 GB RAM, 2x146 GB secondary storage per node |
45.84 |
66.18 |
49 |
Institute of Genomics and Integrative Biology (IGIB),Delhi |
The system consists of 24 nodes of Intel Xeon Cascade Lake 6230, each with 2 sockets (2x 20 cores), interconnected with an HDR 100 IB switch. OEM:Lenovo, Bidder:Lenovo |
960/2/24 |
44.81 |
64.5 |
50 |
Tata Institute of Fundamental Research (TIFR),Hyderabad |
Intel Xeon cluster: 68 CPU nodes, 1360 CPU cores (dual ten-core Intel Xeon E5-2630 v4 @ 2.2 GHz per node), 4 GPU nodes with a total of 16 NVIDIA K40 GPUs, CentOS Linux 7.2, 4.45 TB total memory, with InfiniBand interconnect. OEM:Supermicro, Bidder:Netweb |
1360C + 46080G/2/Dual Intel Xeon E5-2630v4 ten-core processor at 2.2 GHz with CentOS Linux 7.2, 64 GB RAM. Four of the nodes have GPUs with each GPU node having 4 NVIDIA Kepler K40 GPUs. |
43.59 |
70.85 |
51 |
Variable Energy Cyclotron Centre (VECC),Kolkata |
Dell PowerEdge FX2 2U enclosures (12 chassis) with 48 Dell PowerEdge FC630 server nodes, each with 2x Intel Xeon E5-2680 v4 (14C/28T, 2.4 GHz, 35 MB cache, 9.60 GT/s QPI, Turbo, HT, 120 W), an 800 GB solid-state drive and 128 GB RAM; 96 CPUs in total. OEM:Dell, Bidder:Micropoint |
1344//Each computational node has dual Intel Xeon E5-2680 v4 14-core 2.4 GHz processors, 128 GB RAM. |
43.1 |
51.9 |
52 |
Research Centre Imarat,Hyderabad |
RCI-PGAD Advanced Numerical Simulation for Enabling Research (ANSER): Intel Xeon E5-2670 2.6 GHz. OEM:HP, Bidder:HP |
2432/2/Each node has dual octo-core 2.6 GHz Intel Xeon E5-2670, 96 GB RAM per node. |
41.95 |
50.58 |
53 |
Institute of Physics(IOP),Bhubaneswar |
Fujitsu PRIMERGY CX400 M1: 20 nodes with dual E5-2680 v3 @ 2.5 GHz processors, 128 GB RAM and 2x Intel Xeon Phi 7120P co-processors (40 Xeon Phi in total); 20 nodes with dual E5-2680 v3 @ 2.5 GHz processors, 128 GB RAM and 2x NVIDIA Tesla K80 (40 K80 GPUs in total); 60 CPU nodes. OEM:Fujitsu, Bidder:Micropoint |
40 Xeon Phi//2x Intel Xeon E5-2680 v3 @ 2.5 GHz (12 cores per processor) per node; 20 nodes with dual E5-2680 v3 @ 2.5 GHz, 128 GB RAM and 2x NVIDIA Tesla K80; 60 nodes with dual E5-2680 v3 @ 2.5 GHz processors. |
41.79 |
48 |
54 |
National Institute of Technology,Raipur |
Fujitsu PRIMERGY RX2530 M2: one master node and 54 compute nodes, each with dual Intel Xeon E5-2680 v4 @ 2.10 GHz processors (12 cores per socket) and 128 GB RAM, connected with Intel Omni-Path 100 Gbps. OEM:Fujitsu, Bidder:Wizertech |
1296//54 |
39.5 |
45.6 |
55 |
High Energy Materials Research Laboratory (HEMRL),Pune |
A 53.76 TF cluster built with 20 CPU nodes and 4 GPU nodes using Intel Xeon Gold 6230 processors, with 220 TB of Lustre-based PFS storage. The primary interconnect is InfiniBand. OEM:HPE, Bidder:Concept |
800//20 compute nodes each with 40 cores. |
39.5 |
53.76 |
56 |
Centre for Modeling and Simulation, Pune University,Pune |
IASRI-Delhi 256-node HPC cluster based on HP SL390 G7 nodes; Intel Xeon E5-2697 @ 2.6 GHz dual fourteen-core CPU nodes. OEM:Fujitsu, Bidder:Locuz |
1344/2/48 |
39.29 |
55.91 |
57 |
Indian Institute of Technology(IIT),Goa |
100 TF HPC system based on the A64FX ARM processor, with Mellanox HDR interconnect and the C-DAC HPC software stack; a Mellanox 100 Gbps InfiniBand switch is used for primary communication. The system was designed and implemented by the HPC Technologies Group at C-DAC Pune under the National Supercomputing Mission (NSM). OEM:HPE, Bidder:SS Information Systems Pvt Ltd |
720//16 |
39 |
57 |
58 |
National Institute of Science Education and Research(NISER),Bhubaneswar |
The Hartree cluster is an HPC system of 40 compute nodes and 2 master nodes in HA configuration, based on Intel Xeon E5-2697A v4 @ 2.60 GHz with two sockets per node (32 cores per node), on Supermicro SuperServer SYS-6028R-TR with 64 GB of RAM per node, with FDR 56 Gbps interconnect and 230 TB of storage on the Lustre parallel file system. OEM:Netweb, Bidder:Netweb |
1344//Intel Xeon E5-2697A v4 @ 2.60 GHz, two sockets per node (32 cores per node), with 64 GB of RAM per node. |
38.87 |
51.9 |
59 |
Space Applications Centre (SAC), ISRO,Ahmedabad |
The HPC system named AKASH is equipped with two master nodes in high availability, 36 compute nodes (CPU only) and 2 hybrid nodes (CPU + Xeon Phi). It offers 1368 CPU cores, 244 Xeon Phi cores, ~10 TB of RAM, 40 TB of usable high-performance parallel Lustre file system and 120 TB of archival storage. Nodes are dual eighteen-core Intel Xeon E5-2699 v3 @ 2.3 GHz with a 4X FDR InfiniBand interconnect. OEM:HP, Bidder:TCS |
1368 C + 244 ICO/2 Sockets per node/Dual Intel Xeon E5-2699v3 eighteen-core processor at 2.3 GHz, Two of the nodes have two Intel Xeon Phi 7120 P cards each. Each node has 45 MB L1 cache, with Cent OS 6.5, 256 GB RAM, 2 x 300 GB SAS 15k RPM. |
36.67 |
50.34 |
60 |
Indian Institute of Science Education and Research, Bhopal,Bhopal |
IBM System x iDataPlex with 132 dx360 M4 servers as compute nodes, 2112 cores, 9.2 TB of total memory, with QDR InfiniBand in a fully non-blocking fat tree. 50 GB of usable storage based on the IBM General Parallel File System over 2x IBM DS3524 storage systems. Dual octo-core Intel Xeon E5-2670 @ 2.6 GHz processor nodes with InfiniBand. OEM:IBM, Bidder: |
2112/2/Each node is a dual octo-core Intel Xeon Processor E5-2670 CPU @ 2.6 GHz with 64 GB RAM. |
35.36 |
43.92 |
61 |
Centre for Development of Advanced Computing (C-DAC),Pune |
The CDAC HPC system is a 100 TF cluster built with 16 CPU-only nodes and 4 NVIDIA V100 GPU nodes using AMD EPYC Rome processors, with 240 TB of Lustre PFS at 5.7 Gbps performance and EDR interconnect. OEM:HPE, Bidder:Concept IT |
1024//16 CPU only and 4 GPU Nodes |
29.4 |
40.96 |
62 |
Centre for Development of Advanced Computing (C-DAC),Pune |
The Trinetra test cluster consists of 8 compute nodes with a peak computing capacity of 7 TFlops. Each node has Intel Xeon Skylake processors (Gold 5118, 24C, 2.3 GHz) and 96 GB of memory. The cluster is built using C-DAC's indigenously developed Trinetra interconnect, a 3D-torus-based interconnect. The system was designed and implemented by the HPC Technologies Group, C-DAC, under the National Supercomputing Mission (NSM). OEM:Supermicro, Bidder:Netweb Technologies |
192//8 |
5.205 |
7.065 |