Supercomputer

 


Supercomputer Definition

A supercomputer is a computer that performs at a far higher level than a general-purpose computer. Supercomputers typically feature thousands or even millions of processors, along with massive amounts of memory and storage, and are built to handle enormous volumes of data or extremely demanding computational tasks.

Supercomputers are used in many fields of scientific research, including weather forecasting, physics simulations such as molecular dynamics, climate modelling (including atmospheric dynamics and ocean mixing), nuclear weapon simulations (HEDP), astronomy (compiling catalogues), molecular biology, petroleum exploration, and other fields where large amounts of data must be processed quickly.

History

In the 1980s, India encountered difficulties in acquiring supercomputers for academic and weather forecasting purposes. The National Aerospace Laboratories (NAL) launched the Flosolver project in 1986 with the goal of developing a computer for computational fluid dynamics and aerospace engineering. The Flosolver MK1, a parallel processing system, began operations in December 1986.

The Indian government requested the purchase of a Cray X-MP supercomputer in 1987, but the US government denied the request because the machine could also be used for weapons development. Following this, the Government of India decided to promote an indigenous supercomputer development programme the following year. Various groups, including the Centre for Development of Advanced Computing (C-DAC), the Centre for Development of Telematics (C-DOT), the National Aerospace Laboratories (NAL), the Bhabha Atomic Research Centre (BARC), and the Advanced Numerical Research and Analysis Group (ANURAG), were commissioned to work on various projects. C-DOT developed "CHIPPS", the C-DOT High-Performance Parallel Processing System; NAL continued work on the Flosolver; BARC developed the Anupam series of supercomputers; and ANURAG developed the PACE series of supercomputers.


What Features Make a Computer a Supercomputer?




A vast number of processing units

Today's supercomputers can have hundreds of thousands, or even millions, of processing units (CPUs or GPUs) working in tandem through massively parallel computing, depending on the architecture. This characteristic, known as capability computing, distinguishes supercomputers from regular computers, and it made the old performance gauge of CPU clock speed obsolete. The performance of a supercomputer is instead measured in FLOPS (floating-point operations per second). Floating-point operations, which are essentially multiplications and divisions, require significantly more computational effort than simple additions and subtractions, making them a better yardstick for scientific workloads. HPC clusters are quickly gaining a reputation for scalability and processing power.
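As a rough illustration of how peak FLOPS figures come about, the sketch below multiplies node count, cores per node, clock speed, and floating-point operations per cycle. All of the numbers are made-up assumptions for a hypothetical cluster, not the specification of any real machine.

```python
# Rough, illustrative estimate of theoretical peak FLOPS for a
# hypothetical cluster; every figure below is an assumed value.

nodes = 1000              # number of compute nodes (assumed)
cores_per_node = 128      # CPU cores per node (assumed)
clock_hz = 2.5e9          # clock speed in Hz (assumed)
flops_per_cycle = 32      # e.g. wide SIMD units doing fused multiply-adds (assumed)

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e15:.2f} petaFLOPS")
```

Real rankings such as the TOP500 report measured (Rmax) rather than theoretical (Rpeak) performance, so achieved FLOPS are always somewhat lower than an estimate like this.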

An immense collection of RAM-type memory units

A supercomputer's random-access memory (RAM) capacity is only the first piece of the memory puzzle. Despite the fact that RAM modules on supercomputers are spread across multiple nodes (a "node" is a high-performance computer with multiple processing units or cores), they can be viewed as a single pool. The RAM modules can work in tandem thanks to specialised software known as middleware. Supercomputers usually have multiple terabytes of RAM. Aside from its size, another distinguishing feature of supercomputer memory is that it is accessible to all computing units, enhancing a supercomputer's ability to solve large mathematical problems.
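To give a flavour of how memory spread across many nodes can be treated as one logical pool, here is a minimal sketch using mpi4py and psutil (both are assumptions for illustration; real supercomputer middleware is far more sophisticated). Each MPI rank reports the RAM visible on its node and the totals are summed across the machine.

```python
# Minimal sketch: summing per-node memory into one logical pool.
# Assumes mpi4py and psutil are installed; run with e.g. `mpirun -n 4 python pool.py`.
from mpi4py import MPI
import psutil

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_ram_bytes = psutil.virtual_memory().total        # RAM visible on this node
total_ram_bytes = comm.allreduce(local_ram_bytes, op=MPI.SUM)  # pooled across all ranks

if rank == 0:
    print(f"{comm.Get_size()} ranks, pooled RAM ~ {total_ram_bytes / 1e12:.2f} TB")
```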

High-speed interconnect between nodes

The vast number of nodes (think of nodes as powerful computers mounted in racks) that comprise a supercomputer communicate via high-speed switches. To provide high data throughput, the nodes are typically connected in a non-blocking, fat-tree topology. This not only provides up to 200 Gb/s of bandwidth between nodes, but also in-network processing acceleration for communication frameworks such as MPI (Message Passing Interface). Supercomputers are distinguished by their specialised high-speed interconnect (both bandwidth and latency), which is required to fully utilise their massive numbers of CPUs and RAM.
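Interconnect quality is usually characterised with a "ping-pong" test between two ranks. The sketch below, using mpi4py (an assumption, not something from the article), bounces a buffer back and forth and derives rough latency and bandwidth figures.

```python
# Minimal MPI ping-pong sketch: times a buffer exchanged between rank 0
# and rank 1. Run with `mpirun -n 2 python pingpong.py` (assumes mpi4py).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

size_bytes = 1 << 20                      # 1 MiB message (assumed size)
buf = np.zeros(size_bytes, dtype=np.uint8)
reps = 100

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1); comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0); comm.Send(buf, dest=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    rtt = elapsed / reps                  # average round-trip time
    print(f"round trip: {rtt * 1e6:.1f} us, "
          f"bandwidth ~ {2 * size_bytes / rtt / 1e9:.2f} GB/s")
```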

High input/output and file systems speeds

The high-speed computations of supercomputers are supported by equally fast data writing and reading capabilities, made possible by parallel file systems such as Lustre or GPFS. Fast read and write access to data is therefore another defining feature of a supercomputer.
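A quick way to get a feel for storage speed is to time a large sequential write and read. The sketch below is only an illustration: the target path is hypothetical and would normally point at a scratch directory on a parallel file system such as Lustre or GPFS.

```python
# Simple sequential write/read timing sketch; TARGET is a hypothetical path.
import os, time

TARGET = "/tmp/io_test.bin"          # hypothetical path; replace with a scratch directory
SIZE = 256 * 1024 * 1024             # 256 MiB test file (assumed)
block = os.urandom(4 * 1024 * 1024)  # 4 MiB blocks

t0 = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE // len(block)):
        f.write(block)
    f.flush(); os.fsync(f.fileno())  # force data to storage before stopping the clock
write_s = time.time() - t0

t0 = time.time()
with open(TARGET, "rb") as f:
    while f.read(len(block)):
        pass
read_s = time.time() - t0

print(f"write ~ {SIZE / write_s / 1e6:.0f} MB/s, read ~ {SIZE / read_s / 1e6:.0f} MB/s")
os.remove(TARGET)
```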

Custom software and specialized support

Most of us only see the machine when we think of a supercomputer. One of the most underappreciated "benefits" is the team that supports a supercomputer. Hundreds of brilliant programmers and IT professionals work tirelessly to create innovative solutions that ensure the highest computing performance and output levels from these complex systems.

Effective thermal management

With so many processors packed into a confined space, supercomputers generate enormous amounts of heat, so effective thermal management is essential. Facilities rely on elaborate air- or liquid-cooling systems (such as the direct-contact liquid cooling described later for PARAM Brahma) to keep the machine within safe operating temperatures and sustain its performance.


Functions of Supercomputer




Supercomputers are used for large-scale computations and data processing. They are used in scientific research, engineering, and mathematical modelling.

Supercomputers are used in data-intensive and computationally intensive scientific and engineering applications such as quantum mechanics, weather forecasting, oil and gas exploration, molecular modelling, physical simulations, aerodynamics, nuclear fusion research, and cryptanalysis. Early supercomputers ran custom-made operating systems tailored to each machine to maximise its speed. In recent years, supercomputer architecture has shifted away from proprietary, in-house operating systems and toward Linux. Although most supercomputers run Linux, each manufacturer optimises its own Linux derivative for maximum hardware performance; in 2017, half of the world's top 50 supercomputers ran SUSE Linux Enterprise Server.







Components of a Supercomputer




Supercomputers and their integrated devices are composed of hundreds of thousands of individual components. The vast majority of them are designed to support the underlying mechanisms that enable supercomputers to generate massive amounts of computational power.

  • Processors – Supercomputers can perform billions of demanding operations in a single second thanks to tens of thousands of processors. These processors fetch and execute programme instructions to perform calculations and initiate memory accesses.

  • Memory – Supercomputers have a large amount of memory, which allows them to retrieve data at any time. A node is made up of a memory block and a set of CPUs. Modern supercomputers contain tens of thousands of these nodes.

  • Interconnect – Instead of nodes working on separate tasks concurrently, the interconnect enables nodes to collaborate on a single job solution. The interconnect also creates a connection between the nodes and the I/O devices.

  • I/O System – The I/O system, which includes disc storage, networking, and tape devices, supports the peripheral subsystem.

  • Power supply – Supercomputers frequently draw electrical power on the order of five megawatts. As a result, power supplies are regularly upgraded and updated to meet changing demands (a rough estimate of what such a power draw costs in electricity appears below).
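To put the power figure in perspective, the back-of-the-envelope estimate below converts a 5 MW draw into an annual electricity bill. The tariff is an assumed illustrative value, but the result lands in the same range as the multi-million-dollar energy costs quoted in the next section.

```python
# Back-of-the-envelope annual energy cost for a 5 MW machine.
# The electricity tariff is an assumed illustrative figure.
power_mw = 5.0                  # power draw in megawatts (from the text above)
hours_per_year = 24 * 365
price_per_kwh = 0.15            # assumed tariff in $/kWh

annual_kwh = power_mw * 1000 * hours_per_year
annual_cost = annual_kwh * price_per_kwh
print(f"~ ${annual_cost / 1e6:.1f} million per year")   # roughly $6.6 million
```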



How much does a supercomputer cost?



| Performance Level | Year Achieved | Est. Original Cost | Inflation-Adjusted Cost (2022 $) |
|---|---|---|---|
| Megaflop | 1964 (CDC 6600) | $7 million | $59,780,000 |
| Gigaflop | 1985 (Cray-2) | $16 million | $39,040,000 |
| Teraflop | 1996 (ASCI Red) | $55 million | $92,720,000 |
| Petaflop | 2008 (Roadrunner) | $100 million | $123,220,000 |
| 10 Petaflop | 2011 (K Computer) | $1.25 billion | $1,475,000,000 (costs $10 million annually to operate) |


So you're in the market for a powerful supercomputer. Aside from the $6 to $7 million in annual energy costs, design and assembly could range between $100 million and $250 million, not including maintenance. The fastest supercomputer, Fugaku, cost around $1.2 billion to build.



Types of Supercomputers


There are two types of supercomputers:

  1. General purpose

  2. Special purpose


General purpose supercomputers are further classified into three types:

Vector processors: These machines were developed mainly in the 1980s and 1990s. Their processors operate on whole arrays of data at once, allowing massive mathematical operations to be completed in a short period of time (a miniature illustration of this idea follows after the three types below).

Tightly connected cluster computers: In these systems, groups of computers are connected and tasks are distributed evenly among them, increasing overall speed. There are four kinds of clusters: director-based clusters, two-node clusters, multi-node clusters, and massively parallel clusters.

Commodity clusters: In these systems, commodity computers are interconnected by high-bandwidth, low-latency local area networks.
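To illustrate the vector-processing idea in miniature, the NumPy sketch below applies a single arithmetic operation to an entire array at once instead of looping element by element; vector supercomputers did this in hardware on a far larger scale. The array size is an arbitrary choice for demonstration.

```python
# Tiny illustration of vectorised arithmetic (the idea behind vector processors):
# one operation is applied to a whole array rather than element by element.
import time
import numpy as np

n = 5_000_000                     # arbitrary demonstration size
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.time()
c_loop = [a[i] * b[i] for i in range(n)]   # scalar, element-by-element multiply
loop_s = time.time() - t0

t0 = time.time()
c_vec = a * b                              # vectorised: whole-array multiply
vec_s = time.time() - t0

print(f"loop: {loop_s:.2f} s, vectorised: {vec_s:.3f} s")
```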

Special purpose supercomputers, on the other hand, are supercomputers designed specifically to perform a specific task or achieve a specific goal. They typically employ Application-Specific Integrated Circuits (ASICs), which provide improved performance. Special-purpose supercomputers include Belle, Deep Blue, and Hydra, which were all designed to play chess, as well as Gravity Pipe for astrophysics and MDGRAPE-3 for protein structure computation and molecular dynamics.


How is it different from others?


Key Difference: A computer is a general-purpose programmable machine that performs arithmetic and logical operations in response to a set of instructions. Supercomputers are the most powerful computers, and as a result, they are more expensive than other types of computers.


A personal computer is a computer that can handle all of its own input, processing, output, and storage. It contains a processor, memory, and one or more input, output, and storage devices, and frequently includes a communications device. The PC and the Apple Macintosh are two popular personal computer architectures. A PC-compatible computer is any personal computer based on the original IBM personal computer design; companies such as Dell and Toshiba sell PC-compatible machines, which typically run a Windows operating system. Apple computers typically run the Macintosh operating system (macOS). Personal computers come in two main forms: desktop computers and notebook computers.

The following hardware components are required to build a general-purpose computer–

  • Central Processing Unit (CPU) – the primary component of a computer; it executes instructions.

  • Memory – where data, programmes, and intermediate results are stored.

  • Mass storage device – permanently stores large amounts of data and programmes.

  • Input device – typically a keyboard or mouse; data and instructions are entered into the computer through these devices.

  • Output device – displays the computer's results, for example a display screen or printer.

A typical PC can run multiple programmes at the same time. Supercomputers, on the other hand, have been designed to run as few programmes as possible as quickly as possible. The term "computer" usually refers to any general-purpose type of computer, whereas supercomputers are highly specialised computers. Supercomputers operate in a regulated environment.


| Parameters | Computer | Supercomputer |
|---|---|---|
| Definition | A general-purpose programmable machine that performs arithmetic and logical operations in response to a set of instructions. | A computer that is extremely fast, capable of performing hundreds of millions of instructions per second. |
| Components | A motherboard, CPU, memory (RAM), hard drive, and video card are all components of a personal computer. | Processors, memory, an I/O system, and an interconnect are all part of a supercomputer. |
| Example | A mainframe computer, used for mission-critical applications such as bulk data processing, enterprise resource planning, and transaction processing. | Deep Blue, a well-known chess-playing supercomputer. |
| Application areas | Education, health and medicine, inventory reporting, ticket booking, accounting and administration, teaching-learning tools, hotel management, banking and finance, and so on. | Development of nuclear weapons, accurate weather forecasting, host processes for a local computer, and so on. |
| Types | Supercomputers, mainframe computers, minicomputers, and microcomputers; microcomputers are further classified as desktop computers, laptop computers, and handheld computers. | Vector machines, parallel computers, clusters, and special-purpose supercomputers. |

Difference between Supercomputing and Quantum Computing

In contrast to conventional computers, which operate on long strings of binary bits, quantum computers use qubits (quantum bits). Qubits are built from specialised atoms or ions that are manufactured, isolated, and controlled in a quantum state inside vacuum chambers or cryostats at temperatures near absolute zero. Controlled qubits have the potential to provide significantly more processing power than binary ones and zeros.


Quantum computers also exploit superposition, which allows quantum particles to exist in multiple states at the same time until a measurement is performed. This counter-intuitive but very real phenomenon underlies the famed speed-ups that quantum computers promise.
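As a toy illustration of superposition (using NumPy as a stand-in for real quantum hardware), the sketch below applies a Hadamard gate to a qubit that starts in the state |0⟩, producing equal probabilities of measuring 0 or 1. It is a pedagogical state-vector simulation, not a quantum computation.

```python
# Toy state-vector illustration of superposition: a Hadamard gate puts a
# qubit that starts in |0> into an equal superposition of |0> and |1>.
import numpy as np

ket0 = np.array([1.0, 0.0])                       # qubit initialised to |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate

state = H @ ket0                                  # state after the gate
probabilities = np.abs(state) ** 2                # Born rule: |amplitude|^2

print("amplitudes:", state)           # [0.707..., 0.707...]
print("P(0), P(1):", probabilities)   # [0.5, 0.5]
```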


A supercomputer, as opposed to a general-purpose computer, is any computing device with a high compute-to-input/output ratio that delivers a large number of effective processing cycles per second for solving complex problems. Supercomputers operate at or near the highest computational rate currently available.


Quantum computers, on the other hand, use revolutionary quantum algorithms to accelerate digital computing in ways that traditional computers cannot. However, both the field and the technology are still in their infancy.



| Supercomputing | Quantum computing |
|---|---|
| Supercomputing is the use of supercomputers to solve extremely complex and massively data-laden problems. | Quantum computing is a new type of information processing based on the principles of quantum mechanics. |
| Supercomputers are systems that operate at high I/O rates and produce a large number of effective computing cycles per second. | A quantum computer is a computer that uses new quantum algorithms to speed up digital computation. |
| It is a tool that enables scientists and engineers to solve computational problems that would otherwise be intractable due to their size and complexity. | It is the application of quantum mechanical phenomena, such as superposition and entanglement, to data operations. |
| Simulation and modelling of physical phenomena such as climate change, explosions, and molecular behaviour are common applications. | The practical application of quantum computers is still in its early stages. |



Supercomputers in India



Supercomputing in India began in the 1980s, when the Indian government established an indigenous development programme in response to the challenges of obtaining supercomputers from outside sources. The National Aerospace Laboratories began work on the "Flosolver MK1" parallel processing system, which became operational in December 1986. Following that, other initiatives were commissioned from various organisations, including C-DAC, C-DOT, NAL, BARC, and ANURAG. C-DOT created the C-DOT High-Performance Parallel Processing System, BARC created the Anupam series of supercomputers, and ANURAG created the PACE family of supercomputers.

The C-DAC project released the "PARAM" family of supercomputers, and the National Supercomputing Mission (NSM), which began in 2015, further improved Indian supercomputing. The NSM announced a seven-year plan, costing Rs 4,500 crore, to install 73 indigenous supercomputers by 2022.

PARAM Siddhi-AI is India's fastest supercomputer, ranking 63rd on the TOP500 list in November 2020.


The Centre for Development of Advanced Computing (C-DAC) was given a three-year budget of Rs 375 million in November 1987 to develop a 1000 Mflops (1 Gflops) supercomputer.

Over three missions, C-DAC delivered the "PARAM" (Parallel Machine) series of supercomputers, listed below.



  1. PARAM 8000 – A 64-node machine and the first developed from the ground up, introduced in August 1991. It is based on a distributed-memory MIMD architecture with a reprogrammable interconnection network.

  2. PARAM 8600 – Introduced in 1992 as an enhanced version of the PARAM 8000. C-DAC added extra power by integrating the Intel i860 CPU; each 8600 cluster had the same processing power as four PARAM 8000 clusters.

  3. PARAM 9000 – Introduced in 1994 with the goal of combining cluster computing with massively parallel processing. Its Clos network design allowed the system to scale up to 200 CPUs, with typical configurations of 32–40 processors.

  4. PARAM 10000 – Unveiled in 1998. Each node was built on a Sun Enterprise 250 server containing two 400 MHz UltraSPARC II processors. The system's peak speed was 6.4 Gflops.

  5. PARAM Padma – Introduced in December 2002. It was the first Indian supercomputer to join the Top500 list, ranked 171st in June 2003. Its top speed was 1024 Gflops (about 1 Tflops).

  6. PARAM ISHAN – A 250 Teraflop hybrid HPC system at IIT Guwahati, launched in September 2016. It comprises 162 compute nodes and a Lustre parallel file system with 300 TB of storage.

  7. PARAM Brahma – This supercomputer has a computing capacity of 850 Teraflops and a storage capacity of 1 petabyte. Its cooling method, termed direct-contact liquid cooling, uses the thermal conductivity of liquid (specifically water) to maintain the system's temperature during operation and was the first of its kind in India. As of 2020, the system is installed at IISER Pune.

  8. PARAM Siddhi-AI – India's fastest high-performance computing-artificial intelligence (HPC-AI) supercomputer, with an Rpeak of 5.267 Pflops and an Rmax of 4.6 Pflops. It was ranked 63rd among the world's most powerful supercomputers in November 2020. It is based on the NVIDIA DGX SuperPOD reference architecture together with C-DAC's own HPC-AI engine, software frameworks, and cloud platform.

  9. PARAM Shivay – A high-performance computing cluster with an 833 Teraflop capacity, created at IIT (BHU) under the NSM at a cost of Rs 32.5 crore. It has about one lakh twenty thousand compute cores (CPU and GPU cores).

  10. PARAM Pravega – Deployed in January 2022 at the Indian Institute of Science (IISc), Bengaluru, with funding from the NSM. It runs CentOS 7.x, has 4 petabytes of storage, and delivers 3.3 petaflops of peak computational capability. IISc had already established a cutting-edge supercomputing centre several years earlier, deploying SahasraT, then the country's fastest supercomputer, in 2015.


Top 5 Supercomputers in World

Peak performance in supercomputing is an ever-changing target. A supercomputer, in fact, is defined as any machine that "performs at or near the currently highest operational rate." The field is a never-ending battle to be the best. Those who achieve the highest rank may only have it for a short time.

Competition is what keeps supercomputing exciting, pushing engineers to achieve feats that were unthinkable only a few years ago. To commemorate this amazing technology, let's look at the fastest computers as defined by the computer ranking project TOP500—as well as what these machines are used for.






5. Tianhe-2 (China)



Tianhe-2, which translates as "MilkyWay-2," debuted as the world's number one in June 2013. However, despite upgrades over the years to 4,981,760 cores running at 61.4 petaFLOPS, it is now barely hanging on to a top-five spot. Such is the ephemeral beauty of a modern supercomputer.

According to TOP500, the machine was developed by China's National University of Defense Technology (NUDT) and is primarily intended for government security applications. This means that much of Tianhe-2's work is kept secret, but judging by its processing power, it must be working on some pretty important projects.

4. Sunway TaihuLight (China)

Sunway TaihuLight, another former number one, dominated the list for two years after its debut in June 2016. It had 93.01 petaFLOPS and 10,649,000 cores at the time, making it the world's most powerful supercomputer by a wide margin, with more than five times the processing power of its nearest competitor (ORNL's Titan) and nearly 19 times more cores.

However, due to the rapid pace of technological advancement, no position is ever secure for long. In June 2018, TaihuLight lost the top spot to competitors.

Supercomputers save lives by forecasting serious storms like Cyclone Felling in the Southern Indian Ocean. Courtesy William Straka, UWM/NASA/NOAA.


TaihuLight's creators use the supercomputer at the National Supercomputing Center in Wuxi, China, for tasks ranging from climate science to advanced manufacturing. It has also been successful in marine forecasting, helping ships avoid rough seas and assisting with offshore oil drilling.

3. Sierra (US) 

Sierra, from Lawrence Livermore National Laboratory (LLNL), debuted at #3 on the list in June 2018 with 71.6 petaFLOPS. Later optimisation increased the processing speed of its 1,572,480 cores to 94.6 petaFLOPS, propelling it to second place in November 2018. However, the arrival of a new number one in June 2020 relegated Sierra to third place.

Sierra is specifically designed for modelling and simulations required by the US National Nuclear Security Administration and incorporates both IBM central processing units (CPUs) and NVIDIA graphics processing units (GPUs).

2. Summit (US)

Summit, at the US Department of Energy's Oak Ridge National Laboratory (ORNL) in Tennessee, dropped to second place after two years in first place. Courtesy ORNL.

As part of the US Department of Energy's renewed commitment to supercomputing power, Oak Ridge National Laboratory's (ORNL) Summit took the top spot from China for the first time in 6 years in June 2018.

Summit has increased its High Performance Linpack (HPL) performance from 122.3 to 148.6 petaFLOPS since its debut on the list in June 2018. Unusually for such a powerful machine, Summit was initially ranked third on the Green500, which measures energy efficiency in supercomputers, though it has since dropped to ninth place.

The 2018 Gordon Bell Prize was awarded to a seven-member ORNL team for their use of Summit to process genetic data in order to better understand how individuals develop chronic pain and respond to opioids. Summit is now playing an important role in the global race to find treatments and vaccines for COVID-19.

1. Fugaku (Japan)

Japan's Fugaku, developed jointly by RIKEN and Fujitsu, is the world's fastest supercomputer. Japan had not held the top spot since June 2011, when Fugaku's predecessor, the K computer, debuted at number one.

Fugaku, the world's fastest supercomputer, features several architectural innovations that could pave the way for even higher performance. Courtesy RIKEN.

With nearly 7.3 million cores, Fugaku comfortably outperforms the previous number one, Summit, and its 148.6 petaFLOPS, bringing HPC technology one step closer to the promised exascale era.

Fugaku is the world's first top-ranked system to use ARM processors. Other new features include high-bandwidth memory attached to each processor and a new iteration of the Tofu interconnect that provides tight integration between all system nodes.

The ARM-based architecture represents a significant departure from the type of compute traditionally used in supercomputers. Its designers see its success as proof that there is still room for innovation in high-performance computing. 

Fugaku, which is housed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, is designed for applications that address high-priority social and scientific issues. These include drug discovery, personalised medicine, weather and climate forecasting, clean energy development, and research into the universe's fundamental laws. It is already being used as a test bed for COVID-19 research. Fugaku will officially launch in April 2021.

The race to own the most powerful supercomputer is never truly over. This friendly competition between countries has fueled a surge in processing power that shows no signs of abating anytime soon. We can only hope that scientists will continue to use supercomputers for important projects such as curing debilitating diseases in the coming years.


Comparison of system specification of Fugaku and PARAM Ganga


| Sr. No. | System Specification | Fugaku | PARAM Ganga |
|---|---|---|---|
| 1. | Nodes | 158,976 nodes | 332 nodes |
| 2. | Peak performance | Double precision (64-bit): 537 petaflops; single precision (32-bit): 1.07 exaflops; half precision (16-bit): 2.15 exaflops; integer (8-bit): 4.30 exaflops | 1.67 petaflops |
| 3. | Total memory | 4.85 petabytes | 104,832 TB |
| 4. | Total memory bandwidth | 163 PB/s | 100 Gbps |
| 5. | Compilers | Fortran 2008 and a Fortran 2018 subset; C11 with GNU and Clang extensions; C++14 and a C++17 subset with GNU and Clang extensions; OpenMP 4.5 and an OpenMP 5.0 subset | intel-2018.4 |
| 6. | Scripting languages | Python + NumPy + SciPy, Ruby | Python, NumPy, RAPIDS |
| 7. | OS | Red Hat Enterprise Linux 8, McKernel | CentOS 7.6 |
| 8. | MPI | Fujitsu MPI (based on Open MPI), RIKEN-MPICH (based on MPICH) | Intel MPI, Open MPI |
| 9. | File I/O | LLIO | NFS, local FS (XFS), Lustre, GPFS |
| 10. | Processors | Fujitsu A64FX processor | Intel Xeon Cascade Lake processors and NVIDIA Tesla V100 GPUs |
