Overview of Supercomputers and Their Evolution
Supercomputer
A supercomputer is a type of computer with a high level of performance as compared to a general-
purpose computer. The performance of a supercomputer is commonly measured in floating-point
operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017,
supercomputers have existed which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS,
100 petaFLOPS or 100 PFLOPS).[3] For comparison, a desktop computer has performance in the range
of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13).[4][5] Since November 2017, all of the
world's fastest 500 supercomputers run on Linux-based operating systems.[6] Additional research is
being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster,
more powerful and technologically superior exascale supercomputers.[7]
Supercomputers play an important role in the field of computational science, and are used for a wide
range of computationally intensive tasks in various fields, including quantum mechanics, weather
forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures
and properties of chemical compounds, biological macromolecules, polymers, and crystals), and
physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft
aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the
field of cryptanalysis.[8]
[Image: The IBM Blue Gene/P supercomputer "Intrepid" at Argonne National Laboratory runs 164,000 processor cores using normal data center air conditioning, grouped in 40 racks/cabinets connected by a high-speed 3D torus network.[1][2]]
Supercomputers were introduced in the 1960s, and for several decades the fastest was made by Seymour
Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or
monogram. The first such machines were highly tuned conventional designs that ran more quickly than
their more general-purpose contemporaries. Through the decade, increasing amounts of parallelism
were added, with one to four processors being typical. In the 1970s, vector processors operating on large
arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector
computers remained the dominant design into the 1990s. Since then, massively parallel
supercomputers with tens of thousands of off-the-shelf processors have been the norm.[9][10]
The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted
dominance of the field, and later through a variety of technology companies. Japan made major strides
in the field in the 1980s and 90s, with China becoming increasingly active in the field. As of May 2022,
the fastest supercomputer on the TOP500 supercomputer list is Frontier, in the US, with a LINPACK
benchmark score of 1.102 ExaFlop/s, followed by Fugaku.[11] The US has five of the top 10; China has
two; Japan, Finland, and France have one each.[12] In June 2018, all combined supercomputers on the
TOP500 list broke the 1 exaFLOPS mark.[13]
[Image: Computing power of the top 1 supercomputer each year, measured in FLOPS]
History
In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first
supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than
the newly emerging disk drive technology.[14] Also, among the first supercomputers was the IBM 7030 Stretch. The IBM
7030 was built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times
faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions,
prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was
completed in 1961 and despite not meeting the challenge of a hundredfold increase in performance, it was purchased by
the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the
basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.[15]
[Image: A circuit board from the IBM 7030]
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a
team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because
magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words,
with a drum providing memory for a further 96,000 words. The Atlas operating system swapped data in the form of
pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to
supercomputing, so that more than one program could be executed on the supercomputer at any one time.[16] Atlas was
a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds
approaching one microsecond per instruction, about one million instructions per second.[17]
The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon
transistors. Silicon transistors could run more quickly and the overheating problem was solved by introducing
refrigeration to the supercomputer design.[18] Thus, the CDC 6600 became the fastest computer in the world. Given that
the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and
defined the supercomputing market, when one hundred computers were sold at $8 million each.[19][20][21][22]
[Image: The CDC 6600. Behind the system console are two of the "arms" of the plus-sign shaped cabinet with the covers opened. Each arm of the machine had up to four such racks. On the right is the cooling system.]
Cray left CDC in 1972 to form his own company, Cray Research.[20]
Four years after leaving CDC, Cray delivered the
80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history.[23][24] The Cray-2, released
in 1985, had eight central processing units (CPUs) and was liquid cooled: the electronics coolant Fluorinert was pumped
through the supercomputer's architecture. It reached 1.9 gigaFLOPS, making it the first supercomputer to break the
gigaflop barrier.[25]
In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with
514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It
was mainly used for rendering realistic 3D computer graphics.[28] Fujitsu's VPP500 from 1992 is unusual since, to
achieve higher speeds, its processors used GaAs, a material normally reserved for microwave applications due to its
toxicity.[29] Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994
with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor.[30][31] The Hitachi SR2201 obtained a peak performance
of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[32][33][34]
The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest
in the world in 1993. The Paragon was a MIMD machine which connected processors via a high speed two-dimensional
mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[35]
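To make the message-passing model concrete, the following is a minimal sketch using the mpi4py bindings (a modern library, not the Paragon's own software; the mesh layout and array contents are illustrative assumptions): ranks arranged on a two-dimensional process grid exchange boundary data with their neighbors via MPI.

```python
# Hypothetical sketch (mpi4py postdates the Paragon) of mesh-structured
# message passing: ranks on a 2D process grid exchange boundary data.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), 2)    # e.g. 4 ranks -> 2x2 mesh
cart = comm.Create_cart(dims, periods=[False, False])
rank = cart.Get_rank()

local = np.full(4, rank, dtype='d')            # this rank's boundary data
recv = np.empty(4, dtype='d')

# Shift along dimension 0: 'src' is the neighbor we receive from,
# 'dst' the one we send to (MPI.PROC_NULL at the mesh edges).
src, dst = cart.Shift(0, 1)
cart.Sendrecv(local, dest=dst, recvbuf=recv, source=src)

if src != MPI.PROC_NULL:
    print(f"rank {rank} got boundary data from rank {src}: {recv}")
```

Run under an MPI launcher, e.g. "mpiexec -n 4 python mesh_exchange.py". The combined Sendrecv call avoids the deadlock that naively paired blocking sends and receives can cause.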
Software development remained a problem, but the CM (Connection Machine) series sparked off considerable research into
this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1,
MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so
much that a supercomputer could be built using them as the individual processing units, instead of using custom
chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later
machines adding graphics units to the mix.[9][10]
[Image: A cabinet of the massively parallel Blue Gene/L, showing the stacked blades, each holding many processors]
In 1998, David Bader developed the first Linux supercomputer using commodity parts.[36] While at the University of New Mexico, Bader sought to build a
supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta
Technologies "AltaCluster" of eight dual, 333 MHz, Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of
software to provide Linux support for necessary components as well as code from members of the National Computational Science Alliance (NCSA) to
ensure interoperability, as none of it had been run on Linux previously.[37] Using the successful prototype design, he led the development of "RoadRunner,"
the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology
Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the
world.[37][38] Though Linux-based clusters using consumer-grade parts, such as Beowulf, existed prior to the development of Bader's prototype and
RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.[37]
Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the
processing power of many computers, organized as distributed, diverse administrative domains, is opportunistically
used whenever a computer is available.[39] In another approach, many processors are used in proximity to each other,
e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect
become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand
systems to three-dimensional torus interconnects.[40][41] The use of multi-core processors combined with centralization
is an emerging direction, e.g. as in the Cyclops64 system.[42][43]
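As an illustration of why the torus topology is attractive, the following minimal sketch (illustrative only, not any vendor's routing code; the 8×8×8 dimensions are an assumption) shows how every node in a three-dimensional torus has exactly six neighbors, with coordinates wrapping around at the edges so no node sits on a boundary:

```python
# Illustrative sketch: neighbor coordinates on a 3D torus interconnect.
def torus_neighbors(x, y, z, dims):
    """Return the six neighbor coordinates of node (x, y, z) on a torus."""
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

# On an 8x8x8 torus, the corner node wraps around to the opposite faces.
print(torus_neighbors(0, 0, 0, (8, 8, 8)))
```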
As the price, performance and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved,
a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them.[44] However, other
systems such as the K computer continue to use conventional processors such as SPARC-based designs, and the overall
applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate, in that while a GPGPU may be tuned
to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application
to it.[45] However, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs.[46][47][48]
[Image: The CPU share of TOP500]
High-performance computers have an expected life cycle of about three years before requiring an upgrade.[49] The Gyoukou supercomputer is unique in
that it uses both a massively parallel design and liquid immersion cooling.
A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially
programmed FPGA chips or even custom ASICs, allowing better price/performance ratios by sacrificing generality.
Examples of special-purpose supercomputers include Belle,[50] Deep Blue,[51] and Hydra[52] for playing chess, Gravity
Pipe for astrophysics,[53] MDGRAPE-3 for protein structure prediction and molecular dynamics,[54] and Deep Crack for
breaking the DES cipher.[55]
Heat management is a major issue in complex electronic devices and affects powerful computer systems in various
ways.[65] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional
computer cooling technologies. The supercomputing awards for green computing reflect this issue.[66][67][68]
[Image: Diagram of a three-dimensional torus interconnect used by systems such as Blue Gene, Cray XT3, etc.]
The packing of thousands of processors together inevitably generates a significant amount of heat that needs to be
dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the
modules under pressure.[62] However, the submerged liquid cooling approach was
not practical for the multi-cabinet systems based on off-the-shelf processors, and in
System X a special cooling system that combined air conditioning with liquid cooling
was developed in conjunction with the Liebert company.[63]
In the Blue Gene system, IBM deliberately used low power processors to deal with
heat density.[69] The IBM Power 775, released in 2011, has closely packed elements
that require water cooling.[70] The IBM Aquasar system uses hot water cooling to
achieve energy efficiency, the water being used to heat buildings as well.[71][72]
[Image: An IBM HS20 blade]
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, Roadrunner by
IBM operated at 376 MFLOPS/W.[73][74] In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W[75][76] and in June
2011 the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving
2,097 MFLOPS/W) with the DEGIMA cluster in Nagasaki placing third with 1,375 MFLOPS/W.[77]
[Image: The Summit supercomputer was, as of November 2018, the fastest supercomputer in the world.[56] With a measured power efficiency of 14.668 GFLOPS/watt it is also the third most energy efficient in the world.[57]]
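These figures are simple ratios of sustained performance to power draw, as the short check below shows; the ~10.1 MW power figure used for Summit is an assumption for illustration:

```python
# FLOPS per watt is sustained performance divided by power draw.
def gflops_per_watt(rmax_pflops, power_mw):
    """Sustained performance in PFLOPS and power in MW -> GFLOPS per watt."""
    return (rmax_pflops * 1e6) / (power_mw * 1e6)  # GFLOPS / watts

# Summit: 148.6 PFLOPS Rmax at an assumed draw of roughly 10.1 MW.
print(f"{gflops_per_watt(148.6, 10.1):.2f} GFLOPS/W")  # ~14.7, matching [57]
```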
Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or
circulating refrigerants can remove waste heat,[78] the ability of the cooling systems to remove waste heat is a limiting
factor.[79][80] As of 2015, many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine – designers
generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the
supercomputer. Designs for future supercomputers are power-limited – the thermal design power of the supercomputer as a whole, the amount that the
power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power
consumption of the electronic hardware.[81]
Operating systems
Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer
architecture.[82] While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house
operating systems to the adaptation of generic software such as Linux.[83]
Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run
different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger
system such as a Linux-derivative on server and I/O nodes.[84][85][86]
While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively
parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal
with inevitable hardware failures when tens of thousands of processors are present.[87]
Although most modern supercomputers use Linux-based operating systems, each manufacturer has its own specific Linux-derivative, and no industry
standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware
design.[82][88]
In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for
tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the
interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time
waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming
models such as CUDA or OpenCL.
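As a loose analogy to OpenMP's shared-memory, loop-parallel style (sketched here with Python's standard multiprocessing module rather than OpenMP itself; the workload is invented for illustration), a single node's cores can split one large reduction across chunks:

```python
# Loose analogy to an OpenMP parallel-for: split one large reduction
# across the cores of a single shared-memory node.
from multiprocessing import Pool
import math

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(math.sqrt(i) for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(chunk_sum, chunks))  # parallel over chunks
    print(f"sum of sqrt over [0, {n}) = {total:.3e}")
```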
Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and
debugging such applications.
Opportunistic approaches
Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many
loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied
to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However,
basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing
tasks such as fluid dynamic simulations.[91]
The fastest grid computing system is the volunteer computing project Folding@home (F@h). As of April 2020, F@h
reported 2.5 exaFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on
various GPUs, and the rest from various CPU systems.[92]
[Image: Example architecture of a grid computing system connecting many personal computers over the internet]
The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of volunteer computing
projects. As of February 2017, BOINC recorded a processing power of over 166 petaFLOPS through over 762 thousand
active computers (hosts) on the network.[93]
As of October 2016, Great Internet Mersenne Prime Search's (GIMPS) distributed Mersenne Prime search achieved about 0.313 PFLOPS through over
1.3 million computers.[94] The PrimeNet server has supported GIMPS's grid computing approach, one of the earliest volunteer computing projects, since
1997.
Quasi-opportunistic approaches
Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked geographically dispersed
computers performs computing tasks that demand huge processing power.[95] Quasi-opportunistic supercomputing aims to provide a higher quality of
service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about
the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding
parallel computing software in grids should be achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems,
communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.[95]
In 2016, Penguin Computing, Parallel Works, R-HPC, Amazon Web Services, Univa, Silicon Graphics International, Rescale, Sabalcore, and Gomput
started to offer HPC cloud computing. The Penguin On Demand (POD) cloud is a bare-metal compute model to execute code, but each user is given a
virtualized login node. POD computing nodes are connected via non-virtualized 10 Gbit/s Ethernet or QDR InfiniBand networks. User connectivity to the
POD data center ranges from 50 Mbit/s to 1 Gbit/s.[100] Citing Amazon's EC2 Elastic Compute Cloud, Penguin Computing argues that virtualization of
compute nodes is not suitable for HPC. Penguin Computing has also criticized HPC clouds for allocating computing nodes to customers that may be
far apart, causing latency that impairs performance for some HPC applications.[101]
Performance measurement
Supercomputers generally aim at the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as
using the maximum computing power to solve a single large problem in the shortest amount of time. Capacity computing, in contrast, is typically thought
of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems.[102] Architectures that lend
themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that
they do not solve a single very complex problem.[102]
Performance metrics
No single number can reflect the overall performance of a computer system, yet the goal of the Linpack
benchmark is to approximate how fast the computer solves numerical problems, and it is widely used in
the industry.[105] The FLOPS measurement is either quoted based on the theoretical floating point
performance of a processor (derived from manufacturer's processor specifications and shown as
"Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK
benchmarks and shown as "Rmax" in the TOP500 list.[106] The LINPACK benchmark typically performs LU decomposition of a large matrix.[107] The
LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of
many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or
may need a high performance I/O system to achieve high levels of performance.[105]
[Image: Top supercomputer speeds: logscale speed over 60 years]
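A miniature, unofficial stand-in for this measurement illustrates the idea: time a dense LU factorization and convert its leading-order (2/3)n³ floating-point operation count into a FLOPS figure. This is only a sketch under simplifying assumptions; the real HPL benchmark solves a full linear system at vastly larger scale:

```python
# Mini LINPACK-style measurement: time a dense LU factorization and
# convert its leading-order FLOP count into a GFLOPS figure.
import time
import numpy as np
from scipy.linalg import lu_factor

n = 2000
a = np.random.rand(n, n)

start = time.perf_counter()
lu, piv = lu_factor(a)          # LU decomposition with partial pivoting
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3          # leading-order FLOP count for dense LU
print(f"LU of {n}x{n} matrix: {flops / elapsed / 1e9:.2f} GFLOPS")
```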
This is a recent list of the computers which appeared at the top of the TOP500 list,[108] and the "Peak
speed" is given as the "Rmax" rating. In 2018, Lenovo became the world's largest provider for the
TOP500 supercomputers with 117 units produced.[109]
| Rank (previous) | Rmax / Rpeak (PetaFLOPS) | Name | Model | CPU cores | Accelerator (e.g. GPU) cores | Interconnect | Manufacturer | Site, country |
|---|---|---|---|---|---|---|---|---|
| 1 | 1,102.00 / 1,685.65 | Frontier | HPE Cray EX235a | 591,872 (9,248 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) | 36,992 × 220 AMD Instinct MI250X | Slingshot-11 | HPE | Oak Ridge National Laboratory, United States |
| 2 | 442.010 / 537.212 | Fugaku | Supercomputer Fugaku | 7,630,848 (158,976 × 48-core Fujitsu A64FX @2.2 GHz) | 0 | Tofu interconnect D | Fujitsu | RIKEN Center for Computational Science, Japan |
| 3 | 309.10 / 428.70 | LUMI | HPE Cray EX235a | 150,528 (2,352 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) | 9,408 × 220 AMD Instinct MI250X | Slingshot-11 | HPE | EuroHPC, Finland |
| 4 | 174.70 / 255.75 | Leonardo | BullSequana XH2000 | 110,592 (3,456 × 32-core Xeon Platinum 8358 @2.6 GHz) | 13,824 × 108 Nvidia Ampere A100 | Nvidia HDR100 Infiniband | Atos | EuroHPC, Italy |
| 5 (4) | 148.60 / 200.795 | Summit | IBM Power System AC922 | 202,752 (9,216 × 22-core IBM POWER9 @3.07 GHz) | 27,648 × 80 Nvidia Tesla V100 | InfiniBand EDR | IBM | Oak Ridge National Laboratory, United States |
| 6 (5) | 94.640 / 125.712 | Sierra | IBM Power System S922LC | 190,080 (8,640 × 22-core IBM POWER9 @3.1 GHz) | 17,280 × 80 Nvidia Tesla V100 | InfiniBand EDR | IBM | Lawrence Livermore National Laboratory, United States |
| 7 (6) | 93.015 / 125.436 | Sunway TaihuLight | Sunway MPP | 10,649,600 (40,960 × 260-core Sunway SW26010 @1.45 GHz) | 0 | Sunway[111] | NRCPC | National Supercomputing Center in Wuxi, China |
| 8 (7) | 70.87 / 93.75 | Perlmutter | HPE Cray EX235n | ? × ?-core AMD Epyc 7763 64-core @2.45 GHz | ? × 108 Nvidia Ampere A100 | Slingshot-10 | HPE | NERSC, United States |
| 9 (8) | 63.460 / 79.215 | Selene | Nvidia | 71,680 (1,120 × 64-core AMD Epyc 7742 @2.25 GHz) | 4,480 × 108 Nvidia Ampere A100 | Mellanox HDR Infiniband | Nvidia | Nvidia, United States |
| 10 (9) | 61.445 / 100.679 | Tianhe-2A | TH-IVB-FEP | 427,008 (35,584 × 12-core Intel Xeon E5-2692 v2 @2.2 GHz) | 35,584 × 128-core Matrix-2000[112] | TH Express-2 | NUDT | National Supercomputer Center in Guangzhou, China |
Applications
The stages of supercomputer application may be summarized in the following table:
| Decade | Uses and computer involved |
|---|---|
| 1970s | Weather forecasting, aerodynamic research (Cray-1).[114] |
| 1980s | Probabilistic analysis,[115] radiation shielding modeling[116] (CDC Cyber). |
| 1990s | Brute force code breaking (EFF DES cracker).[117] |
| 2000s | 3D nuclear test simulations as a substitute for legal conduct Nuclear Non-Proliferation Treaty (ASCI Q).[118] |
| 2010s | Molecular dynamics simulation (Tianhe-1A).[119] |
| 2020s | Scientific research for outbreak prevention and electrochemical reaction research.[120] |
The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral
cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to
simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[121]
Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch
hundreds of millions of observations to help make weather forecasts more accurate.[122]
In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale
project.[123]
The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.[124]
In early 2020, COVID-19 was front and center in the world. Supercomputers ran various simulations to search for compounds that could potentially stop the
spread. These machines run for tens of hours, using many CPUs in parallel, to model the different processes.[125][126][127]
The cost of operating high performance supercomputers has risen, mainly due to increasing power
consumption. In the mid-1990s a top 10 supercomputer required in the range of 100 kilowatts; in 2010
the top 10 supercomputers required between 1 and 2 megawatts.[134] A 2010 study commissioned by
DARPA identified power consumption as the most pervasive challenge in achieving exascale computing.
[135] At the time, a megawatt-year of energy consumption cost about 1 million dollars.
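That rule of thumb is easy to check with back-of-the-envelope arithmetic; assuming an illustrative industrial electricity rate of about $0.11/kWh, a megawatt-year indeed comes to roughly one million dollars:

```python
# Sanity check of the "one megawatt-year costs about $1 million" figure.
power_kw = 1000                 # 1 MW expressed in kilowatts
hours_per_year = 24 * 365       # 8,760 hours
price_per_kwh = 0.11            # assumed electricity rate in USD/kWh

annual_cost = power_kw * hours_per_year * price_per_kwh
print(f"1 MW for one year at ${price_per_kwh}/kWh: ${annual_cost:,.0f}")
# -> $963,600, i.e. on the order of $1 million per megawatt-year
```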
Supercomputing facilities were constructed to efficiently remove the increasing amount of heat
produced by modern multi-core central processing units. Based on the energy consumption of the Green
500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would
have required nearly 500 megawatts. Operating systems were developed for existing hardware to
conserve energy whenever possible.[136] CPU cores not in use during the execution of a parallelized
application were put into low-power states, producing energy savings for some supercomputing
applications.[137]
The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of
resources through a distributed supercomputer infrastructure. National supercomputing centers first
emerged in the US, followed by Germany and Japan. The European Union launched the Partnership for
Advanced Computing in Europe (PRACE) with the aim of creating a persistent pan-European
supercomputer infrastructure with services to support scientists across the European Union in porting,
scaling and optimizing supercomputing applications.[134] Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center in
Reykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the
need for active cooling, making it one of the greenest facilities in the world of computers.[138]
[Image: Distribution of TOP500 supercomputers among different countries, in November 2015]
Funding supercomputer hardware also became increasingly difficult. In the mid-1990s a top 10 supercomputer cost about 10 million euros, while in 2010
the top 10 supercomputers required an investment of between 40 and 50 million euros.[134] In the 2000s national governments put in place different
strategies to fund supercomputers. In the UK the national government funded supercomputers entirely and high performance computing was put under the
control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding.[134]
In fiction
Examples of supercomputers in fiction include HAL 9000, Multivac, The Machine Stops, GLaDOS, The Evitable Conflict, Vulcan's Hammer, Colossus,
WOPR, AM, and Deep Thought. The Cray X-MP was mentioned as the supercomputer used to sequence the DNA extracted from preserved parasites in the
Jurassic Park series.
See also
ACM/IEEE Supercomputing Conference
ACM SIGHPC
High-performance computing
High-performance technical computing
Jungle computing
Nvidia Tesla Personal Supercomputer
Parallel computing
Supercomputing in China
Supercomputing in Europe
Supercomputing in India
Supercomputing in Japan
Testing high-performance computing applications
Ultra Network Technologies
Quantum computing
References
1. "IBM Blue gene announcement" (http://www-03.ibm.com/press/us/en/pressrelease/21791.wss). 03.ibm.com. 26 June 2007. Retrieved 9 June 2012.
2. "Intrepid" (https://archive.today/20130507051619/https://www.alcf.anl.gov/intrepid). Argonne Leadership Computing Facility. Argonne National
Laboratory. Archived from the original (https://www.alcf.anl.gov/intrepid) on 7 May 2013. Retrieved 26 March 2020.
3. "The List: June 2018" (https://www.top500.org/lists/2018/06/). Top 500. Retrieved 25 June 2018.
4. "AMD Playstation 5 GPU Specs" (https://www.techpowerup.com/gpu-specs/playstation-5-gpu.c3480). TechPowerUp. Retrieved 11 September 2021.
5. "NVIDIA GeForce GT 730 Specs" (https://www.techpowerup.com/gpu-specs/geforce-gt-730.c1988). TechPowerUp. Retrieved 11 September 2021.
6. "Operating system Family / Linux" (https://www.top500.org/statistics/details/osfam/1). TOP500.org. Retrieved 30 November 2017.
7. Anderson, Mark (21 June 2017). "Global Race Toward Exascale Will Drive Supercomputing, AI to Masses." (https://spectrum.ieee.org/tech-talk/computi
ng/hardware/global-race-toward-exascale-will-drive-supercomputing-ai-to-masses) Spectrum.IEEE.org. Retrieved 20 January 2019.
8. Lemke, Tim (8 May 2013). "NSA Breaks Ground on Massive Computing Center" (http://odenton.patch.com/articles/nsa-breaks-ground-on-massive-com
puting-center). Retrieved 11 December 2013.
9. Hoffman, Allan R.; et al. (1990). Supercomputers: directions in technology and applications. National Academies. pp. 35–47. ISBN 978-0-309-04088-4.
10. Hill, Mark Donald; Jouppi, Norman Paul; Sohi, Gurindar (1999). Readings in computer architecture. Gulf Professional. pp. 40–49. ISBN 978-1-55860-
539-8.
11. Paul Alcorn (30 May 2022). "AMD-Powered Frontier Supercomputer Breaks the Exascale Barrier, Now Fastest in the World" (https://www.tomshardwar
e.com/news/amd-powered-frontier-supercomputer-breaks-the-exascale-barrier-now-fastest-in-the-world). Tom's Hardware. Retrieved 30 May 2022.
12. "Japan Captures TOP500 Crown with Arm-Powered Supercomputer - TOP500 website" (https://top500.org/news/japan-captures-top500-crown-arm-po
wered-supercomputer/). www.top500.org.
13. "Performance Development" (https://www.top500.org/statistics/perfdevel/). www.top500.org. Retrieved 27 October 2022.
14. Eric G. Swedin; David L. Ferro (2007). Computers: The Life Story of a Technology. JHU Press. p. 57. ISBN 9780801887741.
15. Eric G. Swedin; David L. Ferro (2007). Computers: The Life Story of a Technology. JHU Press. p. 56. ISBN 9780801887741.
16. Eric G. Swedin; David L. Ferro (2007). Computers: The Life Story of a Technology. JHU Press. p. 58. ISBN 9780801887741.
17. The Atlas (https://web.archive.org/web/20120728105352/http://www.computer50.org/kgill/atlas/atlas.html), University of Manchester, archived from the
original (http://www.computer50.org/kgill/atlas/atlas.html) on 28 July 2012, retrieved 21 September 2010
18. The Supermen, Charles Murray, Wiley & Sons, 1997.
19. Paul E. Ceruzzi (2003). A History of Modern Computing (https://archive.org/details/historyofmodernc00ceru_0). MIT Press. p. 161 (https://archive.org/d
etails/historyofmodernc00ceru_0/page/161). ISBN 978-0-262-53203-7.
20. Hannan, Caryn (2008). Wisconsin Biographical Dictionary (https://books.google.com/books?id=V08bjkJeXkAC&pg=PA83). State History Publications.
pp. 83–84. ISBN 978-1-878592-63-7.
21. John Impagliazzo; John A. N. Lee (2004). History of computing in education (https://archive.org/details/springer_10.1007-b98985). Springer Science &
Business Media. p. 172 (https://archive.org/details/springer_10.1007-b98985/page/n179). ISBN 978-1-4020-8135-4.
22. Andrew R. L. Cayton; Richard Sisson; Chris Zacher (2006). The American Midwest: An Interpretive Encyclopedia (https://books.google.com/books?id=
n3Xn7jMx1RYC&pg=PA1489). Indiana University Press. p. 1489. ISBN 978-0-253-00349-2.
23. Readings in computer architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi 1999 ISBN 978-1-55860-539-8 page 41-48
24. Milestones in computer science and information technology by Edwin D. Reilly 2003 ISBN 1-57356-521-0 page 65
25. Due to Soviet propaganda, it can be read sometimes that the Soviet supercomputer M13 was the first to reach the gigaflops barrier. Actually, the M13
building began in 1984, but it was not operational before 1986. Rogachev Yury Vasilievich, Russian Virtual Computer Museum (https://www.computer-m
useum.ru/english/galglory_en/Rogachev.php)
26. "Seymour Cray Quotes" (https://www.brainyquote.com/quotes/seymour_cray_103779). BrainyQuote.
27. Steve Nelson (3 October 2014). "ComputerGK.com : Supercomputers" (http://www.computergk.com/computers/supercomputers/).
28. "LINKS-1 Computer Graphics System-Computer Museum" (http://museum.ipsj.or.jp/en/computer/other/0013.html). museum.ipsj.or.jp.
29. "VPP500 (1992) - Fujitsu Global" (https://www.fujitsu.com/global/about/corporate/history/products/computer/supercomputer/vpp500.html).
30. "TOP500 Annual Report 1994" (http://www.netlib.org/benchmark/top500/reports/report94/main.html). Netlib.org. 1 October 1996. Retrieved 9 June
2012.
31. N. Hirose & M. Fukuda (1997). "Numerical Wind Tunnel (NWT) and CFD Research at National Aerospace Laboratory". Proceedings High Performance
Computing on the Information Superhighway. HPC Asia '97. Proceedings of HPC-Asia '97. IEEE Computer Society. pp. 99–103.
doi:10.1109/HPC.1997.592130 (https://doi.org/10.1109%2FHPC.1997.592130). ISBN 0-8186-7901-8.
32. H. Fujii, Y. Yasuda, H. Akashi, Y. Inagami, M. Koga, O. Ishihara, M. Syazwan, H. Wada, T. Sumimoto, Architecture and performance of the Hitachi
SR2201 massively parallel processor system (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.5625&rep=rep1&type=pdf), Proceedings of
11th International Parallel Processing Symposium, April 1997, pages 233–241.
33. Y. Iwasaki, The CP-PACS project, Nuclear Physics B: Proceedings Supplements, Volume 60, Issues 1–2, January 1998, pages 246–254.
34. A.J. van der Steen, Overview of recent supercomputers (https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.79.7986&rep=rep1&type=pdf),
102. The Potential Impact of High-End Capability Computing on Four Illustrative Fields of Science and Engineering by Committee on the Potential Impact of
High-End Computing on Illustrative Fields of Science and Engineering and National Research Council (28 October 2008) ISBN 0-309-12485-9 page 9
103. Xingfu Wu (1999). Performance Evaluation, Prediction and Visualization of Parallel Systems (https://books.google.com/books?id=IJZt5H6R8OIC&pg=P
A116). Springer Science & Business Media. pp. 114–117. ISBN 978-0-7923-8462-5.
104. Brehm, M. and Bruhwiler, D. L. (2015). "Performance Characteristics of the Plasma Wakefield Acceleration Driven by Proton Bunches". Journal of
Physics: Conference Series.
105. Dongarra, Jack J.; Luszczek, Piotr; Petitet, Antoine (2003), "The LINPACK Benchmark: past, present and future" (http://www.netlib.org/utk/people/Jack
Dongarra/PAPERS/hplpaper.pdf) (PDF), Concurrency and Computation: Practice and Experience, 15 (9): 803–820, doi:10.1002/cpe.728 (https://doi.or
g/10.1002%2Fcpe.728), S2CID 1900724 (https://api.semanticscholar.org/CorpusID:1900724)
106. "Understanding measures of supercomputer performance and storage system capacity" (https://kb.iu.edu/d/apeq#measure-flops). Indiana University.
Retrieved 3 December 2017.
107. "Frequently Asked Questions" (https://www.top500.org/resources/frequently-asked-questions/). TOP500.org. Retrieved 3 December 2017.
108. "Directory page for Top500 lists. Result for each list since June 1993" (https://web.archive.org/web/20101218032041/http://www.
top500.org/sublist/). Top500.org. Archived from the original (http://www.top500.org/sublist) on 18 December 2010. Retrieved 31 October 2010.
109. "Lenovo Attains Status as Largest Global Provider of TOP500 Supercomputers" (https://www.businesswire.com/news/home/20180625005341/en/).
Business Wire. 25 June 2018.
110. "November 2022 | TOP500" (https://www.top500.org/lists/top500/2022/11/). www.top500.org. Retrieved 7 December 2022.
111. "China Tops Supercomputer Rankings with New 93-Petaflop Machine – TOP500 Supercomputer Sites" (https://www.top500.org/news/china-tops-super
computer-rankings-with-new-93-petaflop-machine/).
112. "Matrix-2000 - NUDT - WikiChip" (https://en.wikichip.org/wiki/nudt/matrix-2000). en.wikichip.org. Retrieved 19 July 2019.
113. "Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000 | TOP500 Supercomputer Sites" (https://www.top500.
org/system/177999/). www.top500.org. Retrieved 16 November 2022.
114. "The Cray-1 Computer System" (http://archive.computerhistory.org/resources/text/Cray/Cray.Cray1.1977.102638650.pdf) (PDF). Cray Research, Inc.
Archived (https://ghostarchive.org/archive/20221009/http://archive.computerhistory.org/resources/text/Cray/Cray.Cray1.1977.102638650.pdf) (PDF)
from the original on 9 October 2022. Retrieved 25 May 2011.
115. Joshi, Rajani R. (9 June 1998). "A new heuristic algorithm for probabilistic optimization". Computers & Operations Research. 24 (7): 687–697.
doi:10.1016/S0305-0548(96)00056-1 (https://doi.org/10.1016%2FS0305-0548%2896%2900056-1).
116. "Abstract for SAMSY – Shielding Analysis Modular System" (http://www.nea.fr/abs/html/iaea0837.html). OECD Nuclear Energy Agency, Issy-les-
Moulineaux, France. Retrieved 25 May 2011.
117. "EFF DES Cracker Source Code" (https://www.cosic.esat.kuleuven.be/des/). Cosic.esat.kuleuven.be. Retrieved 8 July 2011.
118. "Disarmament Diplomacy: – DOE Supercomputing & Test Simulation Programme" (https://web.archive.org/web/20130516033550/http://www.acronym.o
rg.uk/dd/dd49/49doe.html). Acronym.org.uk. 22 August 2000. Archived from the original (http://www.acronym.org.uk/dd/dd49/49doe.html) on 16 May
2013. Retrieved 8 July 2011.
119. "China's Investment in GPU Supercomputing Begins to Pay Off Big Time!" (https://web.archive.org/web/20110705021457/http://blogs.nvidia.com/2011/0
6/chinas-investment-in-gpu-supercomputing-begins-to-pay-off-big-time/). Blogs.nvidia.com. Archived from the original (http://blogs.nvidia.com/2011/06/c
hinas-investment-in-gpu-supercomputing-begins-to-pay-off-big-time/) on 5 July 2011. Retrieved 8 July 2011.
120. Andrew, Scottie (19 March 2020). "The world's fastest supercomputer identified chemicals that could stop coronavirus from spreading, a crucial step
toward a treatment" (https://www.cnn.com/2020/03/19/us/fastest-supercomputer-coronavirus-scn-trnd/index.html). CNN. Retrieved 12 May 2020.
121. Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 65.
122. "Faster Supercomputers Aiding Weather Forecasts" (https://web.archive.org/web/20050905005850/http://news.nationalgeographic.com/news/2005/08/
0829_050829_supercomputer.html). News.nationalgeographic.com. 28 October 2010. Archived from the original (http://news.nationalgeographic.com/n
ews/2005/08/0829_050829_supercomputer.html) on 5 September 2005. Retrieved 8 July 2011.
123. "IBM Drops 'Blue Waters' Supercomputer Project" (http://search.ebscohost.com/login.aspx?direct=true&db=bwh&AN=8OGE.2B33479B.C267DC93&sit
e=ehost-live). International Business Times. 9 August 2011. Retrieved 14 December 2018. – via EBSCO (https://www.ebsco.com) (subscription
required)
124. "Supercomputers" (https://web.archive.org/web/20170307210251/https://nnsa.energy.gov/aboutus/ourprograms/defenseprograms/futurescienceandtec
hnologyprograms/asc/supercomputers). U.S. Department of Energy. Archived from the original (https://nnsa.energy.gov/aboutus/ourprograms/defensep
rograms/futurescienceandtechnologyprograms/asc/supercomputers) on 7 March 2017. Retrieved 7 March 2017.
125. "Supercomputer Simulations Help Advance Electrochemical Reaction Research" (https://ucsdnews.ucsd.edu/pressrelease/supercomputer-simulations-
help-advance-electrochemical-reaction-research). ucsdnews.ucsd.edu. Retrieved 12 May 2020.
126. "IBM's Summit—The Supercomputer Fighting Coronavirus" (http://emag.medicalexpo.com/summit-the-supercomputer-fighting-coronavirus/).
MedicalExpo e-Magazine. 16 April 2020. Retrieved 12 May 2020.
127. "OSTP Funding Supercomputer Research to Combat COVID-19 – MeriTalk" (https://www.meritalk.com/articles/ostp-funding-supercomputer-research-to
-combat-covid-19/). Retrieved 12 May 2020.
128. "EU $1.2 supercomputer project to several 10-100 PetaFLOP computers by 2020 and exaFLOP by 2022 | NextBigFuture.com" (https://www.nextbigfutu
re.com/2018/02/eu-1-2-supercomputer-project-to-several-10-100-petaflop-computers-by-2020-and-exaflop-by-2022.html). NextBigFuture.com. 4
February 2018. Retrieved 21 May 2018.
129. DeBenedictis, Erik P. (2004). "The Path To Extreme Computing" (https://web.archive.org/web/20070803175503/http://www.zettaflops.org/PES/0-Organi
zation-DeBenedictis.pdf) (PDF). Zettaflops. Sandia National Laboratories. Archived from the original (http://www.zettaflops.org/PES/0-Organization-De
Benedictis.pdf) (PDF) on 3 August 2007. Retrieved 9 September 2020.
130. Cohen, Reuven (28 November 2013). "Global Bitcoin Computing Power Now 256 Times Faster Than Top 500 Supercomputers, Combined!" (https://ww
w.forbes.com/sites/reuvencohen/2013/11/28/global-bitcoin-computing-power-now-256-times-faster-than-top-500-supercomputers-combined/#660eb2ff6
e5e). Forbes. Retrieved 1 December 2017.
131. DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing" (http://portal.acm.org/citation.cfm?id=1062325). Proceedings of the 2nd conference
on Computing frontiers. ACM Press. pp. 391–402. ISBN 978-1-59593-019-4.
132. "IDF: Intel says Moore's Law holds until 2029" (https://web.archive.org/web/20131208075357/http://www.h-online.com/newsticker/news/item/IDF-Intel-s
ays-Moore-s-Law-holds-until-2029-734779.html). Heise Online. 4 April 2008. Archived from the original (http://www.h-online.com/newsticker/news/item/I
DF-Intel-says-Moore-s-Law-holds-until-2029-734779.html) on 8 December 2013.
133. Solem, J. C. (1985). "MECA: A multiprocessor concept specialized to Monte Carlo" (https://digital.library.unt.edu/ark:/67531/metadc1089522/). Monte-
Carlo Methods and Applications in Neutronics, Photonics and Statistical Physics. Lecture Notes in Physics. Vol. 240. Proceedings of the Joint los
Alamos National Laboratory – Commissariat à l'Energie Atomique Meeting Held at Cadarache Castle, Provence, France 22–26 April 1985; Monte-Carlo
Methods and Applications in Neutronics, Photonics and Statistical Physics, Alcouffe, R.; Dautray, R.; Forster, A.; Forster, G.; Mercier, B.; Eds. (Springer
Verlag, Berlin). pp. 184–195. Bibcode:1985LNP...240..184S (https://ui.adsabs.harvard.edu/abs/1985LNP...240..184S). doi:10.1007/BFb0049047 (http
s://doi.org/10.1007%2FBFb0049047). ISBN 978-3-540-16070-0. OSTI 5689714 (https://www.osti.gov/biblio/5689714).
134. Yiannis Cotronis; Anthony Danalis; Dimitris Nikolopoulos; Jack Dongarra (2011). Recent Advances in the Message Passing Interface: 18th European
MPI Users' Group Meeting, EuroMPI 2011, Santorini, Greece, September 18-21, 2011. Proceedings. Springer Science & Business Media.
ISBN 9783642244483.
135. James H. Laros III; Kevin Pedretti; Suzanne M. Kelly; Wei Shu; Kurt Ferreira; John Van Dyke; Courtenay Vaughan (2012). Energy-Efficient High
Performance Computing: Measurement and Tuning (https://archive.org/details/energyefficienth00iiij). Springer Science & Business Media. p. 1 (https://a
rchive.org/details/energyefficienth00iiij/page/n9). ISBN 9781447144922.
136. James H. Laros III; Kevin Pedretti; Suzanne M. Kelly; Wei Shu; Kurt Ferreira; John Van Dyke; Courtenay Vaughan (2012). Energy-Efficient High
Performance Computing: Measurement and Tuning (https://archive.org/details/energyefficienth00iiij). Springer Science & Business Media. p. 2 (https://a
rchive.org/details/energyefficienth00iiij/page/n12). ISBN 9781447144922.
137. James H. Laros III; Kevin Pedretti; Suzanne M. Kelly; Wei Shu; Kurt Ferreira; John Van Dyke; Courtenay Vaughan (2012). Energy-Efficient High
Performance Computing: Measurement and Tuning (https://archive.org/details/energyefficienth00iiij). Springer Science & Business Media. p. 3 (https://a
rchive.org/details/energyefficienth00iiij/page/n13). ISBN 9781447144922.
138. "Green Supercomputer Crunches Big Data in Iceland" (https://web.archive.org/web/20150520034755/http://www.intelfreepress.com/news/green-superc
omputer-crunches-big-data-in-iceland/39/). intelfreepress.com. 21 May 2015. Archived from the original (http://www.intelfreepress.com/news/green-sup
ercomputer-crunches-big-data-in-iceland/39/) on 20 May 2015. Retrieved 18 May 2015.
External links
McDonnell, Marshall T. (2013). "Supercomputer Design: An Initial Effort to Capture the Environmental, Economic, and Societal Impacts" (https://trace.te
nnessee.edu/utk_chembiopubs/93/). Chemical and Biomolecular Engineering Publications and Other Works.