The pinnacle of computer technology is the supercomputer. Never mind gaming rigs like the Alienware Area 51; if one of these babies were ever used for gaming, they’d blow all competition out of their water-cooled existence. These systems are powerful enough that they have their own unit of measurement for their processing speed: the petaFLOPS, where FLOPS stands for FLoating point OPerations per Second. Odd name, but you can’t deny the power it represents: trillions to quadrillions of operations every second.
[Just so we don't get in trouble with the folks that know: a FLOPS is not an actual measured unit on its own, since the "per second" is already baked into the acronym. An expression like "1 petaFLOPS" is actually interpreted as 10^15 floating-point operations per second.]
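For a rough sense of where those numbers come from, a machine's theoretical peak is usually estimated as cores × clock speed × floating-point operations per cycle. Here's a minimal sketch in Python; the core count, clock, and ops-per-cycle figures below are illustrative assumptions, not the published specs of any system on this list.

```python
# Rough theoretical peak-FLOPS estimate: cores x clock (Hz) x FLOPs per cycle.
# All three inputs below are illustrative placeholders, not real machine specs.
def peak_flops(num_cores, clock_hz, flops_per_cycle):
    return num_cores * clock_hz * flops_per_cycle

# e.g. 224,000 cores at 2.3 GHz, each issuing 4 floating-point ops per cycle
estimate = peak_flops(224_000, 2.3e9, 4)
print(f"~{estimate / 1e15:.2f} petaFLOPS theoretical peak")
```

Sustained speeds on real workloads (the figures quoted below) come in well under that kind of theoretical peak.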
These 3 supercomputers stand out above all others as the best of the best, fastest of the uberfast. Curiously, all of these systems run Linux. Wink, wink.
3. China’s National Supercomputing Center: Nebulae
Developed in May 2010, China’s Nebulae supercomputer, located in the city of Shenzhen, actually only runs at a sustained 1.271 petaflops. Beyond that baseline, it’s said that it could theoretically perform at almost 3 petaflops, a result of the Nvidia GPU accelerators built into the computer. What this means is that, should the scientists at the Shenzhen center choose to make a few small modifications to Nebulae, it could out-perform Japan’s Tsubame at the Tokyo Institute of Technology (the 4th most powerful system), and perhaps even our number 2, Jaguar, with relative ease. It’s this system’s graphical capabilities that put it a spot ahead of Tsubame. Like the two other systems on this list, Nebulae’s chief purpose is research, mostly related to computer science.
2. The U.S. Department of Energy: Jaguar
This super system was developed by the U.S. Department of Energy, and the Jaguar makes its home at Oak Ridge National Laboratory in Tennessee. Originally built in 2005, Jaguar has undergone several upgrades that have allowed it to keep its spot high up on this list. Its peak speed is somewhere around 1.75 petaflops, up from its original 275 teraflops, and it boasts somewhere around 224,000 processor cores. Jaguar seeks to address some of the most challenging scientific problems in areas such as climate modeling, renewable energy, materials science, seismology, chemistry, astrophysics, fusion, and combustion. Annually, 80% of Jaguar’s resources are allocated through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, a competitively selected, peer-reviewed process open to researchers from universities, industry, government, and non-profit organizations. In short, it’s used to aid advanced scientific, climate, and energy research.
1. China’s National Supercomputing Center: Tianhe-I
The top spot on the list, the most powerful computer in the world, and yes, it’s from China. Completed in 2009 and upgraded in October 2010, the Tianhe-I has a sustained speed of 2.57 petaflops and can theoretically perform at 4.701 petaflops. This thing is a beast: 14,336 processors and over 7,000 Nvidia Tesla GPUs. China uses it for petroleum exploration (locating veins of petroleum) as well as advanced aircraft design. However, China is also planning to make this an internationally accessible supercomputer, meaning that any country with the funds to afford it can contract the Tianhe-I out for its own needs. Not a bad idea, given that it’s the most powerful in the world. We don’t reckon they’ll have any trouble finding clients.
So what is a supercomputer?
A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s “supercomputer market crash”.
Today, supercomputers are typically one-of-a-kind custom designs produced by “traditional” companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. Since October 2010, the Tianhe-1A supercomputer has been the fastest in the world; it is located in China.
The term supercomputer itself is rather fluid, and the speed of today’s supercomputers tends to become typical of tomorrow’s ordinary computers. CDC’s early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard; typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of “ordinary” CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on “off the shelf” server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl’s law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
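To put a number on that, Amdahl’s law caps the speedup from N processors by the fraction of work that cannot be parallelized. Here is a quick sketch of the formula in Python; the 95% parallel fraction is just an illustrative assumption.

```python
# Amdahl's law: speedup on n processors when a fraction p of the work is parallelizable.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, adding cores saturates around a 20x speedup.
for n in (16, 1024, 100_000):
    print(f"{n:>7} processors -> {amdahl_speedup(0.95, n):.1f}x")
```

That saturation is exactly why so much engineering effort goes into squeezing out the last few percent of serial work.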
- A supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity. The cost to power and cool the system is usually one of the factors that limit the scalability of the system. (For example, 4 MW at $0.10/kWh is $400 an hour, or about $3.5 million per year; the arithmetic is sketched just after this list.)
- Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray’s supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
- Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, “A supercomputer is a device for turning compute-bound problems into I/O-bound problems.” Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
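Here is that power-cost arithmetic spelled out as a short sketch; the 4.04 MW draw and the $0.10/kWh rate are the figures quoted above, and the hours-per-year constant is just 24 × 365.

```python
# Back-of-the-envelope annual electricity cost for a supercomputer.
power_mw = 4.04            # Tianhe-1A's quoted draw, in megawatts
price_per_kwh = 0.10       # assumed electricity price, $/kWh
hours_per_year = 24 * 365

cost_per_hour = power_mw * 1_000 * price_per_kwh   # MW -> kW, then $/hour (~$404)
cost_per_year = cost_per_hour * hours_per_year      # ~$3.5 million
print(f"${cost_per_hour:,.0f}/hour, ${cost_per_year / 1e6:.1f}M/year")
```

And that is before you pay for the cooling plant that has to remove all of that heat again.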
Technologies developed for supercomputers include:
- Vector processing
- Liquid cooling
- Non-Uniform Memory Access (NUMA)
- Striped disks (the first instance of what was later called RAID)
- Parallel filesystems
Processing: Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers’ claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
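The core idea of SIMD and vector processing, one instruction applied to a whole array of data at once, is the same thing vectorized numerical libraries expose to everyday programmers. Here is a small illustration in Python using NumPy, which dispatches array expressions to SIMD-capable routines under the hood; the array size and the multiply-add operation are just an example.

```python
import numpy as np

# Scalar version: one multiply-add per loop iteration.
def saxpy_loop(a, x, y):
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

# Vectorized version: the whole operation is expressed over the arrays at once,
# so the library can apply it with SIMD instructions internally.
def saxpy_vec(a, x, y):
    return a * x + y

x = np.random.rand(1_000_000)
y = np.random.rand(1_000_000)
result = saxpy_vec(2.0, x, y)
```

The vectorized form is both shorter and dramatically faster, which is the whole sales pitch of vector processing.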
The current Top500 list (from May 2010) has 3 supercomputers based on GPGPUs. In particular, the number 3 supercomputer, Nebulae, built by Dawning in China, is based on GPGPUs.
OS: Supercomputers today most often use variants of Linux. Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. In a similar manner, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of computer systems such as Cray’s Unicos, or Linux.
Programming: The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. The new massively parallel GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL.
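For a flavor of what sharing data between nodes looks like in practice, here is a minimal MPI-style sketch in Python, assuming the mpi4py bindings are available; production codes on these machines are more typically Fortran or C, as noted above.

```python
# Minimal MPI example: each process sums its own slice, and rank 0 collects the total.
# Assumes the mpi4py package; run with something like: mpiexec -n 4 python sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of processes

# Each rank sums its own contiguous slice of 0 .. N-1
# (for simplicity, assumes N divides evenly by the process count).
n = 1_000_000
chunk = n // size
local_sum = sum(range(rank * chunk, (rank + 1) * chunk))

# Combine the partial sums onto rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)
```

The pattern is always the same: partition the work, compute locally, and communicate only what you must, which is precisely what all the interconnect tuning mentioned above is trying to make cheap.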
Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open-source software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple’s Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community, which often creates disruptive technology.
IBM is developing the Cyclops64 architecture, intended to create a “supercomputer on a chip”. Other PFLOPS projects include one by Narendra Karmarkar in India, a C-DAC effort targeted for 2010, and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011). In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012. Meanwhile, IBM is constructing a 20 PFLOPS supercomputer, named Sequoia, at Lawrence Livermore National Laboratory, which is scheduled to go online in 2011. Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, one quintillion FLOPS) in 2019. Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, one sextillion FLOPS) computer is required to accomplish full weather modeling that could accurately cover a two-week time span. Such systems might be built around 2030.