Supercomputer Power: Flops and Floating Points
The Titan supercomputer at Oak Ridge National Laboratory was crowned the fastest in the world just a few weeks after its debut.
A supercomputer is a high-performance computing machine with powers and abilities far beyond those of ordinary computers. Supercomputers are designed to be extremely powerful, with processing speeds fast enough to perform quadrillions of calculations in a single second.
They are also huge. Some of them consist of a series of computers and can fill entire rooms. One of the world's top supercomputers, the Titan at Oak Ridge National Laboratory, takes up 4,352 square feet and has 299,008 central processing unit (CPU) cores. Titan has performed 17 quadrillion calculations per second, and according to the Oak Ridge lab, it should in theory be able to reach 20 quadrillion calculations per second. China's Tianhe-2 has reportedly achieved 30.65 quadrillion calculations per second.
In computer-speak, 20 quadrillion calculations per second is referred to as 20 petaflops. "Flops" is an acronym for floating-point operations per second. To better understand what flops are, it helps to define some related terms:
Floating-point numbers have decimal points in them, such as 5.24. A number without a decimal, such as 5, is an integer. A floating-point number is usually represented as the product of a significand (or mantissa) and a base raised to an exponent. In everyday decimal notation the base is 10, but computers typically work in base 2, and base 16 is also used.
For example, 1.2345 would be represented as a floating-point number like this: 12345 x 10^-4. The significand is 12345, the base is 10 and the exponent is -4. The "floating point" refers to the fact that the decimal point "floats": it can be placed anywhere relative to the significand, and its position is determined by the exponent. For computer calculations, floating-point numbers are converted into binary equivalents using 1s and 0s.
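Python's standard library can show this decomposition directly. The sketch below (using only standard modules) pulls the significand digits and base-10 exponent out of 1.2345, then uses math.frexp to show the base-2 significand/exponent pair a computer actually works with:

```python
from decimal import Decimal
import math

# Base-10 view: split 1.2345 into its significand digits and exponent.
t = Decimal("1.2345").as_tuple()
print(t.digits, t.exponent)   # (1, 2, 3, 4, 5) -4  -> 12345 x 10^-4

# Base-2 view: frexp splits a float into a significand m and an
# exponent e such that 1.2345 == m * 2**e, with 0.5 <= m < 1.
m, e = math.frexp(1.2345)
print(m, e)
```

Reconstructing m * 2**e recovers the original float exactly, since frexp is just reading off the number's internal binary representation.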
An operation is the effect of a mathematical operator — such as addition, subtraction, multiplication or division — on an expression or equation.
Floating-point operations, then, are calculations on numbers with decimals, often very small or very large values such as the distance between atoms or the distance between galaxies. These operations typically take longer to execute than simple binary integer operations.
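As a rough illustration of measuring flops, the sketch below times a batch of floating-point multiply-adds in pure Python. Treat the resulting number as a toy figure: interpreter overhead dominates, so it lands far below the hardware's true peak, but it shows the "operations divided by seconds" idea behind the flops metric:

```python
import time

def estimate_flops(n=1_000_000):
    """Time n floating-point multiply-adds and return operations/second.

    A toy estimate only: Python's interpreter overhead means the result
    is orders of magnitude below the hardware's actual capability.
    """
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += 1.0001 * 1.0000001   # one multiply + one add = 2 flops
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

print(f"{estimate_flops():,.0f} floating-point operations per second")
```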
Flops and megaflops
Today's computers can perform so many flops that prefixes are added to form terms for multiple numbers of them:
- 1,000 flops = 1 kiloflop
- 1 million flops = 1 megaflop
- 1 billion flops = 1 gigaflop
- 1 trillion flops = 1 teraflop
- 1 quadrillion flops = 1 petaflop
- 1 quintillion flops = 1 exaflop
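The table above maps directly onto powers of 1,000. A small helper (the function name is my own) converts a raw operations-per-second figure into the largest fitting prefixed unit:

```python
# Prefixes from the table, largest first, as powers of 1,000.
PREFIXES = [
    (10**18, "exaflops"),
    (10**15, "petaflops"),
    (10**12, "teraflops"),
    (10**9,  "gigaflops"),
    (10**6,  "megaflops"),
    (10**3,  "kiloflops"),
]

def format_flops(ops_per_second: float) -> str:
    """Express a flops count using the largest prefix that fits."""
    for scale, name in PREFIXES:
        if ops_per_second >= scale:
            return f"{ops_per_second / scale:g} {name}"
    return f"{ops_per_second:g} flops"

print(format_flops(20e15))    # Titan's theoretical peak
print(format_flops(59.7e9))   # the fastest supercomputer of 1993
```

Running it reproduces the article's figures: 20 quadrillion operations per second comes out as "20 petaflops", and the 1993 record as "59.7 gigaflops".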
Leaps and bounds
Computer performance has increased steadily over the past 20 years, and yesterday's supercomputer is today's handheld gadget. In 1993, the world's fastest supercomputer achieved 59.7 gigaflops. By comparison, the graphics processing unit (GPU) in a current iPad can reach 76.8 gigaflops.
With 30 petaflops apparently achieved, scientists are reaching for the next milestone. DARPA, the U.S. defense research agency, expects a supercomputer to be able to perform 1 exaflop by 2018.