A Faster Fourier Transform
A mathematical upgrade promises a speedier digital world.
- May/June 2012
- By Mark Anderson
Piotr Indyk, Dina Katabi, Eric Price, and Haitham Hassanieh (left to right) have created a faster way to break down complex signals into combinations of simple waves for processing. Credit: Webb Chappell
In January, four MIT researchers showed off a replacement for one of the most important algorithms in computer science. Dina Katabi, Haitham Hassanieh, Piotr Indyk, and Eric Price have created a faster way to perform the Fourier transform, a mathematical technique for processing streams of data that underlies the operation of things such as digital medical imaging, Wi-Fi routers, and 4G cellular networks.
The principle of the Fourier transform, which dates back to the 19th century, is that any signal, such as a sound recording, can be represented as the sum of a collection of sine and cosine waves with different frequencies and amplitudes. This collection of waves can then be manipulated with relative ease—for example, allowing a recording to be compressed or noise to be suppressed. In the mid-1960s, a computer-friendly algorithm called the fast Fourier transform (FFT) was developed. Anyone who's marveled at the tiny size of an MP3 file compared with the same recording in an uncompressed form has seen the power of the FFT at work.
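The decomposition described above is easy to see in code. The sketch below (using NumPy, not the MIT group's sparse algorithm) builds a signal from two sine waves, applies the classic FFT, and reads the original frequencies and amplitudes back out of the spectrum; the sample rate and frequencies are illustrative choices, not values from the article.

```python
import numpy as np

# Sample one second of a signal that is the sum of two sine waves:
# 5 Hz with amplitude 1.0 and 20 Hz with amplitude 0.5.
sample_rate = 128  # samples per second (illustrative choice)
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# The FFT expresses the signal as amplitudes at discrete frequencies.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
amplitudes = 2 * np.abs(spectrum) / len(signal)

# The two strongest frequency components recover the ingredients
# the signal was built from.
peaks = sorted(freqs[np.argsort(amplitudes)[-2:]].tolist())
print(peaks)  # [5.0, 20.0]
```

Compression schemes like MP3 exploit exactly this view: once a signal is a list of frequency components, the perceptually unimportant ones can be discarded and the rest stored compactly.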