The Fabulous Fast Fourier Transform
A lot of students first encounter sine and cosine waves in Algebra II. Teachers introduce ideas about amplitude, frequency, wavelength, and periodicity, drawing analogies to the circular motion of things like bicycle tires and Ferris wheels. By the time students reach high school physics, all that trigonometry becomes genuinely useful for describing oscillating springs, sound waves, and the electromagnetic spectrum.
Today we’re going to talk about the Fast Fourier Transform (FFT), recognized by the Institute of Electrical and Electronics Engineers (IEEE) in 2000 as one of the top ten algorithms of the twentieth century. Little known outside engineering and computer science circles, the FFT is a fundamental component of our modern technological world, powering remarkable engines of signal processing for music, medical imaging, radar, communications, and cell phones. What is this beast?
In the early nineteenth century, the French mathematician Jean-Baptiste Joseph Fourier concluded that any continuous function f(t) defined on an interval could be represented by a series of sines and cosines. He wrote this as what we now call a Fourier series:
f(t) = a₀/2 + Σₙ [aₙ cos(nt) + bₙ sin(nt)],   summing over n = 1, 2, 3, …
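To see the series in action, here is a short Python sketch (the square-wave example and the choice of ten terms are my own illustration, not from the text) that adds up the first few terms of the classic expansion of a square wave, where every aₙ vanishes and bₙ = 4/(πn) for odd n:

import numpy as np

# Partial sums of the Fourier series for a square wave on [-pi, pi].
# The odd square wave sign(sin(t)) has the classic expansion
#   f(t) = (4/pi) * [sin(t) + sin(3t)/3 + sin(5t)/5 + ...]
# (every a(n) is zero because the wave is odd).
t = np.linspace(-np.pi, np.pi, 1000)

def partial_sum(t, terms=10):
    total = np.zeros_like(t)
    for k in range(terms):
        n = 2 * k + 1                      # odd harmonics only
        total += (4 / np.pi) * np.sin(n * t) / n
    return total

approx = partial_sum(t, terms=10)
exact = np.sign(np.sin(t))
print("mean error with 10 terms:", np.mean(np.abs(approx - exact)))

Adding more terms shrinks the average error, though a stubborn overshoot (the Gibbs phenomenon) always remains near the jumps.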
The index n in Fourier’s series sets the frequency of each sine and cosine term. The Fourier Transform (FT) extends these ideas from a finite interval to any suitable function of infinite extent, expressing it in terms of sines and cosines of every frequency by replacing the sum with an integral.
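For readers who want to see it, the transform pair looks like this (in one common convention; the sign and the placement of the 2π factor vary from field to field):

F(ω) = ∫ f(t) e^(−iωt) dt        f(t) = (1/2π) ∫ F(ω) e^(iωt) dω

where each integral runs over the entire real line, and Euler’s formula e^(−iωt) = cos(ωt) − i sin(ωt) is what packs the sines and cosines into a single complex exponential.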
So what’s the Fast Fourier Transform, or FFT? Computing the discrete FT of N samples directly requires on the order of N^2 (N squared) computations. In 1965, James Cooley and John Tukey published a paper describing how to perform the same transform in about N log N computations. For a 1000-point FT, that’s a huge difference: roughly 1,000,000 operations versus about 10,000!
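The trick is divide and conquer: an N-point transform splits into two N/2-point transforms, and those split again, until the pieces are trivial. Here is a minimal Python sketch of that radix-2 idea (real FFT libraries are far more elaborate, handling arbitrary sizes and machine-level optimizations):

import cmath

def fft(x):
    # Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])            # transform the even-indexed samples
    odd = fft(x[1::2])             # transform the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Combine the halves with the "twiddle factor" e^(-2*pi*i*k/n).
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

Each of the log₂ N levels of recursion does about N operations of combining work, which is exactly where the N log N count comes from.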
Tukey had his original insight while attending a meeting of President Kennedy’s Science Advisory Committee, where participants were brainstorming about ways to detect Soviet nuclear testing. This was important because ratification of a “proposed United States – Soviet Union nuclear test ban depended on the development of a method to detect the test without actually visiting Soviet nuclear facilities.” Proposed data sources included offshore seismometers to detect vibrations in the earth’s crust, and acoustic signals collected from submarines. Either way, Tukey’s FFT idea would make it possible to analyze the collected data quickly as part of verifying Soviet compliance with the test ban.
So how does the FFT help? Suppose you have a microphone connected to a device called a spectrum analyzer. As you speak or sing into the microphone, the analyzer’s display shows squiggly waves moving across the x-axis in time, with amplitude (loudness) displayed along the y-axis. Flip a switch, and the spectrum analyzer takes discrete samples of your voice and performs an FFT computation. Now the display shows your speaking or singing in the frequency domain, where the x-axis represents frequency and the y-axis corresponds to the power of the signal. How cool is that?
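You can simulate that switch-flip in a few lines of Python (the two tones and the sample rate here are invented for illustration; a real voice has a much richer spectrum):

import numpy as np

# One second of fake "singing": a 440 Hz tone plus a quieter 880 Hz
# overtone, sampled 8000 times per second.
rate = 8000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Flip the switch: transform the samples into the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / rate)
power = np.abs(spectrum) ** 2

# The two strongest frequency bins land right on our two tones.
loudest = np.sort(freqs[np.argsort(power)[-2:]])
print(loudest)                     # -> [440. 880.]

Plot power against freqs and you get exactly the frequency-domain picture described above: two sharp spikes instead of a squiggle.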
The point is that this quick transformation between the time and frequency domains allows modern technology to apply many useful algorithms that improve both signal quality and data rates. These algorithms, including the FFT itself, are often hosted on Digital Signal Processing (DSP) computer chips, which perform computations on audio and video signals in real time. They are integrated into things like cell phones and wireless (including noise-cancelling) headphones, headsets, and earbuds. Non-consumer applications in the telecommunications and defense communities are plentiful as well.
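As a toy example of improving signal quality (the tone, the noise level, and the 300 Hz cutoff are all my own choices, not a real product’s design), here is noise reduction done in the frequency domain: transform, silence every bin above the cutoff, and transform back.

import numpy as np

rng = np.random.default_rng(0)
rate = 8000
t = np.arange(rate) / rate

# A clean 200 Hz tone buried in broadband noise.
clean = np.sin(2 * np.pi * 200 * t)
noisy = clean + rng.normal(scale=0.8, size=t.size)

# Low-pass filter in the frequency domain: FFT, zero everything
# above 300 Hz, then inverse FFT back to the time domain.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(t.size, d=1 / rate)
spectrum[freqs > 300] = 0
filtered = np.fft.irfft(spectrum, n=t.size)

# The filtered signal tracks the clean tone far better than the noisy one.
print("noisy error:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("filtered error:", np.sqrt(np.mean((filtered - clean) ** 2)))

Real noise-cancelling gear is far more sophisticated, but the round trip (into the frequency domain, tinker, back out) is the same, and the FFT is what makes it cheap enough to run in real time.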
This kind of math can seem tough at first. It’s unlike other math you’ve seen before, and its usefulness can’t be appreciated without a certain level of proficiency. But really, it’s not so hard. In the future, I’ll explain it yet another way, starting with the story of Tukey’s insight.