FFT

Jon Harrop

Anyone know of an FFT implementation for .NET that works for arbitrary "n"
(not just integral powers of two)?
 
This isn't a direct answer to your question, but the authors of Numerical
Recipes (Cambridge University Press) recommend that you use ONLY FFTs with
N an integral power of two, since those algorithms are by far the easiest.
If the length of your dataset is not an integral power of two, they suggest,
as the simplest approach, that you pad it with zeroes up to the next
power of two. Of course they also have more sophisticated suggestions that I
won't attempt to paraphrase here.
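
For what it's worth, the padding itself is only a few lines of C#.
Something along these lines (a sketch, assuming a Complex type such as
System.Numerics.Complex is available; the helper names are just for
illustration):

    using System;
    using System.Numerics;

    static class Padding
    {
        // Round n up to the next power of two (returns n if it already is one).
        public static int NextPowerOfTwo(int n)
        {
            int p = 1;
            while (p < n) p <<= 1;
            return p;
        }

        // Copy the signal into a zero-initialized buffer of power-of-two length;
        // the extra entries are the zero padding.
        public static Complex[] PadToPowerOfTwo(Complex[] x)
        {
            var padded = new Complex[NextPowerOfTwo(x.Length)];
            Array.Copy(x, padded, x.Length);
            return padded;
        }
    }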

If you don't have a copy on your shelf, I recommend that you get one. Early
editions came with code in C or a couple of other languages of your choice
(depending on which edition you bought), but I believe that they later
stopped that practice.

HTH,
Tom Dacon
Dacon Software Consulting
 
Tom said:
This isn't a direct answer to your question, but the authors of Numerical
Recipes (Cambridge University Press) recommend that you use ONLY FFTs with
N an integral power of two, since those algorithms are by far the easiest.
If the length of your dataset is not an integral power of two, they suggest,
as the simplest approach, that you pad it with zeroes up to the next
power of two. Of course they also have more sophisticated suggestions that I
won't attempt to paraphrase here.

Zero-padding changes the quantity that you are computing, which may or
may not be acceptable depending on your application. (The non-zero-padded
outputs are *not* simply a subset of the zero-padded outputs unless the
zero-padded size is an integer multiple of the original one.) Restricting
yourself to power-of-two sizes is also problematic for multi-dimensional
transforms, where the difference between successive powers of two is
amplified across the dimensions.
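
To make that concrete, here is a small C# sketch (a naive O(n^2) DFT,
with names of my own choosing) showing the effect: a length-6 signal
padded to 8 produces completely different bin values, while padding to
12, an integer multiple of 6, reproduces the original outputs at every
second bin:

    using System;
    using System.Numerics;

    class PaddingDemo
    {
        // Naive O(n^2) DFT, just to make the point about the output values.
        static Complex[] Dft(Complex[] x)
        {
            int n = x.Length;
            var X = new Complex[n];
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    X[k] += x[j] * Complex.FromPolarCoordinates(1.0, -2 * Math.PI * j * k / n);
            return X;
        }

        static Complex[] Pad(Complex[] x, int m)
        {
            var y = new Complex[m];          // zero-initialized
            Array.Copy(x, y, x.Length);
            return y;
        }

        static void Main()
        {
            var x = new Complex[6];
            for (int i = 0; i < x.Length; i++) x[i] = Math.Cos(2 * Math.PI * i / 6);

            var X6 = Dft(x);            // length-6 transform: energy in bins 1 and 5
            var X8 = Dft(Pad(x, 8));    // padded to 8: different values, bins don't line up
            var X12 = Dft(Pad(x, 12));  // padded to 12 (integer multiple): X12[2k] == X6[k]

            Console.WriteLine("|X6[1]|  = {0:F3}", X6[1].Magnitude);   // 3.000
            Console.WriteLine("|X8[1]|  = {0:F3}", X8[1].Magnitude);   // something else entirely
            Console.WriteLine("|X12[2]| = {0:F3}", X12[2].Magnitude);  // 3.000 again
        }
    }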

It is certainly true that the radix-2 algorithms are the easiest to
implement. They also tend to be quite slow, contrary to what the
Numerical Recipes authors state (they say it is at worst 20-30%
slower, which might have been true 20 years ago). I've never seen a
vanilla radix-2 implementation (that is, code using a loop of log2(n)
passes over the array), including the NR one, that was not several
*times* slower than the fastest implementations, and I've benchmarked
quite a few (www.fftw.org/speed). The NR FFT code also uses a
trigonometric recurrence whose rounding errors grow as O(sqrt(N)),
instead of the O(sqrt(log N)) that is possible for an accurate FFT.
NR is a great book for getting a feel for many different topics and
seeing short pedagogical examples, but it is not a great source of
code for serious numerical work.
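
For reference, the kind of vanilla radix-2 loop I mean looks roughly
like this in C# (an illustrative sketch only, assuming
System.Numerics.Complex; it recomputes each twiddle factor from scratch
instead of using an NR-style trigonometric recurrence, and it is not
tuned for speed):

    using System;
    using System.Numerics;

    static class Radix2Fft
    {
        // In-place iterative radix-2 Cooley-Tukey: a bit-reversal pass followed
        // by log2(n) passes of butterflies. n must be a power of two.
        public static void Transform(Complex[] a, bool inverse)
        {
            int n = a.Length;

            // Bit-reversal permutation.
            for (int i = 1, j = 0; i < n; i++)
            {
                int bit = n >> 1;
                for (; (j & bit) != 0; bit >>= 1) j ^= bit;
                j |= bit;
                if (i < j) { var t = a[i]; a[i] = a[j]; a[j] = t; }
            }

            // log2(n) passes over the whole array.
            for (int len = 2; len <= n; len <<= 1)
            {
                double ang = 2 * Math.PI / len * (inverse ? 1 : -1);
                for (int i = 0; i < n; i += len)
                {
                    for (int k = 0; k < len / 2; k++)
                    {
                        // Twiddle factor recomputed each time, trading speed for accuracy.
                        Complex w = Complex.FromPolarCoordinates(1.0, ang * k);
                        Complex u = a[i + k], v = a[i + k + len / 2] * w;
                        a[i + k] = u + v;
                        a[i + k + len / 2] = u - v;
                    }
                }
            }

            if (inverse)
                for (int i = 0; i < n; i++) a[i] /= n;
        }
    }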

And if you don't care about speed to within an order of magnitude,
it's actually quite easy to implement an FFT that works for arbitrary
N, using Bluestein's algorithm (Google it) and a power-of-two FFT. If
you care about speed, it's usually best to simply download and call an
optimized (native code) FFT, of which there are several good choices
that support non-power-of-two sizes.
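
To give a feel for it, here is a rough C# sketch of Bluestein's
algorithm (assuming System.Numerics.Complex; the power-of-two FFT is
the same vanilla radix-2 routine as in the previous sketch, repeated so
this compiles on its own, and none of it is tuned for speed):

    using System;
    using System.Numerics;

    static class BluesteinFft
    {
        // Forward DFT of arbitrary length n via Bluestein's chirp-z trick:
        // X[k] = chirp[k] * (a conv b)[k], with a[j] = x[j]*chirp[j],
        // b[j] = conj(chirp[j]), chirp[j] = exp(-i*pi*j^2/n), and the circular
        // convolution carried out at a power-of-two length m >= 2n-1.
        public static Complex[] Transform(Complex[] x)
        {
            int n = x.Length;
            int m = 1;
            while (m < 2 * n - 1) m <<= 1;            // convolution length, a power of two

            var chirp = new Complex[n];
            var a = new Complex[m];
            var b = new Complex[m];
            for (int k = 0; k < n; k++)
            {
                // k*k mod 2n keeps the angle small and avoids integer overflow.
                double ang = Math.PI * ((long)k * k % (2L * n)) / n;
                chirp[k] = new Complex(Math.Cos(ang), -Math.Sin(ang));
                a[k] = x[k] * chirp[k];
                b[k] = Complex.Conjugate(chirp[k]);
                if (k != 0) b[m - k] = b[k];          // b must be symmetric for negative lags
            }

            Radix2(a, false);
            Radix2(b, false);
            for (int i = 0; i < m; i++) a[i] *= b[i]; // pointwise product = circular convolution
            Radix2(a, true);

            var result = new Complex[n];
            for (int k = 0; k < n; k++) result[k] = a[k] * chirp[k];
            return result;
        }

        // Plain power-of-two FFT (same vanilla radix-2 loop as in the previous
        // sketch, repeated here only so that this snippet stands alone).
        static void Radix2(Complex[] a, bool inverse)
        {
            int n = a.Length;
            for (int i = 1, j = 0; i < n; i++)
            {
                int bit = n >> 1;
                for (; (j & bit) != 0; bit >>= 1) j ^= bit;
                j |= bit;
                if (i < j) { var t = a[i]; a[i] = a[j]; a[j] = t; }
            }
            for (int len = 2; len <= n; len <<= 1)
            {
                double ang = 2 * Math.PI / len * (inverse ? 1 : -1);
                for (int i = 0; i < n; i += len)
                    for (int k = 0; k < len / 2; k++)
                    {
                        Complex w = Complex.FromPolarCoordinates(1.0, ang * k);
                        Complex u = a[i + k], v = a[i + k + len / 2] * w;
                        a[i + k] = u + v;
                        a[i + k + len / 2] = u - v;
                    }
            }
            if (inverse)
                for (int i = 0; i < n; i++) a[i] /= n;
        }
    }

Transform(x) then returns the length-n DFT for any n, at roughly the
cost of three power-of-two FFTs of length about 2n.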

Regards,
Steven G. Johnson
 
Tom said:
This isn't a direct answer to your question, but the authors of Numerical
Recipes (Cambridge University Press) recommend that you use ONLY FFTs
with N an integral power of two, since those algorithms are by far the
easiest. If the length of your dataset is not an integral power of two,
they suggest, as the simplest approach, that you pad it with zeroes
up to the next power of two. Of course they also have more sophisticated
suggestions that I won't attempt to paraphrase here.

That is almost always awful advice. Padding changes the results in
unpredictable ways and introduces unnecessary errors. From the point of view
of a physical scientist, this is an insane thing to do: it adds greatly to
your workload and undermines the reliability of your results, when
arbitrary-N FFTs come within 4x of the performance of the power-of-two ones
and introduce only ordinary numerical error.

I have actually seen researchers waste months of time analysing results only
to find that their data were spoiled by errors introduced by unnecessary
padding. So this is one topic that I can't stress enough in the books that
I write for scientists and engineers: you must use a decent FFT
implementation.

My background is Linux and the FFT of choice there is FFTW (by Steven G.
Johnson et al.). Hearing the .NET hype and Bill Gates pushing Microsoft to
cater for scientists, I was quite surprised to find nothing bundled in the
way of FFTs and matrix packages. Instead, complex numbers appear to be the
state-of-the-art in the C# world.

I've solved the FFT problem by writing minimal bindings to a precompiled
FFTW (apparently vanilla Visual Studio can't compile it due to compiler
bugs?!) from F#. The result is quite elegant and almost as fast as on Linux.
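
For anyone who wants to do the same from C# rather than F#, a minimal
P/Invoke sketch against the FFTW 3 C API might look like the following.
The DLL name "libfftw3-3.dll" and the little demo at the bottom are
assumptions of mine; FFTW_FORWARD and FFTW_ESTIMATE are the constants
from fftw3.h, and all error handling is omitted:

    using System;
    using System.Runtime.InteropServices;

    // Minimal bindings to the FFTW 3 C API. "libfftw3-3.dll" is a guess at the
    // name of a precompiled Windows DLL; adjust to match your FFTW binary.
    static class Fftw
    {
        const string Dll = "libfftw3-3.dll";

        public const int FFTW_FORWARD = -1;
        public const uint FFTW_ESTIMATE = 1u << 6;

        [DllImport(Dll, EntryPoint = "fftw_malloc", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr Malloc(IntPtr size);

        [DllImport(Dll, EntryPoint = "fftw_free", CallingConvention = CallingConvention.Cdecl)]
        public static extern void Free(IntPtr ptr);

        [DllImport(Dll, EntryPoint = "fftw_plan_dft_1d", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr PlanDft1d(int n, IntPtr input, IntPtr output, int sign, uint flags);

        [DllImport(Dll, EntryPoint = "fftw_execute", CallingConvention = CallingConvention.Cdecl)]
        public static extern void Execute(IntPtr plan);

        [DllImport(Dll, EntryPoint = "fftw_destroy_plan", CallingConvention = CallingConvention.Cdecl)]
        public static extern void DestroyPlan(IntPtr plan);
    }

    class Demo
    {
        static void Main()
        {
            int n = 1000;                                // arbitrary length, not a power of two
            IntPtr buf = Fftw.Malloc((IntPtr)(16 * n));  // n fftw_complex = 2n doubles

            // Plan first (FFTW_ESTIMATE leaves the buffer untouched), then fill the input.
            IntPtr plan = Fftw.PlanDft1d(n, buf, buf, Fftw.FFTW_FORWARD, Fftw.FFTW_ESTIMATE);

            var data = new double[2 * n];                // interleaved re, im
            for (int i = 0; i < n; i++) data[2 * i] = Math.Sin(2 * Math.PI * 3 * i / n);
            Marshal.Copy(data, 0, buf, 2 * n);

            Fftw.Execute(plan);                          // in-place transform
            Marshal.Copy(buf, data, 0, 2 * n);           // read back the spectrum

            Fftw.DestroyPlan(plan);
            Fftw.Free(buf);
            Console.WriteLine("|X[3]| = {0:F1}", Math.Sqrt(data[6] * data[6] + data[7] * data[7]));
        }
    }
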
Tom said:
If you don't have a copy on your shelf, I recommend that you get one.
Early editions came with code in C or a couple of other languages of your
choice (depending on which edition you bought), but I believe that they
later stopped that practice.

I've had NR since I was at university and I must say that I'm not a big fan
of that book. There are really two major topics that I'd expect to be
covered in such a book, eigenvalues and FFTs, and NR does neither subject
justice. Moreover, NR is generally very out of date and its advice on
performance is often wrong as a consequence.
 