Average Waterfalls Or Average Orders?

One would expect that averaging waterfalls and then extracting orders would give the same result as extracting orders from individual waterfalls and then averaging them. This is not the case.

Why is there a difference? The answer is what is sometimes called the picket fence effect and, at other times, is called “scalloping loss”.

When we average waterfalls we are averaging individual Fourier Transforms; that is, we are taking point values. For simplicity let us consider just one of those FFTs and a single order. It is unlikely that the true frequency of the order will exactly match any of the frequencies at which the FFT is evaluated. If we have a signal of length $T$ seconds then the frequency spacing using FFT techniques will be $df = 1/T$ Hz and the individual “analysis” frequencies, $f_a$, are at $k\,df$ Hz. If $S$ is the sample rate and $N$ is the FFT size then $T = N/S$, so that $f_a = kS/N$.
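As a concrete illustration of the analysis grid (a minimal sketch; the 63.75 Hz order frequency is just a hypothetical example, chosen to fall between bins):

```python
import numpy as np

S = 1024          # sample rate in Hz (example value)
N = 1024          # FFT size, so T = N/S = 1 second
df = S / N        # frequency spacing df = 1/T = 1 Hz
f_a = df * np.arange(N // 2 + 1)   # one-sided "analysis" frequencies k*S/N

f_true = 63.75                     # hypothetical true order frequency (Hz)
k = int(round(f_true / df))        # nearest analysis bin

print(df, f_a[k])                  # -> 1.0 64.0 (true frequency misses the grid by 0.25 Hz)
```

The true order frequency falls a quarter of a bin away from the nearest analysis frequency, which is exactly the situation that produces the loss discussed below.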

The question we need to ask is: if the true order frequency at some rpm is $f_t$ and the true amplitude is $A$, what will be the amplitude, $a_n$, computed by the FFT?

It may be shown (1, 2) that the relationship between the true amplitude $A$ and the nearest FFT value $a_n$ is given for a Hanning Window by


$$a_n = A\,\frac{\operatorname{sinc}(\pi\alpha)}{(1 + \alpha)(1 - \alpha)} \quad \text{where} \quad -0.5 \le \alpha \le 0.5$$


and for a Rectangular Window by


$$a_n = A\,\operatorname{sinc}(\pi\alpha)$$


where $\operatorname{sinc}(x) = \sin(x)/x$, $\alpha = (f_t - f_a)/df$ is the fractional offset between the actual frequency and the nearest analysis frequency, and $df$ is the frequency spacing.

When the actual frequency and the “analysis” frequency coincide then $\alpha = 0$ and we have an exact result. The graphs below show the shape of the functions.

With a Hanning window there could be as much as a 1.42 dB loss (about 15%), and with a Rectangular window the loss can be up to 36% (about 4 dB).
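Evaluating the two formulas above at the worst-case offset, $\alpha = 0.5$, reproduces these figures (a quick numerical check, assuming NumPy):

```python
import numpy as np

alpha = 0.5   # worst case: true frequency exactly halfway between analysis bins

# sinc(pi*alpha) with sinc(x) = sin(x)/x; note np.sinc(y) = sin(pi*y)/(pi*y)
sinc_val = np.sinc(alpha)

rect = abs(sinc_val)                                # Rectangular window response
hann = abs(sinc_val / ((1 + alpha) * (1 - alpha)))  # Hanning window response

print(1 - rect)               # Rectangular loss: ~0.36 (36%)
print(1 - hann)               # Hanning loss: ~0.15
print(-20 * np.log10(hann))   # Hanning loss in dB: ~1.42
```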

What does this mean in practice? A simple example is to Fourier Transform a perfect sinewave whose frequency does not exactly match one of the FFT frequencies for the given sample rate and number of samples. For instance, if we have a sinewave of amplitude 1.0 at 63.75Hz sampled at 1024 samples per second for one second, then the analysis frequencies will be every 1.0Hz. Remembering that the FFT gives half amplitudes, instead of the computed amplitude being 0.5 it will be 0.449815 with a Rectangular window and 0.480169 with a Hanning window.
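This is easy to verify numerically. The sketch below (assuming NumPy; normalising the windowed FFT by the window sum is one common convention for keeping half amplitudes) reproduces the quoted values:

```python
import numpy as np

S, N = 1024, 1024
t = np.arange(N) / S
x = np.sin(2 * np.pi * 63.75 * t)   # amplitude 1.0, not on the 1 Hz grid

# Rectangular window: half-amplitude spectrum is |FFT| / N
X_rect = np.abs(np.fft.rfft(x)) / N

# Hanning window: divide by the window sum to keep half amplitudes
w = np.hanning(N)
X_hann = np.abs(np.fft.rfft(x * w)) / w.sum()

print(X_rect[64])   # nearest bin (64 Hz): ~0.4498 instead of 0.5
print(X_hann[64])   # ~0.4802 instead of 0.5
```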

Because several waterfalls are averaged together and no two waterfalls are identical, the offset between the order frequency and the nearest FFT frequency will typically be halfway between perfect coincidence and maximum non-coincidence. With a Hanning window this is theoretically a 4% loss.

In reality the average loss is higher because the error curve is non-linear, so frequencies further from exact coincidence incur a disproportionately larger error (at coincidence, zero error; at a 50% offset, 4% error; at the maximum offset, about 15% error).
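This can be checked by averaging the Hanning loss over a uniform spread of offsets (a numerical sketch that simply integrates the loss curve given above):

```python
import numpy as np

alpha = np.linspace(-0.5, 0.5, 10001)   # uniform spread of bin offsets

# Hanning response: sinc(pi*a) / ((1+a)(1-a)); np.sinc(a) = sin(pi*a)/(pi*a)
resp = np.sinc(alpha) / ((1 + alpha) * (1 - alpha))

mean_loss = np.mean(1 - np.abs(resp))
mid_loss = 1 - abs(np.sinc(0.25) / ((1 + 0.25) * (1 - 0.25)))

print(mid_loss)    # loss at the halfway offset: ~0.04 (4%)
print(mean_loss)   # average loss over all offsets: larger, ~0.05
```

Because the loss curve is roughly quadratic in the offset, the mean loss exceeds the loss at the mean offset, which is the point made above.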

Note we cannot get around this by using a larger FFT size. Whilst a finer frequency spacing increases the chance of exact coincidence, it equally increases the chance of being at maximum non-coincidence. Using a flat top window would, however, minimise the effect.

Order Cuts

Now with order cuts the extraction method is an RMS value. That is, the cut process takes several FFT amplitudes centred around the order frequency and uses the sum of their squares to compute the rms level over that frequency band. This minimises the effect as it uses several frequency estimates; basically it is applying Parseval's theorem. If we find the rms value of the above two spectra over a frequency band centred at 63.75 Hz then we find

rms (hanning) = 0.7071

rms (rectangular) = 0.704
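These band rms values can be reproduced from the same two spectra (a sketch assuming NumPy; the ±10 Hz band and the window-power normalisation for the Hanning case are illustrative choices, not the exact DATS implementation):

```python
import numpy as np

S, N = 1024, 1024
t = np.arange(N) / S
x = np.sin(2 * np.pi * 63.75 * t)   # amplitude 1.0 -> true rms = 0.7071

band = slice(54, 75)                # bins 54..74 Hz, centred on 63.75 Hz

# Rectangular: one-sided half-amplitude spectrum, rms = sqrt(2 * sum a_k^2)
a_rect = np.abs(np.fft.rfft(x)) / N
rms_rect = np.sqrt(2 * np.sum(a_rect[band] ** 2))

# Hanning: Parseval requires normalising by the window's power, sum(w^2)
w = np.hanning(N)
Xw = np.abs(np.fft.rfft(x * w))
rms_hann = np.sqrt(2 * np.sum(Xw[band] ** 2) / (N * np.sum(w ** 2)))

print(rms_rect)   # ~0.704
print(rms_hann)   # ~0.7071
```

Summing the power across the band recovers essentially all of the energy that leakage spread over neighbouring bins, which is why the band rms is so much closer to the true value than any single-bin amplitude.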

Clearly these are much more accurate. This is the effective method used by the software when making an order cut. Of course, if the order cut just selects the local peak it will have the same problem as averaging waterfalls. Note that the DATS order cut methods include compensation for non-coincidence; the preferred method, however, is the rms evaluation.


When averaging waterfalls or orders, care has to be taken that the speed ranges are the same. Mismatched speeds, especially at the start and end, generally cause much more error than lack of coincidence.


Whilst in a theoretical world both methods appear identical, in the real world, where we have to use finite length Fourier Transforms, they give potentially different results. The above has nothing to do with the exactness of the speed signal or the other half dozen or so error possibilities, as these are common to both approaches. The effective order amplitude loss here is due to the “coincidence” effect.

In conclusion, the better method with standard time based sampling is to extract orders and average them rather than averaging waterfalls. Of course if we use synchronous sampling then we will always have coincidence and we may use direct peak extraction with confidence.


Dr Colin Mercer

Chief Signal Processing Analyst (Retired) at Prosig
Dr Colin Mercer was formerly at the Institute of Sound and Vibration Research (ISVR), University of Southampton where he founded the Data Analysis Centre. He then went on to found Prosig in 1977. Colin retired as Chief Signal Processing Analyst at Prosig in December 2016. He is a Chartered Engineer and a Fellow of the British Computer Society.
