## Average Waterfalls Or Average Orders?

One would expect that averaging waterfalls and then extracting orders would give the same result as extracting orders from individual waterfalls and then averaging them. This is not the case.


## Audio Equalisation Filter & Parametric Filtering

When working with audio signals a common requirement is to be able to equalise, cut or boost various frequency bands. A large number of hardware devices on the market provide this capability. The key aspect is that such filters are able to control bandwidth, centre frequency and gain separately. There are broadly two classes of filter used: a “shelving” filter and an “equalising” filter (also known as a “peak” filter). A shelving filter is akin to a low pass or high pass filter. An equalising filter is like a bandpass or band reject filter.

## Cleaning Up Data

When we have a very noisy signal with a large number of spikes and signal bursts, and all else fails, try median filtering. This is a technique often used in cleaning up pictures. The operation is almost childishly simple in concept, but we will save the details until we have examined an example.
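The concept really is simple, as a few lines of numpy show. This is our own minimal sketch, not the implementation used in any particular package, and the window width of 3 is an arbitrary choice:

```python
import numpy as np

def median_filter(x, width=5):
    """Replace each sample by the median of a sliding window centred on it.

    width is assumed odd; at the ends the window simply shrinks.
    """
    half = width // 2
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo = max(0, i - half)
        hi = min(len(x), i + half + 1)
        out[i] = np.median(x[lo:hi])
    return out

# A single large spike is removed completely, without smearing the
# neighbouring samples the way a moving average would
signal = np.array([1.0, 1.0, 1.0, 50.0, 1.0, 1.0, 1.0])
clean = median_filter(signal, width=3)
```

Because the median simply ignores an outlying value rather than averaging it in, the spike vanishes while the surrounding samples are untouched.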

## What Is A Fourier Transform?

A Fourier Transform takes a signal and represents it either as a series of cosines (real part) and sines (imaginary part) or as a cosine with phase (modulus and phase form). As an illustration we will look at Fourier analysing the sum of two sine waves.
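The idea can be sketched with numpy's FFT. The 50 Hz and 120 Hz sines and the 1024 Hz sample rate here are our own illustrative choices, not the signals from the article's graphs:

```python
import numpy as np

fs = 1024                  # sample rate (Hz), chosen for illustration
t = np.arange(fs) / fs     # one second of data -> 1 Hz line spacing
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)         # complex spectrum: real part <-> cosines, imaginary part <-> sines
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
modulus = np.abs(X) * 2 / len(x)   # scaled so a unit-amplitude sine reads as 1.0
phase = np.angle(X)

# The two largest spectral lines fall at the two sine frequencies
peaks = freqs[np.argsort(modulus)[-2:]]
```

The modulus-and-phase pair carries exactly the same information as the real-and-imaginary pair; it is just a polar rather than Cartesian view of each complex line.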

## Data Smoothing : RC Filtering And Exponential Averaging

*[Updated 12th March 2013]*

What are RC Filtering and Exponential Averaging and how do they differ? The answer to the second part of the question is that they are the same process! If one comes from an electronics background then RC Filtering (or RC Smoothing) is the usual expression. An approach based on time series statistics, on the other hand, has the name Exponential Averaging, or to use the full name, Exponential Weighted Moving Average. This is also variously known as EWMA or EMA.
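The shared recurrence is one line. As a minimal sketch (the step input and alpha = 0.1 are our own choices; the RC relation in the comment is one common discretisation, not the only one):

```python
import numpy as np

def ewma(x, alpha):
    """Exponential weighted moving average:
        y[n] = alpha * x[n] + (1 - alpha) * y[n-1]

    For an RC smoothing circuit sampled every dt seconds, one common
    discretisation gives the equivalent constant alpha = dt / (RC + dt).
    """
    y = np.empty(len(x))
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
    return y

# Step input: the output rises exponentially towards the new level,
# exactly as the voltage on an RC network would
step = np.concatenate([np.zeros(1), np.ones(99)])
smoothed = ewma(step, alpha=0.1)
```

Whether you call it an RC filter or an EWMA, the response to a step is the same exponential approach, which is why the two names describe one process.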

## Spectrum Smoothing : Why and How?

Sometimes data has spikes which are clearly artefacts of the processing or are due to some other external source. One is used to seeing these on time series, but in some cases there are unrepresentative “spikes” in the frequency analysed data. Here we discuss how we can use spectrum smoothing to alleviate the problem.
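One simple form of spectrum smoothing is a moving average across the spectral lines. This is a sketch of that one choice only; fractional-octave smoothing, where the window widens with frequency, is another common option:

```python
import numpy as np

def smooth_spectrum(spec, width=5):
    """Smooth a magnitude spectrum with a simple moving average over
    the spectral lines (constant-bandwidth smoothing)."""
    kernel = np.ones(width) / width
    return np.convolve(spec, kernel, mode="same")

# A flat spectrum of ones with one spurious spike at a single line
spec = np.ones(64)
spec[32] = 21.0
smoothed = smooth_spectrum(spec, width=5)
```

The spike is spread over the window and pulled down towards its neighbours, while lines away from it are unchanged.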

## Measuring Torsional Twist & Vibration Along a Shaft or Through a Geartrain

The measurement of torsional twist, or the twist angle, between two points along a shaft or through a gear train may be derived from a pair of tacho signals, one at each end of the shaft. Typically the tacho signals would be derived from gear teeth giving a known number of pulses per revolution. For example, one end of a shaft could have a gear wheel with, say, 60 teeth giving 60 pulses/revolution when measured with, say, an inductive or eddy current probe.
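The principle can be sketched as follows: each tacho pulse marks a known increment of shaft angle, so interpolating both pulse trains onto a common timebase and differencing gives the twist. The 60-tooth wheels, 10 rev/s speed and 1 ms lag below are entirely hypothetical numbers for illustration:

```python
import numpy as np

def shaft_angle(pulse_times, pulses_per_rev):
    """Shaft angle (degrees) at each tacho pulse: the k-th pulse
    marks k * (360 / pulses_per_rev) degrees of rotation."""
    return np.arange(len(pulse_times)) * 360.0 / pulses_per_rev

# Hypothetical 60-tooth wheels at each end of a shaft turning at 10 rev/s;
# the driven end lags by 1 ms, i.e. 10 * 360 * 0.001 = 3.6 degrees of twist
n_teeth, speed = 60, 10.0
t1 = np.arange(600) / (n_teeth * speed)   # pulse times, driving end
t2 = t1 + 0.001                           # pulse times, driven end (delayed)

# Interpolate both angle histories onto a common timebase and difference
common_t = np.linspace(t2[0], t1[-1], 500)
angle1 = np.interp(common_t, t1, shaft_angle(t1, n_teeth))
angle2 = np.interp(common_t, t2, shaft_angle(t2, n_teeth))
twist = angle1 - angle2
```

Here the shaft speed is constant, so the recovered twist is a constant 3.6 degrees; with a real torsionally vibrating shaft the same difference would oscillate.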

## Order Tracking, Frequency and Hertz

The most common form of digitising data is to use a regular time based method. That is, data is sampled at a constant rate specified as a number of samples/second. The Nyquist frequency, f_N, is defined such that f_N = SampleRate/2. As discussed elsewhere, Shannon’s Sampling Theorem tells us that if the signal we are sampling is band limited, so that all the information is at frequencies less than f_N, then we are alias free and have a valid digitised signal. Furthermore the theorem assures us that we have all the available information on the signal.
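What goes wrong when the band-limit condition is violated is easy to demonstrate numerically. In this sketch (the 100 Hz sample rate and 70 Hz tone are our own choices) a tone above f_N folds back to a false, aliased frequency:

```python
import numpy as np

fs = 100.0                      # sample rate, so f_N = 50 Hz
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 70 * t)  # 70 Hz: above the Nyquist frequency

# Frequency analyse the sampled data: the 70 Hz tone is indistinguishable
# from a tone at fs - 70 = 30 Hz, so that is where it appears
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
apparent = freqs[np.argmax(X)]
```

Nothing in the digitised record reveals that the true tone was at 70 Hz, which is why anti-alias filtering must happen before sampling, not after.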

## Dynamic Range And Overall Level : What Are They ?

Accurate measurement of a signal depends on the dynamic range and the overall level of the data acquisition system. The overall level setting may be thought of as determining the largest signal that can be measured. This clearly depends on the present gain setting; that is, the overall level is related to the gain. Clearly if the overall level is too small (gain too high) then the signal will be clipped and we will have poor quality data. The dynamic range then tells us, for the given overall level, the smallest signal we can measure accurately whilst simultaneously measuring the large signal.

In a very simple sense, suppose we have an artificial signal which consists of a sinewave at a large amplitude **A** for the first half, followed by a sinewave with a small amplitude **a** for the second half. We will set the gain (the overall level) to allow the best measurement of the **A** sinewave. The dynamic range tells us how small **a** may be so that we can still measure it without changing settings.
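The arithmetic behind this is worth making concrete. For an ideal converter the theoretical dynamic range follows from the word length, and the **A** to **a** ratio is usually quoted in dB (the 16-bit and 1000:1 figures below are illustrative choices, not limits of any particular instrument):

```python
import numpy as np

def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal ADC with the given
    word length: 20 * log10(2**bits), about 6 dB per bit."""
    return 20 * np.log10(2.0 ** bits)

def level_ratio_db(big, small):
    """How far, in dB, a small-amplitude sine sits below a large one."""
    return 20 * np.log10(big / small)

# With 16-bit acquisition there is roughly 96 dB to play with, so a sine
# of amplitude a = A/1000 (60 dB down) is still comfortably measurable
# without touching the gain
headroom = dynamic_range_db(16)
ratio = level_ratio_db(1.0, 0.001)
```

In practice noise and converter imperfections eat into the theoretical figure, but the 6 dB per bit rule of thumb is a useful first estimate.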

## High Pass Filtering And Tacho Signals

It is sometimes necessary to perform high pass filtering to eliminate low frequency signals. These may arise, for instance, from whole body vibrations when our interest is in higher frequency components from a substructure such as an engine or gearbox mounting. The vibration levels are speed sensitive and the usual scheme is to record a once per revolution ‘tacho’ signal with the vibration data. The tacho signal, which ideally is a nice regular pulse train, is processed to find rotational speed and hence to select which part of the vibration signal is to be frequency analysed. The most common form of analysis is a waterfall.

## Don’t Let Spikes Spoil Your Data

In many real-world applications it is impossible to avoid “spikes” or “dropouts” in data that we record. Many people assume that these only cause problems with their data if they become obvious. This is not always the case.

## Non Linear Calibration Curve And Polynomial

Not all systems vary linearly. One very well known case is, of course, thermocouples. International standard curves are available for these, so they present little difficulty. The issue discussed here is determining a non linear calibration curve and, if appropriate, reducing it to a polynomial.
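A least-squares polynomial fit is the standard way to reduce calibration points to a curve. This sketch uses synthetic voltage/temperature points that are exactly quadratic, so the fit recovers the curve precisely; real calibration data would carry noise, and the order should be kept as low as the residuals allow:

```python
import numpy as np

# Hypothetical calibration points: measured voltage against known
# temperature, constructed here as an exact quadratic for illustration
volts = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
temps = 0.5 * volts ** 2 + 20.0 * volts

# Fit a low-order polynomial mapping voltage to engineering units
coeffs = np.polyfit(volts, temps, deg=2)
calibrate = np.poly1d(coeffs)

# Apply the calibration curve to a new reading
reading = calibrate(2.5)   # 0.5 * 2.5**2 + 20 * 2.5 = 53.125
```

Once the polynomial coefficients are known, converting raw readings to engineering units is a single evaluation, which is why reducing a calibration curve to a polynomial is so convenient.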

## A Weighting. And B. And C.

Some devices, particularly digital tape recorders, apply A-weighting to all their data in order to achieve acceptable data compression. This is fine unless you want to analyse the unweighted data or apply a different weighting factor. Using Prosig’s DATS software it is a simple task to instruct the WEIGHT module to either simply unweight the data or remove one weighting factor and apply another.

## Understanding The Cross Correlation Function

To illustrate the use of the cross correlation function, consider a source location example. For this, it is assumed that there is a noise source at some unknown position between two microphones. A cross correlation technique and a transfer-function-like approach were used to determine the location.
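The core of the cross correlation technique is finding the lag at which the two microphone signals line up best. This is a minimal sketch with a synthetic broadband source and a known 25-sample delay (all the numbers are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000          # sample rate (Hz), assumed for illustration
delay = 25         # true delay, in samples, between the two microphones

# A broadband source; mic2 receives a delayed copy of what mic1 receives
source = rng.standard_normal(2000)
mic1 = source
mic2 = np.concatenate([np.zeros(delay), source[:-delay]])

# Cross-correlate and find the lag of the peak
xcorr = np.correlate(mic2, mic1, mode="full")
lags = np.arange(-len(mic1) + 1, len(mic2))
estimated_delay = lags[np.argmax(xcorr)]
time_delay = estimated_delay / fs   # seconds
```

With the time delay and the speed of sound, the difference in path length from the source to the two microphones (and hence the source position between them) follows directly.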

## How Do I Downsample Data?

*Sometimes we have digitised data at a much higher rate than we need. How can we downsample data? If I wanted to, say, halve the sample rate, can I just throw away every other data point?*

The answer is NO, except in pathological conditions where you know that there is no frequency content above the new Nyquist frequency.
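The reason is aliasing, and it is easy to demonstrate. In this sketch (our own numbers: a 400 Hz tone sampled at 1000 Hz, halved to 500 Hz) naive decimation folds the tone to a false frequency, whereas low-pass filtering below the new Nyquist frequency first removes it cleanly. The brick-wall FFT filter here is a crude stand-in for a proper anti-alias filter:

```python
import numpy as np

fs = 1000
t = np.arange(2000) / fs
# 400 Hz is fine at fs = 1000, but above the new Nyquist (250 Hz) after halving
x = np.sin(2 * np.pi * 400 * t)

# Naive decimation: keep every other sample -> the tone aliases to 100 Hz
naive = x[::2]
freqs = np.fft.rfftfreq(len(naive), d=2 / fs)
apparent = freqs[np.argmax(np.abs(np.fft.rfft(naive)))]

# Proper downsampling: low-pass below the new Nyquist first
# (a crude FFT brick-wall filter for illustration), then decimate
X = np.fft.rfft(x)
f_full = np.fft.rfftfreq(len(x), d=1 / fs)
X[f_full >= fs / 4] = 0.0
filtered = np.fft.irfft(X, n=len(x))
proper = filtered[::2]
```

After filtering, the 400 Hz content is gone before decimation, so nothing folds back; the naive result instead contains a convincing-looking but entirely spurious 100 Hz tone.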

## Does The Signal Have A Gaussian Probability Density?

The PROB module in DATS for Windows provides, amongst other options, a probability density analysis. Also, the signal generation suite has a module, GENPRB, which generates a classical Gaussian probability density curve (and others). How then may these be used to compare the probability density of our measured signal with that of a true Gaussian one? The method is quite straightforward and is a matter of scaling.
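The scaling idea can be sketched in numpy rather than DATS: estimate the density from a histogram, then scale a classical Gaussian curve to the signal's own mean and standard deviation and compare. The bin count and range are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)   # a signal we believe to be Gaussian

# Estimate the probability density from a histogram...
density, edges = np.histogram(x, bins=80, range=(-4, 4), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# ...and scale a classical Gaussian curve to the same mean and SD
mu, sd = np.mean(x), np.std(x)
gauss = np.exp(-0.5 * ((centres - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# For genuinely Gaussian data the two curves agree closely
max_err = np.max(np.abs(density - gauss))
```

A signal with clipping, spikes or strong tones would show a systematic mismatch between the two curves rather than the small statistical scatter seen here.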

## Removing Phase Delay Using A Phaseless Filter

In many instances we need to filter a signal to remove unwanted frequencies. If we use classical filters such as Butterworth, Chebyshev or even Bessel then a phase delay is introduced. This phase delay is itself a function of frequency, so that the signal content at one frequency is delayed by a different amount from that at another frequency. Why does this matter?
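One standard way to remove the delay is forward-backward filtering: run the filter once forwards and once backwards so the two phase shifts cancel. This sketch uses a simple moving average rather than a Butterworth design, purely to keep the example self-contained:

```python
import numpy as np

def moving_average(x, width):
    """A simple causal FIR low-pass: each output is the mean of the
    current sample and the (width - 1) samples before it, so the
    output lags the input by (width - 1) / 2 samples."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="full")[: len(x)]

def phaseless(x, width):
    """Filter forwards, then filter the reversed result and reverse
    again: the delays of the two passes cancel, giving zero net phase
    shift (at the cost of squaring the amplitude response)."""
    forward = moving_average(x, width)
    return moving_average(forward[::-1], width)[::-1]

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 2 * t)

causal = moving_average(x, 21)   # lags the input
zero_phase = phaseless(x, 21)    # stays in phase with the input
```

The price of zero phase is that the whole signal must be available in advance, so the trick works offline but not in real time.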

## Methods To Remove Spikes From Data

For various reasons, data captured in the real world often contains spikes that will give erroneous results when analysed. The DATS software package provides various ways of editing data and removing spikes. Let us consider a real-life case history.

## Time Varying Overall Level Vibration (or Noise)

A common requirement is to measure overall level vibration (or noise) as a function of time. Now, the overall level is a measure of the total dynamic energy in the signal. That is, it does not contain the energy due to the DC level, which is the same as the mean value. The overall level is often loosely referred to as the signal RMS value. However, the formal definition of the RMS level is that it contains the DC level as well as the dynamic energy. If only the dynamic contribution is required then the measure needed is, strictly speaking, the Standard Deviation (SD). Sometimes it is useful to refer to the SD as the Dynamic RMS.
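The distinction is easy to see numerically. In this sketch (a 1 V RMS sine riding on a 2 V DC offset, numbers chosen for illustration) the formal RMS includes the offset while the SD, the "Dynamic RMS", does not:

```python
import numpy as np

def rms(x):
    """Formal RMS: includes any DC (mean) component."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x ** 2))

def dynamic_rms(x):
    """Standard deviation: the RMS of the signal with its mean removed."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean((x - np.mean(x)) ** 2))

# A 1 V RMS sine (peak amplitude sqrt(2)) riding on a 2 V DC offset
t = np.arange(1000) / 1000.0
x = 2.0 + np.sqrt(2.0) * np.sin(2 * np.pi * 10 * t)

total = rms(x)            # sqrt(2**2 + 1**2) = sqrt(5), about 2.236
dynamic = dynamic_rms(x)  # 1.0: the dynamic energy only
```

Note how the two measures combine: RMS² = mean² + SD², which is why quoting "RMS" without saying whether DC is included can be genuinely misleading.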

## Interpretation of the Articulation Index

The Articulation Index, or AI, gives a measure of the intelligibility of speech in a given noise environment. The metric was originally developed in 1949 in order to give a single value that categorised the speech intelligibility of a communication system. The basic interpretation of the AI value is that the higher the value, the easier it is to hear the spoken word. The AI value is expressed either as a factor in the range zero to unity or as a percentage.
