Whether you call them spikes, glitches, anomalies or data dropouts, these phenomena have been a problem for engineers ever since they started recording data. There are any number of reasons why they occur.
Sometimes it may be possible to repeat a test, but more often a busy engineer doesn’t have time or the test item has long gone. So, the most important consideration is ‘Can I still get meaningful results from this data?’ Fortunately, in some cases, all is not lost.
Below we have collected some of our previous posts on the subject.
In many real-world applications it is impossible to avoid “spikes” or “dropouts” in the data that we record. Many people assume that these only cause problems if they are obvious in the data. This is not always the case.
Sometimes data has spikes which are clearly artefacts of the processing or are due to some other external source. We are used to seeing these in time series, but in some cases unrepresentative “spikes” appear in the frequency-analysed data as well.
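To see why this matters, here is a small illustration (our own sketch in NumPy with made-up numbers, not data from the original post): a single contaminated sample in the time series raises the spectrum at every frequency, not just where the spike is visible.

```python
import numpy as np

# A 50-cycle sine wave, plus a copy with one large one-sample spike.
n = 1024
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 50 * t)
spiked = clean.copy()
spiked[300] += 20.0                       # one isolated spike

spec_clean = np.abs(np.fft.rfft(clean)) / n
spec_spiked = np.abs(np.fft.rfft(spiked)) / n

# Away from bin 50 the clean spectrum is essentially zero, while the
# spiked spectrum is lifted broadband: an impulse has a flat spectrum,
# so its energy smears across every frequency bin.
```

The point is that the spike's energy does not stay at one frequency; it contaminates the whole spectrum, which is why such samples are worth removing before frequency analysis.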
A shaft was instrumented with two shaft encoders, one at each end. Each encoder gave out a once/rev pulse and a 720 pulses/rev signal, and each signal was digitised at 500,000 samples/second. The objective was to measure the twist in the shaft and analyse it into orders.
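As a rough sketch of the underlying idea (with hypothetical pulse times and shaft speed, not the measurements from the original post): a constant twist appears as a fixed delay between the two once/rev pulse trains, and that delay, as a fraction of one revolution, gives the twist angle.

```python
import numpy as np

# Hypothetical once/rev pulse arrival times (seconds) from the two
# encoders at a steady 3000 rev/min (period 0.02 s). A constant shaft
# twist shows up as a fixed lag of encoder B behind encoder A.
rev_period = 0.02                 # assumed steady-speed revolution period
twist_delay = 2e-4                # assumed lag: B trails A by 0.2 ms
pulses_a = np.arange(10) * rev_period
pulses_b = pulses_a + twist_delay

# Twist angle per revolution: the delay as a fraction of one turn.
twist_deg = 360.0 * (pulses_b - pulses_a) / rev_period
```

With the 720 pulses/rev signals the same timing comparison can be made 720 times per revolution, which is what makes order analysis of the twist possible.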
When we have a very noisy signal with a large number of spikes and signal bursts, then, if all else fails, try median filtering. This is a technique often used to clean up pictures. The operation is almost childishly simple in concept, but we will save the details until we have examined an example.
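As a taste of just how simple the operation is, here is a minimal NumPy sketch (our own illustration, not the DATS implementation covered in the full post): each sample is replaced by the median of a short sliding window, which rejects isolated spikes while leaving slowly varying signal content largely untouched.

```python
import numpy as np

def median_filter(x, width=5):
    """Replace each sample with the median of a sliding window.

    Isolated spikes narrower than half the window are removed almost
    entirely, because the median ignores a minority of outliers.
    """
    if width % 2 == 0:
        raise ValueError("width must be odd")
    half = width // 2
    padded = np.pad(x, half, mode="edge")   # repeat the end samples
    windows = np.lib.stride_tricks.sliding_window_view(padded, width)
    return np.median(windows, axis=1)

# A sine wave contaminated with large isolated spikes.
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean.copy()
noisy[50::97] += 10.0                       # inject spikes every 97 samples
filtered = median_filter(noisy, width=5)
```

Note that, unlike a moving average, the median filter does not smear the spike's energy into the neighbouring samples; the outlier is simply discarded by the sort.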
For various reasons, data captured in the real world often contains spikes that will give erroneous results when analysed. DATS for Windows provides various ways of editing and removing these anomalies. Let us consider a real-life case history.
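One generic editing strategy of this kind can be sketched in NumPy (an illustration of the general approach, not the actual DATS procedure): flag samples that deviate strongly from the signal's median, then patch the flagged samples by linear interpolation from their good neighbours.

```python
import numpy as np

def repair_spikes(x, threshold=5.0):
    """Detect and patch spikes in a signal.

    Samples whose deviation from the median exceeds `threshold` times
    the median absolute deviation (a robust spread estimate) are
    flagged and replaced by linear interpolation between good samples.
    """
    dev = np.abs(x - np.median(x))
    mad = np.median(dev)
    if mad == 0.0:
        mad = 1.0                        # guard against a flat signal
    bad = dev / mad > threshold
    idx = np.arange(len(x))
    repaired = x.copy()
    repaired[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return repaired, bad

# A sine wave with three artificial spikes (made-up values).
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean.copy()
noisy[[250, 900, 1500]] = [8.0, -6.0, 12.0]
repaired, bad = repair_spikes(noisy)
```

Median-based statistics are used here rather than mean and standard deviation because the spikes themselves would inflate the latter and could mask their own detection.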