Developing an Algorithm for Tick Detection
An investigation was made of a sample of automotive components, some of which were exhibiting a high frequency “tick” or rattle during each operating cycle. This could be heard above the normal operating noise. The challenge was to measure and analyze the components in an objective fashion and classify each one as “good” or “bad”.
A microphone was mounted on the test jig to verify that the sound could be recorded sufficiently well to discriminate between the two conditions; that is, could the “tick” on bad components be heard on the recorded signal? After capture, the two signals were replayed through headphones and it was established that the “tick” was still prominent in the recording. The time histories from the microphone are shown in Figure 1 below (click any of the figures for a larger view). Both look very similar and it is clear that there are no particular features that would aid in classifying the components.

Click the play button or link below to listen to the sample signals¹. The first is a component that doesn’t have a “tick” and the second one does.
Listen to microphone signal of component without tick
Listen to microphone signal of component with tick
What was required was a method of detecting the “tick” algorithmically. By this time, both signals had been loaded into a DATS worksheet. It was plain from listening to the component and to the recording that the tick was of a relatively high frequency. So the first obvious step was to calculate the frequency spectrum of each signal and overlay them to see how they differed. As we needed to compare the relative amplitudes of the two signals, we first normalized each with respect to its standard deviation (SD) to remove any variability due to measurement conditions.
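As a rough illustration of this step outside of DATS, the normalization and spectrum calculation might look something like the Python/SciPy sketch below. The file names, FFT block size and use of Welch averaging are assumptions for the sketch, not details taken from the original analysis.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# Hypothetical recordings of a "good" and a "bad" component
fs, good = wavfile.read("component_good.wav")
_,  bad  = wavfile.read("component_bad.wav")

def normalise(x):
    """Remove the mean and scale to unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

good_n = normalise(good)
bad_n  = normalise(bad)

# Averaged auto-spectra so the two conditions can be overlaid (cf. Figure 2)
f, Pxx_good = welch(good_n, fs=fs, nperseg=4096)
_, Pxx_bad  = welch(bad_n,  fs=fs, nperseg=4096)
```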

Once this was done and the spectra had been computed (see Figure 2), it is obvious that there is more energy in the signal with the “tick”, but it is spread over quite a wide frequency range, making it unsuitable for detection in a narrow band. An easier way of detecting the condition may be to look at the overall RMS in a frequency band between 5 and 15 kHz. To do this, a band-pass filter is applied to each time history signal. It is useful to listen to the two signals again, now that the filter has been applied, to ensure that the feature we are trying to detect is still present.
Listen to filtered microphone signal of component without tick
Listen to filtered microphone signal of component with tick
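The band-pass step itself is straightforward. A minimal SciPy sketch, continuing from the previous one, is shown below; the fourth-order Butterworth design is an assumption, as the article does not state the filter settings used in DATS, and it assumes the sample rate is comfortably above 30 kHz.

```python
from scipy.signal import butter, sosfiltfilt

def bandpass(x, fs, lo=5000.0, hi=15000.0, order=4):
    """Zero-phase Butterworth band-pass between lo and hi (Hz)."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Filter the normalized signals from the earlier sketch
good_f = bandpass(good_n, fs)
bad_f  = bandpass(bad_n, fs)
```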

Looking at the spectra of the filtered signals (Figure 3), it is still difficult to see any major differences on a log scale, but if we change to a linear scale (Figure 4) the difference in amplitude of the frequency components in this band can be seen quite clearly.

The difference between the signals is spread over a relatively wide band, but the total energy is clearly greater in the signal with the “tick”, so the RMS value of the filtered signal may be a sufficient discriminator. If we calculate the RMS for the “good” and “bad” filtered signals, we get values of 0.0082 and 0.0230 respectively. This is nearly a factor of 3 and would appear to be a good enough discriminator, provided we can show that it holds true over a number of samples.
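The RMS calculation is simple enough to sketch directly. Note that the 0.0082 and 0.0230 figures quoted above came from the DATS worksheet; the snippet below, continuing from the earlier sketches, just shows the equivalent calculation.

```python
import numpy as np

def rms(x):
    """Root-mean-square value of a signal."""
    return float(np.sqrt(np.mean(np.square(x))))

# Overall RMS of the band-pass filtered signals from the earlier sketches
print(f"good RMS = {rms(good_f):.4f}")
print(f"bad  RMS = {rms(bad_f):.4f}")
```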
Another way of looking at the energy in these signals is to calculate the trend of the RMS from the original time history.
Looking at this raises some interesting issues. It shows that our supposedly clean reference signal does occasionally exhibit the same phenomenon. It is too infrequent to influence the overall RMS, which is heavily weighted by the good cycles. However, if it were necessary to detect every occurrence of the tick, then checking the RMS trend against a threshold would provide finer discrimination than the overall RMS value in the band.
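A sketch of such an RMS trend check is given below. The 50 ms frame length and the threshold value are illustrative assumptions; the article does not state the values used in the DATS analysis.

```python
import numpy as np

def rms_trend(x, fs, frame_s=0.05):
    """Short-time (running) RMS computed over fixed-length frames."""
    n = int(frame_s * fs)
    frames = x[: (len(x) // n) * n].reshape(-1, n)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Flag any frame of the filtered signal whose RMS exceeds a threshold
trend = rms_trend(bad_f, fs)
threshold = 0.02          # illustrative only; would be set from measured samples
print(f"{np.count_nonzero(trend > threshold)} frames exceed the threshold")
```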
Performing the Analysis
The analysis for this test was performed using Prosig’s DATS package. The software lends itself both to the intensive interactive work required to investigate and understand the data and to refining a final application that can be used to test components in a production environment. The initial worksheet created in DATS to help classify the data can be seen in Figure 5 (click to see a larger view). This is a typical “engineer’s” worksheet, analyzing the data in many different ways in order to understand the problem.

Once the results of this analysis had been examined, it became clear how the data should be analyzed, as detailed above. The worksheet could then be stripped down to just the analysis chain required to classify the components.

The worksheet in Figure 6 takes the “good” and “bad” signals, analyzes them and presents the results of the RMS calculation in a message box. This was then refined further.

Obviously, when one wishes to test a single component, it is only necessary to analyze one signal at a time. Figure 7 shows the worksheet that takes one signal from a captured data file and analyzes it to produce a single RMS value that can be used to decide whether a component is “good” or “bad”.

In the final iteration of the worksheet, shown in Figure 8, the initial disk file input has been replaced with a data acquisition component so that the microphone capture becomes part of the classification process. In addition, after several components had been tested it was possible to set an acceptance limit for the measured RMS value. This was encapsulated in the analysis as a decision box in the worksheet that gives a pass or fail message.
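A minimal stand-in for that final classification chain, reusing the functions from the earlier sketches, might look like the snippet below. The acceptance limit shown is purely illustrative; in practice it was set after testing a number of components.

```python
# Take one signal, normalize, band-pass, compute the RMS and compare it to an
# acceptance limit. Uses normalise(), bandpass() and rms() from the earlier
# sketches; ACCEPT_LIMIT is an illustrative value, not the real limit.
ACCEPT_LIMIT = 0.015

def classify(signal, fs):
    value = rms(bandpass(normalise(signal), fs))
    return ("PASS" if value <= ACCEPT_LIMIT else "FAIL"), value

verdict, value = classify(bad, fs)
print(f"RMS = {value:.4f} -> {verdict}")
```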
The flexibility of the DATS worksheets opens up many possibilities for future development of the application. For instance, the whole process could be run in a loop, allowing for continuous testing. The DATS data acquisition software has a range of sophisticated triggers, so the data capture stage could be triggered automatically. If this were coupled with the report generation capabilities and the export options to formats like Microsoft Excel, then the testing process could be made fully hands-off.
¹ The audio clips above have been converted to compressed MP3 files for convenience when adding to this blog post. For the analysis described above they were reviewed at full, uncompressed quality.
Chris Mason
I rarely see people categorizing the problem and comparing each step for easy understanding of the viewer. I appreciate the effort and hope that others (Analysis Software makers) emulate this. Cheers. Hare Krishna.
I see you comparing the data on log and finally linear scales, but I don’t see some of the steps you have in the flowchart, such as filtering. As a newbie I would like to know how you get the RMS value. Maybe you can collect some from all your samples and put it in an FAQ, but for now, just how did you get the RMS? (I’m assuming it’s trivial, but I’d still like to know.)
Hi Marco, thanks for your question. The RMS value used in the example is calculated by the STAT operation. This calculates a whole bunch of statistical values for the signal and stores them all in Named Elements with the original data.
The following is an example of the statistical information that can be calculated…
The #$REAL_ part of the Named Element name reflects the fact that I was looking at a real signal. If I had been using complex data then further values would be calculated for imaginary or phase parts.
So, in our example, the STAT function is performed and then the decision box checks the value of the Named Element #$REAL_RMS and proceeds accordingly.
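For readers without DATS, a rough Python analogue of that chain is sketched below. This is not the STAT operation itself: the “Named Elements” become dictionary keys (the key names simply mirror the #$REAL_ convention described above) and the decision box becomes an if statement, with an illustrative acceptance limit.

```python
import numpy as np

def stat(x):
    """Rough analogue of a statistics operation: return named statistics."""
    x = np.asarray(x, dtype=float)
    return {
        "REAL_MEAN": float(x.mean()),
        "REAL_SD":   float(x.std()),
        "REAL_RMS":  float(np.sqrt(np.mean(x ** 2))),
        "REAL_MAX":  float(np.max(np.abs(x))),
    }

elements = stat(bad_f)            # filtered signal from the earlier sketches
if elements["REAL_RMS"] > 0.015:  # illustrative acceptance limit
    print("Component FAILED - tick detected")
else:
    print("Component passed")
```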