These are two different techniques aimed at different objectives. First consider a simple sinewave that has been sampled close to the Nyquist frequency (sample rate/2).

Visually this looks very pointy. We will examine it using a filter-based interpolation and a classical curve-fitting procedure to obtain a better representation.

As a second example we will look at a sinewave with a spike as illustrated below.

### Filter-based Interpolation (and Decimation)

Interpolation (and decimation) in the DATS software operates on a time history using Fourier and phaseless filter-based algorithms. It is not interpolating in the regular curve-fitting sense, which uses points local to a new sample point and fits a curve through them by one of several algorithms. Rather it makes use of the Nyquist Sampling Theorem, which says that one only needs to sample at twice the highest frequency present to capture all the information. The method is restricted to integer multiples of the existing sample rate, but by interpolating (up-sampling) and then decimating (down-sampling) one may achieve a non-integer overall ratio. For example, up-sampling by 4 and then down-sampling by 5 gives a sample rate that is 0.8 times the original.

Decimation is quite simple to understand. First apply a phaseless low-pass filter whose cut-off frequency is the required down-sampled fraction of the original signal’s Nyquist frequency, then remove the redundant data points. For example, to decimate by three, low-pass filter at a cut-off of the original sample rate divided by six (two times three), then keep only every third point.

Interpolation (up-sampling) is equally simple, but seems a little more difficult conceptually. If, for instance, we are interpolating by 3, then insert two zeroes between each actual data point. We now have three points for every original point covering the same time range, so in some sense the sample rate has been increased by a factor of three. Now adjust the effective sample rate and low-pass filter at the original Nyquist frequency (in practice the cut-off is set at about 0.8 times it, to allow for the filter’s transition band). Because we are filtering, adding the zeroes adds no amplitude information! There can be problems at the start and end of the signal due to filter start-up.
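The zero-stuff-then-filter procedure described above can be sketched in a few lines. The DATS implementation itself is not public, so this is only an illustrative NumPy version: the function names, the tap count, and the choice of a Hamming-windowed sinc filter are assumptions for the sketch, not the DATS algorithm.

```python
import numpy as np

def upsample(x, factor, taps=101):
    """Interpolate by an integer factor: zero-stuff, then low-pass filter."""
    # Insert (factor - 1) zeros between the original samples.
    y = np.zeros(len(x) * factor)
    y[::factor] = x
    # Windowed-sinc low-pass filter cutting off at the ORIGINAL Nyquist
    # frequency, i.e. (new sample rate) / (2 * factor).
    m = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(m / factor) / factor * np.hamming(taps)
    h *= factor  # restore the amplitude lost by zero-stuffing
    # Symmetric FIR + centred convolution gives a phaseless (zero-delay) filter.
    return np.convolve(y, h, mode="same")

def downsample(x, factor, taps=101):
    """Decimate by an integer factor: low-pass filter, then discard points."""
    m = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(m / factor) / factor * np.hamming(taps)
    return np.convolve(x, h, mode="same")[::factor]
```

Chaining `upsample(x, 4)` and `downsample(..., 5)` gives the 0.8-times sample rate mentioned above. Note the filter start-up effects at both ends of the output, exactly as described in the text.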

The interpolated data sits exactly on the original sample points.

The technique effectively uses all the data points “simultaneously”.

### Re-sampling

When using resampling in DATS, a local curve-fitting procedure is used that is entirely independent of any sampling-rate criterion. It is based on a Lagrange curve-fitting process. This has several useful features, one of them being that existing data points are preserved when resampling by an integer multiple. Another is that the Lagrange method is closely related to the ideal or perfect resampler, namely the theoretical “sinc” resampling scheme. Using a sinc scheme is not practical, as it requires an infinite length of data! The Lagrange method may be considered a very good finite-length approximation to the ideal sinc scheme. Because it only uses local information, basically a few points at a time, it will follow the general characteristics of the input data.
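A minimal sketch of local Lagrange resampling is shown below, assuming a cubic (four-point) fit around each new sample position. The order and the edge handling used by DATS are not stated in the text, so those details here are illustrative only.

```python
import numpy as np

def lagrange_resample(x, ratio, order=3):
    """Resample by fitting a local Lagrange polynomial through the
    nearest (order + 1) samples around each new sample position."""
    n_out = int(len(x) * ratio)
    out = np.empty(n_out)
    for m in range(n_out):
        t = m / ratio                       # position in original sample units
        k = int(np.floor(t)) - order // 2   # left-most point of the local fit
        k = min(max(k, 0), len(x) - order - 1)
        idx = range(k, k + order + 1)
        # Lagrange basis: product over j != i of (t - j) / (i - j).
        val = 0.0
        for i in idx:
            basis = 1.0
            for j in idx:
                if j != i:
                    basis *= (t - j) / (i - j)
            val += x[i] * basis
        out[m] = val
    return out
```

When `t` lands exactly on an original sample, the Lagrange basis is 1 at that point and 0 at its neighbours, which is why existing data points are preserved for integer-multiple resampling.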

The original data points again coincide with the resampled data, but the resampled version uses only “local” data. It will follow local details more accurately, but may lower the signal-to-noise ratio.

Examining the frequency content can be quite illustrative. The dB spectra below show that the main frequency content is unchanged, but the resampled version contains much more “noise” to achieve its result. This is largely because the original data was essentially under-sampled for that technique. Any frequency content above sample rate/4 is likely to be represented poorly by the resampling method.
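For readers who wish to reproduce such a comparison, a dB amplitude spectrum can be computed along these lines. This is a hypothetical helper using a Hann window; the windowing and scaling conventions used by DATS may well differ.

```python
import numpy as np

def db_spectrum(x):
    """Amplitude spectrum in dB (re 1.0) of a Hann-windowed signal."""
    w = np.hanning(len(x))
    X = np.fft.rfft(x * w)
    # Scale so a full-scale sine reads ~0 dB: factor 2 for the one-sided
    # spectrum, divided by the window's coherent gain.
    mag = 2 * np.abs(X) / np.sum(w)
    return 20 * np.log10(np.maximum(mag, 1e-12))
```

Plotting `db_spectrum` of the interpolated and the resampled signals side by side makes the extra broadband “noise” of the resampled version easy to see.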

Another example is a case with a spike on a sine wave.

In this case, looking at the time signals, resampling gives the better result, as the interpolation method suffers from filter ringing around the spike.

If we look at the frequency content, then judging by amplitude alone the filter-interpolated scheme appears better; but taking the phase into consideration, one would choose the resampled version as the better representation.

In summary, if one just needs a simple increase in sample rate then the interpolation method is fine. But if we need to ensure local features are retained then the resampling scheme is advantageous. The penalty of the resampling method is a decrease in the signal-to-noise ratio.

Presently, Prosig are investigating an enhancement to improve the signal to noise ratio.

#### Dr Colin Mercer


This is very good and easy to understand.

Short explanation but very clear, thanks. This technique is used in data or signal compression technology, right?

I have some comments regarding this blog which, hopefully, you can clarify.

Regarding the first example, interpolation looks to be far superior, recreating the original pure sine wave. Resampling looks to be creating local variations rather than responding to them.

Regarding the second example, I suspect the resampled signal contains data above the Nyquist frequency of the original sampled data at the peak and therefore violates Shannon’s sampling/reconstruction theory. Assuming the data is sampled correctly, the interpolated ‘signal’ (I know it’s still sampled) is the only one that could result in the sampled data. [Others would not be sampled correctly at the same rate because of their higher frequency content.]

Also, the interpolated data does not appear to coincide with the original data points.

If you import data that has some data points missing but includes a reference time signal, is there a way of resampling against this reference time signal to end up with a signal with a constant sample rate?