Back in 2000, Dr. Michael Unser of the Swiss Federal Institute of Technology in Lausanne published an interesting technical paper entitled “Sampling – 50 Years After Shannon”, in which he considers the state of the art in digital sampling. It is not a puff piece: following it in any serious detail requires a post-graduate grasp of mathematics, and much of it frankly goes over my head. But along the way it makes some interesting points, including the dry observation that the so-called “Nyquist-Shannon” theorem handily predates both Nyquist and Shannon!
One key finding is as follows. The paper reduces the problem of regular digital sampling to a ‘general theorem’; in other words, every method of digitally sampling a continuous function is a special case of this general theorem. It goes something like this:
All continuous functions (such as waveforms) can be represented as the sum of a number of “orthogonal functions”. Orthogonal functions are like the X-Y-Z axes of a co-ordinate system, where an object’s location in three-dimensional space can be unambiguously specified by its co-ordinates along the X-axis, Y-axis, and Z-axis. If the object moves purely along the direction of the X-axis, its Y-axis and Z-axis co-ordinates remain unchanged. In fact, I can change its position along any one of the three axes without affecting its position along the other two. It is this property that makes the three axes “orthogonal”. The same property makes for “orthogonal” functions – you can independently change any one of them without affecting any of the others.
An example of this would be the frequencies of an audio signal. I can change the amount of 1kHz frequency content, and it will have no impact on any of the other frequencies present. The frequencies – or more specifically the sine waves at those frequencies – are therefore “orthogonal functions”.
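To make that concrete, here is a minimal sketch in Python (the 48kHz sample rate, the one-second window, and the two test frequencies are my own arbitrary choices, not anything from Unser’s paper): the inner product of two sine waves at different frequencies comes out as zero, which is exactly the orthogonality property described above.

```python
import numpy as np

fs = 48_000                        # sample rate in Hz (arbitrary choice)
t = np.arange(fs) / fs             # one second of time points

f1 = np.sin(2 * np.pi * 1_000 * t)     # 1 kHz sine
f2 = np.sin(2 * np.pi * 2_000 * t)     # 2 kHz sine

# The inner product (a discrete sum standing in for an integral) of two
# sines at different frequencies, taken over a whole number of cycles,
# is zero: changing the amount of one has no effect on the other.
print(np.dot(f1, f2))    # ~0: the two sines are orthogonal
print(np.dot(f1, f1))    # 24000: a sine is not orthogonal to itself
```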
Usable families of orthogonal functions range from the simple to the very complex, and such a family may even be infinitely large. The simplest members are basis functions such as sine waves. For sine waves, Unser’s set of coefficients is obtained by performing a Fourier Transform (sketched in code below). Slightly more elaborate families include “wavelets”, which are best described as short bursts of sine waves, and splines, which are best known as curve-fitting functions. Much interest over the past 20 years has focused on wavelets, and it seems likely that this will accelerate in the future as the computing power required to use them to best advantage becomes more commonplace.
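As a small illustration of that sine-wave case, and of the store-and-reconstruct idea discussed next, the sketch below uses numpy’s discrete Fourier transform to obtain one coefficient per basis function and then rebuilds the waveform from nothing but those coefficients. The test signal is an arbitrary mix of two tones chosen for the example.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
# An arbitrary test waveform: a mix of 1 kHz and 3 kHz sines.
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 3_000 * t)

# Analysis: the FFT yields one complex coefficient per sine/cosine basis
# function, i.e. Unser's "corresponding number" for each orthogonal function.
coeffs = np.fft.rfft(x)

# Synthesis: the inverse transform rebuilds the waveform from the stored
# coefficients alone.
x_rebuilt = np.fft.irfft(coeffs, n=len(x))

print(np.max(np.abs(x - x_rebuilt)))   # ~1e-16: reconstruction is exact
```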
Unser’s paper tells us how to examine any set of orthogonal functions to determine whether it is suitable for representing a waveform. Unfortunately, the test itself is mathematically abstruse and does not lend itself to a pithy description in plain English. But if a set of orthogonal functions proves suitable, then our waveform can be fully represented by determining a corresponding number (in mathematical terms, a “coefficient”) for each of the orthogonal functions. We can then store those numbers and use them to fully and accurately reconstruct the waveform at some future time.
This is Unser’s general theorem of digital sampling, and he uses it to ask and explore some very interesting questions, ones which may well prove useful in the near future. But before discussing that, we’ll take a quick look at how Nyquist-Shannon sampling theory fits into it. Suppose we choose as our family of orthogonal functions the time-shifted copies of the sinc() function:
sinc(x) = sin(x)/x
As it happens, when we work out the corresponding coefficients for those sinc() functions, they turn out to be the actual values of the waveform itself as it evolves with time. In other words, turning the whole thing around, if we sample our waveform at regular intervals in time, the resulting sample values will be the coefficients of an orthogonal family of sinc() functions which can be used to exactly reconstruct the original waveform.
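Here is a sketch of that statement in Python. All the specifics (an 8kHz sample rate, a 440Hz test tone, 64 samples) are arbitrary choices of mine; note also that numpy’s np.sinc is the normalized variant sin(πx)/(πx), which conveniently absorbs the sample period.

```python
import numpy as np

fs = 8_000                  # sample rate (arbitrary for the example)
T = 1.0 / fs
n = np.arange(64)           # 64 samples of a band-limited test tone
x_n = np.sin(2 * np.pi * 440 * n * T)    # 440 Hz, well below fs/2

# A fine "continuous" time grid on which to rebuild the waveform.
t = np.linspace(0, (len(n) - 1) * T, 4_000)

# Whittaker-Shannon reconstruction: the sample values x[n] act as the
# coefficients of the shifted sinc functions sinc((t - n*T)/T).
x_t = np.sum(x_n[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

# Compare against the true waveform away from the edges. The residual is
# small but not zero, because the sinc family is infinite and we kept
# only 64 of its members.
mid = (t > 8 * T) & (t < 55 * T)
print(np.max(np.abs(x_t[mid] - np.sin(2 * np.pi * 440 * t[mid]))))
```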
I have glibly stated that we can choose any family of orthogonal functions that meets some incomprehensible criteria, and fully represent our waveform by storing only the coefficients of those functions. However, this is of no practical use if our family of orthogonal functions is infinitely large, because we would then have to store an infinitely large set of coefficients. This is where the concept of “bandwidth limitation” comes in.
We are familiar with the Nyquist Criterion, which states that our waveform must contain no frequencies above one half of the sampling rate. In the context of Unser, this means that by reducing our infinitely large family of orthogonal functions to a finite set – for example by eliminating all those which correspond to frequencies above the Nyquist limit – we can represent our waveform using a finite set of coefficients. We can apply the same logic to any suitable family of orthogonal functions: by appropriately reducing the family to a finite subset, we end up with a finite set of coefficients. The smaller this set, the fewer numbers are needed to fully represent the waveform.
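A discrete stand-in for that idea, under my own assumed numbers (48kHz sampling, a 20kHz limit): zero out every Fourier coefficient above the limit and observe that the remaining, smaller set of numbers still reconstructs the band-limited portion of the waveform exactly.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
# A test signal with content below and above the chosen 20 kHz limit
# (both frequencies are arbitrary picks for the sketch).
x = np.sin(2 * np.pi * 1_000 * t) + 0.3 * np.sin(2 * np.pi * 22_000 * t)

coeffs = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# Bandwidth limitation in coefficient form: discard every member of the
# family above the limit, leaving a smaller, finite set of numbers.
kept = coeffs.copy()
kept[freqs > 20_000] = 0.0

x_limited = np.fft.irfft(kept, n=len(x))
# What survives is exactly the sub-20 kHz portion of the waveform.
print(np.max(np.abs(x_limited - np.sin(2 * np.pi * 1_000 * t))))   # ~0
```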
For the most part, this analysis appears to be useful mainly for data compression, and even there its applicability is limited. At the end of the day, information theory already tells us most of what we need to know about how much compression can actually be achieved. But where Unser’s paper gets really interesting is where it heads next.
Unser invokes the physicist’s “frog on a lily pad”. A frog attempts to cross a lake by jumping from lily pad to lily pad, where each pad lies half as far from the far side of the lake as the previous one. The mathematician says that the frog will never reach the other side, but the physicist observes that at some point the remaining gap will be so small as to be meaningless. Unser recognizes that there is a distinction between a mathematically exact representation and one where any errors in the representation are practically irrelevant.
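As a toy illustration of the physicist’s point: the remaining gap halves with every hop, and within a handful of hops it drops below any threshold one might plausibly care about; after 16 hops, for instance, it is already down at 1/65536 of the lake, the step size of 16-bit audio.

```python
# Remaining fraction of the lake after n hops: (1/2) ** n.
for n in (1, 4, 16):
    print(n, 0.5 ** n)
# 1   0.5
# 4   0.0625
# 16  1.52587890625e-05   (= 1/65536, the 16-bit quantization step)
```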
Before you get too excited, Unser does not take us anywhere immediately usable with this analysis; he merely illustrates some ways in which the observation can be accommodated within his general theorem. But the concept is an intriguing and useful one. [It has been suggested – or rather hinted at – that some of these principles may be at play within Meridian’s controversial MQA technology, but at the time of writing MQA’s inner workings remain undisclosed.] As an example, conventional Nyquist-Shannon theory requires strict bandwidth limitation, but practical anti-aliasing filters can never be perfect. Unfortunately, the “better” (i.e. steeper) the filter, the worse its time-domain and phase response will be. Unser’s analysis may provide a mathematical framework within which practical issues such as this can be formalized.
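By way of a hedged illustration of that trade-off (my own sketch, using scipy, and nothing to do with Unser’s paper or MQA): the fragment below designs two linear-phase low-pass filters at the same 20kHz corner. The steeper one needs far more taps, so its impulse response, and hence its ringing, is spread over a much longer stretch of time.

```python
import numpy as np
from scipy import signal

fs = 48_000
cutoff = 20_000   # Hz: an arbitrary anti-aliasing corner for the sketch

# A gentle filter (wide transition band) and a "better", much steeper one.
gentle = signal.firwin(63, cutoff, width=4_000, fs=fs)
steep = signal.firwin(1023, cutoff, width=250, fs=fs)

# One crude time-domain figure of merit: how long the impulse response
# stays above a thousandth of its peak value, i.e. how far the ringing
# spreads in time.
def ringing_span_ms(h):
    big = np.flatnonzero(np.abs(h) > 1e-3 * np.abs(h).max())
    return (big[-1] - big[0]) / fs * 1_000

print(ringing_span_ms(gentle))   # on the order of a millisecond
print(ringing_span_ms(steep))    # roughly ten times longer
```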