
Wiggles to Bits Part 1 Introduction

Measurement Accuracy: The Input Side of ADCs

“The world is digital!” … so I’ve been told for decades now.

I beg to differ: the world is “analog”; it is simply now common to represent the world as some configuration of bits. Of course, as the number of bits increases, the digital representation tends ever closer to the analog world. As a mathematical example, consider the Poisson probability distribution – one defined on discrete, non-negative integers. As its mean grows larger, the discrete Poisson Distribution becomes indistinguishable from the continuous Gaussian Distribution. And the world completes the circle back to analog signals …
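For the curious, here is a minimal numerical sketch of that convergence (Python with NumPy and SciPy; my illustration, not part of the original argument): it compares the Poisson distribution against a Gaussian of the same mean and variance as the mean grows.

```python
# Quick numerical check (illustration only): as the Poisson mean grows,
# its PMF approaches a Gaussian with the same mean and variance.
import numpy as np
from scipy.stats import poisson, norm

for lam in (5, 50, 500):
    k = np.arange(0, int(lam + 6 * np.sqrt(lam)))    # integer outcomes to evaluate
    pmf = poisson.pmf(k, mu=lam)                      # discrete Poisson
    pdf = norm.pdf(k, loc=lam, scale=np.sqrt(lam))    # continuous Gaussian
    max_diff = np.max(np.abs(pmf - pdf))
    print(f"lambda = {lam:4d}: max |Poisson - Gaussian| = {max_diff:.2e}")
```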

One may think of a “true” analog signal as one having an infinite Nyquist frequency and infinite bit resolution. Well, actually finite – ultimately limited by the Heisenberg Uncertainty Principle, and in practice by the representation of one electron per bit at the timing limit imposed by the spacing of the conductor lattice. At which point, the worlds of analog and digital become one.

The art of “digital” – and of “analog” – is not to that point yet …

While much can be done with post-quantization digital signal processing (DSP), much can be done with pre-quantization analog signal processing – without the structural and computational overhead. But I’ll leave a discussion of that balance point for another time. My goal here is to discuss how to remove both excessive “garbage” and excess bits from a measurement process. That requires an understanding of what “excessive” really means.

The “desired” digital information is limited in two significant ways:

1) the number of bits per sample limits the possible resolution (small changes in the signal are lost due to the coarseness of the bit magnitude), and,

2) higher-frequency information is lost (changes in amplitude that occur between samples are not captured).

On the other hand, the analog input signal also has limitations. Noise and other uncertainties in the signal limit the magnitude resolution and the bandwidth – perhaps constrained by the transducer or the environment … or by the constraints of a feasible sample frequency, which limit how fast the desired information can change and still be captured.

Most phenomena to be measured are continuous in both magnitude and time: a change in temperature does not happen in discrete steps nor at discrete time intervals. Skylight intensity changes over the course of 24 hours – slowly enough not to be noticeable from one second to the next – but a most definite change between noon and midnight. An analog signal is also continuous in both magnitude and time: the term “analog” refers to the electronic signal being an “analogue” of the physical phenomenon.

Digital information differs from analog in that it is both sampled and quantized: the information occurs at discrete intervals of time and at discrete magnitudes. There is no information between information point 1 and information point 2 in either time or magnitude. This limits the amount of information available in a digital data set.
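To make those two limitations concrete, here is a minimal sketch of an ideal ADC (the parameters – 12 bits, a 3.3 V full-scale range, a 1 kHz sample rate, a 60 Hz input – are assumptions chosen purely for illustration): anything smaller than one code step, or faster than half the sample rate, simply never appears in the data.

```python
# A minimal sketch, assuming an ideal N-bit ADC with a 0..Vref full-scale
# range and uniform sampling, showing what sampling and quantization discard.
import numpy as np

def ideal_adc(v, n_bits=12, vref=3.3):
    """Quantize a voltage array to the nearest of 2**n_bits codes."""
    codes = np.round(v / vref * (2**n_bits - 1))
    return np.clip(codes, 0, 2**n_bits - 1).astype(int)

fs = 1000.0                                     # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)                  # discrete sample instants
v = 1.65 + 1.0 * np.sin(2 * np.pi * 60 * t)     # a 60 Hz "analog" signal

codes = ideal_adc(v, n_bits=12)
lsb = 3.3 / 2**12                               # magnitude step, ~0.8 mV per code
print("first samples as codes:", codes[:5])
print(f"LSB = {lsb*1e3:.2f} mV; nothing smaller is resolved")
print(f"sample spacing = {1/fs*1e3:.1f} ms; nothing faster than fs/2 survives")
```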

It is often felt that defining the quality and quantity of the information desired will dictate the required number of bits and the sample frequency – which in turn dictate the needs of pre-quantization signal processing – but in practice, it is the characteristics of the phenomena to be measured, the transducer used to convert the phenomena to an electrical signal, and the characteristics of both the physical environment and the electronic networks that place limits on both the sample frequency and the feasible number of bits. To say nothing of available technology, money, and development time. And marketing needs.

The goal of this discussion is to show how to combine and optimize these sometimes conflicting requirements.

Introduction

As measurement instruments continue to become increasingly complex at the same time systems are under pressure to “do more with less”, it becomes necessary to re-address the issue of optimizing the methodology used to make these measurements. The advent of calculators over the past decades has inadvertently trained scientists and engineers to overlook the significance of significant figures. This has led to an apparent belief that an increase in digits corresponds to an increase in accuracy: “We need more bits to achieve the precision we need.” Adding more bits will often do no more than increase the resolution of noise.
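A quick numerical sketch of that point (the 1% RMS noise figure and the converter sizes are assumed for illustration): once the input already carries 1% noise, the total error hardly changes between an 8-bit and a 16-bit conversion – the extra codes mostly resolve the noise.

```python
# A sketch with hypothetical numbers: a clean value plus 1% RMS input noise,
# quantized at several resolutions. The RMS error stays near the noise level.
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.500                                     # fraction of full scale
noisy = true_value + rng.normal(0, 0.01, 100_000)      # 1% RMS input noise

for n_bits in (8, 12, 16):
    lsb = 1.0 / 2**n_bits
    quantized = np.round(np.clip(noisy, 0, 1) / lsb) * lsb
    rms_error = np.sqrt(np.mean((quantized - true_value) ** 2))
    print(f"{n_bits:2d} bits: LSB = {lsb:.2e}, RMS error vs. true value = {rms_error:.5f}")
```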

“Raw data” is not the output of the ADC; the closest electronic information to raw data is the output of the transduction element. That output is poked and stroked to fit into the ADC which then provides an approximate representation of the already distorted ADC input – warts and all.

Practical realities usually limit measurement accuracy in a non-laboratory environment to something near 1%. Achieving 0.1% precision throughout the signal chain is possible but not trivial – and in many cases, it’s the transducer element itself that limits the accuracy of the measurement.

For straightforward measurements, the trend towards 16-bit or higher resolution ADCs often only adds random numbers to the data stream: under ideal conditions, a 7-bit converter has a resolution of better than 1% (1 part in 128); a 12-bit converter has an ideal resolution of 0.024% (1 part in 4096). The use of lower resolution converters will not degrade measurement accuracy, lower resolution converters are easier to obtain in rad-hard versions, and data transmission and storage requirements are lessened. But the flip side is that a 16-bit converter has room to accept “slop” and still provide 12-bit precision – as long as the extra digits are dumped in later processing. One wouldn’t want to claim a measurement accuracy of 1 part in 400 (0.25%) simply because the converter has the resolution, when the input signal is only valid to 1% (10,000 ppm).

The error budget for 16-bit accuracy is small; one 16-bit LSB represents 0.0015% – 15 ppm. This places stringent requirements on all components in the signal chain … and on the physical construction of the signal chain … and on the environment in which the system operates … and on the equipment with which the system is tested.
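The arithmetic behind these figures is simply one LSB out of full scale, i.e. 1/2^N; a few lines of Python reproduce the numbers quoted above.

```python
# One LSB as a fraction of full scale, expressed as percent and ppm,
# for the converter sizes discussed above.
for n_bits in (7, 12, 16):
    frac = 1.0 / 2**n_bits
    print(f"{n_bits:2d} bits: 1 part in {2**n_bits:5d} = {frac*100:.4f}% = {frac*1e6:.1f} ppm")
```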

Much data processing, storage, transmission, and interpretation time can be significantly reduced – and the resulting information enhanced – by reconsidering the signal chain.

The consequences of improvements in electronics manufacturing technology have been most visible in the proliferation of portable devices and personal computers: Dick Tracy’s fictional technology of the 1930s was left in the dust decades ago. Lesser-used capabilities are lesser known: few people in the digital field recall the electronic networks that easily deal with many mathematical operations – the process has been presented as “numerical methods” in recent years. Summation/subtraction, multiplication/division, integration/differentiation, logarithms/exponents are among the many operations that can be accomplished in “real-time” hardware without a computer/software system; the computer can control the operational parameters and process quantized results rather than expend unnecessary effort in processing unneeded bits. Quantize the mathematical result: Why quantize (and store) “1” and “2”, then later process “add”, when only the result “3” need be processed?
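As one concrete illustration of summation in hardware – the textbook inverting summing amplifier, not a circuit described in this article – the op-amp forms the weighted sum in the analog domain, so only the single result ever needs to be quantized. A small sketch of the ideal relationship:

```python
# Ideal inverting summing amplifier: Vout = -Rf * sum(Vi / Ri).
# The weighted sum exists as a single analog voltage before any ADC sees it.
def inverting_summer(v_inputs, r_inputs, r_feedback):
    """Return the output voltage of an ideal op-amp inverting summer."""
    return -r_feedback * sum(v / r for v, r in zip(v_inputs, r_inputs))

# "1" and "2" summed in the analog domain; only "3" (inverted) reaches the ADC.
print(inverting_summer([1.0, 2.0], [10e3, 10e3], 10e3))   # -> -3.0
```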

But let’s not discuss the possibilities of alternative methods of digital signal processing right now; let’s discuss the consequences of input uncertainty on data conversion.


That’s good for now
