ADC 1 – Significant Digits

Preachin’ to the choir (I hope).

One of the auxiliary concerns of my professional specialty is the problem of transferring and interpreting large amounts of data. Of what value is a set of measurements if so much data is collected that it cannot be transmitted efficiently – perhaps from limitations of bandwidth or storage capacity – or properly interpreted – perhaps due to the sheer effort of reducing terabytes of data to a manageable form, much of which goes into tweaking or pruning the data set?

As a practitioner who develops methods of making the measurement, I become responsible for considering part of a possible “solution” … and two possible methodologies come to mind right off. One is to leave behind the concept of “digitize immediately, process later” and use more analog signal processing before quantization (“opamp” is shorthand for “operational amplifier”, where “operational” refers to the performance of mathematical operations rather than an “operating” amplifier). But I’ll leave that topic for another discussion.

The other method is to match the quantization to the signal integrity. Beyond a certain level – usually lower than expected – an increase in the number of bits only increases the resolution of the noise; it does not increase the measurement accuracy or precision. If I need to know the value of π to 4 decimal places (3.1416), I really have no need for my calculator to show 16 decimal places (3.1415 9265 3589 7932) – and even less need to transmit those extra digits.
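As a rough illustration of that trade-off – a back-of-the-envelope sketch of my own, not a formal result – here is the bit count that a given measurement accuracy can actually justify:

    # Sketch: how many ADC bits does a given measurement accuracy justify?
    # (Illustrative only; ignores oversampling, dithering, noise shaping, etc.)
    import math

    def justified_bits(relative_accuracy):
        """Bits needed so one code step is no coarser than the stated accuracy."""
        levels = 1.0 / relative_accuracy       # distinguishable levels required
        return math.ceil(math.log2(levels))

    for acc in (0.01, 0.001, 0.0001):          # 1 %, 0.1 %, 0.01 %
        print(f"{acc:.2%} accuracy -> {justified_bits(acc)} bits")

    # A 1 % measurement justifies only about 7 bits; a 16-bit word adds
    # resolution to the noise, not accuracy to the measurement.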

There is no Nobel Prize buried in additional bits.

Herein I use “decimal places” to indicate the number of digits to the right of the decimal point – which is different from “significant digits”. Knowing a value to 3 significant digits doesn’t directly tell me the resolution of the number: 123, 12.3, 1.23, 0.123 ???

 

“The significant digits of a number are those that carry meaning about the number’s precision. This excludes leading zeroes, trailing zeroes beyond the number’s precision, and spurious digits carried over from calculations performed at a greater precision than that of the original information.”

 

In scientific/engineering applications, the digits to the right of a decimal point – including zeroes – define the degree of uncertainty. This uncertainty may also be defined as “precision”.

But there may be a bit of confusion here – the numbers 543.21 and 5.4321 both have 5 significant digits, but the first implies a resolution of 1/100, the second of 1/10,000. Not the same thing. Do I need a measurement resolution of 10 mA or 100 μA? (In “scientific notation”, these numbers would be represented as 5.4321 × 10^2 and 5.4321 × 10^0. These appear to have the same number of decimal places – but they don’t have the same resolution.) I would describe the 1st as having two decimal places; the 2nd as having 5 decimal places. An “engineering” form of numbers uses powers-of-10 in multiples of 3: 1000 Ω is more often presented as 1 kΩ than as 1 × 10^3 Ω. 1 μA has the same value as 1e-6 A. 123.45 μA has the same resolution as 3.45 μA. And so on …
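A small sketch of that “engineering” form – a home-brew formatter (my own helper, nothing standard) that prints a value with an explicit significant-digit count and an exponent restricted to multiples of 3:

    # Sketch: format a value with an explicit significant-digit count,
    # using engineering notation (exponents in multiples of 3).
    from math import floor, log10

    def eng_format(value, sig_digits=5):
        if value == 0:
            return f"{0:.{sig_digits - 1}f}e0"
        exp = floor(log10(abs(value)))            # decimal exponent
        eng_exp = 3 * (exp // 3)                  # pull down to a multiple of 3
        mantissa = value / 10 ** eng_exp
        # digits after the point = significant digits minus digits before it
        decimals = max(sig_digits - (exp - eng_exp + 1), 0)
        return f"{mantissa:.{decimals}f}e{eng_exp}"

    print(eng_format(543.21))         # 543.21e0  (5 sig digits, 2 decimals)
    print(eng_format(5.4321))         # 5.4321e0  (5 sig digits, 4 decimals)
    print(eng_format(0.0000012, 3))   # 1.20e-6   (1.20 uA if the value is in amps)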

 

The advent of computers and calculators in the 1970s has led to an explosion of significant digits far beyond the justifiable precision of a calculation, except in the most critical of applications. For example, NIST lists the electron charge (often designated “q”) as having the value 1.6021766208 × 10^-19 C with an uncertainty of 0.000 000 0098 × 10^-19 C, such that the formal value of q is 1.6021766208(98) × 10^-19 C. Yet a magnitude of 1.6022 × 10^-19 C is good to more precision than usually necessary for basic engineering purposes.

Per the IEEE “Standard for Floating-Point Arithmetic” (IEEE 754), arithmetic operations are performed as if to infinite precision and then rounded to the destination format. There are four basic methods for defining the “correct” answer (a short demonstration follows the list):

1) round to the nearest value; on a tie, choose the value whose least-significant bit is 0 [“round to nearest, ties to even” – the default for floating-point operations]
2) round toward +∞  [“Ceiling”]
3) round toward -∞  [“Floor”]
4) round toward 0 by truncation
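These directions are easy to see with Python’s decimal module, which exposes equivalent rounding options (a sketch only – the floating-point hardware applies the same ideas at the binary level):

    # Sketch: the four rounding directions, shown with the decimal module.
    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_CEILING, ROUND_FLOOR, ROUND_DOWN

    x = Decimal("1.2345")
    step = Decimal("0.001")                       # round to three decimal places

    print(x.quantize(step, ROUND_HALF_EVEN))      # 1.234  (ties to even - default)
    print(x.quantize(step, ROUND_CEILING))        # 1.235  (toward +infinity)
    print(x.quantize(step, ROUND_FLOOR))          # 1.234  (toward -infinity)
    print(x.quantize(step, ROUND_DOWN))           # 1.234  (toward zero / truncate)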

 

Consider that “rounding” is the process of replacing the “exact” value with a shorter representation: π = 3.1416 rather than π = 3.1415926…  It serves the purpose of avoiding a misrepresentation of precision – a computed value of 1.23456 may actually only have a precision of 1/100, or 1.23.

There can be quite an extensive discussion on rounding rules – made more complex in computer arithmetic by variations in the way programming languages define “rounding”. One common “by-hand” method reduces the number to the lowest allowable significant digit of precision in a single step: if the first discarded digit is 5 or greater, the last retained digit is rounded up. If the precision is 1/100, a number such as 1.23456 is reduced directly to 1.23 – not successively as 1.2346 → 1.235 → 1.24. 1.235 would go to 1.24. A negative number follows the same pattern: -1.23456 → -1.23, and -1.235 would go to -1.24. An “exact” number – say 1.2 – would be presented as 1.20.
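A small sketch of that distinction – rounding to the target precision in one step rather than repeatedly shaving one digit at a time (again using the decimal module so the half-up rule described above is explicit):

    # Sketch: round to the target precision in one step, not digit by digit.
    from decimal import Decimal, ROUND_HALF_UP

    def round_by_hand(value, places=2):
        """Half-away-from-zero rounding, applied in a single step."""
        step = Decimal(10) ** -places
        return Decimal(str(value)).quantize(step, ROUND_HALF_UP)

    print(round_by_hand(1.23456))    # 1.23   (not 1.2346 -> 1.235 -> 1.24)
    print(round_by_hand(1.235))      # 1.24
    print(round_by_hand(-1.23456))   # -1.23
    print(round_by_hand(-1.235))     # -1.24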

 

The number of useful significant digits is determined by the accuracy of the measurement; the measure is precise only to the degree defined by the least significant digit: a reading of 12.34 implies a resolution of 0.01, and the true value may lie anywhere between 12.335 and 12.345 (±0.005, or half of the least significant digit). This reading has four significant digits and a resolution of 1/100.

If a measure is expressed in a manner similar to “253,400”, it is unclear how many significant digits exist – the implication is a resolution of ±1, but it might be ±100. If the measurement is only accurate in units of 100, it would be more appropriate to express the quantity as 253.4 kUnits: the number of significant digits is then explicit and the resolution is ±100.

A decimal point is an indicator of precision: the number 100 is of ambiguous precision (though implied to be ±1); the number 100. – with the decimal point shown – has precision to units; the number 100.0 has precision to tenths. This may be one of the more common ambiguities: the expression “100” is often taken to imply precision to units … the same could be expressed as 100 ±1%.
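A toy sketch of how much (or how little) the written form alone tells you – a quick parser of my own that reports the resolution implied by a decimal string:

    # Sketch: the resolution implied by how a number is written down.
    def implied_resolution(text):
        """Place value of the last written digit, or None if ambiguous."""
        if "." not in text:
            return None                   # '100' or '253400': can't tell
        decimals = len(text.split(".")[1])
        return 10 ** -decimals if decimals else 1   # '100.' -> units

    for s in ("100", "100.", "100.0", "253400"):
        print(f"{s:>8} -> {implied_resolution(s)}")
    #    100    -> None   (ambiguous: +/-1? +/-100?)
    #    100.   -> 1
    #    100.0  -> 0.1
    #    253400 -> None   (ambiguous: +/-1? +/-100?)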

Regardless of the number of expressed digits, a calculation is no more accurate than the least accurate element of the calculation. Most physical measurements are no more accurate than 0.1% – 1%, usually limited by the transducer. Intermediate calculations may carry an additional digit or two to minimize cumulative round-off errors, but the final result still has no better accuracy than the transducer – and quite likely significantly less. I have observed claims of data accuracy better than 0.25% when the original information error was likely greater than 1%. Just say No.
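One way to see why – the standard worst-case first-order estimate (not tied to any particular measurement): for a product or quotient, the relative errors of the terms add, so the result is never better than its least accurate input:

    \begin{displaymath} \frac{\Delta z}{\, |z| \,} \; \lesssim \; \frac{\Delta x}{\, |x| \,} \, + \, \frac{\Delta y}{\, |y| \,} \qquad \text{for} \quad z = x \, y \;\; \text{or} \;\; z = x / y \end{displaymath}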

But accurate passive transducers do exist. Platinum resistance thermometers (PRTs) are among the most accurate transducers, with a possible maximum error of 0.13°C (or lower) at 0°C over a range of -200°C to +500°C. The PRT temperature coefficient may be on the order of 3900 ppm/°C. (Other methods of accurately measuring temperature exist … a subject for a different time.)

 

Consider a measure of mass and volume. The density is calculated as:

    \begin{displaymath} \text{density} \; = \; \frac{\text{mass}}{\, \text{volume} \, } \end{displaymath}

Measurements are made: the mass is measured as 2.34 g and the volume as 5.6 cm³.

The density is not:

    \begin{displaymath} \text{density} \; = \; \frac{\, 2.34 \,}{5.6} \, \frac{g}{\, cm^3 \, } \; = \; 0.417857 \; \frac{g}{\, cm^3 \, } \end{displaymath}

as my calculator states – this result implies a much greater precision than the measures justify. Nor would it be valid to claim the density is resolved to 1 μg/cm³.

When multiplying or dividing, the answer is no more precise than the least precise term: the density is properly stated as:

    \begin{displaymath} \text{density} \; = \; \frac{\, 2.34 \,}{5.6} \, \frac{g}{\, cm^3 \, } \; = \; 0.42 \; \frac{g}{\, cm^3 \, } \end{displaymath}

which may be stated to be accurate to 10 mg/cm³.
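A sketch of that rule in code – round the result to the significant-digit count of the least precise factor (round_sig is my own helper, not a standard function):

    # Sketch: multiplication/division keeps the significant digits of the
    # least precise term (2.34 has 3 sig digits, 5.6 has 2 -> keep 2).
    from math import floor, log10

    def round_sig(value, sig_digits):
        """Round a value to the given number of significant digits."""
        if value == 0:
            return 0.0
        return round(value, sig_digits - 1 - floor(log10(abs(value))))

    mass, volume = 2.34, 5.6          # g, cm^3
    density = mass / volume           # 0.4178571... per the calculator
    print(round_sig(density, 2))      # 0.42 g/cm^3 - all the measures justify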

Similarly, when adding or subtracting, the correct result is precise only to the precision (decimal places) of the least precise measure:

    \begin{displaymath} \text{answer} \; = \; 12.3 \, + \, 0.64 \; = \; 12.9 \; \neq \; 12.94 \end{displaymath}
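And a matching sketch for the addition rule – keep only as many decimal places as the least precise term (the 0.64 is just the illustrative value from the equation above):

    # Sketch: addition/subtraction keeps the decimal places of the least
    # precise term (12.3 has one decimal place, 0.64 has two -> keep one).
    a, a_places = 12.3, 1
    b, b_places = 0.64, 2

    result = round(a + b, min(a_places, b_places))
    print(result)     # 12.9, not 12.94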

The number 1200 has two significant digits and is precise to 100s (as in: 1.2 × 10^3); the number 1200. has four significant digits and is precise to units (as in: 1.200 × 10^3); the number 1200.0 has five significant digits and is precise to tenths (as in: 1.2000 × 10^3). Same number, same accuracy (exact) – but different resolutions and precisions.

This becomes a problem when these digital representations of phenomena are used for calculations.

The processing, storing, and transmitting of excess digits represent a waste of resources …

 

Wow! What a long-winded way of stating that 16-bit data is no more accurate than 7-bit data for a 1% accurate number.

Part 2

That’s good for now.
