# How to understand the accuracy specifications of electronic measuring instruments?

Accuracy is the most important indicator of the performance of electronic measuring instruments, and it usually consists of two parts: reading accuracy and range accuracy. This article uses several specific cases to describe how errors arise, how they are calculated, and how they are calibrated. A correct understanding of the accuracy specification will help you choose a suitable instrument.

## 1 Definition of measurement error

Errors are commonly expressed in three ways: absolute error, relative error, and reference error.

1. Absolute error:

Definition: the difference between the measured value x* and the true value x is called the absolute error of the approximate value x*, denoted ε.
Calculation formula: absolute error = measured value − true value;

2. Relative error:

Definition: the ratio of the absolute error of the measurement to the (conventional) true value of the measurand, expressed as a percentage.
Calculation formula: relative error = (measured value − true value) / true value × 100% (that is, the absolute error as a percentage of the true value);

3. Reference error

Definition: the ratio of the absolute error of the measurement to the full-scale value of the instrument is called the reference error of the instrument, usually expressed as a percentage.

Calculation formula: reference error = (maximum absolute error / full-scale value) × 100%

The smaller the reference error, the higher the accuracy of the instrument. Because the reference error depends on the instrument's range, a common practice with instruments of the same accuracy class is to select a smaller range so as to reduce the measurement error.
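
As a quick sketch, the three error measures defined above can be computed directly. The function names are my own, and the numeric values are the illustrative ones used in the example that follows:

```python
# Hypothetical helper functions for the three error measures defined above.

def absolute_error(measured, true):
    """Absolute error = measured value - true value."""
    return measured - true

def relative_error(measured, true):
    """Relative error as a percentage of the true value."""
    return (measured - true) / true * 100

def reference_error(max_abs_error, full_scale):
    """Reference error as a percentage of the full-scale value."""
    return max_abs_error / full_scale * 100

# Illustrative values: reading 1.005 V, true value 1 V, 10 V range.
err = absolute_error(1.005, 1.0)
print(f"absolute error:  {err:+.3f} V")                             # +0.005 V
print(f"relative error:  {relative_error(1.005, 1.0):.2f} %")       # 0.50 %
print(f"reference error: {reference_error(abs(err), 10.0):.3f} %")  # 0.050 %
```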

### Example 1

A multimeter reads 1.005 V when measuring a voltage whose true value is assumed to be 1 V. The multimeter's range is 10 V and its accuracy (reference error) is 0.1% F.S. Is the test error of the multimeter within the allowable range?

The analysis process is as follows:

Absolute error: E = 1.005 V − 1 V = +0.005 V;
Relative error: δ = 0.005 V / 1 V × 100% = 0.5%;
Maximum allowed error of the multimeter: 10 V × 0.1% F.S. = 0.01 V.

Because the absolute error of 0.005 V is smaller than the 0.01 V maximum allowed error, the test error of the multimeter is within the allowable range.

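
The check in this example can be sketched numerically, using the example's assumed values:

```python
# Checking the example above: is the measurement error within spec?
# All values are the assumed figures from the example.
true_value = 1.0        # V, assumed true value
measured   = 1.005      # V, multimeter reading
full_scale = 10.0       # V, selected range
spec       = 0.1 / 100  # accuracy specification: 0.1% F.S.

allowed_error = full_scale * spec       # maximum allowed error in volts
abs_error     = measured - true_value   # actual absolute error

print(f"allowed: {allowed_error:.2f} V, actual: {abs_error:+.3f} V")
print("within spec:", abs(abs_error) <= allowed_error)  # within spec: True
```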
## 2 The generation of measurement errors

The absolute error exists objectively, but because the true value can never be known exactly, the absolute error cannot be determined exactly either. Error is therefore unavoidable, although it can be made as small as possible.

Error can be divided into a random component and a systematic component, that is: error = measurement result − true value = systematic error + random error.
Therefore, any error can be decomposed into the algebraic sum of a systematic error and a random error:

1. Systematic error

Definition: the difference between the mean of the results that would be obtained from an infinite number of measurements of the same measurand under repeatability conditions and the true value of the measurand.

Causes: inherent errors of the measuring tools (or instruments), defects in the measuring principle or method, the experimental procedure itself, and limitations in the physiological and psychological condition of the experimenters.

Features: under the same measurement conditions, repeated measurements are consistently too large or consistently too small, and the error is either constant or varies according to a definite law.

Mitigation: change the measuring tool or the measurement method, or apply a correction value to the measurement results.
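
The correction-value approach can be sketched as follows. The 0.03 V offset and the readings are made-up values: once calibration against a known standard reveals a constant offset, it is simply subtracted from later readings.

```python
# Hypothetical constant systematic error (offset) found by calibrating
# against a known standard; all numbers here are illustrative.
known_offset = 0.03  # V, the instrument reads about 0.03 V too high

def corrected(reading):
    """Apply the calibration correction to a raw reading."""
    return reading - known_offset

raw_readings = [1.032, 1.028, 1.031]  # raw readings, all biased high
print([round(corrected(r), 3) for r in raw_readings])  # [1.002, 0.998, 1.001]
```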

2. Random error

Definition: random error, also called accidental error, is the difference between a single measurement result and the mean of a large number of repeated measurements of the same measurand.

Cause: even in the ideal case where systematic errors are completely eliminated, repeated measurements of the same measurand still show errors caused by various accidental, unpredictable disturbing factors.

Features: when the same measurand is measured repeatedly, the errors fluctuate irregularly; each deviation may be positive or negative, and its magnitude varies without any pattern.

However, the distribution of these errors obeys statistical laws and shows three characteristics:

Unimodality: small errors occur more often than large ones;
Symmetry: positive and negative errors are equally probable;
Boundedness: the probability of a very large error is practically zero.

Mitigation: the distribution law of random errors shows that increasing the number of measurements and processing the results statistically (for example, by averaging) reduces the random error.
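
This averaging effect can be simulated; the true value, noise level, and measurement counts below are arbitrary assumptions:

```python
# Simulating how averaging repeated measurements shrinks random error.
import random

random.seed(0)     # fixed seed so the run is reproducible
true_value = 1.0   # assumed true value of the measurand
noise_sd   = 0.01  # assumed standard deviation of the random error

def measure():
    """One measurement: true value plus Gaussian random error."""
    return true_value + random.gauss(0.0, noise_sd)

single   = measure()
mean_100 = sum(measure() for _ in range(100)) / 100

print("error of one reading:       ", abs(single - true_value))
print("error of the 100-point mean:", abs(mean_100 - true_value))
# The mean's error is typically about 10x smaller (it scales as 1/sqrt(n)).
```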

## 3 Precision, trueness, and accuracy

Accuracy and error are twins: the concept of accuracy exists only because error exists. In short, the accuracy of an instrument is how closely its measured value approaches the true value, and it is usually expressed as a relative percentage error (also called the relative reduced error).

1. The size of the random (accidental) error reflects the precision of the measurement

If the same measuring tool and method are used for repeated measurements under the same conditions and the random error is small, that is, the individual results fluctuate little, the measurement has good repeatability: its precision and stability are said to be good.

2. The size of the systematic error reflects the trueness the measurement can achieve

According to error theory, as the number of measurements increases without bound, the random error can be made to approach zero; the deviation of the resulting measurement from the true value, that is, the trueness of the measurement, then depends entirely on the size of the systematic error.

3. Accuracy is the general term covering both the trueness and the precision of a measurement

In actual measurement, the dominant influence on accuracy may be the systematic error, the random error, or both at once. In many measuring instruments the everyday notion of accuracy therefore covers both systematic and random error; for example, instruments are commonly graded into accuracy classes.

## 4 Instrument accuracy class and range

Accuracy is a very important quality indicator of an instrument, and it is usually specified as an accuracy class. The accuracy class is the maximum relative percentage (reference) error with the sign and the % symbol removed. Under the national unified regulations the classes are 0.02, 0.05, 0.1, 0.2, 1.5, and so on; the smaller the number, the more accurate the instrument.

The accuracy of an instrument is related not only to its absolute error but also to its measuring range. Of two instruments with the same absolute error but different ranges, the one with the larger range has the smaller relative percentage error and therefore the higher accuracy; conversely, of two instruments with the same accuracy class, the one with the larger range has the larger absolute error.

## 5 Choosing accuracy in application

In practice, the range and accuracy of the instrument should be chosen according to the actual measurement task. The instrument with the smallest accuracy-class number does not necessarily give the best measurement result.

### Example 2

Example: a 10 V standard voltage is to be measured with two multimeters, one on its 100 V range with accuracy class 0.1 and the other on its 15 V range with accuracy class 0.5. Which meter gives the smaller measurement error?

Solution: the maximum allowed absolute error of the first meter is
ΔX1 = ±0.1% × 100 V = ±0.10 V.
The maximum allowed absolute error of the second meter is
ΔX2 = ±0.5% × 15 V = ±0.075 V.

Comparing ΔX1 and ΔX2 shows that although the first meter has the better accuracy class, it produces the larger measurement error in this case. When selecting instruments, therefore, a higher accuracy class is not automatically better; an appropriate range must be chosen as well. Only with the correct range can an instrument's accuracy potential be realized.
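
The arithmetic of this example reduces to one rule, maximum allowed absolute error = accuracy class (%) × range, which can be sketched as:

```python
# Reproducing the example above: error limit of each meter.
def max_abs_error(accuracy_class, meter_range):
    """Accuracy class is in percent; returns the error limit in volts."""
    return accuracy_class / 100 * meter_range

dx1 = max_abs_error(0.1, 100)  # class 0.1 meter on its 100 V range
dx2 = max_abs_error(0.5, 15)   # class 0.5 meter on its 15 V range

print(round(dx1, 3), round(dx2, 3))  # 0.1 0.075
print("smaller error:",
      "class 0.5 / 15 V meter" if dx2 < dx1 else "class 0.1 / 100 V meter")
```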