When converting continuous (analogue) signals into discrete (digital) signals, we need to give our MCU code a mathematical function that converts the binary input from the ADC into a value that is meaningful and useful to us.
For example, when a multimeter measures a current, the MCU receives a voltage at its pins that is converted into bits by the ADC; the MCU software then applies a mathematical transfer function that takes into account the preceding analogue circuit and converts the bits into amperes, which are then shown on the screen.
For the particular current range that you select, a static transfer function is used. However, as we saw in the WCA article, this transfer function varies with the tolerances of the elements that make it up.
In the previous example, these toleranced elements would be the shunt resistor, the op-amp and the ADC voltage reference. Because a multimeter is a high-precision device, it needs to ship with the lowest measurement error possible, so a calibration process is required. Calibration removes the fixed errors, such as the resistor and ADC Vref tolerances and the op-amp input offset voltage. However, some errors, such as temperature drift, cannot be removed by this method.
This simple method is valid for functions such as voltage and current measurement, which have a directly proportional relationship with their measuring circuit:
- A voltage divider's output voltage always increases when the divider's input voltage increases
- The voltage drop across a shunt resistor always increases when the current flowing through it increases
For our Micro Course, we will assume that we have a voltage divider made of two 10 kΩ, 5% tolerance resistors to scale down a 9 V battery voltage so that it can be measured by the MCU ADC in counts (for example, 652 counts).
We will use an Arduino, a variable power supply and a multimeter, as shown in the following photo: