Whew! This is a complex topic.
First, the ADC is effectively 11-bit. The 12-bit claim for the ADS1015 is misleading, since the 12th bit is the sign bit for the differential feature and contributes no extra resolution in single-ended mode; it just disambiguates between positive and negative readings.
Additionally, the full-scale range of the 0-2047 ADC is 0-4.096v, so when measuring a 3.3v signal you only get a range of around 0-1649 (2047 / 4.096 * 3.3) possible values to represent the 0 to 25.85v input voltage supported by Automation HAT/pHAT.
25.85v / 1649 = 0.0157v
which is the finest granularity, or best accuracy, that this setup is capable of achieving. But it's not that simple: at 3.3v the granularity is 0.002v (2mv), but when you scale that up to 25.85v in software it becomes about 0.0157v per step (the value has been snapped to the coarser granularity and then multiplied up). This is quantization error.
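To see the quantization step in action, here's a minimal Python sketch (the names are mine, and it assumes the ADC truncates rather than rounds):

```python
GAIN = 4.096           # ADS1015 full-scale range in volts (4.096v PGA setting)
STEPS = 2048           # 11 usable bits in single-ended mode
LSB = GAIN / STEPS     # 0.002v (2mv) per ADC step

def quantize(volts):
    # Snap a pin voltage down to the nearest ADC step (truncation assumed)
    return int(volts / LSB) * LSB

pin_voltage = 1.8647               # example voltage at the ADC pin
snapped = quantize(pin_voltage)    # 1.864 - the 0.0007v remainder is lost
scaled = snapped * 25.85 / 3.3     # scaling up also scales the snapped step
```

Each 2mv step at the pin becomes a ~0.0157v step after the scale-up, which is where the granularity figure above comes from.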
Additionally, since the maximum input voltage of 25.85v is scaled down to a maximum of 3.3v via an onboard resistor voltage divider (one for each of the three 24v-tolerant channels), you have to account for the tolerance of those resistors. At ±1%, the scaled value of 25.85v could range from roughly 3.24v to 3.36v - unsurprisingly, around ±1%.
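The divider maths can be sketched like so (a hypothetical helper of my own, not the Automation HAT library's code, assuming the 120k resistor is the lower leg the ADC measures across):

```python
R_A = 120e3   # lower leg of the divider (the ADC measures across this), 120k
R_B = 820e3   # upper leg, 820k

def divider_out(v_in, tol_a=1.0, tol_b=1.0):
    # Voltage at the ADC pin for a given input and resistor tolerance multipliers
    ra, rb = R_A * tol_a, R_B * tol_b
    return v_in * ra / (ra + rb)

nominal = divider_out(25.85)             # exactly 3.3v with perfect resistors
high = divider_out(25.85, 1.01, 0.99)    # rA +1%, rB -1%: ~3.36v
low = divider_out(25.85, 0.99, 1.01)     # rA -1%, rB +1%: ~3.24v
```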
Here are the calculated worst case scenario variances for 1% resistor tolerance on the 120k and 820k resistors used:
| in (v) | rA (kΩ) | rB (kΩ) | tA | tB | out (v) | internal | result (v) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 14.6 | 120 | 820 | 1 | 1 | 1.863829787 | 931 | 14.73677619 |
| 14.6 | 120 | 820 | 1.01 | 1 | 1.880067998 | 940 | 14.87923697 |
| 14.6 | 120 | 820 | 1 | 1.01 | 1.847711453 | 923 | 14.61014438 |
| 14.6 | 120 | 820 | 1.01 | 1.01 | 1.863829787 | 931 | 14.73677619 |
| 14.6 | 120 | 820 | 1.01 | 0.99 | 1.89659164 | 948 | 15.00586877 |
| 14.6 | 120 | 820 | 0.99 | 1.01 | 1.83155227 | 915 | 14.48351258 |
| 14.6 | 120 | 820 | 0.99 | 0.99 | 1.863829787 | 931 | 14.73677619 |
| 14.6 | 120 | 820 | 0.99 | 1 | 1.847550064 | 923 | 14.61014438 |
| 14.6 | 120 | 820 | 1 | 0.99 | 1.880231809 | 940 | 14.87923697 |
Note: these are worst-case variances of 1% in either direction; in reality the tolerances fall anywhere from -1% to +1% and will sometimes cancel each other out (although you can see above that quantization error still causes inaccuracy even when they do).
For example: assuming an input of 14.6v, the ADC could be seeing 1.831v, which is represented as 915 in the internal register.
If I run this through all the adjustment calculations that convert that value back into a usable voltage in our input range:
READING = 915
SCALE = 2047
GAIN = 4096
VCC = 3300
VMAX = 25.85
result = (((READING / SCALE) * GAIN) / VCC) * VMAX
Or:
(((915/2047)*4096)/3300) * 25.85 = 14.342
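Wrapped up as a Python function (mirroring the calculation above; the function name is mine):

```python
SCALE = 2047    # maximum positive ADC reading (11 usable bits)
GAIN = 4096     # full-scale range in millivolts
VCC = 3300      # divider output ceiling in millivolts
VMAX = 25.85    # input range mapped onto 0-3.3v

def reading_to_volts(reading):
    # Convert a raw ADC register value back to an input-range voltage
    return (((reading / SCALE) * GAIN) / VCC) * VMAX

voltage = reading_to_volts(915)    # ~14.342v for the worst case above
```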
So it’s a mix of quantization error, resistor tolerance and potentially other minor factors.
To summarise:
- The finest granularity is around 0.0157v per step (quantization error)
- The voltage divider gives a worst-case accuracy of ±1%
- Combined, this gives an overall accuracy of around ±3%
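For a rough end-to-end feel, the divider tolerance and quantization steps can be combined into one sketch. This is my own illustration, assuming a truncating ADC and the conversion formula above (the spreadsheet table used its own rounding, so its result column differs slightly):

```python
from itertools import product

def measure(v_in, tol_a, tol_b):
    # Divider output at the ADC pin for the given resistor tolerance multipliers
    pin = v_in * (120e3 * tol_a) / (120e3 * tol_a + 820e3 * tol_b)
    # Truncate to an 11-bit code on the 4.096v full-scale range
    reading = int(pin / 4.096 * 2048)
    # Convert back to an input-range voltage, as in the formula above
    return (((reading / 2047) * 4096) / 3300) * 25.85

readings = [measure(14.6, ta, tb)
            for ta, tb in product((0.99, 1.0, 1.01), repeat=2)]
spread = min(readings), max(readings)    # worst-case window around 14.6v
```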