Automation HAT Accuracy

Measuring a nominal 12V battery, my meter reads 14.60, but the Automation HAT returns 14.46. Precision isn’t the problem; the measurements are reasonably consistent. But that lack of accuracy is disconcerting, given that this is a 12-bit ADC with enough resolution to handle two decimal places. Even if I round 14.46 to one decimal place, that’s still 14.5, not 14.6.

Currently I’m compensating by simply adding 0.14 to the measurements. But can anyone with ADC experience chime in? That ADS1015 ADC has all kinds of features not exposed in the Python API. Perhaps more advanced calibration is one of them?

How are you taking the 14.60 reading? Do you know what the margin of error for that piece of equipment is? Are you using the same wires to take the measurement with Automation HAT? If not, do you know how the voltage drop across the two different sets of wire varies?

(Sorry for all the questions! Just want to get a handle on where the error might be getting introduced.)

Whew! This is a complex topic.

First, the ADC is effectively 11-bit: the 12-bit claim of the ADS1015 is misleading, since the 12th bit is the sign bit for the differential feature and contributes no resolution in single-ended mode; it just disambiguates between positive and negative readings.

Additionally, the full-scale range of the 0-2047 ADC at this gain setting is 0-4.096v, but with a 3.3v supply only around 0-1649 (2047 / 4.096 * 3.3) of those values are usable to represent the 0 to 25.85v input range supported by Automation HAT/pHAT.

25.85v / 1649 = 0.01567v, which is the finest granularity, or best resolution, that this setup is capable of achieving. But it’s not that simple: at the ADC itself the granularity is 0.002v (2mV), and when that is scaled up to the 25.85v range in software each step becomes roughly 0.0157v (the value has been snapped to the coarser granularity and then multiplied up). This is quantization error.
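To make that arithmetic concrete, here’s a minimal Python sketch that just restates the constants above (nothing here comes from the library):

# Quantization step sizes for the Automation HAT analog input.
FSR = 4.096      # ADS1015 full-scale range in volts at this gain setting
COUNTS = 2047    # usable single-ended codes (11 bits)
VCC = 3.3        # supply voltage; readings above this are clipped
VMAX = 25.85     # input range after the onboard divider

step_at_adc = FSR / COUNTS            # ~0.002v (2mV) per code
usable_codes = COUNTS / FSR * VCC     # ~1649 codes cover 0-3.3v
step_at_input = VMAX / usable_codes   # ~0.0157v per code after scaling

print(step_at_adc, usable_codes, step_at_input)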

Additionally, since the maximum input voltage of 25.85v is scaled down to a maximum of 3.3v via an onboard resistor voltage divider (one for each of the three 24v-tolerant channels), you have to account for the tolerance of those resistors. At ±1% per resistor, the scaled value of 25.85v could range from roughly 3.24v to 3.36v - nearly ±2%, since the two resistors can err in opposite directions.

Here are the calculated worst case scenario variances for 1% resistor tolerance on the 120k and 820k resistors used:

in (V)   rA (kΩ)   rB (kΩ)   tA     tB     out (V)       internal   result (V)
14.6     120       820       1.00   1.00   1.863829787   931        14.73677619
14.6     120       820       1.01   1.00   1.880067998   940        14.87923697
14.6     120       820       1.00   1.01   1.847711453   923        14.61014438
14.6     120       820       1.01   1.01   1.863829787   931        14.73677619
14.6     120       820       1.01   0.99   1.89659164    948        15.00586877
14.6     120       820       0.99   1.01   1.83155227    915        14.48351258
14.6     120       820       0.99   0.99   1.863829787   931        14.73677619
14.6     120       820       0.99   1.00   1.847550064   923        14.61014438
14.6     120       820       1.00   0.99   1.880231809   940        14.87923697

(tA/tB are the tolerance multipliers applied to rA/rB; internal is the raw ADC code.)

Note: These are worst-case variances of 1% in either direction; in reality the tolerances fall anywhere from -1% to +1% and will sometimes cancel each other out (although, as the table shows, quantization error still causes inaccuracy even when they do).
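If you want to check these numbers yourself, here’s a Python sketch that reproduces the out and internal columns (converting internal back into a voltage is covered just below):

# Worst-case divider output and raw ADC code for ±1% resistor tolerance.
from itertools import product

VIN = 14.6              # input voltage being measured
RA, RB = 120.0, 820.0   # divider resistors in kilohms
FSR = 4.096             # ADC full-scale range in volts
COUNTS = 2047           # usable single-ended codes

for ta, tb in product((1.0, 1.01, 0.99), repeat=2):
    ra, rb = RA * ta, RB * tb
    out = VIN * ra / (ra + rb)            # voltage actually seen by the ADC
    internal = round(out / FSR * COUNTS)  # raw register value
    print(f"tA={ta:<4} tB={tb:<4} out={out:.9f} internal={internal}")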

For example: assuming an input of 14.60, the ADC could be seeing 1.831v, which is represented as 915 in the internal register.

If I run this through all the adjustment calculations that convert that value back into a usable voltage in our input range:

READING = 915
SCALE = 2047
GAIN = 4096
VCC = 3300
VMAX = 25.85

result = (((READING / SCALE) * GAIN) / VCC) * VMAX

Or:

(((915/2047)*4096)/3300) * 25.85 = 14.342
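Or, as runnable Python (this restates the calculation above; it isn’t the library’s actual source):

# Convert a raw reading back to the divider input voltage.
READING = 915   # raw register value from the worst-case example
SCALE = 2047    # usable single-ended codes
GAIN = 4096     # full-scale range in millivolts
VCC = 3300      # ADC supply in millivolts
VMAX = 25.85    # full input range of the 24v-tolerant channel

result = (((READING / SCALE) * GAIN) / VCC) * VMAX
print(round(result, 3))   # -> 14.342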

So it’s a mix of quantization error, resistor tolerance and potentially other minor factors.

To summarise:

  • The finest granularity is roughly 0.0157v per step
  • The voltage divider contributes a worst-case error of nearly ±2% (±1% per resistor, in opposite directions)
  • Combined, this gives a worst-case accuracy of around ±3%

How are you taking the 14.60 reading?

With a very expensive Fluke multimeter. When I use this same multimeter on my bench power supply, it corresponds exactly to the voltage I select. When I use it on my solar controllers, it corresponds exactly to what the controllers claim they put out. So I have good reason to believe the meter is accurate.

Wires to HAT are less than six inches and I’m using 20 gauge zip cord terminated with ferrules to fit into the HAT. Basically total overkill. No way can there be a 0.14 volt drop.

Sorry for all the questions! Just want to get a handle on where the error might be getting introduced.

Not at all. My apologies for not providing this info up front.

Wow, thank you for such a long and detailed explanation! Pimoroni should add this to the docs, to manage expectations regarding what accuracy is achievable.

As measurements at 14.6V are very consistent, do you think my simple solution of adding a constant is reasonable? I suppose I could take more measurements at known values throughout the desired range and make a compensation lookup table, or perhaps even come up with an algorithm. I also use an average of measurements to smooth out the occasional outlier.

I have to admit that if I have to choose between accuracy and precision, I’d rather have precision. In this regard the HAT works well. I just wish Pimoroni had chosen components that assured at least 0.1V accuracy even after all the potential variability. When you’re talking about 12V batteries, those fractional voltages matter!
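For reference, here’s roughly what my compensation code looks like - a sketch, assuming the automationhat Python library’s analog read call, with OFFSET being my own measured fudge factor:

import time
import automationhat   # Pimoroni Automation HAT library

OFFSET = 0.14   # constant calibrated against the Fluke (my measured value)
SAMPLES = 10    # average several readings to smooth the occasional outlier

def battery_volts():
    readings = []
    for _ in range(SAMPLES):
        readings.append(automationhat.analog.one.read())  # 24v-tolerant channel
        time.sleep(0.05)
    return sum(readings) / len(readings) + OFFSET

print(round(battery_volts(), 2))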

You’re welcome! We’ll update our store pages and guides.

There’s a problem with your +CONSTANT solution: it doesn’t account for quantization error.

In fact, here’s a graph up to around 3v, showing the worst-case % error from the resistor divider stacked on top of quantization error.

At the low end of the range you can see a pronounced effect from quantization, which gradually trails off as it becomes proportionally smaller compared to the value measured.

Adding a fixed value will account for the error introduced by the voltage divider, but you might still see some anomalies - some jitter - depending on how close your value is to the nearest whole reading.

You’ll have to keep this in mind if you’re measuring well into the low-end of the range.
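To put rough numbers on that, here’s the worst-case half-step error as a percentage of the reading:

# Worst-case quantization error (half a step) relative to the reading.
STEP = 25.85 * 4.096 / (2047 * 3.3)   # ~0.0157v per code at the input

for vin in (0.10, 0.50, 2.00, 14.60):
    pct = STEP / 2 / vin * 100
    print(f"{vin}V: up to {pct:.2f}% quantization error")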

I’m only interested in the following range: 12V to 14.6V.

At 14.6V the battery is fully charged, and at 12V it’s essentially dead.

Given this range, I appear to be safely ensconced in the flat part of the curve. Again, I average readings to deal with jitter.

In that case, your offset should suffice! You’ll only be dealing with around 0.05% worst-case error from quantization at that voltage.

With the ADC being unshielded, uninsulated, cheap and cheerful, you’ll never see an Automation HAT compete with a Fluke - at least not at the price it is now!

If you want to attempt to mitigate resistor divider error you could measure your input using two ADC channels and take the average.
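Something like this, with the battery wired to both channels (a sketch, assuming the automationhat library’s analog read API):

import automationhat

def battery_volts_avg():
    # Each 24v-tolerant channel has its own resistor divider, so
    # averaging two channels dilutes any single divider's tolerance error.
    a = automationhat.analog.one.read()
    b = automationhat.analog.two.read()
    return (a + b) / 2

print(round(battery_volts_avg(), 2))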

Thanks, it never occurred to me to do that! I actually have a HAT totally dedicated to a 16-bit ADC, but I don’t use it because it’s got 16 channels and I only need one. But now I’ll consider sending some or all of those channels to the same battery for an instantaneous average.

Can you be really specific about what resistors you would use? I don’t mean the values. I mean a link to the through-hole resistors you would order (brand, vendor, type, etc). I can’t handle SMDs.

Pimoroni here’s an idea: create a HAT totally dedicated to voltage dividers using extremely accurate components. So a bank of high resistors, a bank of low ones, and switches to create numerous combos…

I think below ±1% the cost of resistors climbs steeply. It’s not something I’ve looked into much, and I think they’d be wasted in this kind of application, since they won’t fix quantization or the innate inaccuracy of the ADC itself. It’d be crazy to pair them with a cheap ADC versus just using a better part in the first place (or one tolerant of 12v, even).

In our case, we’re mostly optimising for price to bring a variety of features to hobbyists and tend to use parts we’re familiar with, or already use elsewhere.

As an example: http://uk.farnell.com/welwyn/pr5y-120kbi/resistor-metal-film-120k-0-5w/dp/1634104?st=120k%20resistor

Do you happen to know what the ADC voltage dividing resistor values are? Too tiny to read! I’m still wrapping my head around how you went from 2047 to 1649, and knowing the actual values will help.

The ADC voltage dividing resistors are 120k and 820k, which - ideally - divides 25.85v to 3.3v.

The reason only 0-1649 of the 11-bit ADC range is usable is that the available full-scale ranges don’t map 1:1 to 0-3.3v - sadly!

At the only full-scale range that includes 3.3v, the actual maximum voltage is 4.096v. But the ADC cannot (safely) read values greater than VCC (actually VCC + 0.3v). When it’s powered at 3.3v it’s only capable of reading up to 3.3v.

This is covered in the datasheet.

Therefore to figure out how much of the range is available for 0v to 3.3v we can use the following calculation:

2047 / 4.096 * 3.3

And we arrive at: 1649.19433594


None of these values map 1:1 with any commonly used VCC! Take Arduino, for example. Most are 5V. So in that case, the FSR would be 6.144V. So that’s 2047 / 6.144 * 5 = 1665. So 5V isn’t much better than 3.3V.
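A quick sanity check of that arithmetic for both supplies (FSR values from the ADS1015 datasheet):

import math

COUNTS = 2047   # usable single-ended codes

# (VCC, smallest ADS1015 full-scale range that still covers it)
for vcc, fsr in ((3.3, 4.096), (5.0, 6.144)):
    usable = COUNTS / fsr * vcc
    print(f"VCC={vcc}V, FSR={fsr}V: ~{int(usable)} usable codes "
          f"(~{math.log2(usable):.1f} bits)")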

It seems misleading to call this ADC 12-bit when the most common scenarios (single-ended at 3.3 or 5V VCC) limit it to around 10.7 usable bits. Thank god you guys didn’t use a 10-bit ADC…

Yeah that’s a very interesting point about them not mapping 1:1 with any typically used VCC. I’m guessing there’s a very good reason for this… it looks like they were aiming for a specific set of LSB sizes (3, 2, 1, 0.5, 0.25, 0.125 mV) with no regard for the full range. Perhaps there’s an application for this somewhere.

That said, something like the 10-bit ADC in the ATmega328p only gives a granularity of about 4.9mV at 5V (5v / 1024 steps).

Granted, Arduino microcontrollers are horses for different courses, but their limited 10-bit ADCs are exactly why I’ve never used them. Better to use a dedicated higher-bit ADC if an ADC is what’s needed. I only mention this because Raspberry Pis often get dinged for not including an ADC. But I would rather have none than one that’s too limited to be of much use.