Last week I received an email from a former colleague regarding measurement uncertainty and found (to my horror) that I had a half-written post on the very topic I was asked about. So I will finish it now, and hopefully there will be some useful answers, not only for Gareth but for others too.

When reporting our final expanded uncertainty we can do it in one of two ways. Which we choose is determined by the method we use to calculate the uncertainty and by how we want the result interpreted. The question is in the title, and my answer was… well, both really. These are my thoughts on the topic.

What is absolute uncertainty?

At its simplest, the absolute uncertainty is the range of results around the reported result within which the "true" result is expected to lie with a predetermined level of confidence. For example, a result of 10 +/- 1 tells us the true result lies between 9 and 11 with, say, 95% confidence. Importantly, the absolute uncertainty is always reported in the same units as the result. It may seem obvious, but if the units are not the same the uncertainty takes on a completely different meaning.

In this scenario the absolute uncertainty is an absolute number, determined specifically at the points where the assessments were made, and it is influenced by the contributors used in the budget. Let's use imprecision as a simple (and very common) example of an uncertainty contributor. If we use IQC at two levels, we can quantify the variation (in the form of the standard deviation) and convert that to the standard uncertainty using the mathematical transformations discussed elsewhere on this site. For IQC we should observe a normal distribution of results around the mean/target, so the transformation is a simple 1:1, i.e. the standard deviation is the standard uncertainty. If the IQC results aren't normally distributed, that needs to be tackled first! Our uncertainty is therefore only experimentally determined at those two points.
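To make that concrete, here is a minimal sketch of the imprecision contributor at a single IQC level. The IQC values, the coverage factor of 2 (for roughly 95% confidence) and the assumption that imprecision is the only contributor are all illustrative, not a full uncertainty budget.

```python
import statistics

# Hypothetical IQC results for one level (same units as the reported result)
iqc_results = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 10.0, 9.9]

# For normally distributed IQC data the standard deviation *is* the
# standard uncertainty (a simple 1:1 transformation)
standard_uncertainty = statistics.stdev(iqc_results)

# Expanded uncertainty: apply a coverage factor, k = 2 for ~95% confidence
k = 2
expanded_uncertainty = k * standard_uncertainty

reported_result = 10.0  # a hypothetical result near this IQC level
print(f"Result: {reported_result} +/- {expanded_uncertainty:.2f} (absolute, in result units)")
```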

What is relative uncertainty?

Extending the above, the relative uncertainty is the ratio of the (absolute) uncertainty to the reported result. In this context the example above would have a relative uncertainty of 1/10, or 0.1. The relative uncertainty in this form is unitless, as it is derived from a ratio.

This concept can be extended, and it is here that the second use of relative uncertainty is encountered. If we use our CV (%) as our measure of imprecision, and use that to calculate our uncertainty (along with our other contributors), the result is expressed as a percentage. The CV is, after all, the relative standard deviation. This has pros and cons.

Pros:

It is simple to understand and means we can estimate the absolute uncertainty at any given measurand result by converting the percentage of that result to an absolute value (see the sketch after the cons below). This is very useful if our IQC levels meet basic requirements ("normal"/"abnormal" etc.) but do not traverse the clinical decision limits that, in reality, we are most interested in.

Cons:

We are extrapolating uncertainty to results in ranges we haven’t assessed specifically for imprecision.
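As a minimal sketch of the "pro" above: the CV, the decision limit and the coverage factor are all made-up numbers, chosen only to show how a relative uncertainty is converted back to an absolute one at a concentration the IQC levels never actually cover.

```python
# Hypothetical relative standard uncertainty from imprecision, expressed as a CV (%)
cv_percent = 4.0        # e.g. pooled CV across the IQC levels
k = 2                   # coverage factor for ~95% confidence

# Relative expanded uncertainty (still a percentage, so unitless)
relative_expanded = k * cv_percent

# Convert back to an absolute expanded uncertainty at a clinical decision limit
# that the IQC levels themselves never traverse
decision_limit = 50.0   # hypothetical decision point, in result units
absolute_expanded = decision_limit * relative_expanded / 100

print(f"At {decision_limit}: +/- {relative_expanded:.1f}% relative, "
      f"or +/- {absolute_expanded:.1f} absolute")
```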

There is another consideration when comparing the two. Reporting a single absolute uncertainty across an entire measurement range is fraught with danger – why? Because we are assuming the uncertainty is constant across all potential results. If we follow this through, the relative uncertainty is going to be significantly higher for "low" results than for "high" ones. Is that a fair reflection of the assay performance, or a reflection of the limitations of our approach? We know that, in reality, imprecision often varies across an assay range. A relative uncertainty takes this into account: the relative uncertainty is held constant, but the ABSOLUTE uncertainty varies according to the reported result.
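A quick illustration of that point, again with made-up numbers: a fixed absolute uncertainty of +/- 1 is a trivial fraction of a high result but dominates a low one, whereas a fixed relative uncertainty scales the absolute uncertainty with the result.

```python
results = [2.0, 10.0, 100.0]   # hypothetical results across the assay range

fixed_absolute = 1.0           # the same +/- 1 applied to every result
fixed_relative = 0.10          # 10% applied to every result

for x in results:
    rel_if_abs_constant = 100 * fixed_absolute / x   # relative % when the absolute is held constant
    abs_if_rel_constant = fixed_relative * x         # absolute value when the relative is held constant
    print(f"result {x:6.1f}: fixed absolute +/- {fixed_absolute} -> {rel_if_abs_constant:.0f}% relative | "
          f"fixed relative 10% -> +/- {abs_if_rel_constant:.1f} absolute")
```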

When could we combine both?

So what should, or could, we do? Assigning an absolute uncertainty to an entire assay is difficult if we are not sure that all the contributors (and I don't mean just the imprecision component – I mean all of them) are constant across the entire assay range. That being said, an absolute uncertainty is what users want for a specific result in an individual, or at a clinical decision point.

In reality they are not mutually exclusive, and each informs in its own way. As a summary of performance across the assay range, relative uncertainty is helpful, and it can make monitoring against performance specifications simpler. In specific situations where a given result is reported with its uncertainty, the absolute uncertainty is better, as it is expressed in the same units as the result and makes interpretation simpler. I present both: the more information I have at my disposal, the better. If a clinician asks me what the uncertainty of a particular result was, I answer with an absolute uncertainty, as that is what they want.

Those are my thoughts. Again, I acknowledge my former colleague (he is still my colleague!) for reminding me of the topic and for getting me thinking about it. If only I had finished this when I started it a year ago, the answers might have been clearer – lesson learnt on my part!

To add a further question into the mix: how does the state of the other (non-imprecision) uncertainty contributors influence our method of reporting? Experimentally derived (Type A) uncertainties can be expressed absolutely or relatively. Type B uncertainties often don't have as much information contained within them, and consequently we must be careful about how the uncertainty propagates through the functional relationships in the measurement – but that is a discussion for another day!
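For reference (and only as a taster for that future discussion), the standard GUM expression for propagating standard uncertainties through a functional relationship y = f(x_1, …, x_n), assuming uncorrelated inputs, is:

\[
u_c^2(y) \;=\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i)
\]

Whether each contributor is carried through that relationship as an absolute or a relative quantity is exactly where the care is needed.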
