Throughout pathology we use many different assays, some of which use calibration curves to assign a value to a measurand.
Such assays are termed indirect measurements. That is, we are not measuring the value itself but rather an activity or amount we can attribute to it based on a measurement we perform. A good example from another area of science is estimating the historical climate of a given year by measuring the width of tree rings. We don’t assess the climate directly, but rather infer it from its (rough) correlation with the width of the ring.
Figure 1: The width of a tree ring for any given time period (a year) is influenced by the climate during that period. In effect we take one measurement and convert it into another using a relationship between what we are measuring and what we are reporting.
To use a pathology example, factor assays performed in haemostasis use the ability of a patient plasma to shorten the aPTT of a plasma deficient in a given factor. The degree of shortening of the aPTT is correlated with the activity of that coagulation factor within the patient plasma. That correlation, and the assignment of a factor activity, is based upon what we expect to see from a standard plasma of known factor concentration tested at different dilutions. Simply put, the clot time of the patient factor assay is input as a variable into the calibration curve equation, which outputs the patient’s coagulation factor activity.
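As a rough sketch of that calculation (all numbers below are illustrative, not real assay data), a one-stage factor assay calibration is conventionally linear on log-log axes: standard plasma dilutions of known activity give clot times, a line is fitted, and the patient clot time is interpolated back through that line.

```python
import numpy as np

# Hypothetical calibration data for a one-stage factor assay:
# standard plasma dilutions (assigned factor activity, IU/dL)
# and the aPTT clot times measured at each dilution (seconds).
activity = np.array([100.0, 50.0, 25.0, 12.5])   # illustrative values only
clot_time = np.array([35.0, 41.0, 48.0, 56.0])   # illustrative values only

# Factor assay calibrations are conventionally linear on log-log axes:
# log10(clot time) = m * log10(activity) + b
m, b = np.polyfit(np.log10(activity), np.log10(clot_time), 1)

def activity_from_clot_time(t):
    """Invert the calibration curve: clot time (s) -> factor activity (IU/dL)."""
    return 10 ** ((np.log10(t) - b) / m)

# A patient clot time of 44 s falls between the 50 and 25 IU/dL points,
# so the interpolated activity lands between those two values.
patient_activity = activity_from_clot_time(44.0)
print(patient_activity)
```

The clot time shortens as activity rises, so the fitted slope is negative; the patient result is simply the curve equation run in reverse.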
What are the sources of uncertainty associated with calibrated assays?
There are some sources that will not be discussed here, including the pipetting uncertainty involved in reconstituting standards, as these are covered in other articles. The sources discussed below are those we can attribute specifically to the standard and its use in our calibrated assays.
The calibrator value
Awareness of measurement uncertainty is increasing not just within pathology laboratories but also in industry, including the manufacturers and suppliers of our reagents and calibrators. Some now make the uncertainty of calibration reagents available to the end user (us). This is to be applauded: it gives us exactly the information we need, the standard uncertainty of the calibrator. However, for some calibrators this information is not available, and we need to derive an estimate of the uncertainty associated with the assigned value ourselves.
Fundamental to all calibrated assays is the use of the calibration curve to assign a value to a measurand. The units in which that value is expressed will depend upon the traceability of the standard used in the calibration and what it was tested against to determine the assigned value. The uncertainty associated with a standard, based upon its traceability, is shown below:
Figure 2: The uncertainties associated with reference standards, based upon the source of the standard. Uncertainty increases the further the standard is from the national/international standard, as each successive standard is calibrated against the one above it and uncertainty propagates through the process.
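The propagation down the traceability chain can be sketched numerically. Assuming the uncertainty contributed at each step is independent, the standard uncertainties combine in quadrature (the GUM approach), so each step can only add to the total; the figures below are purely illustrative.

```python
import math

# Hypothetical relative standard uncertainties (%) contributed at each
# step of a traceability chain (illustrative values, not real data):
u_international = 1.0   # international standard value assignment
u_national = 1.5        # national standard vs international
u_manufacturer = 2.0    # manufacturer's standard vs national
u_working = 2.5         # working calibrator vs manufacturer's standard

# Independent uncertainties combine in quadrature, so the combined
# uncertainty of the working calibrator exceeds any single contribution.
u_combined = math.sqrt(u_international**2 + u_national**2
                       + u_manufacturer**2 + u_working**2)
print(round(u_combined, 2))
```

This is why the working calibrator at the bottom of the chain always carries more uncertainty than the international standard at the top.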
Producing the calibration curve
It is helpful to think about what our calibration curve is telling us. We use regression to predict an output value based upon an input we have measured. By using regression we attempt to minimise the “unexplained” variability; however, we cannot eliminate it entirely. By fitting our calibration curve with a “line of best fit” we produce a mathematical model that best describes the relationship and minimises this variability. The very nature of forming this relationship inevitably introduces uncertainty into our final result.
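That leftover “unexplained” variability is visible as the scatter of the calibration points around the fitted line. A minimal sketch with made-up data: least squares picks the slope and intercept that minimise the squared residuals, and the residual standard deviation quantifies what the model cannot remove.

```python
import numpy as np

# Made-up calibration points with a little scatter around a straight line.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Ordinary least squares chooses m and b to minimise the sum of
# squared residuals - the "line of best fit".
m, b = np.polyfit(x, y, 1)
residuals = y - (m * x + b)

# The residual standard deviation (n - 2 degrees of freedom for a
# two-parameter line) is the "unexplained" variability that remains
# even after the best possible line has been fitted.
s_yx = np.sqrt(np.sum(residuals**2) / (len(x) - 2))
print(round(s_yx, 3))
```

The residuals never quite reach zero with real data, and it is this residual scatter that carries through into the uncertainty of every result read off the curve.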
The results produced from the calibration curve
Calibration curves have changed dramatically over the years. In the days when assays were first designed, calibrations and result calculations were all done manually. With the advent of automated analysers much of this heavy lifting is done behind the scenes by more powerful software applications built into the analysers. This is of course a good thing, but does it come at a cost?
Figure 3: How factor assays used to be calculated manually. The linear calibration curves have now been replaced with complex polynomial regression equations to provide as high an R² as possible – the impact on the final results of such an approach is a discussion for another day….
The linear calibration curves we were used to seeing (particularly whilst training and learning the basic concepts) have been replaced with complicated polynomial regression models, often with numerous variables and coefficients that make little intuitive sense compared with the y = mx + b curves (lines, actually!) we used to construct.
The mathematical considerations behind the modelling are a large topic, and one for another time. However, irrespective of what the curve looks like, there is a fundamental mathematical concept we must consider when thinking about measurement uncertainty.
The value output from our calibrated assay is itself a statistical value. It is not, as is often thought, an absolute, definite result. The value interpolated from the calibration curve represents the mean of a normally distributed population of values expected given our input variable (the initial assay result).
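We can even put a number on the spread of that distribution. A textbook approximation from analytical statistics (sketched below with made-up linear calibration data) gives the standard error of a value interpolated from a calibration line; note how it grows as the sample signal moves away from the centre of the calibration.

```python
import numpy as np

# Made-up linear calibration: instrument signal y against concentration x.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.05, 0.21, 0.39, 0.62, 0.80, 1.01])

n = len(x)
m, b = np.polyfit(x, y, 1)
s_yx = np.sqrt(np.sum((y - (m * x + b))**2) / (n - 2))  # residual SD

# A sample measured once (M = 1 replicate) gives signal y0; interpolation
# returns the mean of the distribution of plausible concentrations.
y0, M = 0.50, 1
x0 = (y0 - b) / m

# Textbook approximation for the standard error of the interpolated value:
# larger residual scatter, fewer calibration points, or a y0 far from the
# calibration centroid all widen the distribution around x0.
s_x0 = (s_yx / abs(m)) * np.sqrt(
    1 / M
    + 1 / n
    + (y0 - np.mean(y))**2 / (m**2 * np.sum((x - np.mean(x))**2))
)
print(round(x0, 2), round(s_x0, 3))
```

So the single figure the analyser reports is really x0, the centre of a distribution whose width s_x0 is the calibration’s contribution to our measurement uncertainty.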
Stability of the calibration
Stability is not drift. Stability is exactly what the name suggests: how stable the assay (and in this case the calibration) is over time. Drift reflects systematic error; stability is associated with random error. It gives us a measure of how the assay, equipment and entire measurement system behave over time. We can measure that by testing a defined standard or calibrator repeatedly over time to detect what influence the stability of the calibration may have. It is not a million miles away from the reproducibility data we have visited previously, and there may be analyses that incorporate both, or a combination of the two, to define our level of stability.
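In practice this can be as simple as running the same standard at intervals and summarising the spread. A minimal sketch (the weekly results below are invented for illustration): the standard deviation over time estimates the stability contribution, whereas drift would show up instead as a trend in the mean.

```python
import statistics

# Hypothetical weekly results for the same standard plasma run against an
# established calibration (factor activity, IU/dL; illustrative numbers).
weekly_results = [98.2, 101.5, 99.8, 97.4, 102.1, 100.3, 98.9, 101.0]

# The spread of results over time (a random-error component) estimates
# how stable the calibration is; a systematic trend in the running mean
# would indicate drift rather than instability.
mean_over_time = statistics.mean(weekly_results)
stability_sd = statistics.stdev(weekly_results)
cv_percent = 100 * stability_sd / mean_over_time
print(round(stability_sd, 2), round(cv_percent, 2))
```

This is deliberately close in spirit to a reproducibility study; the difference is only in what we attribute the spread to, which is why the two analyses can potentially be combined.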
There is always uncertainty associated with a final result calculated using a calibration curve. We have briefly discussed where some of that uncertainty may come from: it is related to the performance of the calibration and how that propagates through to our results. That may be a bit heavy to get our heads around, but for now we just need to be aware that it exists and that we can account for it. In the next article in this series we will follow this through and see how we can handle the uncertainty associated with our calibrations and try to quantify it. We will then discuss whether it needs to be considered separately from our IQC data or whether we can incorporate everything as a single standard uncertainty.