There are many ways of formulating the final uncertainty budget, and it's reasonable to assume there is no one-size-fits-all approach that will work for all our assays. Knowing this, we take an approach for each assay based on how we perform the first two common steps.
The measurand definition allows us to detail the specifics of the components of the assay that define the assay, whereas the modelling step determines the aspects of the assay that are significant contributors to our overall uncertainty. It is to be expected that every assay will have a different measurand definition, and that there may be groups of assays that have the same, or similar results from the modelling process.
However, thought must be given to a third way of differentiating our results that we have not discussed until now: the context and utility of the result. By that I mean, are there scenarios that will, in themselves, influence the information that is required and in effect determine which of the processes are appropriate for determining the measurement uncertainty?
Questions such as:
- Is the assay only ever tested one way, on one analyser?
- Are the results only being used within our institution?
- Do we need to take into account our performance against our peers locally, nationally or internationally?
Inter-Analyser Variability
Some assays are performed in a single manner, be it manually, semi-automated or automated. The methods used to determine the final result (and therefore the method we are assessing with our uncertainty calculations) are well defined.
However, there are many situations where multiple techniques (in particular analysers) of the same type are used to determine patient results for the same assay.
In these situations we need to ask three questions when calculating our uncertainty:
- Are we calculating our uncertainty based on the performance of one of these analysers, or on the technique/family of analysers as a combination?
I think (hope) the answer to this question is fairly straightforward: it should always be the latter.
- Do our results identify which of the analysers was used to derive the result?
Often this is not the case, and perhaps it is not practical to clutter up already busy reports with such information.
- How is the different performance of each analyser accounted for and is our uncertainty weighted to account for which analyser it was performed on?
This is an issue when the standard uncertainty is derived solely from the Standard Deviation (SD) or Coefficient of Variation (CV) of the pooled results. Even though the results are combined, there will be a difference in imprecision – and therefore in uncertainty – between analysers, albeit usually quite small. Using only the SD or CV will not account for that variability.
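One common way of capturing that between-analyser variability is to treat it as its own uncertainty component alongside the pooled within-analyser imprecision, and combine the two by root sum of squares. The sketch below illustrates the idea with entirely hypothetical QC figures (the analyser names, means and SDs are invented for illustration, and it assumes a similar number of QC results per analyser); it is not a prescribed method, just one reasonable approach.

```python
import statistics as stats

# Hypothetical QC data: mean and SD of the same QC material on each analyser.
analysers = {
    "A1": {"mean": 12.1, "sd": 0.40},
    "A2": {"mean": 12.6, "sd": 0.55},
    "A3": {"mean": 11.9, "sd": 0.45},
}

# Within-analyser component: pool the imprecision as the root mean square
# of the individual SDs (assuming equal QC counts per analyser).
u_within = (sum(a["sd"] ** 2 for a in analysers.values()) / len(analysers)) ** 0.5

# Between-analyser component: the spread of the analyser means.
u_between = stats.stdev(a["mean"] for a in analysers.values())

# Combined standard uncertainty: root sum of squares of the two components.
u_combined = (u_within ** 2 + u_between ** 2) ** 0.5

print(f"u_within   = {u_within:.3f}")
print(f"u_between  = {u_between:.3f}")
print(f"u_combined = {u_combined:.3f}")
```

Note that the combined figure is larger than any single analyser's SD – exactly the variability that quoting one pooled SD or CV alone would hide.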
Transferability of results between centres
Being able to share results between institutions, and having a means to standardise those results, is not a new thing in haemostasis. In the 1980s the World Health Organisation adopted the International Normalised Ratio (INR) as the standard by which the results of patients undergoing warfarin therapy could be monitored anywhere – not just at the centre with which they were registered. This result is now among the most commonly reported in haemostasis.
Centralisation of pathology services means that transporting samples between centres – sometimes a significant distance from where the patient actually lives and from their nearest hospital – is now commonplace. This may have significant implications for the transferability of results. Cross-site comparability is routinely assessed in pathology to alleviate some of the “uncertainty” in results determined at different centres. If a patient is routinely being tested between centres, do we need to incorporate this analysis of cross-site comparability into our uncertainty budget? Is that only for results we know are being generated in such a manner? How do we know if this is the case? Should we err on the side of caution and incorporate it into all results and associated uncertainties?
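If we did choose to fold cross-site comparability into the budget, one option is to treat the observed between-site difference as an additional Type B component. A GUM-style convention is to assume a rectangular distribution over the observed range, so the standard uncertainty is the half-range divided by √3. The numbers below are hypothetical, and treating the maximum observed difference this way is my assumption for illustration, not established guidance.

```python
import math

# Hypothetical figures: imprecision uncertainty at the reporting site, and the
# maximum difference observed in a cross-site comparability exercise.
u_imprecision = 0.45        # standard uncertainty from routine QC
max_cross_site_diff = 0.60  # largest between-site difference seen

# Assume the true cross-site offset lies anywhere within +/- half the observed
# range (rectangular distribution): standard uncertainty = half-range / sqrt(3).
u_cross_site = (max_cross_site_diff / 2) / math.sqrt(3)

# Combine with the imprecision component by root sum of squares.
u_combined = math.sqrt(u_imprecision**2 + u_cross_site**2)

print(f"u_cross_site = {u_cross_site:.3f}")
print(f"u_combined   = {u_combined:.3f}")
```

Because the components add in quadrature, a modest cross-site term inflates the combined uncertainty only slightly – which may make the cautious "include it everywhere" option less costly than it first appears.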
It is possible we need to interpret uncertainty estimates in the context of the site where they are assessed. I think this is a topic for debate, and as yet there is no guidance to suggest what we should do. What is important, though, is that if one centre quotes a larger uncertainty than another we should not immediately assume the assay performs better at the second – using uncertainty as a quality metric to compare between sites has its flaws. Many more factors may have been considered, and indeed may need to be considered, at one site, so an uncertainty estimate should be read accordingly. What may be useful for comparison, however, is the contribution of the individual components of the uncertainty, such as the imprecision uncertainty.
This post has probably asked more questions than it has been able to answer. These questions are important though, and are at the centre of where we are now as integrated pathology service providers.