Back in October we discussed the first four useful sources of data that we can use for measurement uncertainty (MU) analysis in pathology. We continue our discussion here.
It is a running theme through many of the processes I have highlighted on this site that we must be able to do our uncertainty assessments without having to perform more experiments, and, importantly, without spending any more money! It is my firm opinion that we can achieve that quite simply with the data we already have (if we don't count the staff time to do it, of course!). We will discuss another four sources of data we already have (and yes, I am including the fourth as a source of data) that we can justifiably incorporate into our uncertainty budgets.
Assay Validation and Verification
The first data source to consider is assay validation or, more accurately for what we actually do in the lab, assay verification.
It is now part of routine practice to verify the performance of our assays prior to implementation into routine service, and it absolutely should be: it is a good thing! This is to be differentiated from the assay validation that our suppliers and manufacturers must perform to gain CE marking, FDA approval, or to meet any other regulatory requirements they have. The important difference is that our verification is an independent verification of performance.
Each laboratory will have its own process for verifying individual assays, or groups of assays, for its service. What is common to all labs, however, is that this verification phase provides us with a lot of data. It would be a shame not to use all (or at least some) of that work.
As examples of the data generated during a local verification exercise, the following performance metrics could be included in our analysis. This is not all the data, just some examples; there will be many more, I am sure!
- Repeatability
- Reproducibility
- Calibration verification
- Limit of detection
- Linearity
All of the above are really good sources of data for our analysis, provided we can calculate a standard uncertainty from them. Helpfully, if MU is determined during the initial verification phase (as it should be), we know what to expect from our assay before any future MU reassessment.
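To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical replicate values) of turning repeatability data into a standard uncertainty, simply by taking the standard deviation of the replicates. Your verification protocol may well quantify this component differently; this is just one simple, common approach.

```python
import statistics

# Hypothetical repeatability data: ten replicates of a single sample
# measured within one run (values are illustrative only).
replicates = [4.92, 5.01, 4.98, 5.05, 4.95, 5.03, 4.99, 5.00, 4.97, 5.02]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)  # sample standard deviation

# For a repeatability experiment, the standard uncertainty of a single
# measurement can be taken as the standard deviation itself.
u_repeatability = sd

# The relative form (CV) is often easier to carry into an uncertainty
# budget alongside other components.
cv = sd / mean

print(f"mean = {mean:.3f}, u(repeatability) = {u_repeatability:.3f}, CV = {cv:.2%}")
```

The same pattern extends to reproducibility data; other metrics, such as linearity or limit of detection, need their own treatment before they yield a standard uncertainty.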
Reagent Acceptance Testing
Again, reagent acceptance testing is something that has become commonplace in recent years. It is extremely important for us to verify that the reagents we use are fit for the analytical and clinical purposes we intend to use them for.
As a result, our reagent verifications give us data on how reagents vary from lot to lot. This is important for understanding how the assay performs generally, but it can also give us insight into how the variation we see in our reagent acceptance assessments influences the imprecision and other standard uncertainties we are calculating, particularly the imprecision data from our IQC.
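If the lot-to-lot variation is treated as a component in its own right, independent components combine as a root sum of squares (the usual GUM approach for uncorrelated inputs). A minimal sketch with hypothetical CVs:

```python
import math

# Hypothetical standard uncertainties expressed as CVs (%):
u_iqc = 3.2  # long-term imprecision from IQC data
u_lot = 1.5  # additional between-lot component from acceptance testing

# Independent components combine as a root sum of squares:
# u_combined = sqrt(u_iqc^2 + u_lot^2)
u_combined = math.sqrt(u_iqc**2 + u_lot**2)

print(f"u(combined) = {u_combined:.2f} %")  # ~3.53 %
```

One caution: if the IQC data already span several reagent lots, the lot-to-lot component is likely captured there already, and adding it again would double-count it.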
Calibration and Performance Certificates, Including Manufacturer Documentation
Be it analyser, reagent, or kit specific, there is a lot of data behind the performance criteria set by suppliers, or by the regulatory authorities they must submit data to, and that data is often readily available to us. It is useful for comparing how our assay actually performs against the supplier's recommended performance criteria.
As a note of caution, though, the published performance criteria may be significantly different from what we would actually expect to observe in routine use. The reasons for this are reasonably obvious: at the point of supplier assessment, data are aggregated across different reagent lots, multiple analysers, different users, and many other variables that would be expected to influence performance in the field, so the limits must be wide enough to cover all of them. So whilst these performance limits can be useful as a guide, it is often useful, if not compulsory, to reassess and set local performance criteria.
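As a tiny, hypothetical illustration of that caution: a locally observed CV can sit comfortably inside a manufacturer's claim while still being the better basis for local limits. The margin applied below is purely illustrative, not a recommendation.

```python
# Hypothetical comparison of local performance against a supplier claim.
claimed_cv = 5.0  # % CV from the kit insert / performance certificate
local_cv = 2.8    # % CV observed from our own IQC data

# Verification passes if we perform at least as well as the claim...
print(f"meets claim: {local_cv <= claimed_cv}")

# ...but the claim is aggregated over lots, analysers, and users, so
# local acceptance limits are better derived from local data, e.g.:
local_limit = round(1.25 * local_cv, 1)  # illustrative margin only
print(f"suggested local CV limit: {local_limit} %")
```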
Expert Opinion
The importance of this last point is not to be underestimated, and absolutely justifies its inclusion. At the end of the day the expert opinion of the analyst is as significant as any quantitative data for determining the MU. After all, nobody knows how our assays perform better than the people performing them. As such, we know what we expect the performance to be and at what point we need to action any deviations from that.
Much of the MU process is subjective: from the definition of the measurand, through modelling, to standard uncertainty quantification, the input of the scientist is key. Equally, it is the expert opinion of the scientist performing the assessment that validates the integrity of that subjectivity. Can expert opinion be used to put a value on something? In some cases, yes: in those rare circumstances where data are not available. Hypothetically, if we didn't know what our assay's imprecision was, we could assign a reasonably accurate value just from our experience, couldn't we? This can also be extended to other contributors we may have but lack data for. From a traceability and evidentiary perspective, though, it should probably be the source of data we use least often, and only as a last resort.
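If an expert judgement ever does need a number attached, the GUM's Type B approach offers a defensible route: the expert states plausible bounds for the quantity, and a standard uncertainty is derived by assuming a distribution over those bounds. A minimal sketch, with hypothetical numbers:

```python
import math

# Hypothetical Type B evaluation: an experienced scientist judges that
# a contribution lies between -2.0 and +2.0 units, with no reason to
# favour any value in that range.
half_width = 2.0  # the half-width "a" of the stated interval

# Rectangular (uniform) distribution: u = a / sqrt(3)
u_rectangular = half_width / math.sqrt(3)

# If mid-range values are judged more likely, a triangular
# distribution gives a smaller estimate: u = a / sqrt(6)
u_triangular = half_width / math.sqrt(6)

print(f"u (rectangular) = {u_rectangular:.3f}")  # ~1.155
print(f"u (triangular)  = {u_triangular:.3f}")   # ~0.816
```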
Conclusion
That concludes our quick rundown of data sources we can easily use for MU analysis. We already have access to all of these sources, and they provide the valuable information we need to assess our assay performance. There are of course others as well, but between these four and the four discussed back in October, we have plenty to build our uncertainty budgets from.