We have already defined the factors we need to consider when defining the measurand for the aPTT. With that complete, we will now cover the entire process up to the formulation of the uncertainty budget for the assay. This will include modelling, calculation of the standard uncertainties of the input variables, the budgeting process itself (by a number of different methods) and, finally, reporting the expanded uncertainty for the assay. This will take a few articles to walk through, but by the end we will have a template for one of our simpler assays that we can (hopefully) apply to most, if not all, of our other assays.

As mentioned in the previous article, the process of measurand definition gives us a head start with modelling. By considering the nature of the assay, we can use the answers to those same questions to determine the most influential sources of uncertainty for our assay.

Our definition of the measurand was:

activated Partial Thromboplastin Time (aPTT) – Intrinsic coagulation activation using silica to determine the time to clot of plasma on the automated analyser (insert analyser name here) with an incubation time of 180 seconds.

Before we decide which of the budgeting approaches is most appropriate, we will identify our contributors to see if that helps with the decision. For the purposes of this series we will compare several of the approaches available to us so that we can gain experience of what is on offer. We will then be able to identify which, if any, gives the most appropriate uncertainty estimate and what the differences between the results mean.

Identification of contributors:

As the assay is performed on an automated analyser, imprecision uncertainty as determined by our Internal Quality Control (IQC) performance is at the forefront of our thinking. As mentioned elsewhere on this site, it is considered mandatory to assess repeatability as part of every uncertainty estimation – and this is particularly the case for our assays in pathology.

So, contributor 1 – imprecision uncertainty, as determined by a repeatability study using IQC material as the measure.
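As an illustration of contributor 1, here is a minimal sketch of turning a repeatability study into a standard uncertainty. The IQC values below are invented for the example, not real aPTT data:

```python
import statistics

# Hypothetical IQC aPTT results (seconds) from a repeatability study.
# These numbers are illustrative only.
iqc_results = [30.1, 29.8, 30.4, 30.0, 29.7, 30.2, 30.3, 29.9, 30.1, 30.0]

mean = statistics.mean(iqc_results)
# The sample standard deviation serves directly as the standard uncertainty
# for the imprecision contributor.
u_imprecision = statistics.stdev(iqc_results)
cv_percent = 100 * u_imprecision / mean

print(f"mean = {mean:.2f} s, u = {u_imprecision:.3f} s, CV = {cv_percent:.2f}%")
```

The standard deviation can be carried forward in absolute units (seconds) or as a relative CV, depending on how the rest of the budget is expressed.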

Often it is considered sufficient to stop at this point. However, that is rarely actually the case. We should also consider bias. This is a debatable point, and there are many conflicting approaches to bias in the literature. Ideally it should be eliminated locally, by recalibration (if possible) or other means. However, even if the systematic error in the form of bias is eliminated, the uncertainty associated with that bias may in some cases be included in the final budget. One of the approaches we will take in future articles does just that.

So… (in some situations) we will include the uncertainty of our bias as contributor 2.
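For illustration only: one commonly cited formulation for folding a bias into the budget (e.g. when the bias was established against a single reference material) combines the bias itself, the uncertainty of the reference value and the uncertainty of the bias experiment by root sum of squares. All the numbers below are invented:

```python
import math

# Hedged sketch of one root-sum-of-squares way to include bias uncertainty.
# Every value here is made up for illustration.
bias = 0.8    # observed bias against the reference value (s)
u_ref = 0.3   # standard uncertainty of the reference value (s)
s = 0.22      # repeatability SD from the bias experiment (s)
n = 10        # number of replicates in the bias experiment

# u(bias) combines the bias itself, the reference uncertainty and the
# standard error of the mean of the replicates.
u_bias = math.sqrt(bias**2 + u_ref**2 + (s / math.sqrt(n))**2)
print(f"u(bias) = {u_bias:.3f} s")
```

Whether the bias itself belongs inside the square root is exactly the debate referred to above; this is one option, not the only one.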

The discussion above centres on the bias detected locally in the form of systematic error. However, there is another source of bias in our assays – bias relative to our peers running the same assay (analyser/method/reagent etc.). It is the function of External Quality Assurance (EQA) to detect and quantify this, and we can use it to determine whether there is a significant contribution to our result based on our performance in such schemes. It is again worth noting that many approaches to laboratory uncertainty quantification suggest that bias should not be included, so its exclusion here can be justified from the literature should you wish to do so.

So… contributor 3 is uncertainty due to bias as based on EQA performance.
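To make contributor 3 concrete, here is a hedged sketch of one way to estimate a bias uncertainty from EQA returns: take the root mean square of the relative deviations from the peer-group mean across several distributions. The round data are invented:

```python
import math

# Illustrative EQA returns: (our result, peer-group mean) in seconds.
# These pairs are made up for the example.
rounds = [(31.0, 30.2), (29.5, 29.9), (32.1, 31.4), (30.8, 30.5)]

rel_bias = [(ours - peer) / peer for ours, peer in rounds]

# RMS of the relative biases as one possible estimate of u(bias, EQA).
u_eqa_rel = math.sqrt(sum(b * b for b in rel_bias) / len(rel_bias))
print(f"u(EQA bias) = {100 * u_eqa_rel:.2f}%")
```

A relative (percentage) uncertainty like this is convenient when the budget will be applied across the measuring range rather than at a single level.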

And for our final uncertainty contributor, we consider that the particular analyser used for the aPTT has an impact on the uncertainty: analyser comparability. We need to work out how to approach this, and again the many different methods available for calculating uncertainty give us the tools to do so. This will be the job of the data collection and analysis sections to follow, but for now this may be as simple as combining the standard uncertainties derived from the imprecision studies mentioned above.

However, we may have further data available that can identify whether there is an analyser-dependent difference in our uncertainty, and if so we need to know how to deal with it.

…contributor 4 is inter-analyser variability.
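One simple sketch for contributor 4, assuming all we have available is a mean IQC result per analyser, is to take the standard deviation of those means as a between-analyser standard uncertainty. The analyser names and values below are invented:

```python
import statistics

# Hypothetical mean IQC aPTT results (seconds) per analyser.
# Names and numbers are illustrative only.
analyser_means = {"analyser_A": 30.1, "analyser_B": 30.6, "analyser_C": 29.9}

# SD of the per-analyser means as a crude between-analyser uncertainty.
u_inter = statistics.stdev(analyser_means.values())
print(f"u(inter-analyser) = {u_inter:.3f} s")
```

With richer data (replicates per analyser over time), a variance-components approach would separate within- and between-analyser effects more cleanly; that is the kind of question the data analysis posts will return to.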

That concludes the modelling portion of our assessment of the aPTT: we have our four contributors and can now move on to collecting our data in the next post. It is not entirely unheard of to re-evaluate the modelling conclusions once the data have been collected. Is this a valid approach? Some will say yes, some will disagree. The modelling process is one of subjective judgement by the expert performing the evaluation, and that subjectivity can itself be reassessed on the basis of new evidence, i.e. the data. Perhaps we will remove the bias contribution if we do not think it is significant; we can also test whether the bias is STATISTICALLY significant. These are all questions we will be able to discuss further in the coming articles. But for now, we can be happy that the first two stages are complete.
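As a sketch of that last point, the statistical significance of an observed bias can be checked with a one-sample t-test. The data and target value below are invented, and the critical value is hard-coded for this sample size:

```python
import math
import statistics

# Hypothetical replicate results (seconds) and an assigned target value.
# All numbers are illustrative only.
results = [30.1, 29.8, 30.4, 30.0, 29.7, 30.2, 30.3, 29.9, 30.1, 30.0]
target = 29.8

n = len(results)
mean = statistics.mean(results)
s = statistics.stdev(results)

# One-sample t statistic for the difference between our mean and the target.
t = (mean - target) / (s / math.sqrt(n))
t_crit = 2.262  # two-sided 5% critical value for df = n - 1 = 9

print(f"t = {t:.2f}; significant at 5%: {abs(t) > t_crit}")
```

A significant result supports keeping the bias contributor in the budget (or correcting the bias); a non-significant one is an argument, though not a conclusive one, for dropping it.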