This article explains the importance of standard deviation and standard uncertainty in measurement uncertainty analysis, particularly in medical laboratory science. It highlights how these statistical tools quantify data variability and measurement reliability.
One of the fundamental statistical tools used to quantify uncertainty is the standard deviation. It is closely linked to the basic building block of measurement uncertainty: the standard uncertainty.
A Brief History of Standard Deviation
The concept of standard deviation (SD) has roots in the 19th century and is credited to the work of several mathematicians and statisticians. Karl Pearson, a British mathematician famous for his work on correlation, introduced the term standard deviation in the 1890s. The concept itself predates Pearson: contributions from Gauss and others developed the normal distribution, the familiar bell-shaped curve, and the closely related SD is one of the parameters describing the shape of that curve.
The development of the normal distribution and standard deviation marked a significant advancement in statistics. These tools allow scientists to better understand the variability in their data. This understanding became particularly useful in scientific fields where precision is crucial. Examples include astronomy, biology, and eventually, medical laboratory science.
What is Standard Deviation?
Standard deviation is a measure of the amount of variation or dispersion in a set of values. It quantifies how much individual measurements in a data set deviate from the mean (average) of the set. A low standard deviation indicates that the values tend to be close to the mean, suggesting high consistency. Conversely, a high standard deviation indicates more spread out values, pointing to greater variability.
Mathematically, standard deviation (σ) is calculated as the square root of the variance. The variance is the average of the squared differences from the mean. As a formula it is represented as:
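$$\sigma = \sqrt{\frac{\sum_{i=1}^{N}(x_i - \mu)^2}{N}}$$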
Where:
- xᵢ represents each value in the data set,
- μ is the mean of the data set,
- N is the number of values in the data set.
Note the denominator. It is equal to N if the entire population is being studied. More commonly, the denominator is N−1, representing the degrees of freedom of a sample. Be careful which formula you use in Excel: STDEV.P divides by N, while STDEV.S divides by N−1, so they give different answers!
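As a quick illustration (the values here are made up, not from the example later in this article), Python's statistics module makes the same distinction between the two denominators:

```python
from statistics import pstdev, stdev

# Illustrative values only, chosen to show how the denominator changes the result
values = [4, 8, 6, 5, 3]

print(pstdev(values))  # population SD, divides by N    -> ~1.72 (like Excel's STDEV.P)
print(stdev(values))   # sample SD, divides by N - 1    -> ~1.92 (like Excel's STDEV.S)
```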
Standard Deviation in Measurement Uncertainty
In measurement uncertainty, the SD plays a pivotal role. Measurement uncertainty refers to the doubt that exists about the result of any measurement. It is a parameter that characterises the dispersion of values that could reasonably be attributed to the measurand (the quantity being measured).
When repeated measurements are taken, the standard deviation of these measurements provides a quantitative estimate of the uncertainty. If you measure the same quantity multiple times under the same conditions, the standard deviation gives you an idea of the variability caused by random errors.
Example from Everyday Life: Measuring Coffee Temperature
Let’s consider an everyday example to illustrate standard deviation and measurement uncertainty. Imagine you are measuring the temperature of a cup of coffee every minute for five minutes using a digital thermometer. Your measurements are:
- 60°C
- 62°C
- 61°C
- 63°C
- 60°C
To calculate the standard deviation, we first determine the mean temperature:
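$$\mu = \frac{60 + 62 + 61 + 63 + 60}{5} = \frac{306}{5} = 61.2\,\text{°C}$$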
Next, we calculate the variance by finding the squared differences from the mean, summing them, and dividing by the number of measurements:
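$$(60 - 61.2)^2 = 1.44,\quad (62 - 61.2)^2 = 0.64,\quad (61 - 61.2)^2 = 0.04,\quad (63 - 61.2)^2 = 3.24,\quad (60 - 61.2)^2 = 1.44$$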
Which leads to:
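$$\sigma^2 = \frac{1.44 + 0.64 + 0.04 + 3.24 + 1.44}{5} = \frac{6.8}{5} = 1.36$$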
The standard deviation is the square root of the variance:
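$$\sigma = \sqrt{1.36} \approx 1.17\,\text{°C}$$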
This standard deviation indicates that the temperature measurements typically vary by about 1.17°C from the mean. If you were assessing the consistency of the thermometer, a lower standard deviation would suggest that it provides more reliable measurements.
Standard Uncertainty and Its Connection to Standard Deviation
Standard uncertainty is a term used in metrology to express uncertainty. It is derived from the standard deviation and is often a key component in calculating total measurement uncertainty.
The standard uncertainty (u) of the mean of a set of measurements is directly linked to the standard deviation. For a normally distributed dataset, the standard uncertainty of the mean is the standard deviation divided by the square root of the number of measurements (n):
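$$u = \frac{s}{\sqrt{n}}$$

where s is the standard deviation of the repeated measurements.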
This relationship shows that the standard uncertainty decreases as the number of measurements increases. It highlights the importance of repeated measurements in obtaining reliable data. In essence, standard deviation provides an understanding of the spread of individual measurements. Standard uncertainty gives an estimate of the uncertainty associated with the mean value of those measurements.
Example: Blood Glucose Measurements
Suppose we repeat a glucose measurement five times under the same conditions. The readings in mmol/L are as follows:
- 5.1
- 5.3
- 5.0
- 5.2
- 5.1
First, calculate the mean:
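$$\bar{x} = \frac{5.1 + 5.3 + 5.0 + 5.2 + 5.1}{5} = \frac{25.7}{5} = 5.14\ \text{mmol/L}$$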
Next, find the variance:
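Using the sample formula with n − 1 = 4 in the denominator (these five readings are a sample of repeat measurements, as noted earlier):

$$s^2 = \frac{(5.1 - 5.14)^2 + (5.3 - 5.14)^2 + (5.0 - 5.14)^2 + (5.2 - 5.14)^2 + (5.1 - 5.14)^2}{5 - 1} = \frac{0.052}{4} = 0.013$$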
The standard deviation is:
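$$s = \sqrt{0.013} \approx 0.114\ \text{mmol/L}$$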
The standard uncertainty (u) of the mean based on the standard deviation is:
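$$u = \frac{s}{\sqrt{n}} = \frac{0.114}{\sqrt{5}} \approx 0.051\ \text{mmol/L}$$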
This standard uncertainty provides a quantifiable measure of how much the average blood glucose level is expected to vary due to measurement variability. Understanding this helps ensure accurate and reliable diagnostic information.
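To tie the pieces together, here is a minimal Python sketch of the same calculation (variable names are illustrative), using the sample standard deviation as above:

```python
import math
from statistics import mean, stdev

# The five repeat glucose readings from the example above, in mmol/L
readings = [5.1, 5.3, 5.0, 5.2, 5.1]

x_bar = mean(readings)            # mean, 5.14 mmol/L
s = stdev(readings)               # sample SD (n - 1 denominator), ~0.114 mmol/L
u = s / math.sqrt(len(readings))  # standard uncertainty of the mean, ~0.051 mmol/L

print(f"mean = {x_bar:.2f} mmol/L, s = {s:.3f} mmol/L, u = {u:.3f} mmol/L")
```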
For a full, in-depth explanation of the basic statistics used in the medical laboratory, watch the webinar below.
Historical Development of Measurement Uncertainty
The formalisation of measurement uncertainty is relatively recent in the history of science. While the concept of uncertainty has always been implicit in scientific measurements, its standardisation came with the establishment of the International Organization for Standardization (ISO) and the International Bureau of Weights and Measures (BIPM).
The “Guide to the Expression of Uncertainty in Measurement” (GUM), first published in 1993 by ISO and BIPM, was a major milestone. It provided a framework for evaluating and expressing uncertainty in measurement, standardising the use of terms like standard uncertainty. This guide has been instrumental in promoting consistency and reliability in scientific measurements worldwide. For the medical laboratory, ISO/TS 20914:2019 has given us a more accessible entry into the world of uncertainty, allowing us to use the so-called top-down approach based on QC and EQA data.
Importance of Understanding Standard Deviation and Uncertainty
In the quality control of laboratory assays, knowing the standard deviation helps in setting acceptable ranges for test performance. If the standard deviation is too high, it might indicate issues with the assay’s precision, prompting further investigation or adjustment. Similarly, understanding standard uncertainty allows scientists to report their findings with the appropriate level of confidence, which is critical in clinical decision-making.
Conclusion
Standard deviation and standard uncertainty are more than just statistical concepts; they are tools that enable scientists to measure and express the reliability of their data. The link between these two measures is fundamental to understanding and managing measurement uncertainty.
Want to learn more about the statistics of MU? Sign up for the course below!
If you enjoyed this post, explore more on our online learning platform. We offer comprehensive online courses designed to enhance your knowledge and skills. Click here to check out our courses and continue your learning journey with us! Below is a free course to get you started with descriptive statistics in Excel.