It would appear that I pressed publish rather than save as draft today – I have now updated the post to actually cover the headings previously released – oops!

A common question is how we should combine the results of individual standard uncertainty contributors.

The simple (?) answer is that there are a few methods we can use, and which one you choose depends on what you are trying to achieve.

The two methods, the root mean square (RMS) and the root sum of squares (RSS), are commonly used techniques. They have similar-sounding names but in reality have quite different applications. Here we will compare and contrast the two.

a little background…

Both techniques involve taking a square root. I will assume that if you have stumbled upon this site (or been bullied into looking at it by having the misfortune to bump into me!) you will have a basic understanding of some simple mathematical concepts. If not, it's nothing that a “quick google” can’t fix.


What we are talking about is how to combine our measures of variability. The two methods above (along with the pooled standard deviation, which will follow in the next article) are the most useful tools we have in our armoury for calculating uncertainty.

In general….

For both of these techniques we are talking about two closely related calculations that are being used to estimate how something we are measuring varies around an expected value or average value for a series of experimentally determined measurements.

It is here that the similarity ends, though. The two methods are trying to answer different questions, and when thinking about how we calculate the uncertainty of measurement in our laboratories they are not interchangeable.

The Root Mean Square

We approach the Root Mean Square (RMS) first. This tells us what we can expect the error to be in each of the measurements we have taken.


From the equation above you can see we simply square each measurement result and add them together. The average is calculated by dividing by the number of measurements (N). Finally, the square root is taken to provide the RMS.
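The calculation above can be sketched in a few lines of code. This is a minimal illustration, not the blog's original worked example; the values are made up purely to show the square-average-root sequence:

```python
import math

# Hypothetical repeatability data: deviations of five repeat measurements
# from the expected value (values are illustrative only).
errors = [0.12, -0.08, 0.15, -0.11, 0.05]

# Square each result, average by dividing by N, then take the square root.
rms = math.sqrt(sum(e ** 2 for e in errors) / len(errors))
```

Note that the division by N is what distinguishes this from the root sum of squares discussed below.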

This is one method by which we can determine our standard uncertainty from a repeatability experiment (Type A analysis). In fact, the standard deviation that we usually use is a form of root mean square: it is the RMS of the deviations from the mean, although it divides by N − 1 rather than N. Also, some quality management systems advocate the use of the RMS in uncertainty estimation, and as such many automated analysers available in pathology laboratories will provide the RMS as part of their statistical packages.

The Root Sum of Squares

The root sum of squares is the method by which we combine the standard uncertainties of more than one contributor to provide our overall combined uncertainty. It is not influenced by the number of measurements we take to determine each standard uncertainty, and there is no division by the number of measurements involved.
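As a quick sketch of the combination step (the contributor names and values here are assumptions for illustration, not from a real uncertainty budget):

```python
import math

# Illustrative standard uncertainties for three uncorrelated contributors,
# e.g. repeatability, calibration and pipetting.
u_contributors = [0.10, 0.05, 0.02]

# Root sum of squares: square each, add them, take the square root.
# Note there is no division by the number of contributors.
u_combined = math.sqrt(sum(u ** 2 for u in u_contributors))
```

The combined uncertainty is always at least as large as the biggest single contributor, which is one reason dominant contributors matter so much in a budget.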

This works well provided the contributors we intend to combine are not correlated (either positively or negatively). The issue of correlation and the impact it has is the subject of the latest article published in Pathology in Practice this month (February 2018). I of course encourage you to read that to see the impact that correlation of contributors has, and how it affects the root sum of squares calculation.


So that concludes the quick compare and contrast of two of the commonly encountered terms in uncertainty analysis. Although they sound similar, the mechanics of calculating them differ significantly, and so do the parts of the uncertainty process we apply them to.

If you found this post useful, there is more related content available, including free video courses on laboratory statistics.