Metrology Musings: Calibration vs. Standardization

By Robert Lutz, AASHTO re:source Director

Posted: October 2010

"Absolute certainty is a privilege of uneducated minds and fanatics. It is for scientific folk an unattainable ideal."

Cassius J. Keyser, 1862 - 1947

I promised in the inaugural article of Metrology Musings that I'd discuss metrology topics such as calibration, traceability, and measurement uncertainty.  Let's use these terms to compare and contrast two of the most common, and most commonly misunderstood, concepts: calibration and standardization.  These terms are prevalent in many testing standards, as well as in AASHTO R 18, Establishing and Implementing a Quality Management System for Construction Materials Testing Laboratories, so it's important to understand the implications of each.

We first need to define and discuss the associated terms.  At the core of every metrology discussion is metrological traceability, defined by the International Vocabulary of Metrology - Basic and General Concepts and Associated Terms, also known as VIM, as the

"property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty."

Traceability is vital because it allows us to compare our measurement to the previous one, to a result from a year ago, or to a measurement made by another laboratory anywhere else in the world.  Traceability gives validity to our measurement results.  That's what the testing business is all about, right?
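To make that "unbroken chain" idea concrete, here's a minimal sketch (in Python, with invented uncertainty values) of how each calibration link contributes to the uncertainty that a working instrument ultimately inherits.  Treating the links as independent and combining them in quadrature is a common simplification, not a prescription:

```python
import math

# Hypothetical traceability chain for a working thermometer.  Each link
# is (description, standard uncertainty in °C contributed by that
# calibration step).  All values here are invented for illustration.
chain = [
    ("national standard vs. SI realization", 0.002),
    ("reference thermometer vs. national standard", 0.010),
    ("working thermometer vs. reference thermometer", 0.050),
]

# Independent contributions combine in quadrature (root-sum-of-squares),
# so the uncertainty can only grow as you move down the chain.
cumulative = 0.0
for name, u in chain:
    cumulative = math.hypot(cumulative, u)
    print(f"{name}: link u = {u:.3f} °C, cumulative u = {cumulative:.3f} °C")
```

Note how the last link dominates: a laboratory's working instrument can never be less uncertain than the chain above it.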

As the definition states, traceability is obtained through calibration.  I've often heard calibration described as simply the comparison of a measuring instrument's results to a reference standard, with a correction applied to eliminate any offset.  This definition is missing a few pieces, so to get a handle on this let's look at the "official" definition of calibration - it's quite intimidating but don't let it scare you off.

VIM defines calibration this way:

"an operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication."

So what is calibration, really?  Let's translate this a bit...

1) Calibration is a Comparison
It's a process that compares the values indicated by a measuring instrument or system to the values assigned to a measurement standard.  The measurement standard must be calibrated and have a calibration certificate that indicates its measurement uncertainty.  Even the best measurement standard isn't perfect.

2) Conducted by Following a Process
The process should be conducted under specified conditions (temperature, humidity, etc.) and following a documented procedure.  Detailed instructions can improve the results.

3) Which Corrects for Known Systematic Error (Bias)
For example, the comparison process indicates that the working thermometer reads 25.3°C when the measurement standard thermometer reads 25.0°C.  This equipment bias should be offset by applying a correction of -0.3°C.
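As a quick illustration (a Python sketch using the numbers above; the variable names are mine), the correction is simply the standard's value minus the instrument's indication, applied to later raw readings:

```python
# Comparison readings from the example above (°C).
standard_reading = 25.0    # calibrated measurement standard
instrument_reading = 25.3  # working thermometer under test

# Correction = standard value - instrument indication.
correction = standard_reading - instrument_reading   # -0.3 °C

# Applying the correction to a later raw reading removes the known bias.
raw = 25.3
corrected = raw + correction
print(f"correction = {correction:+.1f} °C, corrected reading = {corrected:.1f} °C")
```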

4) Which Estimates the Measurement Uncertainty
Think of measurement uncertainty as the doubt that exists (and should exist) about the result of any measurement.  VIM defines measurement uncertainty as the "parameter characterizing the dispersion of the (quantity) values."  No measurement is perfect and some dispersion, some uncertainty, will always exist.

Calculating measurement uncertainty starts with identifying the sources of uncertainty in a measurement.  These can include, but are not limited to:

  • The uncertainty of the measurement standard is carried over or imported into the measurements you make.
  • The measuring instrument itself can suffer from bias (see above), poor repeatability, changes due to aging and wear, poor resolution, noise, etc.
  • The measurement process may be difficult.  See #2 above.  Improved instructions can help.
  • The environment - changes in temperature, humidity, pressure, etc. may affect your measuring instrument and your measurement standard.
  • The resolution of the measuring instrument.  Is 25.3°C really 25.27?  25.32?
  • The skill of the operator.  Does everybody read the thermometer at the same level?  Does everybody estimate the same way?  Is everybody's reaction time the same?  Of course not.

Measurement uncertainty is usually expressed as expanded uncertainty, an interval at a certain confidence level, such as ± 0.15°C at a confidence level of 95%.
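To see how the pieces fit together, here's a toy uncertainty budget in Python.  The input values are invented, and the treatment (a rectangular distribution for resolution, root-sum-of-squares combination of independent sources, coverage factor k = 2 for roughly 95% confidence) follows common GUM-style practice rather than any particular laboratory's procedure:

```python
import math

# Toy uncertainty budget for a thermometer measurement (all values in °C
# and purely illustrative).  A rectangular (Type B) contribution is
# converted to a standard uncertainty by dividing its half-width by sqrt(3).
u_standard   = 0.05                      # from the standard's calibration certificate
u_repeat     = 0.04                      # repeatability (std. dev. of repeated readings)
u_resolution = (0.1 / 2) / math.sqrt(3)  # half of a 0.1 °C display resolution

# Combined standard uncertainty: root-sum-of-squares of independent sources.
u_c = math.sqrt(u_standard**2 + u_repeat**2 + u_resolution**2)

# Expanded uncertainty with coverage factor k = 2 (~95% confidence).
U = 2 * u_c
print(f"u_c = {u_c:.3f} °C, U (k=2) = ±{U:.2f} °C")
```

With these made-up inputs the budget yields roughly ± 0.14°C, in the same neighborhood as the ± 0.15°C example above.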

Estimating measurement uncertainty can be a daunting process and should, therefore, probably be left to calibration laboratories that are accredited to ISO/IEC 17025.

5) And Evaluates "Fitness for Purpose"
Although this is not included in the VIM definition, an evaluation of the "fitness for purpose" should be conducted by comparing the expanded measurement uncertainty to the required test tolerances.  For example, if the temperature of a viscosity bath must be maintained at 135 ± 0.05°C and the expanded uncertainty of the thermometer measurements is ± 0.18°C, then you cannot be confident that the temperature of the bath is being maintained between 134.95°C and 135.05°C.  This thermometer may be acceptable for other tests but is not "fit for purpose" in this particular application.
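In code, the fitness-for-purpose check from this example reduces to a one-line comparison (a sketch using the numbers above; the deliberately strict criterion and the ratio check are illustrative conventions, not a requirement from any standard):

```python
# Fitness-for-purpose check from the viscosity-bath example above.
tolerance = 0.05   # required: bath held at 135 ± 0.05 °C
U = 0.18           # expanded uncertainty of the thermometer (k=2), in °C

# Simple criterion: the expanded uncertainty must fit inside the tolerance.
# Many labs instead look at a test uncertainty ratio (tolerance / U).
fit_for_purpose = U <= tolerance
print(f"U = ±{U} °C vs. tolerance ±{tolerance} °C "
      f"-> fit for purpose: {fit_for_purpose}")
```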

That's calibration... so what's standardization?
Standardization is very similar to calibration - identical, up to a point - but does not include an estimation of measurement uncertainty.  Simply put, standardization is steps 1 through 3 above.  Standardizations can be easily performed by most testing laboratories because the step of estimating measurement uncertainty is not required.  Standardization, rather than calibration, is specified in situations where the measurement's influence on a test result is small and where it is unlikely that the measurement uncertainty, if it were calculated, would exceed the tolerance requirement.  For example, the temperature of an aggregate drying oven is required to be maintained at 110 ± 5°C.  Changes in temperature will not greatly affect gradation test results AND the probability that the measurement uncertainty would exceed 5°C is low.  There are many situations when calibration - with all it demands - is not necessary, so the concept of standardization allows for a more appropriate activity.
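Here's a minimal sketch, with invented numbers, of what a standardization record for that oven might capture - a comparison under specified conditions, a recorded correction, and a tolerance check, with no uncertainty estimate:

```python
# Standardization of an aggregate drying oven (steps 1 through 3 only;
# no measurement-uncertainty estimate).  All numbers are illustrative.
setpoint         = 110.0   # required oven temperature (°C)
tolerance        = 5.0     # allowed: 110 ± 5 °C
standard_reading = 111.2   # calibrated reference thermometer (°C)
oven_indicator   = 112.0   # oven's built-in display (°C)

# Steps 1-2: compare the display to the standard; step 3: record the
# correction to apply to the built-in display going forward.
correction = standard_reading - oven_indicator   # -0.8 °C

# Confirm the oven, per the reference thermometer, is within tolerance.
in_tolerance = abs(standard_reading - setpoint) <= tolerance
print(f"display correction = {correction:+.1f} °C, "
      f"within 110 ± {tolerance:.0f} °C: {in_tolerance}")
```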

Now for my next promise.  The next Metrology Musings article will dig deeper into measurement uncertainty, including how to estimate it, what factors influence it, and how to interpret measurement uncertainty estimates on calibration certificates.  That's for certain.
