Uncertainty Dictionary


Absolute Uncertainty (Absolute Error)

The absolute uncertainty (usually called absolute error - but "error" connotes "mistake", and these are NOT mistakes) is the size of the range of values in which the "true value" of the measurement probably lies. If a measurement is given as 25.4 ± 0.1 cm, the absolute uncertainty is 0.1 cm. When adding or subtracting measurements, add their absolute uncertainties.
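
For instance (the lengths here are made up purely for illustration), adding the measurements 25.4 ± 0.1 cm and 10.2 ± 0.2 cm gives

(25.4 + 10.2) \pm (0.1 + 0.2)\ \text{cm} = 35.6 \pm 0.3\ \text{cm}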

(Is "absolute uncertainty" an oxymoron? I wonder...)


Accepted Value

In physics lab, you will often be called upon to measure a well-known constant value that has been previously measured many times to high precision (many more significant digits than you can expect in a beginning physics lab). These are quantities like the free-fall acceleration at the surface of the Earth, g, the universal gravitation constant, G, the charge on an electron, e, or the universal gas constant, R (and on and on...). These well-established values are called accepted values.


Accuracy

Accuracy is a measure of the degree to which two experimental results agree, or, more often, the degree to which an experimental result agrees with an accepted value. For instance, if the accepted value of g is 9.81 ± 0.02 m/s², an experimental result of g = 9.9 ± 0.3 m/s² is more accurate than the result g = 10.6 ± 0.03 m/s². Accuracy is often expressed as a percent of difference (or percent error). An inaccurate result is often due to systematic uncertainties in the experiment.


Approximation Uncertainty (Approximation Error)

Approximation uncertainties are limits to precision due to unavoidable approximations in the measuring process, such as estimating the location of the center of mass of a pendulum bob in order to measure the length of a simple pendulum.


Confidence Interval

See uncertainty interval.


Mean Value

The mean (average) value of a data set is often used as the best estimate of the measurement. If there are n measurements x_1, x_2, \ldots, x_n, then the mean, \bar{x}, is:

\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i


Percent of Difference (Percent of Error)

The percent of difference between two values is the ratio of their absolute difference to the magnitude of the "accepted value", expressed as a percent. It quantifies the accuracy of a measurement. In other words:

\text{percent of difference} = \frac{\left|\,\text{measured value} - \text{accepted value}\,\right|}{\left|\,\text{accepted value}\,\right|} \times 100\%

So, if your best estimate for the acceleration of gravity is 10.3 m/s², and you use 9.80 m/s² as your "accepted value", the percent of difference is:

\frac{\left|10.3 - 9.80\right|}{9.80} \times 100\% \approx 5.1\%


Precision

The precision of a measurement or a set of measurements expresses the amount of confidence you have in the "reproducibility" of the measurement(s). In other words, "high precision" means that you are very confident that an additional measurement would produce a value very close to the previous measurements. "Low precision" means that you have little confidence in your ability to predict the result of the next measurement. A high precision measurement expresses high confidence that the measurement lies within a narrow range of values. Physicists express the precision of a measurement by the number of significant digits that they use to write it, as well as the uncertainty interval they assign to the measurement. Random uncertainty in an experiment tends to lower the precision of the measurements the experiment generates.


Random Uncertainty (Random Error)

Random uncertainties are limits to measurement precision due to unavoidable inability to duplicate all conditions of an experiment exactly from run to run, or at different points within the same run.

Some sources consider random uncertainty to be a class of uncertainty (as opposed to systematic uncertainty) which includes scale uncertainties and approximation uncertainties.


Relative Uncertainty (Relative Error)

Relative uncertainty is the ratio of the absolute uncertainty of a measurement to the best estimate. It expresses the relative size of the uncertainty of a measurement (its precision).

\text{relative uncertainty} = \frac{\text{absolute uncertainty}}{\text{best estimate}}

Symbolically, if \Delta x is the absolute uncertainty in a measurement x, then the relative uncertainty in x, s_x, is:

s_x = \frac{\Delta x}{x}

For example, the measurement 25.4 ± 0.1 cm has a relative uncertainty of

s_x = \frac{0.1\ \text{cm}}{25.4\ \text{cm}} \approx 0.004\ \text{(about 0.4%)}


Scale Uncertainty (Scale Error)

Scale uncertainty is a limiting factor in the precision of a measurement. It is due to the fact that a measuring scale can have only finitely many divisions. It is reasonable (and expected) to estimate one digit between the finest markings on the scale (if possible).


Significant Digits

When a physicist writes down a measurement, the number of digits she writes indicates the precision of the measurement. Thus, 1 meter, 1.0 meter, and 1.00 meters are different measurements, since they contain different numbers of significant digits. (See rules for counting significant digits).


Standard Deviation

The standard deviation of a data set measures the "grouping" or "clustering" of the data. A small standard deviation means that the data is closely spaced about the mean, while a large standard deviation means that the data is widely dispersed.
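
One common way to calculate the standard deviation, s, of n measurements (this is the "sample" form with n − 1 in the denominator; the page itself does not say which convention it assumes) is:

s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}

where \bar{x} is the mean of the data set.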


Standard Deviation of the Mean (SDOM)

A statistical measure of the uncertainty of the mean value of a data set. There is about a 2/3 probability that the "true value" of a measurement will lie within one SDOM of the mean value, and about a 95% probability that the "true value" will lie within 2 SDOMs of the mean.

\text{SDOM} = \frac{s}{\sqrt{n}}

where s is the standard deviation of the data set and n is the number of measurements.

In our class, we will use the standard deviation of the mean as the uncertainty interval for the mean value of a set of measurements.
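
As a rough sketch of how these quantities might be computed (the data values below are invented purely for illustration; Python's statistics.stdev uses the n − 1 sample convention):

import math
import statistics

# Five hypothetical measurements of g in m/s^2, invented purely for illustration.
data = [9.7, 9.9, 9.8, 10.0, 9.8]

n = len(data)
mean = statistics.mean(data)   # best estimate of the measurement
s = statistics.stdev(data)     # sample standard deviation (n - 1 convention)
sdom = s / math.sqrt(n)        # standard deviation of the mean

print(f"mean = {mean:.2f} m/s^2")
print(f"standard deviation = {s:.2f} m/s^2")
print(f"SDOM = {sdom:.3f} m/s^2")

The result would then be reported as mean ± SDOM.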


Systematic Uncertainty (Systematic Error)

Systematic uncertainties are limits to accuracy due to some aspect of the experiment that causes the experimental results to be consistently too high or too low. Examples would include parallax errors in reading scales, meter sticks too long or too short, meters that read consistently too high or too low, excessive friction, etc. Systematic uncertainties in an experiment often lead to accuracy problems.

If a systematic uncertainty can be identified, it is often possible to correct for it, but systematic errors are often very difficult to uncover.


T-Score

The t-score of a measurement is equal to the number of standard deviations between that score and the mean value.

t = \frac{x - \bar{x}}{s}

where x is the measurement, \bar{x} is the mean of the data set, and s is its standard deviation.
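
As a quick illustration (the numbers are invented for the example): if a data set has mean \bar{x} = 9.8 m/s² and standard deviation s = 0.2 m/s², then a measurement of x = 10.2 m/s² has

t = \frac{10.2 - 9.8}{0.2} = 2.0

that is, it lies two standard deviations above the mean.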


Uncertainty Interval (also called Confidence Interval or Error Interval)

An uncertainty interval (or error interval) is the "plus/minus" part of a measurement. It is used, along with significant digits, to indicate the precision of a measurement. Thus, in the two-significant-digit measurement 5.4 ± 0.2 meters, the uncertainty interval is 0.2 meters. This means that the "true value" of the measurement is believed to lie between 5.2 meters and 5.6 meters.

In our class, we will use the standard deviation of the mean as the uncertainty interval for the mean value of a set of measurements.

 

last update February 2, 2008 by JL Stanbrough