[Data table: five measurements of the length of the pendulum; mean = 25.43 cm]
Which of the five measurements above should I use to represent the length of my pendulum? Clearly, the best thing to do is calculate the average (mean) of the data values and use that average as the best estimate of the length of the pendulum: simply add up the data values and divide by how many there are. Fortunately, this calculation is handled easily by just about any calculator or spreadsheet. In the table above, the mean value has been calculated as 25.43 cm.
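Since the original data table did not survive in this copy, here is a quick sketch of the mean calculation in Python. The five readings below are hypothetical, invented only to match the description in the text (a mean of 25.43 cm):

```python
# Hypothetical data: five length readings chosen to reproduce the
# mean of 25.43 cm described in the text (not the article's real data).
measurements = [25.36, 25.40, 25.42, 25.45, 25.50]  # cm

# The mean: add up the data values and divide by how many there are.
mean_length = sum(measurements) / len(measurements)
print(f"mean length = {mean_length:.2f} cm")  # mean length = 25.43 cm
```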
OK, you have decided that the length of the pendulum is most likely about 25.43 cm, but how precise is this measurement?
You could plot these values on a number line to determine the range of likely values, like this:

[Number-line plot of the five length measurements, clustered around the mean]

Looking at this graph, you can easily determine that all of the measurements fall within 0.07 cm of the mean value. Therefore, if you were "on automatic pilot", you might conclude that the best estimate of the length is 25.43 cm with an uncertainty of 0.07 cm, or:

length = 25.43 ± 0.07 cm
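The "automatic pilot" interval is just the largest deviation of any reading from the mean. Using the same hypothetical readings as before (the article's real table was not preserved), that half-width can be computed directly:

```python
# Hypothetical readings consistent with the description in the text
# (mean 25.43 cm, every value within 0.07 cm of the mean).
measurements = [25.36, 25.40, 25.42, 25.45, 25.50]  # cm
mean_length = sum(measurements) / len(measurements)

# Half-width of the smallest interval about the mean that covers all the data:
max_deviation = max(abs(x - mean_length) for x in measurements)
print(f"{mean_length:.2f} +/- {max_deviation:.2f} cm")  # 25.43 +/- 0.07 cm
```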
Actually, I wouldn't do that. Here is a general principle:
It is illogical to be extremely precise about uncertainty.
Look at the data table again. Notice that the actual uncertainty in the length is in the tenths place: it varies from 3 to 5. I think that it makes more sense to express the length as 25.4 ± 0.1 cm. The 0.1 cm specifies the "uncertainty interval" (or "confidence interval", if you are an optimist) for this measurement.
Mistakes happen. Look again at the graph of the distribution of length measurements above. That 25.50 cm measurement jumps out as kind of a misfit, doesn't it? In fact, if we could ignore it, the pendulum length could be stated with a considerably smaller uncertainty interval. This is a much more precise measurement!
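With the same hypothetical data as in the earlier sketches, setting the suspect 25.50 cm reading aside and recomputing shows how much the interval tightens:

```python
# Same hypothetical readings as before; 25.50 cm is the suspect value.
measurements = [25.36, 25.40, 25.42, 25.45, 25.50]  # cm
kept = [x for x in measurements if x != 25.50]      # set the outlier aside

# Recompute the mean and the covering half-width without it.
mean_length = sum(kept) / len(kept)
max_deviation = max(abs(x - mean_length) for x in kept)
print(f"{mean_length:.4f} +/- {max_deviation:.4f} cm")
```

With these made-up numbers the result is roughly 25.41 ± 0.05 cm, noticeably tighter than 25.43 ± 0.07 cm; the exact figures for the article's real data will of course differ.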
Can the 25.50 cm measurement be ignored? Well, it depends. If it is a mistake (if somebody or something messed up), then the answer is yes, you can ignore it. (But do NOT erase it! It really happened, and it might not actually be a mistake...)
How can you tell if this value is a mistake? Well, more data might help. If the 25.50 cm measurement continues to stick out "on the fringe" of things, it could very well be a mistake. If additional data make it look more "at home", maybe it isn't. It is also possible that more data won't resolve the issue. There is no "answer book" in which you can look up the "correct answer". It's a judgment call. Science is not nearly as "clear cut" as outsiders seem to think. You need to make your decision and be ready to discuss and defend it.
Certainly, you should not exclude data just because doing so gives you better results! On the other hand, if you can make a strong case that a value was a mistake (for instance, if you can identify the mistake: "Johnny tripped and smashed into the apparatus as the ball was falling."), then you have solid grounds for excluding the data. You should realize, however, that many scientists believe that data should NEVER, NEVER, NEVER be discarded. They say, "If you think that a data value is a mistake, prove it by taking more data!" Given sufficient data, one "wild" value won't disturb your results to any significant degree.
Shouldn't every measurement fall inside the uncertainty interval? No. That would make life simple (and simple is good), but physicists do not expect every single measurement taken in an experiment to fall within the uncertainty interval that they specify. Unusual things happen, and (particularly if you take a lot of data) some of your data will probably reflect unusual situations. There shouldn't be too many of these unusual situations (or they wouldn't be unusual, now would they?), but they are bound to happen. The uncertainty interval should reflect the range in which we can reasonably expect a "reasonable" value to fall.
So then, how many of the measurements do I put inside the uncertainty interval; what counts as "reasonable"? Well, it depends (of course!). Some physicists construct their uncertainty intervals so that they contain about 2/3 of their data values (so that there is about a 67% probability that a random trial will produce a measurement in the interval), and some place 95% of their data inside the confidence interval. You just need to specify what you are doing.
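The "2/3 of the data" choice is closely related to the sample standard deviation: for roughly bell-shaped data, the interval from one standard deviation below the mean to one above contains about 68% of the values. A sketch with the same hypothetical readings used earlier:

```python
import statistics

# Hypothetical readings again (the article's real data were not preserved).
measurements = [25.36, 25.40, 25.42, 25.45, 25.50]  # cm
mean_length = statistics.mean(measurements)
s = statistics.stdev(measurements)  # sample standard deviation

# Count how many readings land inside the interval mean +/- s.
inside = [x for x in measurements if abs(x - mean_length) <= s]
print(f"s = {s:.3f} cm; {len(inside)} of {len(measurements)} readings inside mean +/- s")
```

With these made-up numbers, 3 of the 5 readings (60%, close to the 2/3 target) land inside the one-standard-deviation interval.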
Hold on a minute! This is all very nice, but it seems unusually complicated. Haven't the mathematicians developed any tools that the poor old experimental physicist can use to analyze experimental uncertainty? Well, yes they have. Read on!