"By a comparison of the results of accurate measurements with the numerical predictions of the theory, we can gain considerable confidence that the theory is correct, and we can determine in what respects it needs to be modified. It is often possible to explain a phenomenon in several rough qualitative ways, and if we are content with that, it may be impossible to decide which theory is correct. But if a theory can be given which predicts correctly the results of measurements to four or five (or even two or three) significant figures, the theory can hardly be very far wrong. Rough agreement might be a coincidence, but close agreement is unlikely to be. Furthermore, there have been many cases in the history of science when small but significant discrepancies between theory and accurate measurements have led to the development of new and more far-reaching theories. Such slight discrepancies would not even have been detected if we had been content with a merely qualitative explanation of the phenomena." - Keith R. Symon, Mechanics, Second Edition, 1964
One of the things that scientists do is make predictions - predictions based on their hypotheses, laws, and theories. The test of a prediction is whether it works in the "real world" - do the results of experiments match the theoretical prediction? If the results don't match (and these results are confirmed by other competent scientists) then the hypothesis that generated the prediction must be modified or abandoned. The ultimate authority in science is nature - not "what it says in the book".
You may not have thought about it, but when you solve a "physics problem" in a text book you are making a theoretical prediction. When you calculate that a car should skid 20 meters in some situation, the real test of the correctness of your result is whether a real car would really skid 20 meters in that situation - not "what the book says" in the "Answer Section."
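The skid-distance prediction mentioned above can be sketched numerically. This is a hypothetical illustration, not a calculation from the text: it assumes the standard constant-deceleration result d = v² / (2·μ·g), and the speed and friction coefficient are made-up values chosen to give roughly the 20-meter figure.

```python
# Hypothetical skid-distance prediction, assuming uniform deceleration
# under kinetic friction: d = v**2 / (2 * mu * g).
MU = 0.7   # assumed coefficient of kinetic friction (illustrative value)
G = 9.8    # acceleration of gravity, m/s^2

def skid_distance(speed, mu=MU, g=G):
    """Predicted stopping distance in meters for a car skidding from `speed` (m/s)."""
    return speed**2 / (2 * mu * g)

print(f"{skid_distance(16.6):.1f} m")  # about 20 m for these assumed values
```

Whether a real car actually skids that far is, as the text says, the only genuine test of the prediction.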
Physics is a quantitative science. Physicists deal in numbers - but not just the numbers of the mathematician. This is an important point that is often missed by beginning physicists. Physicists' numbers are often (or could be) measurements, not the pure numbers of the mathematician.
Therefore, physicists measure things. Measurement is very important in physics - physicists are serious about measurement. One of the major contributions of physics to other sciences and to society is the many measuring devices and techniques that physics has developed. In "everyday life," we pick up a ruler and measure something without giving it much thought. Physicists think about their measurements, and need to have a much more sophisticated understanding of the measurement process than "normal" people do.
Beginning physicists often get a very distorted view of all of this. You may remember doing an experiment, like determining the acceleration of gravity. The acceleration of gravity is "supposed to be" 9.8 m/s² - everybody knows that. Your "answer" came out 10.3 m/s², so your experiment "didn't work" - you were "in error" - perhaps you even calculated your "percent of error." Many beginning physicists are burdened by the following misconceptions (that we will try to remedy in the pages to follow):
So, this unit begins with a brief introduction to the four types of numbers that an experimental physicist needs to deal with, followed by an extensive discussion of the measurement process - what precision is, why it is a concern, and how to deal with it in measurements and calculations. Then there is a discussion of accuracy, and finally, straight answers to the question "Ok, so how do I actually analyze this experiment, anyway?"
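The "percent of error" calculation mentioned above, using the gravity example from the text (10.3 m/s² measured against the accepted 9.8 m/s²), can be sketched as follows. The function name is illustrative; the formula is the usual unsigned |measured − accepted| / accepted × 100 definition.

```python
# Percent error of a measured value relative to an accepted value,
# using the usual |measured - accepted| / |accepted| * 100 definition.
def percent_error(measured, accepted):
    """Return the unsigned percent difference from the accepted value."""
    return abs(measured - accepted) / abs(accepted) * 100.0

measured_g = 10.3   # m/s^2, the "answer" from the experiment
accepted_g = 9.8    # m/s^2, the textbook value

print(f"{percent_error(measured_g, accepted_g):.1f}%")  # prints "5.1%"
```

As the pages to follow argue, a number like this says little by itself; the interesting question is whether the discrepancy is significant compared to the precision of the measurement.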