What Is Measurement
Measurement is the process of comparing an unknown quantity with a known, predefined standard of that parameter. For instance, if we have to measure the temperature of a body, we measure it with a thermometer that has a predefined scale indicating different values of temperature. If we have to measure the length of a wall, we measure it with a measuring tape that has predefined markings on it. Measurement thus enables us to obtain the magnitude of parameters whose value is not known by comparing them with standards whose value is predefined.
For measurement results to be accurate, two conditions should be met. First, the standard used for comparison must be defined accurately and must be universally accepted. For instance, a weight cannot simply be called light or heavy; it is light or heavy in comparison to some standard weight and should be measured accurately against it. The comparison of the unknown magnitude must be made against a recognized standard and must produce a meaningful reading of the value. The second condition is that the procedure and the instruments applied for the measurement must be provable, i.e. the methods of making the measurements and the instruments used for them should be reliable enough to yield correct measurements.
Methods of Measurement
Measurement of any quantity involves two parameters: the magnitude of the value and the unit of measurement. For instance, if we measure a temperature as 10 °C, the value 10 is the magnitude and C, which stands for Celsius, is the unit of measurement. Similarly, we can say the height of a wall is 5 meters, where 5 is the magnitude and meters is the unit of measurement. There are two methods of measurement: 1) direct comparison with the standard, and 2) indirect comparison with the standard. Both methods are discussed below.
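This two-part structure of a measurement result can be made concrete in a minimal Python sketch; the Measurement class below is purely illustrative and not part of any standard library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measurement:
    """A measurement result: a magnitude paired with a unit of measurement."""
    magnitude: float
    unit: str

    def __str__(self) -> str:
        return f"{self.magnitude} {self.unit}"

# The two examples from the text: a temperature and a wall height.
temperature = Measurement(10, "degC")
height = Measurement(5, "m")
print(temperature)  # 10 degC
print(height)       # 5 m
```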
In the direct comparison method of measurement, we compare the quantity directly with a primary or secondary standard. For instance, if we have to measure the length of a bar, we measure it with a measuring tape or scale that acts as the secondary standard: the quantity to be measured is compared directly with the standard.
Even when the comparison is made directly with a secondary standard, it is not necessary to know the primary standard. Primary standards are the original standards made from certain standard values or formulas; secondary standards are derived from the primary standards. Most of the time we use secondary standards for comparison, since it is not always feasible to use primary standards from the accuracy, reliability and cost points of view. There is no difference in the measured value of the quantity whether the direct comparison is made with a primary or a secondary standard.
The direct comparison method of measurement is not always accurate. In the above example of measuring length, there is a limit to the accuracy with which our eye can read the scale, which is about 0.01 inch. Here the error occurs not because of any error in the standard, but because of human limitations in taking the reading. Similarly, when we measure the mass of a body by comparing it with a standard, it is very difficult to say that the two bodies are of exactly the same mass; some difference between the two, however small, is bound to occur. Thus, in the direct method of measurement there is always some difference, however small, between the actual value of the quantity and the measured value.
There are a number of quantities that cannot be measured directly with an instrument. For instance, we cannot directly measure the strain induced in a bar by an applied force, and we may have to record the temperature and pressure deep underground or at far-off, remote places. In such cases, indirect methods of measurement are used. In the indirect method, a transducing device, called a transducer, is coupled to a chain of connecting apparatus that forms part of the measuring system. In this system the quantity to be measured (input) is converted by the transducer into some other measurable quantity (output). The transducer is chosen such that the input and the output are proportional to each other. The readings obtained from the transducer are calibrated according to the relation between input and output, so that the reading obtained from the transducer represents the actual value of the quantity to be measured. Such a conversion is often necessary to make the desired information intelligible. The indirect method of measurement comprises a system that senses, converts, and finally presents an analogous output in the form of a displacement or a chart. This analogous output can take various forms, and it is often necessary to amplify it so that the quantity to be measured can be read accurately. The majority of transducers convert a mechanical input into an analogous electrical output for processing, though there are transducers that convert a mechanical input into an analogous mechanical output that is easily measured.
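A minimal sketch of this input-output relation, assuming a hypothetical proportional transducer; the sensitivity value and the strain figures below are invented for illustration.

```python
# Indirect measurement: the measurand (input) is converted by a transducer
# into a proportional electrical output, and the calibration relation is
# inverted to recover the value of the measurand.

SENSITIVITY_V_PER_UNIT = 0.02  # assumed calibration factor: volts per unit of input

def transducer_output(measurand: float) -> float:
    """Idealized proportional transducer: output voltage = K * input."""
    return SENSITIVITY_V_PER_UNIT * measurand

def indicated_value(output_voltage: float) -> float:
    """Invert the calibration relation to recover the measurand."""
    return output_voltage / SENSITIVITY_V_PER_UNIT

true_strain = 350.0                       # e.g. microstrain in a loaded bar
voltage = transducer_output(true_strain)  # what the measuring chain senses
print(f"output = {voltage:.2f} V, indicated value = {indicated_value(voltage):.1f}")
```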
CLASSIFICATION OF INSTRUMENTS
Instruments may be classified according to their application, mode of operation, manner of energy conversion, nature of output signal and so on. The instruments commonly used in practice may be broadly categorized as follows.

Deflection and Null Types
A deflection type instrument is one in which the physical effect generated by the quantity being measured produces an equivalent opposing effect in some part of the instrument, which in turn is closely related to some variable such as a mechanical displacement or deflection in the instrument. For example, the unknown weight of an object can easily be obtained from the deflection of a spring it causes on a spring balance, as shown in Fig. 5.1. Similarly, in a common Bourdon gauge, the pressure to be measured acts on the C-type spring of the gauge, which deflects and produces an internal spring force to counterbalance the force generated by the applied pressure.
Fig. 5.1 A typical spring balance: a deflection-type weight-measuring instrument

A null type instrument is one provided with either a manually operated or an automatic balancing device that generates an equivalent opposing effect to nullify the physical effect caused by the quantity to be measured. The equivalent null-causing effect in turn provides the measure of the quantity. Consider the simple situation of measuring the mass of an object by means of an equal-arm beam balance. An unknown mass, when placed in the pan, causes the beam and pointer to deflect. Masses of known values are placed on the other pan till a balanced, or null, condition is indicated by the pointer. The main advantage of null-type devices is that they do not interfere with the state of the measured quantity, and thus such instruments are extremely accurate.
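The two operating principles can be contrasted in a short Python sketch; the spring constant and the set of standard masses below are assumptions made for illustration, not values from the text.

```python
# Deflection type: the unknown weight is read from the spring deflection
# via Hooke's law, W = k * x.
SPRING_CONSTANT = 200.0  # N/m, assumed for illustration

def weight_from_deflection(deflection_m: float) -> float:
    return SPRING_CONSTANT * deflection_m

# Null type: known masses are added to the second pan until the beam
# balances; the sum of the placed masses then *is* the measurement.
def null_balance(unknown_mass_g: int, standard_masses_g: list[int]) -> int:
    placed = 0
    for m in sorted(standard_masses_g, reverse=True):
        if placed + m <= unknown_mass_g:  # add a mass only if it does not overshoot
            placed += m
    return placed

print(f"{weight_from_deflection(0.049):.1f} N")               # deflection reading -> 9.8 N
print(null_balance(370, [200, 100, 50, 20, 10, 5, 2, 2, 1]))  # -> 370 (g)
```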
Manually Operated and Automatic Types
Any instrument that requires the services of a human operator is a manual type of instrument. The instrument becomes automatic if the manual operation is replaced by an auxiliary device incorporated in the instrument. An automatic instrument is usually preferred because its dynamic response is fast and its operational cost is considerably lower than that of the corresponding manually operated instrument.

Analog and Digital Types
Analog instruments are those that present the physical variable of interest in the form of continuous or stepless variations with respect to time. These instruments usually consist of simple functional elements; consequently, the majority of present-day instruments are of the analog type, as they generally cost less and are easy to maintain and repair. Digital instruments, on the other hand, are those in which the physical variable is represented by digital quantities that are discrete and vary in steps. Further, each digital number is a fixed sum of equal steps that is defined by that number. The relationship of the digital outputs with respect to time gives information about the magnitude and the nature of the input data.
Self-Generating and Power-Operated Types
In self-generating (or passive) instruments, the energy requirements of the instrument are met entirely from the input signal. On the other hand, power-operated (or active) instruments are those that require some source of auxiliary power, such as compressed air, electricity or hydraulic supply, for their operation.

Contacting and Non-Contacting Types
A contacting type of instrument is one that is kept in the measuring medium itself; a clinical thermometer is an example of such instruments. On the other hand, there are instruments of the non-contacting or proximity type, which measure the desired input even though they are not in close contact with the measuring medium. For example, an optical pyrometer monitors the temperature of, say, a blast furnace while kept out of contact with it. Similarly, a variable reluctance tachometer, which measures the rpm of a rotating body, is also a proximity type of instrument.

Dumb and Intelligent Types
A dumb or conventional instrument is one in which the input variable is measured and displayed, but the data is processed by the observer. For example, a Bourdon pressure gauge is termed a dumb instrument because, though it can measure and display a car tyre pressure, the observer has to judge whether the tyre inflation pressure is sufficient or not. Currently, the advent of microprocessors has provided the means of incorporating Artificial Intelligence (AI) into a very large number of instruments. Intelligent or smart instruments process the data in conjunction with a microprocessor (μP) or an on-line digital computer to provide assistance in noise reduction, automatic calibration, drift correction, gain adjustment, etc. In addition, they are quite often equipped with diagnostic subroutines that generate suitable alarms in case of any type of malfunctioning. An intelligent or smart instrument may include some or all of the following (a toy sketch of such a signal chain follows the list):
1. The output of the transducer in electrical form.
2. The transducer output converted to digital form by means of an analog-to-digital (A-D) converter.
3. An interface with the digital computer.
4. Software routines for noise reduction, error estimation, self-calibration, gain adjustment, etc.
5. Software routines for the output driver, for suitable digital display or to provide serial ASCII-coded output.
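A toy version of such a chain, assuming a uniform quantizer for the A-D step and a simple moving average as the noise-reduction routine; all numerical values are illustrative.

```python
# Toy intelligent-instrument chain: analog transducer output -> A-D
# conversion -> software noise reduction on the digital samples.

FULL_SCALE_V = 5.0
ADC_BITS = 10  # assumed 10-bit converter

def adc(voltage: float) -> int:
    """Uniform quantization of an analog voltage into a digital code."""
    levels = 2 ** ADC_BITS - 1
    clamped = min(max(voltage, 0.0), FULL_SCALE_V)
    return round(clamped / FULL_SCALE_V * levels)

def moving_average(samples: list[int], window: int = 4) -> list[float]:
    """Simple software noise-reduction routine (sliding mean)."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

noisy = [2.5 + n for n in (0.02, -0.03, 0.01, 0.04, -0.02, 0.00)]
codes = [adc(v) for v in noisy]
print(codes, "->", moving_average(codes))
```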
FUNCTIONAL ELEMENTS OF MEASUREMENT SYSTEMS
A generalized 'Measurement System' consists of the following:
1. Basic Functional Elements, and
2. Auxiliary Functional Elements.
Basic Functional Elements are those that form the integral parts of all instruments. They are:
1. Transducer Element, which senses and converts the desired input to a more convenient and practicable form to be handled by the measurement system.
2. Signal Conditioning or Intermediate Modifying Element, for manipulating / processing the output of the transducer into a suitable form.
3. Data Presentation Element, for giving information about the measurand or measured variable in quantitative form.
Auxiliary Functional Elements are those which may be incorporated in a particular system depending on the type of requirement, the nature of the measurement technique, etc. They are:
1. Calibration Element, to provide a built-in calibration facility.
2. External Power Element, to facilitate the working of one or more of the elements like the transducer element, the signal conditioning element, the data processing element or the feedback element.
3. Feedback Element, to control the variation of the physical quantity that is being measured. In addition, a feedback element is provided in null-seeking potentiometric or Wheatstone bridge devices to make them automatic or self-balancing.
4. Microprocessor Element, to facilitate the manipulation of data for the purpose of simplifying or accelerating data interpretation. It is always used in conjunction with the analog-to-digital converter incorporated in the signal conditioning element.

PERFORMANCE CHARACTERISTICS
The measurement system characteristics can be divided into two categories: (i) static characteristics and (ii) dynamic characteristics.
Static characteristics of a measurement system are, in general, those that must be considered when the system or instrument is used to measure a condition not varying with time. However, many measurements are concerned with rapidly varying quantities, and for such cases the dynamic relations that exist between the output and the input are examined, normally with the help of differential equations. Performance criteria based upon dynamic relations constitute the dynamic characteristics.

Static Characteristics

Accuracy
Accuracy of a measuring system is defined as the closeness of the instrument output to the true value of the measured quantity. It is also specified as the percentage deviation or inaccuracy of the measurement from the true value. For example, if a chemical balance reads 1 g with an error of 10^-2 g, the accuracy of the measurement would be specified as 1%. Accuracy of an instrument depends mainly on the inherent limitations of the instrument as well as on shortcomings in the measurement process. In fact, these are the major parameters responsible for systematic or cumulative errors. For example, the accuracy of a common laboratory micrometer depends on instrument errors such as zero error, errors in the pitch of the screw, anvil shape, etc., while in the measurement process errors are caused by the temperature variation effect, the applied torque, etc. The accuracy of an instrument is usually specified either as a percentage of the true value or as a percentage of the full-scale reading.
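The balance example reduces to a one-line calculation, sketched below in Python for concreteness.

```python
def percent_inaccuracy(measured: float, true_value: float) -> float:
    """Inaccuracy as a percentage deviation from the true value."""
    return abs(measured - true_value) / true_value * 100.0

# The chemical-balance example: a 1 g quantity read with a 0.01 g error.
print(f"{percent_inaccuracy(measured=1.01, true_value=1.00):.1f} %")  # -> 1.0 %
```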
Precision
Precision is defined as the ability of the instrument to reproduce a certain set of readings within a given accuracy. For example, if a particular transducer is subjected to an accurately known input and the repeated read-outs of the instrument lie within, say, 1%, then the precision (or, alternatively, the precision error) of the instrument would be stated as 1%. Thus, a highly precise instrument is one that gives the same output information, for a given input, when the reading is repeated a large number of times. Precision of an instrument is, in fact, dependent on its repeatability. The term repeatability is defined as the ability of the instrument to reproduce a group of measurements of the same measured quantity, made by the same observer, using the same instrument, under the same conditions. The precision of the instrument depends on the factors that cause random or accidental errors. The extent of random errors, or alternatively the precision of a given set of measurements, can be quantified by performing statistical analysis.
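As a sketch of such a statistical analysis, the sample mean and standard deviation of a set of repeated readings quantify the scatter; the readings below are invented for illustration.

```python
import statistics

# Repeated read-outs of the same input quantity.
readings = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)    # sample standard deviation
precision_pct = spread / mean * 100.0  # scatter relative to the mean reading

print(f"mean = {mean:.3f}, std dev = {spread:.4f}, precision ~ {precision_pct:.2f} %")
```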
Accuracy vs Precision
Accuracy represents the degree of correctness of the measured value with respect to the true value, while precision represents the degree of repeatability of several independent measurements of the desired input at the same reference conditions. Accuracy and precision depend on systematic and random errors, respectively. Therefore, in any experiment both quantities have to be evaluated: the former is determined by proper calibration of the instrument and the latter by statistical analysis. However, it is instructive to note that a precise measurement may not necessarily be accurate, and vice versa. To illustrate this, take the example of a person doing shooting practice on a target. The bullets can hit the target with the following possibilities, as shown in the figure:
1. One possibility is that all the bullets hit the target plate on the outer circle and miss the bull's eye. This is a case of high precision but poor accuracy.
2. A second possibility is that the bullet hits are placed symmetrically with respect to the bull's eye but are not spaced closely. This is a case of good average accuracy but poor precision.
3. A third possibility is that all the bullets hit the bull's eye and are also spaced closely. This is a case of high accuracy and high precision.
4. Lastly, if the bullets hit the target plate in a random manner, this is a case of poor precision as well as poor accuracy.
Fig. Illustration of the degrees of accuracy and precision in a typical target-shooting experiment

Based on the above discussion, it may be stated that in any experiment the accuracy of the observations can be improved, but not beyond the precision of the apparatus.

Resolution (or Discrimination)
Resolution is defined as the smallest increment in the measured value that can be detected with certainty by the instrument. In other words, it is the degree of fineness with which a measurement can be made. The least count of an instrument is taken as its resolution. For example, a ruler with a least count of 1 mm may be used to measure to the nearest 0.5 mm by interpolation; its resolution is therefore considered to be 0.5 mm. A high-resolution instrument is one that can detect the smallest possible variation in the input.

Threshold
It is a particular case of resolution. It is defined as the minimum value of input below which no output can be detected. It is instructive to note that the threshold defines the smallest measurable input, whereas the resolution defines the smallest measurable input change. Both threshold and resolution can be specified either as absolute quantities in terms of input units or as percentages of full-scale deflection. Both are non-zero because of various factors like friction between moving parts, play or looseness in joints (more correctly termed backlash), inertia of the moving parts, length of the scale, spacing of graduations, size of the pointer, parallax effect, etc.

Static Sensitivity
Static sensitivity (also termed scale factor or gain) of an instrument is determined from the results of static calibration. This static characteristic is defined as the ratio of the magnitude of the response (output signal) to the magnitude of the quantity being measured (input signal), i.e.

    K = q_o / q_i

where q_o and q_i are the values of the output and input signals, respectively. In other words, the sensitivity is represented by the slope of the input-output curve if the ordinates are expressed in actual units.
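Because the sensitivity is the slope of the calibration curve, it can be estimated from static-calibration data by a least-squares straight-line fit; the calibration points below are invented for illustration.

```python
# Static sensitivity K = slope of the output-vs-input calibration curve,
# estimated here by a least-squares straight-line fit.
inputs  = [0.0, 10.0, 20.0, 30.0, 40.0]  # q_i, known calibration inputs
outputs = [0.1, 5.0, 10.2, 15.1, 19.9]   # q_o, observed instrument outputs

n = len(inputs)
mean_i = sum(inputs) / n
mean_o = sum(outputs) / n

# Least-squares slope: sum of co-deviations over sum of squared input deviations.
K = (sum((i - mean_i) * (o - mean_o) for i, o in zip(inputs, outputs))
     / sum((i - mean_i) ** 2 for i in inputs))
print(f"static sensitivity K ~ {K:.3f} output units per input unit")  # ~0.497
```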
ERRORS IN PERFORMANCE PARAMETERS
The various static performance parameters of instruments are obtained by performing certain specified tests, depending on the type of instrument, the nature of the application, etc. Salient static performance parameters are periodically checked by means of a static calibration, which is accomplished by imposing constant values of 'known' inputs and observing the resulting outputs. No measurement can be made with perfect accuracy and precision. Therefore, it is instructive to know the various types of errors and uncertainties that are, in general, associated with a measurement system, and also how these errors propagate.
Types of Errors
Error is defined as the difference between the measured value and the true value (as per the standard). The different types of errors can be broadly classified as follows.
Systematic or Cumulative Errors
Such errors are those that tend to have the same magnitude and sign for a given set of conditions. Because the algebraic sign is the same, they tend to accumulate and hence are known as cumulative errors. Since such errors alter the instrument reading by a fixed magnitude and with the same sign from one reading to another, the error is also commonly termed instrument bias. These types of errors are caused by the following:
Instrument errors: Certain errors are inherent in instrument systems. These may be caused by poor design or construction of the instrument; errors in the divisions of graduated scales, inequality of balance arms, irregular spring tension, etc. cause such errors. Instrument errors can be avoided by (i) selecting a suitable instrument for a given application, (ii) applying a suitable correction after determining the amount of instrument error (a minimal sketch of such a correction follows), and (iii) calibrating the instrument against a suitable standard.
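Since a systematic error shifts every reading by the same fixed amount, remedy (ii) amounts to subtracting the determined bias from each raw reading; the bias value in this sketch is an invented example.

```python
# A systematic (bias) error alters every reading by a fixed magnitude and
# sign, so it can be removed once determined by calibration.
INSTRUMENT_BIAS = 0.15  # assumed, as found by calibration against a standard

def corrected(reading: float) -> float:
    """Remove the fixed systematic component from a raw reading."""
    return reading - INSTRUMENT_BIAS

raw_readings = [10.17, 10.14, 10.16]
print([round(corrected(r), 2) for r in raw_readings])  # -> [10.02, 9.99, 10.01]
```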
Environmental errors: These errors are caused by variations in conditions external to the measuring device, including conditions in the area surrounding the instrument. Commonly occurring changes in environmental conditions that may affect instrument characteristics include changes in temperature, barometric pressure, humidity, wind forces, magnetic or electrostatic fields, etc.
Loading errors: Such errors are caused by the act of measurement on the physical system being tested. Common examples of this type are: (i) the introduction of additional resistance into a circuit by a measuring milliammeter, which may alter the circuit current by a significant amount; (ii) an obstruction-type flow meter, which may partially block or disturb the flow conditions, so that the flow rate shown by the meter is not the same as before the meter was installed; (iii) the introduction of a thermometer, which alters the thermal capacity of the system and thereby changes its original state, giving rise to a loading error in the temperature measurement.
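Example (i) can be made quantitative with a short sketch; the voltage and resistance values are invented for illustration.

```python
# Loading error of a milliammeter: inserting the meter adds its internal
# resistance to the circuit and reduces the very current it is measuring.
V = 10.0        # source voltage, volts
R = 1000.0      # circuit resistance, ohms
R_METER = 50.0  # internal resistance of the milliammeter, ohms

true_current = V / R           # current before the meter is inserted
indicated = V / (R + R_METER)  # current with the meter in the circuit

error_pct = (true_current - indicated) / true_current * 100.0
print(f"true: {true_current * 1e3:.2f} mA, indicated: {indicated * 1e3:.2f} mA, "
      f"loading error ~ {error_pct:.1f} %")  # ~4.8 %
```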
Accidental or Random Errors
These errors are caused by random variations in the parameter or the system of measurement. Such errors vary in magnitude and may be either positive or negative on the basis of chance alone. Since these errors fall in either direction, they tend to compensate one another; therefore, they are also called chance or compensating errors. The following are some of the main contributing factors to random error.
Inconsistencies associated with accurate measurement of small quantities: The outputs of instruments become inconsistent when very accurate measurements are being made, because when instruments are built or adjusted to measure small quantities, the random errors (which are of the order of the measured quantities) become noticeable.
Presence of certain system defects: System defects such as large dimensional tolerances in mating parts and the presence of friction contribute errors that are either positive or negative depending on the direction of motion. The former causes backlash error and the latter causes slackness in the meter bearings.
Effect of unrestrained and randomly varying parameters: Chance errors are also caused by the effect of certain uncontrolled disturbances that influence the instrument output. Line voltage fluctuations, vibrations of the instrument supports, etc. are common examples of this type.

Miscellaneous Type of Gross Errors
There are certain other errors that cannot be strictly classified as either systematic or random, as they are partly systematic and partly random; such errors are therefore termed miscellaneous gross errors. This class of errors is mainly caused by the following.
Personal or human errors: These are caused by the limitations of the human senses. For example, one may consistently read the observed value either high or low, thus introducing systematic errors in the results, while at another time one may record the observed value slightly differently from the actual reading and consequently introduce random error into the data.
Errors due to faulty components / adjustments: Sometimes there is misalignment of moving parts, electrical leakage, poor optics, etc. in the measuring system.
Improper application of the instrument: Errors of this type are caused by the use of the instrument in conditions that do not conform to the desired design / operating conditions. For example, extreme vibrations, mechanical shock or pick-up due to electrical noise could introduce so much gross error as to mask the test information.
Standards of Measurements
A standard of measurement is defined as the physical representation of the unit of measurement. A unit of measurement is generally chosen with reference to an arbitrary material standard or to a natural phenomenon involving physical and atomic constants. For example, the S.I. unit of mass, the kilogram, was originally defined as the mass of a cubic decimeter of water at its temperature of maximum density, i.e. at 4 °C. The material representation of this unit is the International Prototype kilogram, which is preserved at the International Bureau of Weights and Measures at Sevres, France. Further, prior to 1960, the unit of length was the carefully preserved platinum-iridium bar at Sevres, France. In 1960, this unit was redefined in terms of an optical standard, namely the wavelength of the orange-red light of the Kr-86 lamp: the standard meter became equivalent to 1,650,763.73 wavelengths of the Kr-86 orange-red light. Similarly, the original unit of time was the mean solar second, defined as 1/86400 of a mean solar day. Standards of measurement can be classified according to their function and type of application as follows.

International standards: International standards are devices designed and constructed to the specifications of an international forum. They represent the units of measurement of various physical quantities to the highest accuracy attainable through advanced techniques of production and measurement technology. These standards are maintained by the International Bureau of Weights and Measures at Sevres, France. For example, the International Prototype kilogram, the wavelength of the Kr-86 orange-red lamp and the cesium clock are the international standards for mass, length and time, respectively. However, these standards are not available to the ordinary user for purposes of day-to-day comparison and calibration.

Primary standards: Primary standards are devices maintained by standards organizations / national laboratories in different parts of the world. These devices represent the fundamental and derived quantities and are calibrated independently by absolute measurements. One of the main functions of maintaining primary standards is to calibrate / check and certify secondary reference standards. Like international standards, these standards are not easily available to the ordinary user of instruments for verification / calibration of working standards.

Secondary standards: Secondary standards are the basic reference standards employed by industrial measurement laboratories and are maintained by the laboratory concerned. One of the important functions of an industrial laboratory is the maintenance and periodic calibration of its secondary standards against the primary standards of the national standards laboratory / organization. In addition, secondary standards are freely available to the ordinary user of instruments for checking and calibration of working standards.

Working standards: These are high-accuracy devices that are commercially available and are duly checked and certified against either primary or secondary standards. For example, the most widely used industrial working standards of length are the precision gauge blocks made of steel. These gauge blocks have two plane parallel surfaces a specified distance apart, with accuracy tolerances in the 0.25-0.5 micron range. Similarly, a standard cell and a standard resistor are the working standards of voltage and resistance, respectively. Working standards are very widely used for calibrating general laboratory instruments, for carrying out comparison measurements and for checking the quality (range of accuracy) of industrial products.