Module 1


Measurement and Measuring Instruments

Objectives:
Module 1: Basics of Measurement System

1. Elements of generalized measurement system

2. Static and Dynamic characteristics of Instruments

3. Errors in measurements

4. Sources and types of errors

5. Statistical treatment of data: mean, measures of dispersion, rejection of data based on confidence intervals
Elements of generalized measurement system
• An instrument may be defined as a device or a system that is designed to maintain a functional relationship between the prescribed
properties of the physical variables and it includes different ways of communication with the human observer.
• The functional relationship remains valid only as long as the static calibration of the system remains constant.
• It is possible to describe the operation of a measuring instrument in a generalized manner, and the whole operation can be described in terms of functional elements. These are:
1. primary sensing element
2. variable conversion element
3. variable manipulation element
4. data transmission element
5. data presentation element
Each element is made up of a distinct component or group of components that performs a required step in the measurement.

Block diagram of a generalized measurement system:
Quantity to be measured → Primary sensing element → Variable conversion element → Variable manipulation element → Data transmission element → Data presentation element
(The intermediate elements together form the data conditioning element.)

Primary sensing element
• The quantity under measurement makes its first contact with the primary sensing element of a measurement system.
• This act is immediately followed by the conversion of the measured quantity into an analogous electrical signal; this is done by a transducer.
• A transducer is defined as a device that converts energy from one form to another.
• But in electrical measurement systems, a transducer is defined as a device that converts a physical quantity into an electrical quantity.
• The physical quantity to be measured is sensed and detected by this element, which gives an output in a different analogous form.
• This output is then converted into an electrical signal by a transducer.
• The first stage of the measurement system is therefore known as the detector-transducer stage.


Variable conversion element
• The output of the primary sensing element may be an electrical signal of any form.
• It can be a voltage, a frequency or some other electrical parameter; sometimes this output is not suited to the next stage of the system.
• Suppose the output is in analog form and the next stage of the system accepts inputs only in digital form; then an analog-to-digital converter must be used to convert the signal from analog to digital form.
• Many instruments do not need any variable conversion element however some instruments need more than one variable conversion
element.


Variable manipulation element
• The function of this element is to manipulate the signal presented to it preserving the original nature of the signal.
• Manipulation means only a change in the numerical value of the signal.
• Example: an electronic amplifier accepts a small voltage signal as input and produces an output signal which is also a voltage but of a greater magnitude. Thus a voltage amplifier acts as a variable manipulation element. It is not necessary that a variable manipulation element follows the variable conversion element; it may precede the variable conversion element in many cases.
• The output of the transducer contains the information required for further processing by the system, and the output signal is usually a voltage or some other kind of electrical signal.
• The two most important properties of a voltage are its magnitude and frequency, though polarity may have to be considered in some cases. Many transducers develop low voltages, of the order of millivolts, and some even microvolts.
• A fundamental problem is to prevent this signal from being contaminated by unwanted signals like noise. Another problem is that the weak signal may be distorted by the processing equipment. Many times it becomes necessary to perform certain operations on a signal before it is transmitted further.


Variable manipulation element
• These processes may be linear, like amplification, attenuation, integration and differentiation, or nonlinear, like modulation, filtering, chopping, clipping, etc., in order to bring the signal to the form required by the next stage of the measurement system.
• This process of conversion is called signal conditioning.
• The term conditioning includes many other functions, and the element that follows the primary sensing element in any instrument or measurement system is called the signal conditioning element.
• When the elements of an instrument are physically separated, it becomes necessary to transmit data from one to another; the element that performs this function is called the data transmission element.
• The signal conditioning and transmission stage is known as intermediate stage.


Data presentation element
The information about the quantity under measurement has to be conveyed to the personnel handling the instrument or to the system for monitoring, control or analysis purposes.
The information conveyed must be in a form intelligible to the person or to the instrumentation system; this function is performed by the data presentation element.
These devices may be analog or digital indicating instruments like ammeters, voltmeters, etc. If the data is to be recorded, recorders like magnetic tapes, high-speed cameras and TV equipment, storage-type CRTs, printers or even microprocessors can be used.
The final stage in the measurement system is known as the terminating stage.


Static and Dynamic Characteristics of Instruments
The treatment of instrument and measurement system characteristics can be divided into 2 categories:
static characteristics and dynamic characteristics
• Some applications involve the measurement of quantities that are either constant or vary only slowly with time.
• Under these circumstances, it is possible to define a set of criteria that gives a meaningful description of the quality of measurement without involving dynamic descriptions. These criteria are called static characteristics.
• All the static performance characteristics are obtained in one form or another by a process called static calibration. The calibration of all instruments is important, since it affords the opportunity to check the instrument against a standard and subsequently to determine errors and accuracy.
• Normally, the static characteristics of a measurement system are those that must be considered when the system or instrument is used to measure a condition not varying with time.
• However, many instruments are concerned with rapidly varying quantities. In such cases we have to examine the dynamic relations that exist between the outputs and the inputs; this is done with the help of differential equations. Performance criteria based upon dynamic relations constitute the dynamic characteristics.
Static Characteristics of Instruments
The main static characteristics are:
1. Accuracy
2. Sensitivity
3. Reproducibility
4. Drift
5. Static error
6. Dead zone
The first three qualities are desirable, while the other three are undesirable.
All the static performance characteristics are obtained by one form or another of a process called calibration. Some of the important definitions and characteristics are described below.
Instrument: A device or mechanism used to determine the present value of the quantity under measurement.
Measurement: The process of determining the amount, degree or capacity by comparison with accepted standards of the system units
being used.
Accuracy: The degree of exactness of a measurement compared to the expected value.
Resolution: The smallest change in a measured variable to which an instrument will respond.
Static Characteristics of Instruments
Precision: a measure of the consistency or repeatability of the measurements, i.e. successive readings should not differ.
Expected value: the design value, i.e. the most probable value that calculations indicate one should expect to measure.
Error: the deviation of the measured value from the true (expected) value.
Sensitivity: the ratio of the change in the output of the instrument to the change in the input or measured variable.
Error in measurement
Measurement is the process of comparing an unknown quantity with an accepted standard quantity.
It involves connecting the measuring instrument into the system under consideration and observing the resulting response on the
instrument.
The measurement obtained is a quantitative measure of the so-called true value.
Any measurement is affected by many variables; therefore the results rarely reflect the expected value.
For example, when we connect a measuring instrument to the circuit under consideration, it always disturbs or changes the circuit, causing the measured value to differ from the expected value.
Some factors that affect the measurements are related to the measuring instruments.
Other factors are related to the person using the instrument.
Error in measurement
The degree to which a measurement nears the expected value is expressed in terms of the error of the measurement.
Errors may be expressed either as absolute errors or as percentage errors.
The absolute error may be defined as the difference between the expected value of the variable and the measured value of the variable:

e = Yn − Xn

where e is the absolute error, Yn is the expected value and Xn is the measured value.

Therefore, % error = (absolute error / expected value) × 100 = (e / Yn) × 100 = ((Yn − Xn) / Yn) × 100

It can also be expressed in terms of the relative accuracy A:

A = 1 − |Yn − Xn| / Yn
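As a quick illustration of these relations (a minimal sketch, with made-up values for the expected reading Yn and the measured reading Xn), the Python snippet below computes the absolute error, the percentage error and the relative accuracy:

```python
# Sketch: absolute error, % error and relative accuracy
# for a hypothetical measurement (values chosen for illustration).

def error_metrics(expected, measured):
    e = expected - measured                                    # absolute error, e = Yn - Xn
    pct_error = (e / expected) * 100                           # % error = (e / Yn) * 100
    accuracy = 1 - abs(expected - measured) / abs(expected)    # A = 1 - |Yn - Xn| / Yn
    return e, pct_error, accuracy

if __name__ == "__main__":
    Yn, Xn = 100.0, 99.7          # expected and measured values (assumed)
    e, pct, A = error_metrics(Yn, Xn)
    print(f"absolute error = {e:.3f}, % error = {pct:.3f} %, accuracy = {A:.4f}")
```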
Types of static errors
The static error of a measuring instrument is the numerical difference between the true value of a quantity and its value as obtained by measurement, i.e. repeated measurement of the same quantity gives different indications.
• Static errors are categorized as gross errors or human errors , systematic errors and random errors
Gross errors
• These errors are mainly due to human mistakes in reading or in using instruments or errors in recording observations.
• Errors may also occur due to incorrect adjustment of the instruments and computational mistakes. These errors cannot be treated mathematically.
• The complete elimination of the gross errors is not possible but we can minimize them.
• One of the basic gross errors that occur frequently is the improper use of the instrument.
• Error can be minimized by taking proper care in reading and recording the instrument parameter
Systematic error
• These errors occur due to shortcomings of the instrument, such as defective or worn parts, ageing, or effects of the environment on the instrument. These errors are sometimes referred to as bias, and they influence all measurements of a quantity.
• A constant uniform deviation of the operation of an instrument is known as systematic error. There are basically three types of
systematic errors.
• Instrumental errors
• Environmental errors
• and Observational errors
Instrumental errors
Instrumental errors are inherent in measuring instruments because of their mechanical structure, for example friction in the bearings of various moving components, irregular spring tension, stretching of the spring, or reduction in tension due to improper handling or overloading of the instrument.
Instrumental errors can be avoided by,
1. selecting a suitable instrument for the particular measurement applications,
2. applying correction factors after determining the amount of instrumental error
3. calibrating the instrument against the standard.
Environmental errors
• Environmental errors are due to conditions external to the measuring device including conditions in the area surrounding the
instrument, such as the effect of change in temperature humidity, barometric pressure, or electrostatic fields,
• These errors can also be avoided by air conditioning, sealing certain components in the instruments and using magnetic Shields,
Observational errors
• Observational errors are the errors introduced by the observer.
• The most common error is the parallax error introduced in reading a meter scale, and the error of estimation when obtaining a reading from the meter scale.
• These are also caused by the habits of individual observers.
• For example an observer may always introduce an error by consistently holding his head too far to the left while reading a needle and
scale reading.
• Systematic errors can also be subdivided into static and dynamic errors. Static errors are caused by the limitations of the measuring device, while dynamic errors are caused by the instrument not responding fast enough to follow changes in the measured variable.
Random errors
• These are the errors that remain after gross and systematic errors have been substantially reduced.
• Random errors are generally an accumulation of a large number of small effects and may be of real concern only in measurements that require high accuracy.
• They are due to unknown causes and cannot be determined in ordinary error measurements.
• Such errors are normally small, follow the laws of probability, and can therefore be treated mathematically.
• Example: suppose a voltage is being monitored by a voltmeter which is read at 15-minute intervals. Although the instrument operates under ideal environmental conditions, it still gives readings that vary slightly over the period of observation. This variation cannot be corrected by any method of calibration.
Sources of errors
The sources of error are as follows:
1. insufficient knowledge of the process parameters and design conditions
2. poor design
3. change in process parameters, irregularities, upsets,
4. poor maintenance
5. error caused by the person operating the instrument or the equipment
6. certain design limitations

Dynamic characteristics
• Instruments rarely respond instantaneously to changes in the measured variables.
• Instruments exhibit slowness or sluggishness because of effects such as mass, thermal capacitance, fluid capacitance or electrical capacitance.
• Industrial instruments are nearly always used for measuring quantities that fluctuate with time; therefore the dynamic and transient behaviour of the instrument is as important as its static behaviour.
• The dynamic behaviour of an instrument is determined by subjecting its primary element (the sensing element) to some known and predetermined variations in the measured quantity.
• The three most common variations in the measured quantity are as follows:
1. Step change, in which the primary element is subjected to an instantaneous and finite change in the measured variable.
2. Linear change, in which the primary element follows a measured variable changing linearly with time.
3. Sinusoidal change, in which the primary element follows a measured variable whose magnitude changes in accordance with a sinusoidal function of constant amplitude.
Dynamic characteristics
The main dynamic characteristics of an instrument are:
speed of response, fidelity, lag and dynamic error.

1. Speed of response: the rapidity with which an instrument responds to changes in the measured quantity.
2. Fidelity: the degree to which an instrument indicates the changes in the measured variable without dynamic error.
3. Lag: the retardation or delay in the response of the instrument to changes in the measured variable.
4. Dynamic error: the difference between the true value of a quantity changing with time and the value indicated by the instrument.
 
 
Dynamic Response of Zero order Instruments
The general equation that describes the performance of a measurement system can be written as:

a_n d^n(q_o)/dt^n + ... + a_1 d(q_o)/dt + a_0 q_o = b_m d^m(q_i)/dt^m + ... + b_1 d(q_i)/dt + b_0 q_i

where q_o = output quantity, q_i = input quantity, t = time, and the a's and b's are combinations of system physical parameters, assumed constant.

When all the a's and b's other than a_0 and b_0 are assumed to be zero, the differential equation degenerates into the simple equation:

a_0 q_o = b_0 q_i

Any instrument that closely obeys this equation over its intended range of operating conditions is defined as a zero-order instrument. The static sensitivity of a zero-order instrument is defined as follows:

q_o = (b_0/a_0) q_i = K q_i,  where K = b_0/a_0 = static sensitivity.

A zero-order instrument represents ideal or perfect dynamic performance.
Eg: displacement-measuring potentiometer
Dynamic Response of First order Instruments
Starting again from the general equation:

a_n d^n(q_o)/dt^n + ... + a_1 d(q_o)/dt + a_0 q_o = b_m d^m(q_i)/dt^m + ... + b_0 q_i

where q_o = output quantity, q_i = input quantity and t = time.

If all a's and b's other than a_1, a_0 and b_0 are assumed to be zero, the differential equation degenerates into the simple equation:

a_1 d(q_o)/dt + a_0 q_o = b_0 q_i

Any instrument that closely obeys this equation is defined as a first-order instrument. Dividing through by a_0, the equation can be written as:

τ d(q_o)/dt + q_o = K q_i

where τ = a_1/a_0 = time constant and K = b_0/a_0 = static sensitivity.

The operational transfer function of any first-order instrument is:

q_o/q_i (D) = K/(τD + 1)

Eg: mercury-in-glass thermometer
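To make the role of the time constant τ concrete, the sketch below evaluates the step response of a first-order instrument, q_o(t) = K·q_i·(1 − e^(−t/τ)); the values of K, τ and the step size are arbitrary assumptions, not taken from the source:

```python
import math

# Step response of a first-order instrument:
#   tau * dq_o/dt + q_o = K * q_i   =>   q_o(t) = K * q_i * (1 - exp(-t / tau))
K = 1.0        # static sensitivity (assumed)
tau = 2.0      # time constant in seconds (assumed)
q_i = 5.0      # magnitude of the input step (assumed)

for t in [0.0, tau, 2 * tau, 3 * tau, 5 * tau]:
    q_o = K * q_i * (1.0 - math.exp(-t / tau))
    print(f"t = {t:5.1f} s  ->  q_o = {q_o:.3f}  ({100 * q_o / (K * q_i):.1f} % of final value)")
```

At t = τ the output reaches about 63.2 % of its final value, which is the usual practical interpretation of the time constant.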


Dynamic Response of second order Instruments
The equation that describes the performance of a second-order instrument can be written as:

a_2 d²(q_o)/dt² + a_1 d(q_o)/dt + a_0 q_o = b_0 q_i

The above equation can be reduced to:

(D²/ω_n² + 2ζD/ω_n + 1) q_o = K q_i

where ω_n = √(a_0/a_2) = undamped natural frequency in radians/time,
ζ = a_1/(2√(a_0·a_2)) = damping ratio, and
K = b_0/a_0 = static sensitivity.

Any instrument following this equation is a second-order instrument. An example of this type is the spring balance. Linear devices of this kind include mass-spring arrangements, amplifiers, filters, etc.
Most devices have either first- or second-order responses, i.e. the equations of motion describing the devices are either first- or second-order linear differential equations.
First-order systems involve only one kind of energy, for example thermal energy in the case of a thermometer.
In a second-order system, however, there is an exchange between two types of energy, for example electrostatic and electromagnetic energy in electrical LC circuits, or in electromechanical recorders, etc.
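As a hedged illustration (the numbers are assumptions, not from the source), the unit-step response of an underdamped second-order instrument, q_o(t) = K·q_i·[1 − e^(−ζω_n t)(cos ω_d t + (ζ/√(1−ζ²)) sin ω_d t)] with ω_d = ω_n√(1−ζ²), can be evaluated numerically as follows:

```python
import math

# Unit-step response of an underdamped (zeta < 1) second-order instrument.
K = 1.0          # static sensitivity (assumed)
q_i = 1.0        # magnitude of the input step (assumed)
wn = 2.0         # undamped natural frequency, rad/s (assumed)
zeta = 0.4       # damping ratio (assumed)

wd = wn * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency

def q_o(t):
    decay = math.exp(-zeta * wn * t)
    osc = math.cos(wd * t) + (zeta / math.sqrt(1.0 - zeta ** 2)) * math.sin(wd * t)
    return K * q_i * (1.0 - decay * osc)

for t in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"t = {t:5.1f} s  ->  q_o = {q_o(t):.4f}")
```

With ζ < 1 the output overshoots and oscillates about the final value K·q_i before settling, which is the characteristic second-order behaviour.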
Statistical Treatment of Data [A.K.Sawhney]
• The statistical analysis of measurement data is important because it allows an analytical determination of the uncertainty of the final test result. To make the statistical analysis meaningful, a large number of measurements is usually required.
• The experimental data is obtained in two forms of tests:
Multi-sample test and single-sample test
Multi sample test
• In this test, repeated measurements of a given quantity are made using different test conditions, such as employing different instruments, different ways of measurement and different observers.
• Simply making measurements with the same equipment, procedure, technique and observer does not provide multi-sample results.
Single sample test
• A single measurement (or a succession of measurements) taken under identical conditions, except for time, is known as a single-sample test.
• In order to get an accurate value of the quantity under measurement, tests should be done using as many different procedures, techniques and experimenters as practicable.

1.Histogram
When a number of multi-sample observations are taken experimentally, there is a scatter of the data about some central value. One method of presenting test results is in the form of a histogram. The technique is illustrated by the data given in the table below, which shows a set of 50 readings of a length measurement.
1.Histogram
The most probable or central value of the length is 100 mm.

Length (mm)    No. of readings
99.7            1
99.8            4
99.9            12
100.0           19
100.1           10
100.2           3
100.3           1

Total number of readings = 50


• The histogram in the figure represents these data; the ordinate indicates the number of observed readings at a particular value.
• A histogram is also called a frequency distribution curve.
• At the central value of 100 mm there is a large number of readings, 19 in this case, with other values placed almost symmetrically on either side. If smaller incremental steps, say 0.05 mm, are taken, the general form of the histogram will be almost the same, but since the steps have smaller increments we get a smoother curve.
• The smoother curve is symmetrical with respect to the central value.
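The frequency distribution in the table can be reproduced as a simple text histogram; the sketch below reconstructs the 50 readings from the frequency counts and uses only the Python standard library:

```python
from collections import Counter

# The 50 length readings from the table, reconstructed from the frequency counts.
readings = ([99.7] * 1 + [99.8] * 4 + [99.9] * 12 + [100.0] * 19
            + [100.1] * 10 + [100.2] * 3 + [100.3] * 1)

counts = Counter(readings)
for length in sorted(counts):
    n = counts[length]
    print(f"{length:6.1f} mm | {'*' * n} ({n})")

print(f"total readings = {len(readings)}")
```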
2.Arithmetic mean
The most probable value of a measured variable is the arithmetic mean of the number of readings taken. The best approximation is obtained when the number of readings of the same quantity is very large.
The arithmetic mean of n measurements of the variable x is given by the expression:

x̄ = (x_1 + x_2 + ... + x_n)/n = (Σ x_i)/n

where x̄ = arithmetic mean,
x_n = nth reading taken,
n = total number of readings.
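Applied to the 50 length readings tabulated earlier, the arithmetic mean follows directly from the definition; this is only a sketch using the table's frequency counts:

```python
# Arithmetic mean x_bar = (sum of readings) / n, using the table's frequencies.
values_and_counts = [(99.7, 1), (99.8, 4), (99.9, 12), (100.0, 19),
                     (100.1, 10), (100.2, 3), (100.3, 1)]

total = sum(value * count for value, count in values_and_counts)
n = sum(count for _, count in values_and_counts)
x_bar = total / n
print(f"n = {n}, arithmetic mean = {x_bar:.4f} mm")
```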
3.Dispersion
The property which denotes the extent to which the values are dispersed about the central value is called dispersion. Other names used for dispersion are spread or scatter.
The figure shows two sets of data. In curve 1 the values vary from x1 to x2, and in curve 2 the values vary from x3 to x4.
 
[Figure: curves 1 and 2 showing different ranges and precision limits]
3. Dispersion

Though their central value is the same, the set of data represented by curve 1 clearly has a smaller dispersion than that represented by curve 2.
It is very important to have a measure of the dispersion from the central value as it is an indication of the degree of consistency and
regularity of the data.
A large dispersion indicates that some factors involved in the measurement process are not under close control and therefore it becomes
difficult to estimate the measured quantity with confidence and definiteness.
4. Range
The simplest possible measure of dispersion is the range which is the difference between the greatest and the least values of data
For example, in the previous figure, the range of curve 1 is X2 - X1 and that of curve 2 is X4 – X3.
5.Deviation from the mean
This is the departure of an observed reading from the arithmetic mean of the group of readings. The deviation may be positive or negative, and the algebraic sum of all the deviations is zero.
If the deviation of the first reading x_1 is called d_1, that of the second reading x_2 is called d_2, and so on, the deviations from the mean can be expressed as:

d_1 = x_1 − x̄,   d_2 = x_2 − x̄,   ...,   d_n = x_n − x̄
5.Deviation from the mean
Algebraic sum of deviations:

d_1 + d_2 + ... + d_n = (x_1 − x̄) + (x_2 − x̄) + ... + (x_n − x̄)
                      = (x_1 + x_2 + ... + x_n) − n·x̄ = 0

Therefore, the algebraic sum of the deviations is zero.
6. Average Deviation
The average deviation is an indication of the precision of the instruments used in making the measurements.
Highly precise instruments yield a low average deviation between readings.
Average deviation is defined as the sum of the absolute values of deviations divided by the number of readings.
The absolute value of a deviation is its value without regard to sign. The average deviation may be expressed as:

D̄ = (|d_1| + |d_2| + ... + |d_n|)/n = (Σ |d_i|)/n
7.Standard Deviation

Standard deviation is also called root-mean-square deviation. The standard deviation of an infinite number of data is defined as the square root of the sum of the individual deviations squared, divided by the number of readings.
7.Standard Deviation
Thus the standard deviation is:

S.D. = σ = √((d_1² + d_2² + ... + d_n²)/n) = √((Σ d_i²)/n)

In practice the number of observations is finite. When the number of observations is greater than 20, the standard deviation is denoted by the symbol σ.
If the number of observations is less than 20, the symbol used is s. The standard deviation of a finite number of data is then given by:

s = √((d_1² + d_2² + ... + d_n²)/(n − 1)) = √((Σ d_i²)/(n − 1))
8. Variance

The variance is the mean square deviation, which is the same as the standard deviation except that the square root is not extracted.

Variance, V = σ² = (d_1² + d_2² + ... + d_n²)/n = (Σ d_i²)/n

When the number of observations is less than 20,

Variance, V = s² = (Σ d_i²)/(n − 1)
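Continuing with the same 50 readings, the sketch below computes the deviations, average deviation, standard deviation and variance exactly as defined above (since n = 50 > 20, the divisor n is used):

```python
import math

readings = ([99.7] * 1 + [99.8] * 4 + [99.9] * 12 + [100.0] * 19
            + [100.1] * 10 + [100.2] * 3 + [100.3] * 1)

n = len(readings)
x_bar = sum(readings) / n
deviations = [x - x_bar for x in readings]      # d_i = x_i - x_bar; their algebraic sum is ~0

avg_dev = sum(abs(d) for d in deviations) / n   # average deviation
divisor = n if n > 20 else n - 1                # sigma for n > 20, s for n < 20
variance = sum(d * d for d in deviations) / divisor
std_dev = math.sqrt(variance)

print(f"mean = {x_bar:.4f}, average deviation = {avg_dev:.4f}")
print(f"standard deviation = {std_dev:.4f}, variance = {variance:.6f}")
```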
9.Normal or Gaussian curve of Errors
The normal or Gaussian curve of errors is the basis for the major part of the study of random effects.
The law of probability states that the normal occurrence of deviations from the average value of an infinite number of measurements or observations can be expressed as:

y = (h/√π) · e^(−h²x²)

where x is the magnitude of the deviation from the mean, y is the number of readings at any deviation x, and h is a constant called the precision index.
This equation leads to a curve of the type shown in the figure; the curve of y plotted against x is called the normal or Gaussian probability curve.
Another convenient form of the equation describing the Gaussian curve uses the standard deviation σ and is given by:

y = (1/(σ√(2π))) · e^(−x²/(2σ²))

This form of the equation is useful because σ is usually the quantity of interest.
In the figure the deviations from the mean value are marked in terms of σ units, i.e. x = ±1σ, ±2σ, ±3σ, etc.
10. Precision Index
From the equation y = (h/√π)·e^(−h²x²), when x = 0 we have y = h/√π.
• It is clear from this that the maximum value of y depends upon h: the larger the value of h, the sharper the curve.
• Thus the value of h determines the sharpness of the curve, since the curve drops sharply owing to the term (−h²x²) being in the exponent.
• A sharp curve indicates that the deviations are more closely grouped together around the deviation x=0.
The figure shows two curves having different values of h.

[Figure: Gaussian curves for two different values of the precision index h]
10. Precision Index
• Curve 1 has a large value of h, while curve 2 has a small value of h.
• Hence curve 1 indicates higher precision than curve 2.
• A large value of h represents high precision of the data, because the probability of occurrence of variates in a given range falls off rapidly as the deviation increases, i.e. the variates tend to be grouped closely within a narrow range. On the other hand, a small value of h represents low precision, because the probability of occurrence of variates in a given range falls off gradually as the deviation increases.
11. Probable error
• Consider the two points −r and +r marked in the figure.
• These points are so located that the area bounded by the curve, the x-axis, and the ordinates erected at x = −r and x = +r is equal to half of the total area under the curve.
• That is, half the deviations lie between x = ±r.
• A convenient measure of precision is the quantity r; it is called the probable error (P.E.).
• The location of the point r can be found from the Gaussian equation; it gives the probable error as:

r = 0.4769/h
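A quick numerical check of r = 0.4769/h: for the Gaussian form y = (h/√π)·e^(−h²x²), the fraction of deviations lying between −r and +r equals erf(h·r), which should come out at about one half. The value of h below is an arbitrary assumption:

```python
import math

h = 1.2                      # precision index (assumed)
r = 0.4769 / h               # probable error

# Fraction of deviations lying between -r and +r for y = (h / sqrt(pi)) * exp(-h^2 x^2)
fraction = math.erf(h * r)
sigma = 1.0 / (h * math.sqrt(2.0))   # equivalent standard deviation

print(f"probable error r = {r:.4f}  (= {r / sigma:.4f} sigma)")
print(f"fraction of deviations within +/- r = {fraction:.4f}")   # approximately 0.5
```

The ratio r/σ works out to about 0.6745, consistent with expressing deviations in σ units as described above.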
Confidence interval and confidence level
• The range of deviation from the mean value within which a certain fraction of all the values is expected to lie is called the confidence interval.
• The probability that the value of a randomly selected observation will lie in this range is called the confidence level.
• If the number of observations is large and their errors are random and follow the normal (Gaussian) distribution, the various confidence intervals about the mean value are as given in the table.
• If the number of observations is small (say, less than 20) and the standard deviation is not accurately known, the confidence interval must be broadened. Here the standard deviation is computed as:

s = √((Σ d_i²)/(n − 1))
Confidence interval and confidence level

• This standard deviation is multiplied by a suitable factor to establish the confidence interval.
• The results are given in the table.
• To obtain the confidence interval for the mean of a group of observations from the corresponding interval for an individual observation, the interval is divided by √n:

Confidence interval of the mean = ± t·s/√n
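A small sketch of the small-sample procedure just described, using the (n − 1) standard deviation s and a tabulated factor; the ten readings are made up, and the factor 2.262 is the standard two-sided 95 % Student's t value for 9 degrees of freedom:

```python
import math

# Hypothetical set of 10 repeated readings (n < 20, so s is computed with n - 1).
readings = [100.1, 99.9, 100.0, 100.2, 99.8, 100.0, 100.1, 99.9, 100.0, 100.1]

n = len(readings)
x_bar = sum(readings) / n
s = math.sqrt(sum((x - x_bar) ** 2 for x in readings) / (n - 1))

t_factor = 2.262                            # two-sided 95 % Student's t for n - 1 = 9 (table value)
half_width = t_factor * s / math.sqrt(n)    # confidence interval of the mean = +/- t*s/sqrt(n)

print(f"mean = {x_bar:.3f}, s = {s:.4f}")
print(f"95 % confidence interval of the mean: {x_bar:.3f} +/- {half_width:.4f}")
```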


Rejection of Data
• In most of the experiments, the experimenter finds that some of the data points are different from the majority of data.
• If these data points were obtained under abnormal conditions, they can be discarded straightaway.
• Otherwise, the experimenter can discard data points only on the basis of certain mathematical methods. The three commonly used methods are:
• Chauvenet's criterion
• Use of confidence intervals
• 3σ limits
Chauvenet's criterion
Suppose n observations are made of a quantity. We assume that n is large enough that the results follow a normal (Gaussian) distribution. This distribution may be used to compute the probability that a given reading will deviate by a certain amount from the mean.
Chauvenet's criterion specifies that a reading may be rejected if the probability of obtaining the particular deviation from the mean is less than 1/(2n).
When applying this criterion to eliminate dubious data points, the mean value and the standard deviation are first calculated using all data points.
Chauvenet's criterion
The deviations of the individual readings are then compared with the standard deviation. If the ratio of the deviation of a reading to the standard deviation exceeds the limit given in the table, that reading is rejected. The mean value and the standard deviation are then recalculated, excluding the rejected reading from the data.

Number of readings (n)    Ratio of maximum acceptable deviation to standard deviation, d_max/σ
2                          1.15
3                          1.38
4                          1.54
5                          1.65
6                          1.73
7                          1.80
10                         1.96
15                         2.13
25                         2.33
50                         2.57
100                        2.81
300                        3.14
500                        3.29
1000                       3.48
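The whole procedure can be sketched as a short function: compute the mean and standard deviation from all points, look up the d_max/σ limit for the sample size, reject any reading whose deviation ratio exceeds it, then recompute the statistics. The ten readings below are made up for illustration:

```python
import math

# Subset of the d_max/sigma limits from the table above.
CHAUVENET_LIMITS = {2: 1.15, 3: 1.38, 4: 1.54, 5: 1.65, 6: 1.73, 7: 1.80,
                    10: 1.96, 15: 2.13, 25: 2.33, 50: 2.57, 100: 2.81}

def mean_and_sd(data):
    n = len(data)
    m = sum(data) / n
    s = math.sqrt(sum((x - m) ** 2 for x in data) / (n - 1))   # n < 20 here, so use n - 1
    return m, s

def apply_chauvenet(data):
    m, s = mean_and_sd(data)
    limit = CHAUVENET_LIMITS[len(data)]          # assumes len(data) is a tabulated value
    kept = [x for x in data if abs(x - m) / s <= limit]
    return kept, mean_and_sd(kept)

if __name__ == "__main__":
    readings = [5.30, 5.73, 6.77, 5.26, 4.33, 5.45, 6.09, 5.64, 5.81, 5.75]  # assumed data
    kept, (m_new, s_new) = apply_chauvenet(readings)
    print(f"rejected: {sorted(set(readings) - set(kept))}")
    print(f"new mean = {m_new:.3f}, new standard deviation = {s_new:.3f}")
```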
Rejection of Data based on confidence intervals
One criterion used for discarding a data point is that its deviation from the mean exceeds four times the probable error of a single reading. This amounts to discarding data lying outside the confidence interval for a single reading at a confidence level of 0.993.

A better criterion, which does not involve evaluating the probable error when the set of points is small and the standard deviation is not accurately known, is to discard any reading that lies outside the interval corresponding to a confidence level of 0.99 for a single observation.
Rejection of Data based on 3σ limits
The probability that a reading will lie within ±3σ of the central value is 0.9974, which is very high.
Therefore any reading not lying within these limits may be rejected.
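A minimal sketch of the 3σ rule on made-up data: any reading whose deviation from the mean exceeds three standard deviations is flagged for rejection.

```python
import math

# 26 readings of a quantity; 21.0 is a suspect point (data made up for illustration).
readings = [19.9] * 8 + [20.0] * 9 + [20.1] * 8 + [21.0]

n = len(readings)
mean = sum(readings) / n
divisor = n if n > 20 else n - 1                      # sigma for n > 20, s otherwise
sd = math.sqrt(sum((x - mean) ** 2 for x in readings) / divisor)

rejected = [x for x in readings if abs(x - mean) > 3 * sd]
print(f"mean = {mean:.4f}, standard deviation = {sd:.4f}")
print(f"readings outside the 3-sigma limits: {rejected}")
```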
