EEE225 01

MEASUREMENT AND INSTRUMENT (EEE225)

FUNDAMENTALS OF MEASUREMENT
INTRODUCTION
 Instrumentation is the technology of measurement, serving not only science but all branches of engineering, medicine, and almost every human endeavour. Knowledge of any parameter depends largely on measurement: through measurement a parameter can be understood in depth, and further modifications can be made.

 Measurement is basically used to monitor a process or operation, or to control one. For example, thermometers, barometers and anemometers are used to indicate environmental conditions. Similarly, water, gas and electricity meters are used to keep track of the quantity of the commodity used, and special monitoring equipment is used in hospitals.

Whatever the nature of the application, intelligent selection and use of measuring equipment depend on a broad knowledge of what is available and of how well the performance of the equipment suits the job to be performed.

 The major problem encountered with any measuring instrument is the error. Therefore, it is obviously necessary to select the
appropriate measuring instrument and measurement method which minimises error. To avoid errors in any experimental
work, careful planning, execution and evaluation of the experiment are essential.

 The basic concern of any measurement is that the measuring instrument should not affect the quantity being measured.

Measurement: The process of determining the amount, degree, or capacity by comparison (direct or indirect) with the accepted standards of the system of units being used, OR the act or result of comparing an unknown quantity with a known quantity.

Instrument: A device or mechanism used to determine the present value of the quantity under measurement.

SIGNIFICANCE OF MEASUREMENT
 Measurement provides a standard for everyday things and processes. Weight, temperature, length and even time are measurements, and they play a very important role in our lives. The money or currency we use is also a measurement.

 Measurements can also allow us to make decisions based on the outcome of the measurement.

 Measurements are also used for diagnostic purposes.


DIRECT METHODS
The unknown quantity (the measurand) is measured directly.
Example: measurement of current, voltage, power, resistance.
• Deflection method: The value of the unknown quantity is read from a measuring instrument with a calibrated scale indicating the quantity under measurement directly.
• Comparison method: The value of the unknown quantity is determined by comparison with a standard of the given quantity, e.g. comparing an unknown EMF with a known EMF.
• Null method: The action of the unknown quantity upon the instrument is reduced to zero by the counteraction of a known quantity of the same kind.
• Differential method (Assignment)

INDIRECT METHODS
The unknown quantity is determined by measuring functionally related quantities, and the desired quantity is obtained by calculation with formulas. For example, the resistance of a conductor can be calculated by measuring the current and the voltage:
• V = IR
• R = V/I

Fig 1: Deflection scale. Fig 2 and Fig 3: Null method (the unknown equals the known when the pointer is exactly in the middle).
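The indirect method can be sketched in code; the voltage and current readings below are assumed values for illustration, not measurements from the notes:

```python
# Indirect measurement: resistance obtained by calculation from
# directly measured voltage and current (Ohm's law, R = V / I).

def resistance(voltage_v, current_a):
    """Return resistance in ohms from measured voltage and current."""
    return voltage_v / current_a

v = 12.0   # measured voltage in volts (assumed reading)
i = 0.5    # measured current in amperes (assumed reading)
print(resistance(v, i))  # 24.0 ohms
```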
MEASUREMENT SYSTEMS
 A system can be defined as an arrangement of parts within some boundary which work together to provide some form of
output from a specified input or inputs. The boundary divides the system from the environment and the system interacts with
the environment by means of signals crossing the boundary from the environment to the system, i.e. inputs, and signals
crossing the boundary from the system to the environment, i.e. outputs.

Figs: (1) electric motor system, (2) amplifier system, (3) a series of interconnected systems.


Instrument systems
 The purpose of an instrumentation system used for making measurements is to give the user a numerical value corresponding
to the variable being measured.

 Thus a thermometer may be used to give a numerical value for the temperature of a liquid. We must, however, recognize that,
for a variety of reasons, this numerical value may not actually be the true value of the variable.

 Thus, in the case of the thermometer, there may be errors due to the limited accuracy in the scale calibration, or reading
errors due to the reading falling between two scale markings, or perhaps errors due to the insertion of a cold thermometer
into a hot liquid, lowering the temperature of the liquid and so altering the temperature being measured.

 We thus consider a measurement system to have an input of the true value of the variable being measured and an output of
the measured value of that variable.

 An instrumentation system for making measurements has an input of the true value of the variable being measured and an
output of the measured value.

Instrumentation/measurement system. Examples of instrumentation systems: (a) pressure measurement, (b) speedometer, (c) flow rate measurement.
The constituent elements of an instrumentation system
• An instrumentation system for making measurements consists of several elements which are used to carry out particular
functions. These functional elements are:

Sensor
This is the element of the system which is effectively in contact with the process for which a variable is being measured and gives
an output which depends in some way on the value of the variable and which can be used by the rest of the measurement
system to give a value to it. For example, a thermocouple is a sensor which has an input of temperature and an output of a small
e.m.f. (a) which in the rest of the measurement system might be amplified to give a reading on a meter. Another example of a
sensor is a resistance thermometer element which has an input of temperature and an output of a resistance change (b).

Signal processor
This element takes the output from the sensor and converts it into a form which is
suitable for display or onward transmission in some control system. In the case of the
thermocouple this may be an amplifier to make the e.m.f. big enough to register on a
meter (a). There may often be more than one item, perhaps an element which puts the output
from the sensor into a suitable condition for further processing and then an element
which processes the signal so that it can be displayed. The term signal conditioner is used
for an element which converts the output of a sensor into a suitable form for further
processing. Thus in the case of the resistance thermometer there might be a signal
conditioner, a Wheatstone bridge, which transforms the resistance change into a voltage
change, then an amplifier to make the voltage big enough for display (b).
Data presentation
This presents the measured value in a form which enables an observer to recognize it (Figure 1.9). This may be via a display, e.g. a pointer moving across the scale of a meter element, or perhaps information on a visual display unit (VDU). Alternatively, or additionally, the signal may be recorded, e.g. on the paper of a chart recorder or perhaps on magnetic disc, or transmitted to some other system such as a control system.

Fig: Instrumentation/measurement system, showing how these basic functional elements (sensor, signal processor, data presentation element) form a measurement system.

Example
With a resistance thermometer, element A takes the temperature signal and transforms it into a resistance signal, element B transforms the resistance signal into a current signal, and element C transforms the current signal into the movement of a pointer across a scale. Which of these elements is (a) the sensor, (b) the signal processor, (c) the data presentation?
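The three functional elements of such a system can be sketched as a chain of functions. Every transfer constant below is an illustrative assumption (a simple linear thermometer model), not a value from these notes:

```python
# Sketch of a resistance-thermometer measurement chain:
# sensor -> signal processor -> data presentation.

def sensor(temperature_c):
    """Element A: temperature -> resistance (assumed linear PT100-style model)."""
    r0, alpha = 100.0, 0.004      # ohms at 0 degC and per-degC coefficient (assumed)
    return r0 * (1 + alpha * temperature_c)

def signal_processor(resistance_ohm):
    """Element B: resistance -> current signal (assumed 0.1 mA per ohm)."""
    return 0.1e-3 * resistance_ohm

def data_presentation(current_a):
    """Element C: current -> pointer position in degrees (assumed 5000 deg/A)."""
    return 5000.0 * current_a

# True value in, measured (displayed) value out.
reading = data_presentation(signal_processor(sensor(100.0)))
print(f"pointer at {reading:.1f} degrees")  # pointer at 70.0 degrees
```

Here element A is the sensor, element B the signal processor, and element C the data presentation, answering the example question above.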
CLASSIFICATION OF INSTRUMENTS

Assignment one: Explain the classification of electrical instruments with examples.
CLASSIFICATION OF INSTRUMENTS CONT.
• Mechanical Instruments: Mechanical instruments are very reliable under static and stable conditions. Because they use mechanical parts, these instruments cannot faithfully follow the rapid changes involved in dynamic measurements, but they are cheaper and durable.

• Electrical Instruments: When the pointer deflection is caused by some electrical action, the instrument is called an electrical instrument. An electrical instrument operates more rapidly than a mechanical one; however, the pointer movement is still mechanical and has some inertia, which gives these instruments a poor frequency response.

• Electronic Instruments: Electronic instruments use semiconductor devices and are very fast in response. In electronic devices, since the only movement involved is that of electrons, the response time is extremely small owing to the very small inertia of electrons. With electronic devices, very weak signals can be detected by using pre-amplifiers and amplifiers.
Characteristics of Instruments
 Static characteristics: The static characteristics of an instrument are, in general, considered for instruments which are used to
measure an unvarying process condition. All the static performance characteristics are obtained by one form or another of a
process called calibration. There are a number of related definitions (or characteristics), which are described below.

Accuracy: The degree of exactness (closeness) of a measurement compared to the expected (desired) value; or the extent to which the value indicated by a measurement system or element might be wrong. For example, a thermometer may have an accuracy of ±0.1°C. Accuracy is often expressed as a percentage of the full range output or full-scale deflection (f.s.d.). For example, a system might have an accuracy of ±1% of f.s.d. If the full-scale deflection is, say, 10 A, then the accuracy is ±0.1 A. The accuracy is a summation of all the possible errors that are likely to occur, as well as the accuracy to which the system or element has been calibrated.
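The percent-of-f.s.d. specification converts to an absolute error bound as sketched below, using the ±1%, 10 A example above:

```python
# Absolute accuracy implied by a percent-of-full-scale-deflection specification.

def accuracy_fsd(percent, full_scale):
    """Return the absolute error bound for a +/- percent-of-f.s.d. accuracy spec."""
    return (percent / 100.0) * full_scale

# +/-1% of f.s.d. on a 10 A full scale gives +/-0.1 A, as in the text.
print(accuracy_fsd(1.0, 10.0))  # 0.1
```

Note the bound is fixed in absolute terms, so it represents a larger fraction of a reading taken well below full scale.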

Precision: A measure of the consistency or repeatability of measurements, i.e. successive readings do not differ. (Precision is the consistency of the instrument output for a given value of input.)

The term precision is used to describe the degree of freedom of a measurement system from random errors. Thus, a high
precision measurement instrument will give only a small spread of readings if repeated readings are taken of the same quantity.
A low precision measurement system will give a large spread of readings.

For example, consider the following two sets of readings obtained for repeated measurements of the same quantity by two
different instruments:
20.1 mm, 20.2 mm, 20.1 mm, 20.0 mm, 20.1 mm, 20.1 mm, 20.0 mm
19.9 mm, 20.3 mm, 20.0 mm, 20.5 mm, 20.2 mm, 19.8 mm, 20.3 mm
Note that precision should not be confused with accuracy. High precision does not mean high accuracy. A high precision
instrument could have low accuracy.
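The spread of the two sets of readings above can be compared numerically; range and standard deviation are both common measures of the scatter that precision describes:

```python
# Comparing the spread of the two sets of repeated readings from the text.
# The smaller spread of set A indicates the higher-precision instrument.
import statistics

set_a = [20.1, 20.2, 20.1, 20.0, 20.1, 20.1, 20.0]
set_b = [19.9, 20.3, 20.0, 20.5, 20.2, 19.8, 20.3]

for name, readings in (("A", set_a), ("B", set_b)):
    spread = max(readings) - min(readings)          # range of readings
    sd = statistics.stdev(readings)                 # sample standard deviation
    print(name, round(spread, 2), round(sd, 3))
```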

Error: The deviation of the measured value from the true value. The term error is used for the difference between the result of the measurement and the true value of the quantity being measured, i.e. error = measured value - true value.

Thus if the measured value is 10.1 when the true value is 10.0, the error is +0.1. If the measured value is 9.9 when the true value is 10.0, the error is -0.1.

Sensitivity: The ratio of the change in output (response) of the instrument to the change in input or measured variable. In other words, the sensitivity indicates how much the output of an instrument system changes when the quantity being measured changes by a given amount, i.e. the ratio of output/input. For example, a thermocouple might have a sensitivity of 20 μV/°C and so give an output of 20 μV for each 1°C change in temperature. Thus, if we take a series of readings of the output of an instrument for a number of different inputs and plot a graph of output against input, the sensitivity is the slope of the graph.
Example
A spring balance has its deflection measured for a number of loads, with the following results. Determine its sensitivity.

Load (kg):       0   1   2   3   4
Deflection (mm): 0  10  20  30  40

The graph of deflection against load has a slope of 10 mm/kg, and so this is the sensitivity.

Fig: Sensitivity as the slope of the input-output graph.
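The sensitivity can be checked as the least-squares slope of the deflection-versus-load data; for the table above this reduces to exactly 10 mm/kg:

```python
# Sensitivity as the slope of the output-vs-input graph,
# using the spring-balance data from the example above.

loads_kg = [0, 1, 2, 3, 4]          # input
deflections_mm = [0, 10, 20, 30, 40]  # output

# Least-squares slope through the data points.
n = len(loads_kg)
mean_x = sum(loads_kg) / n
mean_y = sum(deflections_mm) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(loads_kg, deflections_mm))
         / sum((x - mean_x) ** 2 for x in loads_kg))
print(f"sensitivity = {slope} mm/kg")  # sensitivity = 10.0 mm/kg
```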

Resolution : The smallest change in a measured variable to which an instrument will respond.

Expected/True value: The design value, i.e. the most probable value that calculations indicate one should expect to measure.

Stability:
The stability of a system is its ability to give the same output when used to measure a constant
input over a period of time. The term drift is often used to describe the change in output that
occurs over time. The drift may be expressed as a percentage of the full range output. The term
zero drift is used for the changes that occur in output when there is zero input.
Threshold
If the input to an instrument is gradually increased from zero, the input will have to reach a certain minimum level
before the change in the instrument output reading is of a large enough magnitude to be detectable.
This minimum level of input is known as the threshold of the instrument. Manufacturers vary in the way that they
specify threshold for instruments. Some quote absolute values, whereas others quote threshold as a percentage of
full-scale readings. As an illustration, a car speedometer typically has a threshold of about 15 km/h. This means that,
if the vehicle starts from rest and accelerates, no output reading is observed on the speedometer until the speed
reaches 15 km/h.
Linearity
It is normally desirable that the output reading of an instrument is
linearly proportional to the quantity being measured. The Xs marked
on Figure 2.6 show a plot of the typical output readings of an
instrument when a sequence of input quantities are applied to it.
Normal procedure is to draw a good fit straight line through the Xs, as
shown in Figure.
The non-linearity is then defined as the maximum deviation of any of
the output readings marked X from this straight line.
Non-linearity is usually expressed as a percentage of full-scale reading
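A minimal sketch of this procedure, using assumed input/output readings rather than the actual data of Figure 2.6:

```python
# Non-linearity: fit a straight line through the readings, then report the
# maximum deviation from that line as a percentage of full-scale reading.
# The data below are illustrative assumptions.

inputs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
outputs = [0.05, 1.02, 2.10, 2.95, 4.08, 5.00]   # assumed instrument readings

# Least-squares best-fit line through the readings.
n = len(inputs)
mx = sum(inputs) / n
my = sum(outputs) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(inputs, outputs))
         / sum((x - mx) ** 2 for x in inputs))
intercept = my - slope * mx

# Maximum deviation of any reading from the fitted line.
max_dev = max(abs(y - (slope * x + intercept)) for x, y in zip(inputs, outputs))
full_scale = max(outputs)
print(f"non-linearity = {100 * max_dev / full_scale:.2f}% of full scale")
```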
Scale range and Scale span
The range of a variable of a system is the limits between which the input can vary. For example, a resistance thermometer sensor might be quoted as having a range of -200 to +800°C. The meter shown has the dual ranges 0 to 4 and 0 to 20. The range of a variable of an instrument is also sometimes called its span. The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure.

The term dead band or dead space is used if there is a range of input values for which there is no output. Figure 1.17 illustrates
this. For example, bearing friction in a flow meter using a rotor might mean that there is no output until the input has reached a
particular flow rate threshold.

Figs: Multi-range meter; dead space/dead band.
Errors in measurement
Measurement is the process of comparing an unknown quantity with an accepted standard quantity. It involves connecting a
measuring instrument into the system under consideration and observing the resulting response on the instrument.

The measurement thus obtained is a quantitative measure of the “true value” / “expected value” . Any measurement is affected
by many variables, therefore the results rarely reflect the expected value.

For example, connecting a measuring instrument into the circuit under consideration always disturbs (changes) the circuit,
causing the measurement to differ from the expected value.

Some factors that affect the measurements are related to the measuring instruments themselves. Other factors are related to the
person using the instrument.

The degree to which a measurement nears the expected value is expressed in terms of the error of measurement. Error may be
expressed either as absolute or as percentage of error.

Absolute error may be defined as the difference between the expected value of the variable and the measured value of the variable, or

e = Yn - Xn

where e = absolute error, Yn = expected value, and Xn = measured value.
Example 1: The expected value of the voltage across a resistor is 80 V. However, the measurement gives a value of 79 V. Calculate (i) absolute error, (ii) % error, (iii) relative accuracy, and (iv) % of accuracy.
Try 1: The expected value of the current through a resistor is 20 mA. However, the measurement yields a current value of 18 mA. Calculate (i) absolute error, (ii) % error, (iii) relative accuracy, (iv) % accuracy.
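Try 1 can be worked with the definitions above; the formulas assumed here are the usual textbook ones (absolute error e = Yn - Xn, % error = |e|/Yn x 100, relative accuracy A = 1 - |Yn - Xn|/Yn, % accuracy = 100A):

```python
# Error quantities for a single measurement, given the expected (Yn)
# and measured (Xn) values.

def error_quantities(expected, measured):
    e = expected - measured                 # absolute error, e = Yn - Xn
    pct_error = abs(e) / expected * 100     # percentage error
    rel_accuracy = 1 - abs(e) / expected    # relative accuracy A
    return e, pct_error, rel_accuracy, 100 * rel_accuracy

# Try 1: expected 20 mA, measured 18 mA.
e, pct, a, pct_a = error_quantities(20e-3, 18e-3)
print(round(e, 6), round(pct, 2), round(a, 4), round(pct_a, 2))  # 0.002 10.0 0.9 90.0
```

So the absolute error is 2 mA, the % error 10%, the relative accuracy 0.9, and the % accuracy 90%.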

Example 2: The table gives a set of 10 measurements that were recorded in the laboratory. Calculate the precision of the 6th measurement.

NOTE: The precision of a measurement is a quantitative or numerical indication of the closeness with which a repeated set of measurements of the same variable agrees with the average of the set of measurements.

The accuracy and precision of measurements depend not only on the quality of the measuring instrument but also on the person using it. However, whatever the quality of the instrument and the care exercised by the user, there is always some error present in the measurement of physical quantities.

Precision can also be expressed mathematically as

Precision = 1 - |Xn - X̄| / X̄

where Xn is the value of the nth measurement and X̄ is the average of the set of measurements.
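A sketch of this calculation follows. Since the original table of 10 readings is not reproduced in these notes, the values below are assumed for illustration:

```python
# Precision of a single measurement within a set, using
#   Precision = 1 - |Xn - Xbar| / Xbar
# where Xn is the n-th measurement and Xbar is the average of the set.

readings = [98, 101, 102, 97, 101, 100, 103, 98, 106, 99]   # assumed readings

def precision_of(n, values):
    """Return the precision of the n-th measurement (1-indexed)."""
    avg = sum(values) / len(values)
    return 1 - abs(values[n - 1] - avg) / avg

# 6th measurement is 100; the average of the set is 100.5.
print(round(precision_of(6, readings), 3))  # 0.995
```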
TYPES OF STATIC ERROR
 The static error of a measuring instrument is the numerical difference between the true value of a quantity and
its value as obtained by measurement, i.e. repeated measurement of the same quantity gives different
indications. Static errors are categorized as gross errors or human errors, systematic errors, and random errors.

 Gross Errors
These errors are mainly due to human mistakes in reading or in using instruments or errors in recording
observations. Errors may also occur due to incorrect adjustment of instruments and computational mistakes. These
errors cannot be treated mathematically.

The complete elimination of gross errors is not possible, but one can minimize them. Some errors are easily
detected while others may be elusive. One of the basic gross errors that occurs frequently is the improper use of an
instrument. The error can be minimized by taking proper care in reading and recording the measurement parameter.
In general, indicating instruments change ambient conditions to some extent when connected into a complete
circuit.

 Systematic Errors
A constant, uniform deviation of the operation of an instrument is known as a systematic error. These errors occur due to shortcomings of the instrument, such as defective or worn parts, ageing, or effects of the environment on the instrument. These errors are sometimes referred to as bias, and they influence all measurements of a quantity alike.
There are basically three types of systematic errors: (i) instrumental, (ii) environmental, and (iii) observational.
(i) Instrumental Errors
Instrumental errors are inherent in measuring instruments because of their mechanical structure: for example, friction in the bearings of moving components, irregular spring tension, stretching of the spring, or reduction in tension due to improper handling or overloading of the instrument. Instrumental errors can be avoided by
(a) selecting a suitable instrument for the particular measurement application;
(b) applying correction factors after determining the amount of instrumental error;
(c) calibrating the instrument against a standard.

(ii) Environmental Errors
Environmental errors are due to conditions external to the measuring device, including conditions in the area surrounding the instrument, such as the effects of changes in temperature, humidity, or barometric pressure, or of magnetic or electrostatic fields. These errors can be avoided by (i) air conditioning, (ii) hermetically sealing certain components in the instrument, and (iii) using magnetic shields.
(iii) Observational Errors
Observational errors are errors introduced by the observer. The most common error is the parallax error introduced in reading of
a meter scale, and the error of estimation when obtaining a reading from a meter scale.

These errors are caused by the habits of individual observers. For example, an observer may always introduce an error by
consistently holding his head too far to the left while reading a needle and scale reading.

In general, systematic errors can also be subdivided into static and dynamic errors.

Static errors are caused by limitations of the measuring device or the physical laws governing its behaviour. Dynamic errors are
caused by the instrument not responding fast enough to follow the changes in a measured variable.

 Random Errors
These are errors that remain after gross and systematic errors have been substantially reduced or at least accounted
for. Random errors are generally an accumulation of a large number of small effects and may be of real concern only
in measurements requiring a high degree of accuracy. Such errors can be analyzed statistically. These errors are due
to unknown causes, not determinable in the ordinary process of making measurements. Such errors are normally
small and follow the laws of probability. Random errors can thus be treated mathematically.

For example, suppose a voltage is being monitored by a voltmeter which is read at 15-minute intervals. Although the instrument operates under ideal environmental conditions and is accurately calibrated before measurement, it still gives readings that vary slightly over the period of observation. This variation cannot be corrected by any method of calibration or any other known method of control.
Limiting Errors
 Most manufacturers of measuring instruments specify accuracy within a certain percentage of the full-scale reading. For example, the manufacturer of a certain voltmeter may specify the instrument to be accurate within ±2% at full-scale deflection. This specification is called the limiting error. It means that a full-scale reading is guaranteed to be within the limits of 2% of a perfectly accurate reading; however, for a reading less than full scale, the limiting error increases.

Example 1: A 600 V voltmeter is specified to be accurate within ±2% at full scale. Calculate the limiting error when the instrument is used to measure a voltage of 250 V.

Example 2: A 500 mA ammeter is specified to be accurate within ±2% at full scale. Calculate the limiting error when the instrument is used to measure 300 mA.

Example 3: A voltmeter reading 70 V on its 100 V range and an ammeter reading 80 mA on its 150 mA range are used to determine the power dissipated in a resistor. Both instruments are guaranteed to be accurate within ±1.5% at full-scale deflection. Determine the limiting error of the power.
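The three examples can be worked with one helper. The guaranteed error is the quoted percentage of full scale; the limiting error at a smaller reading is that fixed absolute error expressed as a percentage of the reading. For a product such as P = VI, the limiting percentage errors add:

```python
# Limiting error at a reading below full scale.

def limiting_error_pct(pct_fsd, full_scale, reading):
    """Percent limiting error at `reading`, for a +/- pct_fsd-of-full-scale spec."""
    return pct_fsd * full_scale / reading

# Example 1: 600 V meter, +/-2% f.s.d., reading 250 V.
print(limiting_error_pct(2.0, 600.0, 250.0))              # 4.8 (%)

# Example 2: 500 mA meter, +/-2% f.s.d., reading 300 mA.
print(round(limiting_error_pct(2.0, 500.0, 300.0), 2))    # 3.33 (%)

# Example 3: limiting % errors add for the power P = V * I.
volt_pct = limiting_error_pct(1.5, 100.0, 70.0)           # ~2.14 %
amp_pct = limiting_error_pct(1.5, 150.0, 80.0)            # ~2.81 %
print(round(volt_pct + amp_pct, 2))                       # 4.96 (%)
```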
SOURCES OF ERROR
 The sources of error, other than the inability of a piece of hardware to provide a true measurement, are as
follows:
1. Insufficient knowledge of process parameters and design conditions
2. Poor design
3. Change in process parameters, irregularities, upsets.
4. Poor maintenance
5. Errors caused by person operating the instrument or equipment
6. Certain design limitations
Standards
A standard is a physical representation of a unit of measurement. A known accurate measure of physical quantity is
termed as a standard. These standards are used to determine the values of other physical quantities by the
comparison method.
In fact, a unit is realized by reference to a material standard or to natural phenomena, including physical and atomic
constants.

For example, the fundamental unit of length in the International System (SI) is the metre, historically defined as the distance between two fine lines engraved on gold plugs near the ends of a platinum-iridium alloy bar at 0°C, mechanically supported in a prescribed manner. Similarly, different standards have been developed for other units of measurement (including fundamental units as well as derived mechanical and electrical units). All these standards are preserved at the International Bureau of Weights and Measures at Sèvres, Paris.

Also, depending on the functions and applications, different types of “standards of measurement” are classified in
categories (i) international, (ii) primary, (iii) secondary, and (iv) working standards.

International Standards
International standards are defined by International agreement. They are periodically evaluated and checked by
absolute measurements in terms of fundamental units of Physics. They represent certain units of measurement to
the closest possible accuracy attainable by the science and technology of measurement. These International
standards are not available to ordinary users for measurements and calibrations.
Primary Standards
The principal function of primary standards is the calibration and verification of secondary standards. Primary standards are
maintained at the National Standards Laboratories in different countries. The primary standards are not available for use outside
the National Laboratory. These primary standards are absolute standards of high accuracy that can be used as ultimate reference
standards.

Secondary Standards
Secondary standards are basic reference standards used by measurement and calibration laboratories in industries. These
secondary standards are maintained by the particular industry to which they belong. Each industry has its own
secondary standard. Each laboratory periodically sends its secondary standard to the National standards laboratory for calibration
and comparison against the primary standard. After comparison and calibration, the National Standards
Laboratory returns the Secondary standards to the particular industrial laboratory with a certification of measuring accuracy in
terms of a primary standard.

Working Standards
Working standards are the principal tools of a measurement laboratory. These standards are used to check and calibrate laboratory instruments for accuracy and performance. For example, manufacturers of electronic components such as capacitors, resistors, etc. use a working standard for checking the values of the components being manufactured, e.g. a standard resistor for checking manufactured resistance values.
Units
A unit of measurement is a definite magnitude of a quantity, defined and adopted by convention or by law, that is used as a
standard for measurement of the same kind of quantity. Any other quantity of that kind can be expressed as a multiple of the
unit of measurement.

For example, a length is a physical quantity. The metre is a unit of length that represents a definite predetermined length. When
we say 5 metres (or 5 m), we actually mean 5 times the definite predetermined length called "metre". Measurement is a process
of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind.

 A multitude of systems of units used to be very common. Now there is a global standard, the International System of Units (SI),
the modern form of the metric system.

System of Units
• Traditional systems
Historically many of the systems of measurement which had been in use were to some extent based on the dimensions of the human body. As a result, units of measure could vary not only from location to location but from person to person.

• Metric systems
Metric systems of units have evolved since the adoption of the original metric system in France in 1791. The current international standard metric system is the International System of Units (abbreviated SI). An important feature of modern systems is standardization: each unit has a universally recognized size (maintained by natural standards, legal control of weights and measures, etc.).
