

15
Statistical Process Control¹

Chapter Objectives
• Understanding the seven QC tools for continuous improvement and problem solving: Pareto charts, check sheets, histograms, process flow diagrams, and cause-and-effect diagrams
• Understanding basic statistical concepts such as measures of central tendency and dispersion, population, sample, and the normal distribution
• Overview of the application of a data-based approach with basic statistical tools for continuous improvement and problem solving
• Studying statistical control charts, their types, and some application examples

Introduction
One of the best technical tools for improving product and service quality is statistical process control (SPC).
There are seven basic techniques. Since the first four techniques are not really statistical, the word statistical is
somewhat of a misnomer. Furthermore, this technical tool not only controls the process but has the capability
to improve it as well.

Pareto Diagram
Vilfredo Pareto (1848–1923) conducted extensive studies of the distribution of wealth in Europe. He found that there were a few people with a lot of money and many people with little money. This unequal distribution of wealth became an integral part of economic theory. Dr. Joseph Juran recognized this concept as a universal that could be applied to many fields. He coined the phrases vital few and useful many.
A Pareto diagram is a graph that ranks data classifications in descending order from left to right, as shown in Figure 15-1. In this case, the data classifications are types of coating machines. Other possible data classifications are problems, complaints, causes, types of nonconformities, and so forth. The vital few are on the left, and the useful many are on the right. It is sometimes necessary to combine some of the useful many into one classification called "other". When this category is used, it is placed on the far right.

¹ Adapted, with permission, from Dale H. Besterfield, Quality Control, 6th ed. (Upper Saddle River, NJ: Prentice Hall, 2001).


[Figure 15-1 Pareto Diagram: nonconformities by coating machine (35, 51, 44, 47, 29, 31), charted once by frequency and once by dollars in thousands, with the vital few on the left, the useful many on the right, and the percentage of the total shown above each column]

The vertical scale is dollars (or frequency), and the percent of each category can be placed above the column. In this case, Pareto diagrams were constructed for both frequency and dollars. As can be seen from the figure, machine 35 has the greatest number of nonconformities, but machine 51 has the greatest dollar value. Pareto diagrams can be distinguished from histograms (to be discussed) by the fact that the horizontal scale of a Pareto diagram is categorical, whereas the scale for the histogram is numerical.
Pareto diagrams are used to identify the most important problems. Usually, 75% of the total results from 25% of the items. This fact is shown in the figure, where coating machines 35 and 51 account for about 75% of the total.
Actually, the most important items could be identified by listing them in descending order. However, the graph has the advantage of providing a visual impact, showing those vital few characteristics that need attention. Resources are then directed to take the necessary corrective action.
Examples of the vital few are:

A few customers account for the majority of sales.
A few processes account for the bulk of the scrap or rework cost.
A few nonconformities account for the majority of customer complaints.
A few suppliers account for the majority of rejected parts.
A few problems account for the bulk of the process downtime.
A few products account for the majority of the profit.
A few items account for the bulk of the inventory cost.

Construction of a Pareto diagram is very simple. There are five steps:

1. Determine the method of classifying the data: by problem, cause, nonconformity, and so forth.
2. Decide if dollars (best), frequency, or both are to be used to rank the characteristics.
3. Collect data for an appropriate time interval or use historical data.
4. Summarize the data and rank order categories from largest to smallest.
5. Construct the diagram and find the vital few.
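
These five steps amount to sorting categories by magnitude and accumulating percentages, which the short Python sketch below illustrates. It is a minimal example with hypothetical dollar figures; the machine labels echo Figure 15-1, but the values are not taken from it.

```python
# Minimal Pareto analysis: rank categories and report cumulative percentages.
# The dollar values are hypothetical and are not the data behind Figure 15-1.
costs = {
    "machine 35": 80_000,
    "machine 51": 65_000,
    "machine 44": 26_000,
    "machine 47": 21_000,
    "machine 29": 12_000,
    "machine 31": 9_000,
}

total = sum(costs.values())
cumulative = 0
print(f"{'Category':<12}{'Dollars':>10}{'Percent':>10}{'Cum. %':>10}")
for name, value in sorted(costs.items(), key=lambda item: item[1], reverse=True):
    cumulative += value
    print(f"{name:<12}{value:>10,}{value / total:>10.1%}{cumulative / total:>10.1%}")
```

Plotting the sorted values as bars, with the cumulative percentage drawn as a line above them, gives the usual Pareto chart and makes the vital few obvious at a glance.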

Note that a quality improvement of the vital few, say, 50%, is a much greater return on investment than a 50%
improvement of the useful many. Also, experience has shown that it is easier to make a 50% improvement in the

[Figure 15-2 Flow Diagram for an Order Entry Activity: orders arrive by telephone, fax, or letter and are logged in; a credit check and contract review follow, with a hold if the review is not OK; an inventory check leads to scheduling production, production itself, and notifying the customer as to the delivery date]

vital few. The use of a Pareto diagram is a never-ending process. For example, let's assume that coating machine 51 is the target for correction in the improvement program. A project team is assigned to investigate and make improvements. The next time a Pareto analysis is made, another machine, say, 35, becomes the target for correction, and the improvement process continues until coating machine nonconformities become an insignificant quality problem.
The Pareto diagram is a powerful quality improvement tool. It is applicable to problem identification and
the measurement of progress.

Process Flow Diagram


For many products and services, it may be useful to construct a process flow diagram. Figure 15-2 shows a flow diagram for the order entry activity of a make-to-order company that manufactures gasoline filling station hose nozzles. These diagrams show the flow of the product or service as it moves through the various processing operations. The diagram makes it easy to visualize the entire system, identify potential trouble spots, and locate control activities. It answers the question, "Who is the next customer?" Improvements can be accomplished by changing, reducing, combining, or eliminating steps.
Standardized symbols are used by industrial engineers; however, they are not necessary for problem solving. The symbols used in the figure should be sufficient.

Cause-and-Effect Diagram
A cause-and-effect (C&E) diagram is a picture composed of lines and symbols designed to represent a meaningful relationship between an effect and its causes. It was developed by Dr. Kaoru Ishikawa in 1943 and is sometimes referred to as an Ishikawa diagram or a fishbone diagram because of its shape.
C&E diagrams are used to investigate either a "bad" effect, in order to take action to correct its causes, or a "good" effect, in order to learn which causes are responsible. For every effect, there are likely to be numerous

[Figure 15-3 Cause-and-Effect Diagram: the major causes (people, materials, work methods, environment, equipment, measurement) branch toward the quality characteristic, with causes on the left and the effect on the right]

causes. Figure 15-3 illustrates a C&E diagram with the effect on the right and causes on the left. The effect is
the quality characteristic that needs improvement. Causes are sometimes broken down into the major causes
of work methods, materials, measurement, people, equipment, and the environment. Other major causes could
be used for service-type problems, as indicated in the chapter on customer satisfaction.
Each major cause is further subdivided into numerous minor causes. For example, under work methods, we might have training, knowledge, ability, physical characteristics, and so forth. C&E diagrams are the means of picturing all these major and minor causes. Figure 15-4 shows a C&E diagram for house paint peeling using four major causes.
The first step in the construction of a C&E diagram is for the project team to identify the effect or quality
problem. It is placed on the right side of a large piece of paper by the team leader. Next, the major causes are
identified and placed on the diagram.
[Figure 15-4 Cause-and-Effect Diagram of House Paint Peeling: four major causes (material, work method, equipment, environment) branch into minor causes such as cheap solvents, contaminated or wrong-type paint, paint applied too thin or too thick, no instructions, painting over dirt, a dirty brush or bad bristles, air pollution, humidity, temperature, and acid rain; the most likely causes are circled and numbered 1 through 5]

Determining all the minor causes requires brainstorming by the project team. Brainstorming is an idea-generating technique that is well suited to the C&E diagram. It uses the creative thinking capacity of the team. Attention to a few essentials will provide a more accurate and usable result:
1. Participation by every member of the team is facilitated by each member taking a turn giving one idea at a time. If a member cannot think of a minor cause, he or she passes for that round. Another idea may occur at a later round. Following this procedure prevents one or two individuals from dominating the brainstorming session.
2. Quantity of ideas, rather than quality, is encouraged. One person's idea will trigger someone else's idea, and a chain reaction occurs. Frequently, a trivial, or "dumb," idea will lead to the best solution.
3. Criticism of an idea is not allowed. There should be a freewheeling exchange of information that liberates the imagination. All ideas are placed on the diagram. Evaluation of ideas occurs at a later time.
4. Visibility of the diagram is a primary factor of participation. In order to have space for all the minor causes, a 2-foot by 3-foot piece of paper is recommended. It should be taped to a wall for maximum visibility.
5. Create a solution-oriented atmosphere and not a gripe session. Focus on solving a problem rather than discussing how it began. The team leader should ask questions using the why, what, where, when, who, and how techniques.
6. Let the ideas incubate for a period of time (at least overnight) and then have another brainstorming session. Provide team members with a copy of the ideas after the first session. When no more ideas are generated, the brainstorming activity is terminated.

Once the C&E diagram is complete, it must be evaluated to determine the most likely causes. This activity is accomplished in a separate session. The procedure is to have each person vote on the minor causes. Team members may vote on more than one cause. Those causes with the most votes are circled, as shown in Figure 15-4, and the four or five most likely causes of the effect are determined.
Solutions are developed to correct the causes and improve the process. Criteria for judging the possible
solutions include cost, feasibility, resistance to change, consequences, training, and so forth. Once the team
agrees on solutions, testing and implementation follow.
Diagrams are posted in key locations to stimulate continued reference as similar or new problems arise.
The diagrams are revised as solutions are found and improvements are made.
The C&E diagram has nearly unlimited application in research, manufacturing, marketing, office operations, service, and so forth. One of its strongest assets is the participation and contribution of everyone involved in the brainstorming process. The diagrams are useful to

1. Analyze actual conditions for the purpose of product or service quality improvement, more efficient
use of resources, and reduced costs.
2. Eliminate conditions causing nonconformities and customer complaints.
3. Standardize existing and proposed operations.
4. Educate and train personnel in decision-making and corrective-action activities.

Check Sheets
The main purpose of check sheets is to ensure that the data is collected carefully and accurately by operating
personnel. Data should be collected in such a manner that it can be quickly and easily used and analyzed. The
form of the check sheet is individualized for each situation and is designed by the project team. Figure 15-5
shows a check sheet for paint nonconformities for bicycles.
Figure 15-6 shows a check sheet for temperature. The scale on the left represents the midpoint and boundaries for each temperature range. Data for this type of check sheet is frequently recorded by placing an "X" in the appropriate square. In this case, the time has been recorded in order to provide additional information for problem solving.

[Figure 15-5 Check Sheet for Paint Nonconformities. Product: Bicycle 32; number inspected: 2217. Tally totals by nonconformity type: Blister 21, Light spray 38, Drips 22, Overspray 11, Runs 47, Others 5; total nonconformities 144; number nonconforming 113]

[Figure 15-6 Check Sheet for Temperature: temperature ranges with their midpoints and boundaries listed down the left side; each observation is recorded in the square for its range, with the time of the reading noted]



Whenever possible, check sheets are also designed to show location. For example, the check sheet for bicycle paint nonconformities could show an outline of a bicycle, with X's indicating the location of the nonconformities. Creativity plays a major role in the design of a check sheet. It should be user-friendly and, whenever possible, include information on time and location.
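
Since a check sheet is essentially a running tally, its logic is easy to mirror in code. The sketch below is illustrative only: the category names follow Figure 15-5, but the stream of observations is hypothetical.

```python
from collections import Counter

# Hypothetical inspection stream; each entry is one observed paint nonconformity.
# Category names follow the check sheet of Figure 15-5.
observations = ["Runs", "Light spray", "Blister", "Runs", "Drips",
                "Overspray", "Runs", "Light spray", "Others", "Drips"]

tally = Counter(observations)
for nonconformity, count in tally.most_common():
    print(f"{nonconformity:<12} {'/' * count}  {count}")
print(f"{'Total':<12} {sum(tally.values())}")
```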

Histogram
The first "statistical" SPC technique is the histogram. It describes the variation in the process, as illustrated by Figure 15-7. The histogram graphically estimates the process capability and, if desired, the relationship to the specifications and the nominal (target). It also suggests the shape of the population and indicates if there are any gaps in the data.
In industry, business, and government, the mass of data that have been collected is voluminous. Even one item, such as the number of daily billing errors of a large bank, can represent such a mass of data that it can be more confusing than helpful. For example, consider the data shown in Table 15-1. Clearly these data, in this form, are difficult to use and are not effective in describing the data's characteristics. Some means of summarizing the data are needed to show what value the data tend to cluster about and how the data are dispersed or spread out. Two techniques are needed to accomplish this summarization of data: graphical and analytical.

Ungrouped Data
The graphical technique is a plot or picture of a frequency distribution, which is a summarization of how the data points (observations) occur within each subdivision of observed values or groups of observed values. Analytical techniques summarize data by computing a measure of the central tendency (average, median, and mode) and a measure of the dispersion (range and standard deviation). Sometimes both the graphical and analytical techniques are used.

[Figure 15-7 Frequency Histogram: frequency plotted against the number nonconforming, 0 through 5, for the data of Table 15-2]

TABLE 15-1
Number of Daily Accounting Errors
0 1 3 0 1 0 1 0
1 5 4 1 2 1 2 0
1 0 2 0 0 2 0 1
2 1 1 1 2 1 1
0 4 1 3 1 1 1
1 3 4 0 0 0 0
1 3 0 1 2 2 3
Because unorganized data are virtually meaningless, a method of processing the data is necessary. Table 15-1 will be used to illustrate the concept. An analyst reviewing the information as given in this table would have difficulty comprehending the meaning of the data. A much better understanding can be obtained by tallying the frequency of each value, as shown in Table 15-2.
The first step is to establish an array, which is an arrangement of raw numerical data in ascending or descending order of magnitude. An array of ascending order from 0 to 5 is shown in the first column of the table. The next step is to tabulate the frequency of each value by placing a tally mark under the tabulation column and in the appropriate row. Start with the numbers 0, 1, 1, 2, ... of Table 15-1 and continue placing tally marks until all the data have been tabulated. The last column of the table is the numerical value for the number of tallies and is called the frequency.
Analysis of Table 15-2 shows that one can visualize the distribution of the data. If the "Tabulation" column is eliminated, the resulting table is classified as a frequency distribution, which is an arrangement of data to show the frequency of values in each category. The frequency distribution is a useful method of visualizing data and is a basic statistical concept. To think of a set of numbers as having some type of distribution is fundamental to solving quality control problems. There are different types of frequency distributions, and the type of distribution can indicate the problem-solving approach.
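
The tallying procedure is easy to reproduce directly from Table 15-1. The short Python sketch below builds the frequency distribution and reproduces the counts of Table 15-2.

```python
from collections import Counter

# The 52 daily accounting errors of Table 15-1, read row by row.
errors = [0, 1, 3, 0, 1, 0, 1, 0,
          1, 5, 4, 1, 2, 1, 2, 0,
          1, 0, 2, 0, 0, 2, 0, 1,
          2, 1, 1, 1, 2, 1, 1,
          0, 4, 1, 3, 1, 1, 1,
          1, 3, 4, 0, 0, 0, 0,
          1, 3, 0, 1, 2, 2, 3]

frequency = Counter(errors)
for value in sorted(frequency):        # array in ascending order
    print(value, frequency[value])     # 0 15, 1 20, 2 8, 3 5, 4 3, 5 1
```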
When greater visual clarity is desired, frequency distributions are presented in graphical form called histograms. A histogram consists of a set of rectangles that represent the frequency of the observed values in each category. Figure 15-7 is a histogram for the data in Table 15-2. Because this is a discrete variable, a vertical line in place of a rectangle would have been theoretically correct. However, the rectangle is commonly used.

TABLE 15-2
Tally of Number of Daily Accounting Errors

Number Nonconforming    Tabulation                   Frequency
0                       ///// ///// /////               15
1                       ///// ///// ///// /////         20
2                       ///// ///                         8
3                       /////                             5
4                       ///                               3
5                       /                                 1

Grouped Data
When the number of categories becomes large, the data are grouped into cells. In general, the number of cells should be between 5 and 20. Broad guidelines are as follows: Use 5 to 9 cells when the number of observations is less than 100; use 8 to 17 cells when the number of observations is between 100 and 500; and use 15 to 20 cells when the number of observations is greater than 500. To provide flexibility, the numbers of cells in the guidelines overlap. Figure 15-8 shows a histogram for grouped data of the quality characteristic, temperature. The data were collected using the check sheet for temperature (see Figure 15-6). The interval is the distance between adjacent cell midpoints. Cell boundaries are halfway between the cell midpoints. If an odd cell interval is chosen, which in this case is five degrees, the midpoint value will be to the same degree of accuracy as the ungrouped data. This situation is desirable, because all values in the cell take on the midpoint value when any additional calculations are made.
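
The cell-count guidelines and the odd-interval rule can be captured in a few lines. The sketch below is a minimal illustration; the helper name and the temperature readings are assumptions made for the example, not data from the chapter.

```python
def suggested_cell_range(n_observations: int) -> range:
    """Guideline number of histogram cells for a given number of observations."""
    if n_observations < 100:
        return range(5, 10)    # 5 to 9 cells
    if n_observations <= 500:
        return range(8, 18)    # 8 to 17 cells
    return range(15, 21)       # 15 to 20 cells

# Hypothetical temperature readings grouped into cells of odd width 5, so the
# midpoints (355, 360, ...) carry the same accuracy as the raw data.
readings = [357, 361, 363, 366, 368, 370, 371, 372, 374, 375, 377, 379, 382]
width = 5
cells: dict[int, int] = {}
for x in readings:
    midpoint = round(x / width) * width    # nearest multiple of the cell width
    cells[midpoint] = cells.get(midpoint, 0) + 1

print("suggested cell counts:", list(suggested_cell_range(len(readings))))
for midpoint in sorted(cells):
    print(f"{midpoint}: {'#' * cells[midpoint]}")
```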

[Figure 15-8 Histogram for Grouped Data: frequency of temperature readings grouped into five-degree cells with midpoints from 355 to 385, with the cell interval, a cell boundary, and a cell midpoint labeled]

Histogram Shapes

[Figure 15-9 Different Histogram Shapes: symmetrical, skewed right, skewed left, peaked, flat, bimodal]

Histograms have certain identifiable characteristics, as shown in Figure 15-9. One characteristic of the distribution concerns the symmetry or lack of symmetry of the data. Are the data equally distributed on each side



of the center, or are the data skewed to the right or to the left? Another characteristic concerns the peakedness, or kurtosis, of the data.
A final characteristic concerns the number of modes, or peaks, in the data. There can be one mode, two modes (bimodal), or multiple modes.
Histograms can give sufficient information about a quality problem to provide a basis for decision making without further analysis. They can also be compared in regard to location, spread, and shape. A histogram is like a snapshot of the process showing the variation. Histograms can determine the process capability, compare with specifications, suggest the shape of the population, and indicate discrepancies in the data, such as gaps.

Statistical Fundamentals
Before a description of the next SPC tool, it is necessary to have a background in statistical fundamentals. Statistics is defined as the science that deals with the collection, tabulation, analysis, interpretation, and presentation of quantitative data. Each division is dependent on the accuracy and completeness of the preceding one. Data may be collected by a technician measuring the tensile strength of a plastic part or by an operator using a check sheet. It may be tabulated by simple paper-and-pencil techniques or by the use of a computer. Analysis may involve a cursory visual examination or exhaustive calculations. The final results are interpreted and presented to assist in the making of decisions concerning quality.
Data may be collected by direct observation or indirectly through written or verbal questions. The latter technique is used extensively by market research personnel and public opinion pollsters. Data that are collected for quality control purposes are obtained by direct observation and are classified as either variables or attributes. Variables are those quality characteristics that are measurable, such as a weight measured in grams. Attributes, on the other hand, are those quality characteristics that are classified as either conforming or not conforming to specifications, such as the result of a "go–no go" gauge.
A histogram is sufficient for many quality control problems. However, with a broad range of problems a graphical technique is either undesirable or needs the additional information provided by analytical techniques. Analytical methods of describing a collection of data have the advantage of occupying less space than a graph. They also have the advantage of allowing for comparisons between collections of data. They also allow for additional calculations and inferences. There are two principal analytical methods of describing a collection of data: measures of central tendency and measures of dispersion.

Measures of Central Tendency


A measure of central tendency of a distribution is a numerical value that describes the central position of the data or how the data tend to build up in the center. There are three measures in common use in quality: (1) the average, (2) the median, and (3) the mode.
The average is the sum of the observations divided by the number of observations. It is the most common measure of central tendency and is represented by the equation²

\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}

² For data grouped into cells, the equation uses Σ fX, where f = cell frequency and X = cell midpoint.


where X̄ = average, read as "X bar"
      n = number of observed values
      X_i = observed value
      Σ = sum of

Unless otherwise noted, X̄ stands for the average of observed values, X̄_X. The same equation is used to find

X̿ or X̄_X̄ = average of averages
R̄ = average of ranges
p̄ = average of proportions, etc.

Another measure of central tendency is the median, Md, which is defined as the value that divides a series
of ordered observations so that the number of items above it is equal to the number below it. Two situations
are possible—when the number in the series is odd and when the number in the series is even. When the
number in the series is odd, the median is the midpoint of the values, provided the data are ordered. Thus,
the ordered set of numbers 3, 4, 5, 6, 8, 8, and 10 has a median of 6. When the number in the series is even,
the median is the average of the two middle numbers. Thus, the ordered set of numbers 3, 4, 5, 6, 8, and 8
has a median that is the average of 5 and 6, which is (5 + 6)/2 = 5.5.
The mode, Mo, of a set of numbers is the value that occurs with the greatest frequency. It is possible for the mode to be nonexistent in a series of numbers or to have more than one value. To illustrate, the series of numbers 3, 3, 4, 5, 5, 5, and 7 has a mode of 5; the series of numbers 22, 23, 25, 30, 32, and 36 does not have a mode; and the series of numbers 105, 105, 105, 107, 108, 109, 109, 109, 110, and 112 has two modes, 105 and 109. A series of numbers is referred to as unimodal if it has one mode, bimodal if it has two modes, and multimodal if there are more than two modes. When data are grouped into a frequency distribution, the midpoint of the cell with the highest frequency is the mode, because this point represents the highest point (greatest frequency) of the histogram.
The average is the most commonly used measure of central tendency. It is used when the distribution is symmetrical or not appreciably skewed to the right or left; when additional statistics, such as measures of dispersion, control charts, and so on, are to be computed based on the average; and when a stable value is needed for inductive statistics. The median becomes an effective measure of the central tendency when the distribution is positively (to the right) or negatively (to the left) skewed. The median is used when an exact midpoint of a distribution is desired. When a distribution has extreme values, the average will be adversely affected, whereas the median will remain unchanged. Thus, in a series of numbers such as 12, 13, 14, 15, 16, the median and average are identical and are equal to 14. However, if the first value is changed to a 2, the median remains at 14, but the average becomes 12. A control chart based on the median is user-friendly and excellent for monitoring quality. The mode is used when a quick and approximate measure of the central tendency is desired. Thus, the mode of a histogram is easily found by a visual examination. In addition, the mode is used to describe the most typical value of a distribution, such as the modal age of a particular group.

Measures of Dispersion
A second tool of statistics is composed of the measures of dispersion, which describe how the data are spread
out or scattered on each side of the central value. Measures of dispersion and measures of central tendency
are both needed to describe a collection of data. To illustrate, the employees of the plating and the assembly
departments of a factory have identical average weekly wages of $325.36; however, the plating department
has a high of $330.72 and a low of $319.43, whereas the assembly department has a high of $380.79 and a
low of $273.54. The data for the assembly department are spread out, or dispersed, farther from the average
than are those of the plating department.

One of the measures of dispersion is the range, which for a series of numbers is the difference between the
largest and smallest values of observations. Symbolically, it is represented by the equation
R = X_h - X_l

where R = range
      X_h = highest observation in a series
      X_l = lowest observation in a series

The other measure of the dispersion used in quality is the standard deviation. It is a numerical value in the
units of the observed values that measures the spreading tendency of the data. A large standard deviation
shows greater variability of the data than does a small standard deviation. In symbolic terms, it is represented
by the equation

s = \sqrt{\frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}}

where s = sample standard deviation
      X_i = observed value
      X̄ = average
      n = number of observed values

Unless otherwise noted, s stands for s_X, the sample standard deviation of observed values. The same equation is used to find

s_X̄ = sample standard deviation of averages
s_p = sample standard deviation of proportions
s_s = sample standard deviation of standard deviations, etc.

The standard deviation is a reference value that measures the dispersion in the data. It is best viewed as an
index that is defined by the formula. The smaller the value of the standard deviation, the better the quality,
because the distribution is more closely compacted around the central value. The standard deviation also helps
to define populations, as discussed in the next section.
In quality control the range is a very common measure of the dispersion. It is used in one of the principal
control charts. The primary advantage of the range is in providing a knowledge of the total spread of the data.
It is also valuable when the amount of data is too small or too scattered to justify the calculation of a more
precise measure of dispersion. As the number of observations increases, the accuracy of the range decreases,
because it becomes easier for extremely high or low readings to occur. It is suggested that the use of the range
be limited to a maximum of ten observations. The standard deviation is used when a more precise measure is
desired.
The average and standard deviation are easily calculated with a hand calculator.

EXAMPLE PROBLEM

Determine the average, median, mode, range, and standard deviation for the height of seven people. Data are
1.83, 1.91, 1.78, 1.80, 1.83, 1.85, 1.87 meters.

\bar{X} = \frac{\sum X_i}{n} = \frac{1.83 + 1.91 + \cdots + 1.87}{7} = 1.84

M_d = middle value of {1.78, 1.80, 1.83, 1.83, 1.85, 1.87, 1.91} = 1.83

M_o = 1.83

R = X_h - X_l = 1.91 - 1.78 = 0.13

s = \sqrt{\frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}} = \sqrt{\frac{(1.91 - 1.84)^2 + \cdots + (1.78 - 1.84)^2}{7 - 1}} = 0.04
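
For readers who prefer to check such computations by machine, the Python standard library applies the same formulas; the sketch below simply verifies the worked example.

```python
import statistics

heights = [1.83, 1.91, 1.78, 1.80, 1.83, 1.85, 1.87]   # meters

average = statistics.mean(heights)          # about 1.84
median = statistics.median(heights)         # 1.83
mode = statistics.mode(heights)             # 1.83
value_range = max(heights) - min(heights)   # 0.13
s = statistics.stdev(heights)               # sample standard deviation, about 0.04

print(f"average {average:.2f}, median {median:.2f}, mode {mode:.2f}, "
      f"range {value_range:.2f}, s {s:.2f}")
```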

Population and Sample


At this point, it is desirable to examine the concept of a population and a sample. In order to construct a frequency distribution of the weight of steel shafts, a small portion, or sample, is selected to represent all the steel shafts. The population is the whole collection of steel shafts. When averages, standard deviations, and other measures are computed from samples, they are referred to as statistics. Because the composition of samples will fluctuate, the computed statistics will be larger or smaller than their true population values, or parameters. Parameters are considered to be fixed reference (standard) values or the best estimate of these values available at a particular time. The population may have a finite number of items, such as a day's production of steel shafts. It may be infinite or almost infinite, such as the number of rivets in a year's production of jet airplanes. The population may be defined differently, depending on the particular situation. Thus, a study of a product could involve the population of an hour's production, a week's production, 5,000 pieces, and so on.
Because it is rarely possible to measure all of the population, a sample is selected. Sampling is necessary when it may be impossible to measure the entire population; when the expense to observe all the data is prohibitive; when the required inspection destroys the product; or when a test of the entire population may be too dangerous, as would be the case with a new medical drug. Actually, an analysis of the entire population may not be as accurate as sampling. It has been shown that 100% manual inspection of product with a low percent nonconforming is not as accurate as sampling. This is probably due to the fact that boredom and fatigue cause inspectors to prejudge each inspected item as being acceptable.
When designating a population, the corresponding Greek letter is used. Thus, the sample average has the symbol X̄ and the population mean the symbol μ (mu). Note that the word average changes to mean when used for the population. The symbol X̄₀ is the standard or reference value. Mathematical concepts are based on μ, which is the true value; X̄₀ represents a practical equivalent in order to use the concepts. The sample standard deviation has the symbol s, and the population standard deviation the symbol σ (sigma). The symbol s₀ is the standard or reference value and has the same relationship to σ that X̄₀ has to μ. The true population value may never be known; therefore, the symbols μ̂ and σ̂ are sometimes used to indicate "estimate of."
A comparison of sample and population is given in Table 15-3. A sample frequency distribution is represented by a histogram, whereas a population frequency distribution is represented by a smooth curve. To some extent, the sample represents the real world and the population represents the mathematical world. The equations and concepts are based on the population.
The primary objective in selecting a sample is to learn something about the population that will aid in making some type of decision. The sample selected must be of such a nature that it tends to resemble or represent the population. How successfully the sample represents the population is a function of the size of the sample, chance, the sampling method, and whether or not the conditions change.

TABLE 15-3
Comparison of Sample and Population

Sample                              Population
Statistic                           Parameter
X̄, the average                      μ (X̄₀), the mean
s, the sample standard deviation    σ (s₀), the standard deviation

Normal Curve
Although there are as many different populations as there are conditions, they can be described by a few general types. One type of population that is quite common is called the normal curve, or Gaussian distribution. The normal curve is a symmetrical, unimodal, bell-shaped distribution with the mean, median, and mode having the same value. A curve of the normal population for the resistance in ohms of an electrical device with population mean, μ, of 90 Ω and population standard deviation, σ, of 2 Ω is shown in Figure 15-10. The interval between dotted lines is equal to one standard deviation, σ.
Much of the variation in nature and in industry follows the frequency distribution of the normal curve. Thus, the variations in the weights of elephants, the speeds of antelopes, and the heights of human beings will follow a normal curve. Also, the variations found in industry, such as the weights of gray iron castings, the lives of 60-watt light bulbs, and the dimensions of steel piston rings, will be expected to follow the normal curve. When considering the heights of human beings, we can expect a small percentage of them to be extremely tall and a small percentage to be extremely short, with the majority of human heights clustering about the average value. The normal curve is such a good description of the variations that occur in most quality characteristics in industry that it is the basis for many quality control techniques.
There is a definite relationship among the mean, the standard deviation, and the normal curve. Figure 15-11
shows three normal curves with different mean values; note that the only change is in the location. Figure 15-12
shows three normal curves with the same mean but different standard deviations. The figure illustrates the
principle that the larger the standard deviation, the flatter the curve (data are widely dispersed), and the smaller
the standard deviation, the more peaked the curve (data are narrowly dispersed). If the standard deviation is
zero, all values are identical to the mean and there is no curve.

[Figure 15-10 Normal Distribution for Resistance of an Electrical Device: μ = 90 Ω, σ = 2.0 Ω, with the horizontal axis marked from 84 to 96 Ω at one-standard-deviation intervals]



[Figure 15-11 Normal Curves with Different Means (μ = 14, 20, and 29) but Identical Standard Deviations]

[Figure 15-12 Normal Curves with Different Standard Deviations (σ = 1.5, 3.0, and 4.5) but Identical Means, plotted over X from 5 to 35]

The normal distribution is fully defined by the population mean and population standard deviation. Also,
as seen by Figures 15-11 and 15-12, these two parameters are independent. In other words, a change in one
parameter has no effect on the other.
A relationship exists between the standard deviation and the area under the normal curve, as shown in Figure 15-13. The figure shows that in a normal distribution, 68.26% of the items are included between the limits of μ + 1σ and μ - 1σ, 95.45% of the items are included between the limits μ + 2σ and μ - 2σ, and 99.73% of the

[Figure 15-13 Percent of Values Included Between Certain Values of the Standard Deviation: 68.26% within ±1σ, 95.45% within ±2σ, and 99.73% within ±3σ of the mean]

items are included between μ + 3σ and μ - 3σ. One hundred percent of the items are included between the limits +∞ and -∞. These percentages hold true for any normal curve, regardless of its particular mean and standard deviation. The fact that 99.73% of the items are included between ±3σ is the basis for variable control charts.
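
These areas follow from the normal cumulative distribution function and can be confirmed with the error function in Python's math module; the sketch below is only a check of the stated percentages.

```python
import math

def fraction_within(k: float) -> float:
    """Fraction of a normal population lying within k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"within ±{k}σ: {fraction_within(k):.4%}")
# Prints about 68.27%, 95.45%, and 99.73% (the text rounds the first to 68.26%).
```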

Introduction to Control Charts

Variation
One of the axioms, or truisms, of production is that no two objects are ever made exactly alike. In fact, the
variation concept is a law of nature because no two natural items in any category are the same. The variation
may be quite large and easily noticeable, such as the height of human beings, or the variation may be very
small, such as the weights of fiber-tipped pens or the shapes of snowflakes. When variations are very small,
it may appear that items are identical; however, precision instruments will show differences. If two items
appear to have the same measurement, it is due to the limits of our measuring instruments. As measuring
instruments have become more refined, variation has continued to exist; only the increment of variation has
changed. The ability to measure variation is necessary before it can be controlled.
There are three categories of variations in piece part production:

1. Within-piece variation is illustrated by the surface roughness of a piece, wherein one portion of the
surface is rougher than another portion or the width of one end of a keyway varies from the other end.
2. Piece-to-piece variation occurs among pieces produced at the same time. Thus, the light intensity of
four consecutive light bulbs produced from a machine will be different.
3. Time-to-time variation is illustrated by the difference in product produced at different times of the day.
Thus, product produced in the early morning is different from that produced later in the day, or as a cutting
tool wears, the cutting characteristics change.

Categories of variation for other types of processes, such as continuous and batch processes, are not exactly the same; however, the concept is similar.
Variation is present in every process due to a combination of the equipment, materials, environment, and operator. The first source of variation is the equipment. This source includes tool wear, machine vibration, workholding-device positioning, and hydraulic and electrical fluctuations. When all these variations are put together, there is a certain capability or precision within which the equipment operates. Even supposedly identical machines will have different capabilities. This fact becomes a very important consideration when scheduling the manufacture of critical parts.
The second source of variation is the material. Because variation occurs in the finished product, it must also occur in the raw material (which was someone else's finished product). Such quality characteristics as tensile strength, ductility, thickness, porosity, and moisture content can be expected to contribute to the overall variation in the final product.
A third source of variation is the environment. Temperature, light, radiation, particle size, pressure, and humidity all can contribute to variation in the product. In order to control environmental variations, products are sometimes manufactured in white rooms. Experiments are conducted in outer space to learn more about the effect of the environment on product variation.
A fourth source is the operator. This source of variation includes the method by which the operator performs the operation. The operator's physical and emotional well-being also contribute to the variation. A cut

finger, a twisted ankle, a personal problem, or a headache can make an operator's quality performance vary. An operator's lack of understanding of equipment and material variations due to lack of training may lead to frequent machine adjustments, thereby compounding the variability. As our equipment has become more automated, the operator's effect on variation has lessened.
The preceding four sources account for the true variation. There is also a reported variation, which is due to the inspection activity. Faulty inspection equipment, the incorrect application of a quality standard, or too heavy a pressure on a micrometer can be the cause of the incorrect reporting of variation. In general, variation due to inspection should be one-tenth of the four other sources of variations. Note that three of these sources are present in the inspection activity: an inspector or appraiser, inspection equipment, and the environment.

Run Chart
A run chart, which is shown in Figure 15-14, is a very simple technique for analyzing the process in the development stage or, for that matter, when other charting techniques are not applicable. The important point is to draw a picture of the process and let it "talk" to you. A picture is worth a thousand words, provided someone is listening. Plotting the data points is a very effective way of finding out about the process. This activity should be done as the first step in data analysis. Without a run chart, other data analysis tools, such as the average, sample standard deviation, and histogram, can lead to erroneous conclusions.
The particular run chart shown in Figure 15-14 is referred to as an X̄ chart and is used to record the variation in the average value of samples. Other charts, such as the R chart (range) or p chart (proportion), would have also served for explanation purposes. The horizontal axis is labeled "Subgroup Number," which identifies a particular sample consisting of a fixed number of observations. These subgroups are plotted by order of production, with the first one inspected being 1 and the last one on this chart being 25. The vertical axis of the graph is the variable, which in this particular case is weight measured in kilograms.
Each small solid diamond represents the average value within a subgroup. Thus, subgroup number 5 consists of, say, four observations, 3.46, 3.49, 3.45, and 3.44, and their average is 3.46 kg. This value is the one posted on the chart for subgroup number 5. Averages are used on control charts rather than individual

[Figure 15-14 Example of a Run Chart: subgroup average weight in kilograms, roughly 3.30 to 3.52, plotted against subgroup number 1 through 25, with the central line labeled X̄₀]



observations because average values will indicate a change in variation much faster. Also, with two or more observations in a sample, a measure of the dispersion can be obtained for a particular subgroup.
The solid line in the center of the chart can have three different interpretations, depending on the available data. First, it can be the average of the plotted points, which in the case of an X̄ chart is the average of the averages, or "X double bar." Second, it can be a standard or reference value, X̄₀, based on representative prior data, an economic value based on production costs or service needs, or an aimed-at value based on specifications. Third, it can be the population mean, μ, if that value is known.

Control Chart Example


One danger of using a run chart is its tendency to show every variation in data as being important. In order to indicate when observed variations in quality are greater than could be left to chance, the control chart method of analysis and presentation of data is used. The control chart method for variables is a means of visualizing the variations that occur in the central tendency and dispersion of a set of observations. It is a graphical record of the quality of a particular characteristic. It shows whether or not the process is in a stable state by adding statistically determined control limits to the run chart.
Figure 15-15 is the run chart of Figure 15-14 with the control limits added. They are the two dashed outer lines and are called the upper and lower control limits. These limits are established to assist in judging the significance of the variation in the quality of the product. Control limits are frequently confused with specification limits, which are the permissible limits of a quality characteristic of each individual unit of a product. However, control limits are used to evaluate the variations in quality from subgroup to subgroup. Therefore, for the X̄ chart, the control limits are a function of the subgroup averages. A frequency distribution of the subgroup averages can be determined with its corresponding average and standard deviation.
The control limits are then established at ±3σ from the central line. Recall, from the discussion of the normal curve, that the number of items between +3σ and -3σ equals 99.73%. Therefore, it is expected that more than 997 times out of 1,000, the subgroup values will fall between the upper and lower limits. When this situation occurs, the process is considered to be in control. When a subgroup value falls outside the limits, the process is considered to be out of control, and an assignable cause for the variation is present. Subgroup number 10 in Figure 15-15 is beyond the upper control limit; therefore, there has been a change in the stable nature of the process, causing the out-of-control point. As long as the sources of variation fluctuate in a natural or expected manner, a stable pattern

[Figure 15-15 Example of a Control Chart: the run chart of Figure 15-14 with the central line X̄₀ and the dashed upper and lower control limits (UCL and LCL) added; subgroup averages in kilograms plotted against subgroup number 1 through 25]



of many chance causes (random causes) of variation develops. Chance causes of variation are inevitable. Because they are numerous and individually of relatively small importance, they are difficult to detect or identify. Those causes of variation that are large in magnitude, and therefore readily identified, are classified as assignable causes.³
When only chance causes are present in a process, the process is considered to be in a state of statistical control.
It is stable and predictable. However, when an assignable cause of variation is also present, the variation will be
excessive, and the process is classified as out of control or beyond the expected natural variation of the process.
Unnatural variation is the result of assignable causes. Usually, but not always, it requires corrective action by people close to the process, such as operators, technicians, clerks, maintenance workers, and first-line supervisors. Natural variation is the result of chance causes; it requires management intervention to achieve quality improvement. In this regard, between 80% and 85% of quality problems are due to management or the system, and 15% to 20% are due to operations. Operating personnel are giving a quality performance as long as the plotted points are within the control limits. If this performance is not satisfactory, the solution is the responsibility of the system rather than of the operating personnel.
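
The in-control test described above reduces to comparing each subgroup average against limits placed three standard deviations from the central line. The sketch below is a minimal illustration using hypothetical subgroup averages (not the data of Figure 15-15); for simplicity it estimates the spread directly from the plotted averages, whereas the variable control charts described next derive the limits from the within-subgroup ranges.

```python
import statistics

# Hypothetical subgroup averages in kilograms (not the Figure 15-15 data).
subgroup_averages = [3.41, 3.43, 3.40, 3.44, 3.42, 3.39, 3.45, 3.55,
                     3.41, 3.43, 3.40, 3.42, 3.44, 3.41, 3.43]

center = statistics.mean(subgroup_averages)        # central line
sigma_xbar = statistics.stdev(subgroup_averages)   # spread of the subgroup averages
ucl = center + 3 * sigma_xbar                      # upper control limit
lcl = center - 3 * sigma_xbar                      # lower control limit

print(f"central line {center:.3f}, UCL {ucl:.3f}, LCL {lcl:.3f}")
for number, average in enumerate(subgroup_averages, start=1):
    if not lcl <= average <= ucl:
        print(f"subgroup {number} at {average} is out of control")
```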

Variable Control Charts


In practice, control charts are posted at individual machines or work centers to control a particular quality characteristic. Usually, an X̄ chart for the central tendency and an R chart for the dispersion are used together. An example of this dual charting is illustrated in Figure 15-16, which shows a method of charting and reporting inspection results for rubber durometer.
At work center number 365-2 at 8:30 A.M., the operator selects four items for testing and records the observations of 55, 52, 51, and 53 in the rows marked X1, X2, X3, and X4, respectively. A subgroup average value of 52.8 is obtained by summing the observations and dividing by 4, and the range value of 4 is obtained by subtracting the low value, 51, from the high value, 55. The operator places a small solid circle at 52.8 on the X̄ chart and a small solid circle at 4 on the R chart and then proceeds with his other duties.
The frequency with which the operator inspects a product at a particular machine or work center is determined by the quality of the product. When the process is in control and no difficulties are being encountered, fewer inspections may be required. Conversely, when the process is out of control or during start-up, more inspections may be needed. The inspection frequency at a machine or work center can also be determined by the amount of time that must be spent on noninspection activities. In the example problem, the inspection frequency appears to be every 60 or 65 minutes.
At 9:30 A.M. the operator performs the activities for subgroup 2 in the same manner as for subgroup 1. It is noted that the range value of 7 falls on the upper control limit. Whether to consider this in control or out of control would be a matter of organization policy. It is suggested that it be classified as in control and a cursory examination for an assignable cause be conducted by the operator. A plotted point that falls exactly on the control limit is a rare occurrence.
The inspection results for subgroup 2 show that the third observation, X3, has a value of 57, which exceeds
the upper control limit. It is important to remember the earlier discussion on control limits and specifications.
In other words, the 57 value is an individual observation and does not relate to the control limits. Therefore,
the fact that an individual observation is greater than or less than a control limit is meaningless.
Subgroup 4 has an average value of 44, which is less than the lower control limit of 45. Therefore, sub-
group 4 is out of control, and the operator will report this fact to the departmental supervisor. The operator
and supervisor will then look for an assignable cause and, if possible, take corrective action. Whatever

³ Dr. W. Edwards Deming uses the words common and special for "chance" and "assignable."

[Figure 15-16 Example of a Method of Reporting Inspection Results: paired X̄ and R charts for durometer plotted by subgroup number (1 to 25); the X̄ chart has central line X̄₀ = 50 with control limits near 55 and 45, and the R chart has central line R₀ near 4 with the lower control limit at 0]



corrective action is taken will be noted by the operator on the X̄ and R charts or on a separate form. The control chart indicates when and where trouble has occurred. The identification and elimination of the difficulty is a production problem. Ideally, the control chart should be maintained by the operator, provided time is available and proper training has been given. When the operator cannot maintain the chart, then it is maintained by quality control.
The control chart is used to keep a continuing record of a particular quality characteristic. It is a picture of the process over time. When the chart is completed and stored in an office file, it is replaced by a fresh chart. The chart is used to improve the process quality, to determine the process capability, to determine when to leave the process alone and when to make adjustments, and to investigate causes of unacceptable or marginal quality. It is also used to make decisions on product or service specifications and decisions on the acceptability of a recently produced product or service.

Quality Characteristic
The variable that is chosen for the X̄ and R charts must be a quality characteristic that is measurable and can be expressed in numbers. Quality characteristics that can be expressed in terms of the seven basic units (length, mass, time, electrical current, temperature, substance, or luminous intensity), as well as any of the derived units, such as power, velocity, force, energy, density, and pressure, are appropriate.
Those quality characteristics affecting the performance of the product would normally be given first attention. These may be a function of the raw materials, component parts, subassemblies, or finished parts. In other words, high priority is given to the selection of those characteristics that are giving difficulty in terms of production problems and/or cost. An excellent opportunity for cost savings frequently involves situations where spoilage and rework costs are high. A Pareto analysis is also useful for establishing priorities. Another possibility occurs where destructive testing is used to inspect a product.
In any organization, a large number of variables make up a product or service. It is, therefore, impossible to place X̄ and R charts on all variables. A judicious selection of those quality characteristics is required.

Subgroup Size and Method


As previously mentioned, the data that are plotted on the control chart consist of groups of items called rational
subgroups. It is important to understand that data collected in a random manner do not qualify as rational.
A rational subgroup is one in which the variation within the group is due only to chance causes. This within-
subgroup variation is used to determine the control limits. Variation between subgroups is used to evaluate long-
term stability. Subgroup samples are selected from product or a service produced at one instant of time or as close
to that instant as possible, such as four consecutive parts from a machine or four documents from a tray. The next
subgroup sample would be similar, but for product or a service produced at a later time—say, one hour later.
Decisions on the size of the sample or subgroup require a certain amount of empirical judgment; however,
some helpful guidelines are:

1. As the subgroup size increases, the control limits become closer to the central value, which makes the
control chart more sensitive to small variations in the process average.
2. As the subgroup size increases, the inspection cost per subgroup increases. Does the increased cost of
larger subgroups justify the greater sensitivity?
3. When costly and/or destructive testing is used and the item is expensive, a small subgroup size of two
or three is necessary, because it will minimize the destruction of expensive product.
4. Because of the ease of computation, a sample size of five is quite common in industry; however, when
inexpensive electronic hand calculators are used, this reason is no longer valid.

5. From a statistical basis, a distribution of subgroup averages (X̄'s) is nearly normal for subgroups of four or more, even when the samples are taken from a nonnormal population. This statement follows from the central limit theorem.
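
Guideline 5 can be seen in a quick simulation: averaging even four observations drawn from a decidedly nonnormal (uniform) population yields subgroup averages whose distribution is close to normal. The sketch below is illustrative only; the population, subgroup size, and number of subgroups are arbitrary choices.

```python
import random
import statistics

random.seed(1)
SUBGROUP_SIZE = 4
SUBGROUPS = 10_000

# Population: uniform on [0, 10), flat rather than bell shaped.
averages = [statistics.mean(random.uniform(0, 10) for _ in range(SUBGROUP_SIZE))
            for _ in range(SUBGROUPS)]

mean = statistics.mean(averages)
sigma = statistics.stdev(averages)
share = sum(mean - 3 * sigma < x < mean + 3 * sigma for x in averages) / SUBGROUPS
print(f"mean of subgroup averages {mean:.2f}, standard deviation {sigma:.2f}")
print(f"{share:.2%} of the subgroup averages fall within ±3σ")
# For an exactly normal distribution the last figure would be about 99.73%.
```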

There is no rule for the frequency of taking subgroups, but the frequency should be often enough to detect
process changes. The inconveniences of the factory or office layout and the cost of taking subgroups must be
balanced with the value of the data obtained. In general, it is best to sample quite often at the beginning and
reduce the sampling frequency when the data permit.
The precontrol rule for the frequency of sampling could also be used. It is based on how often the
process is adjusted. If the process is adjusted every hour, then sampling should occur every 10 minutes;
if the process is adjusted every 2 hours, then sampling should occur every 20 minutes; if the process is
adjusted every 3 hours, then sampling should occur every 30 minutes; and so forth.

Data Collection
Assuming that the quality characteristic and the plan for the rational subgroup have been selected, a team
member such as a technician can be assigned the task of collecting the data as part of his normal duties. The
first-line supervisor and the operator should be informed of the technician’s activities; however, no charts or
data are posted at the work center at this time.
Because of difficulty in the assembly of a gear hub to a shaft using a key and keyway, the project team recommends using X̄ and R charts. The quality characteristic is the shaft keyway depth of 6.35 mm (0.250 in.). Using a rational subgroup of four, a technician obtains five subgroups per day for five days. The samples are measured, the subgroup average and range are calculated, and the results are recorded on the form as shown in Table 15-4. Additional recorded information includes the date, time, and any comments pertaining to the process. For simplicity, individual measurements are coded from 6.00 mm. Thus, the first measurement of 6.35 is recorded as 35.
It is necessary to collect a minimum of 25 subgroups of data. Fewer subgroups would not provide a sufficient amount of data for the accurate computation of the control limits, and a larger number of subgroups would delay the introduction of the control chart.

Trial Central Lines and Control Limits


The central lines for the X̄ and R charts are obtained using the equations

\bar{\bar{X}} = \frac{\sum \bar{X}_i}{g} \qquad \bar{R} = \frac{\sum R_i}{g}

where X̿ = average of the subgroup averages (read "X double bar")
      X̄_i = average of the ith subgroup
      g = number of subgroups
      R̄ = average of the subgroup ranges
      R_i = range of the ith subgroup

Trial control limits for the charts are established at ±3σ from the central line, as shown by the equations

UCL_{\bar{X}} = \bar{\bar{X}} + 3\sigma_{\bar{X}} \qquad UCL_R = \bar{R} + 3\sigma_R
LCL_{\bar{X}} = \bar{\bar{X}} - 3\sigma_{\bar{X}} \qquad LCL_R = \bar{R} - 3\sigma_R
TABLE 15-4
Data on the Depth of the Keyway (millimeters)

Subgroup                      Measurements            Average   Range
Number    Date    Time     X1    X2    X3    X4         X̄        R     Comment
  1       7/23     8:50    35    40    32    37       6.36     0.08
  2               11:30    46    37    36    41       6.40     0.10
  3                1:45    34    40    34    36       6.36     0.06
  4                3:45    69    64    68    59       6.65     0.10    New, temporary operator
  5                4:20    38    34    44    40       6.39     0.10
  ·         ·        ·      ·     ·     ·     ·          ·        ·
 17       7/29     9:25    41    40    29    34       6.36     0.12
 18               11:00    38    44    28    58       6.42     0.30    Damaged oil line
 19                2:35    35    41    37    38       6.38     0.06
 20                3:15    56    55    45    48       6.51     0.11    Bad material
 21       7/30     9:35    38    40    45    37       6.40     0.08
 22               10:20    39    42    35    40       6.39     0.07
 23               11:35    42    39    39    36       6.39     0.06
 24                2:00    43    36    35    38       6.38     0.08
 25                4:25    39    38    43    44       6.41     0.06
Sum                                                 160.25     2.19

In practice, the calculations are simplified by using the product of the average of the range (R̄) and a factor
A2 to replace the three standard deviations (A2R̄ = 3σX̄) in the equation for the X̄ chart. For the R chart, the
range is used to estimate the standard deviation of the range. Therefore, the derived equations are

    UCLX̄ = X̿ + A2R̄        UCLR = D4R̄
    LCLX̄ = X̿ − A2R̄        LCLR = D3R̄

where A2, D3, and D4 are factors that vary with the subgroup size and are found in Appendix A. For the X̄
chart, the upper and lower control limits are symmetrical about the central line. Theoretically, the control lim-
its for an R chart should also be symmetrical about the central line. But, for this situation to occur, with sub-
group sizes of six or less, the lower control limit would need to have a negative value. Because a negative
range is impossible, the lower control limit is located at zero by assigning to D3 the value of zero for sub-
groups of six or less.
When the subgroup size is seven or more, the lower control limit is greater than zero and the limits are symmetrical about
the central line. However, when the R chart is posted at the work center, it may be more practical to keep the
lower control limit at zero. This practice eliminates the difficulty of explaining to the operator that points
below the lower control limit on the R chart are the result of exceptionally good performance rather than poor
performance. However, quality personnel should keep their own charts with the lower control limit in its
proper location, and any out-of-control low points should be investigated to determine the reason for the
exceptionally good performance. Because subgroup sizes of seven or more are uncommon, the situation
occurs infrequently.
In order to illustrate the calculations necessary to obtain the trial control limits and the central lines, the data
concerning the depth of the shaft keyway will be used. From Table 15-4, ΣX̄ = 160.25, ΣR = 2.19, and g = 25;
thus, the central lines are

    X̿ = ΣX̄/g = 160.25/25 = 6.41 mm
    R̄ = ΣR/g = 2.19/25 = 0.0876 mm

From Appendix Table A, the values for the factors for a subgroup size (n) of four are A2 = 0.729, D3 = 0, and
D4 = 2.282. Trial control limits for the X̄ chart are

    UCLX̄ = X̿ + A2R̄ = 6.41 + (0.729)(0.0876) = 6.47 mm
    LCLX̄ = X̿ − A2R̄ = 6.41 − (0.729)(0.0876) = 6.35 mm

Trial control limits for the R chart are

    UCLR = D4R̄ = (2.282)(0.0876) = 0.20 mm
    LCLR = D3R̄ = (0)(0.0876) = 0 mm
Figure 15-17 shows the central lines and the trial control limits for the X̄ and R charts for the preliminary data.
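These calculations are easy to automate. The short Python sketch below assumes the subgroup data are supplied as lists of four measurements in millimetres (only the first three subgroups of Table 15-4 are shown; the remaining ones would be entered the same way) and uses the Appendix Table A factors for n = 4.

    # Trial central lines and control limits for X-bar and R charts, subgroup size n = 4.
    A2, D3, D4 = 0.729, 0.0, 2.282          # factors for n = 4 from Appendix Table A

    subgroups = [
        [6.35, 6.40, 6.32, 6.37],           # subgroup 1 of Table 15-4
        [6.46, 6.37, 6.36, 6.41],           # subgroup 2
        [6.34, 6.40, 6.34, 6.36],           # subgroup 3
        # ... remaining subgroups up to g = 25
    ]

    x_bars = [sum(s) / len(s) for s in subgroups]     # subgroup averages
    ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges

    x_double_bar = sum(x_bars) / len(x_bars)          # central line of the X-bar chart
    r_bar = sum(ranges) / len(ranges)                 # central line of the R chart

    ucl_x = x_double_bar + A2 * r_bar
    lcl_x = x_double_bar - A2 * r_bar
    ucl_r = D4 * r_bar
    lcl_r = D3 * r_bar                                # zero for subgroup sizes of six or less

    print(f"X-double-bar = {x_double_bar:.3f}  R-bar = {r_bar:.4f}")
    print(f"X-bar chart: UCL = {ucl_x:.2f}  LCL = {lcl_x:.2f}")
    print(f"R chart:     UCL = {ucl_r:.2f}  LCL = {lcl_r:.2f}")

With all 25 subgroups of Table 15-4 entered, the printed values agree with the hand calculations above: 6.41, 0.0876, 6.47, 6.35, 0.20, and 0 mm.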

Revised Central Lines and Control Limits


Revised central lines and control limits are established by discarding out-of-control points with assignable
causes and recalculating the central lines and control limits. The R chart is analyzed first to determine if it is
stable. Because the out-of-control point at subgroup 18 on the R chart has an assignable cause (damaged oil
line), it can be discarded from the data. The remaining plotted points indicate a stable process.
The X̄ chart can now be analyzed. Subgroups 4 and 20 had an assignable cause, whereas the out-of-control
condition for subgroup 16 did not. It is assumed that subgroup 16's out-of-control state is due to a chance
cause and is part of the natural variation of the process.
The recalculated values are X̄0 = 6.40 mm and R0 = 0.079. They are shown in Figure 15-18. For illus-
trative purposes, the trial values are also shown. The limits for both the X̄ and R charts became narrower,
as was expected. No change occurred in LCLR because the subgroup size is less than seven. The figure also
illustrates a simpler charting technique in that lines are not drawn between the points. Also, X̄0 and R0, the
standard or reference values, are used to designate the central lines.
The preliminary data for the initial 25 subgroups are not plotted with the revised control limits. These
revised control limits are for reporting the results for future subgroups. To make effective use of the control
chart during production, it should be displayed in a conspicuous place, where it can be seen by operators and
supervisors.

Figure 15-17  X̄ and R Charts for Preliminary Data with Trial Control Limits (X̄ chart: X̿ = 6.41 mm, UCLX̄ = 6.47, LCLX̄ = 6.35; R chart: R̄ = 0.09 mm, UCLR = 0.20, LCLR = 0; 25 subgroups plotted)
Before proceeding to the action step, some final comments are appropriate. First, many analysts eliminate
this step in the procedure because it appears to be somewhat redundant. However, by discarding out-of-
control points with assignable causes, the central line and control limits are more representative of the process.
If this step is too complicated for operating personnel, its elimination would not affect the next step.
Second, the central line X̄0 for the X̄ chart is frequently based on the specifications. In such a case, the pro-
cedure is used only to obtain R0. If, in our example problem, the nominal value of the characteristic is 6.38
mm, then X̄0 is set to that value and the upper and lower control limits are

    UCLX̄ = X̄0 + A2R0 = 6.38 + (0.729)(0.079) = 6.44 mm
    LCLX̄ = X̄0 − A2R0 = 6.38 − (0.729)(0.079) = 6.32 mm

The central line and control limits for the R chart do not change. This modification can be taken only if the
process is adjustable. If the process is not adjustable, then the original calculations must be used.
Figure 15-18  Trial Control Limits and Revised Control Limits for X̄ and R Charts (trial: UCLX̄ = 6.47, X̿ = 6.41, LCLX̄ = 6.35, UCLR = 0.20, R̄ = 0.09; revised: UCLX̄ = 6.46, X̄0 = 6.40, LCLX̄ = 6.34, UCLR = 0.18, R0 = 0.08, LCLR = 0)

Third, it follows that adjustments to the process should be made while taking data. It is not necessary to
run nonconforming material while collecting data, because we are primarily interested in obtaining R0, which
is not affected by the process setting. The independence of X̄ and R provides the rationale for this concept.
Fourth, the process determines the central line and control limits. They are not established by design, man-
ufacturing, marketing, or any other department, except for X̄0 when the process is adjustable.

Achieving the Objective


When control charts are first introduced at a work center, an improvement in the process performance usually
occurs. This initial improvement is especially noticeable when the process is dependent on the skill of the
operator. Posting a quality control chart appears to be a psychological signal to the operator to improve per-
formance. Most workers want to produce a quality product or service; therefore, when management shows an
interest in the quality, the operator responds.
Figure 15-19 illustrates the initial improvement that occurred after the introduction of the X̄ and R charts
in January. Owing to space limitations, only a representative number of subgroups for each month are shown
in the figure. During January the subgroup averages had less variation and tended to be centered at a slightly
lower point. A reduction in the range variation also occurred.
Figure 15-19  Continuing Use of Control Charts, Showing Improved Quality (X̄ and R charts for the depth of the keyway across January, February, and July)

Not all the improved performance in January was the result of operator effort. The first-line supervisor ini-
tiated a program of tool-wear control, which was a contributing factor.
At the end of January new central lines and control limits were calculated using the data from subgroups
obtained during the month. It is a good idea, especially when a chart is being initiated, to calculate standard
values periodically to see if any changes have occurred. This reevaluation can be done for every 25 or more
subgroups, and the results can _be compared to the previous values.
New control limits for the X̄ and R charts and the central line for the R chart were established for the month
of February. The central line for the X̄ chart was not changed because it is the nominal value. During the ensu-
ing months, the maintenance department replaced a pair of worn gears, purchasing changed the material sup-
plier, and tooling modified a workholding device. All these improvements were the result of investigations that
tracked down the causes for out-of-control conditions or were ideas developed by a project team. The genera-
tion of ideas by many different personnel is the most essential ingredient for continuous quality improvement.
Ideas from the operator, first-line supervisor, quality assurance, maintenance, manufacturing engineering, and
industrial engineering should be evaluated. This evaluation or testing of an idea requires 25 or more sub-
groups. The control chart will tell if the idea is good, is poor, or has no effect on the process. Quality improve-
ment occurs when the plotted points of the X̄ chart converge on the central line, when the plotted points of the
R chart trend downward, or when both actions occur. If a poor idea is tested, then the reverse occurs. Of
course, if the idea is neutral, it will have no effect on the plotted point pattern.
To speed up the testing of ideas, the taking of subgroups can be compressed in time as long as the data rep-
resent the process by accounting for any hourly or day-to-day fluctuations. Only one idea should be tested at
a time; otherwise, the results will be confounded.
At the end of June, the periodic evaluation of the past performance showed the need to revise the central
lines and the control limits. The performance for the month of July and subsequent months showed a natural
pattern of variation and no quality improvement. At that point, no further quality improvement would be pos-
sible without a substantial investment in new equipment or equipment modification.
Dr. Deming has stated that if he were a banker, he would not lend money to an organization unless statistical
methods were used to prove that the money was necessary. This is precisely what the control chart can achieve,
provided that all personnel use the chart as a method of quality improvement rather than a monitoring function.
When the objective for initiating the charts has been achieved, their use should be discontinued or the fre-
quency of inspection be substantially reduced to a monitoring action by the operator. The median chart is an
excellent chart for the monitoring activity. Efforts should then be directed toward the improvement of some
other quality characteristic. If a project team was involved, it should be recognized and rewarded for its per-
formance and disbanded.
The U.S. Postal Service at Royal Oak, Michigan used a variables control chart to reduce nonconformance
in a sorting operation from 32% to less than 6%. This activity resulted in an annual savings of $700,000 and
earned the responsible team the 1999 RIT/USA Today Quality Cup for government.

State of Control
When the assignable causes have been eliminated from the process to the extent that the points plotted on the
control chart remain within the control limits, the process is in a state of control. No higher degree of unifor-
mity can be attained with the existing process. However, greater uniformity can be attained through a change
in the basic process resulting from quality improvement ideas.
When a process is in control, there occurs a natural pattern of variation, which is illustrated by the control
chart in Figure 15-20. This natural pattern of variation has (1) about 34% of the plotted points in an imagi-
nary band within one standard deviation on each side of the central line, (2) about 13.5% of the plotted
points in an imaginary band between one and two standard deviations on each side of the central line, and
(3) about 2.5% of the plotted points in an imaginary band between two and three standard deviations on each
side of the central line. The points are located back and forth across the central line in a random manner, with
no points beyond the control limits. The natural pattern of the points, or subgroup average values, forms its
own frequency distribution. If all the points were stacked up at one end, they would form a normal curve.
When a process is in control, only chance causes of variation are present. Small variations in machine per-
formance, operator performance, and material characteristics are expected and are considered to be part of a
stable process.
When a process is in control, certain practical advantages accrue to the producer and consumer:

1. Individual units of the product will be more uniform, or, stated another way, there will be less variation.
2. Because the product is more uniform, fewer samples are needed to judge the quality. Therefore, the cost
of inspection can be reduced to a minimum. This advantage is extremely important when 100% conformance
to specifications is not essential.

Figure 15-20  Natural Pattern of Variation of a Control Chart (points scattered between the UCL and LCL in bands about the central line)

3. The process capability, or spread of the process, is easily obtained from 6σ. With a knowledge of the
process capability, a number of reliable decisions relative to specifications can be made, such as the product
specifications; the amount of rework or scrap when there is insufficient tolerance; and whether to produce the
product to tight specifications and permit interchangeability of components or to produce the product to loose
specifications and use selective matching of components.
4. The percentage of product that falls within any pair of values can be predicted with the highest degree
of assurance. For example, this advantage can be very important when adjusting filling machines to obtain
different percentages of items below, between, or above particular values.
5. It permits the customer to use the supplier's data and, therefore, to test only a few subgroups as a check
on the supplier's records. The X̄ and R charts are used as statistical evidence of process control.
6. The operator is performing satisfactorily from a quality viewpoint. Further improvement in the
process can be achieved only by changing the input factors: materials, equipment, environment, and oper-
ators. These changes require action by management.

When only chance causes of variation are present, the process is stable and predictable over time, as shown
in Figure 15-21(a). We know that future variation as shown by the dotted curve will be the same unless there
has been a change in the process due to an assignable cause.

Out-of-Control Process
Figure 15-21(b) illustrates the effect of assignable causes of variation over time. The unnatural, unstable
nature of the variation makes it impossible to predict future variation. The assignable causes must be found
and corrected before a natural stable process can continue.

Figure 15-21  Stable and Unstable Variation: (a) only chance causes of variation present, so future variation (size over time) is predictable; (b) assignable causes of variation present, so prediction is not possible


Figure 15-22  Some Unnatural Runs, Process Out of Control: (a) seven consecutive points above or below the central line; (b) six consecutive points increasing or decreasing; (c) two consecutive points in the outer quarter

The term out of control is usually thought of as being undesirable; however, there are situations where this
condition is desirable. It is best to think of the term out of control as a change in the process due to an assign-
able cause.
A process can also be considered out of control even when the points fall inside the 3σ limits. This situa-
tion, as shown in Figure 15-22, occurs when unnatural runs of variation are present in the process. It is not
natural for seven or more consecutive points to be above or below the central line as shown at (a). Another
unnatural run occurs at (b), where six points in a row are steadily increasing or decreasing. At (c), the space
between the control limits is divided into four equal bands of 1.5σ. The process is out of control when two
successive points fall in the outer quarter, that is, beyond 1.5σ from the central line.4
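These run rules are simple enough to check automatically. The sketch below is a hypothetical helper, not part of the text; it scans a sequence of plotted subgroup averages for the three unnatural patterns of Figure 15-22, given the central line and the standard deviation of the averages.

    def unnatural_runs(points, central_line, sigma_xbar):
        """Return (rule, index) pairs flagging the unnatural runs of Figure 15-22."""
        flags = []
        for i in range(len(points)):
            # (a) seven consecutive points above or below the central line
            if i >= 6:
                window = points[i - 6:i + 1]
                if all(p > central_line for p in window) or all(p < central_line for p in window):
                    flags.append(("seven on one side", i))
            # (b) six consecutive points steadily increasing or decreasing
            if i >= 5:
                diffs = [b - a for a, b in zip(points[i - 5:i], points[i - 4:i + 1])]
                if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
                    flags.append(("six rising or falling", i))
            # (c) two successive points in the outer quarter, beyond 1.5 sigma from the central line
            if i >= 1 and all(abs(p - central_line) > 1.5 * sigma_xbar for p in points[i - 1:i + 1]):
                flags.append(("two beyond 1.5 sigma", i))
        return flags

    # A drifting sequence of averages trips several of the rules.
    averages = [6.40, 6.41, 6.42, 6.43, 6.44, 6.45, 6.46, 6.47]
    print(unnatural_runs(averages, central_line=6.40, sigma_xbar=0.02))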
There are some common questions to ask when investigating an out-of-control process:

1. Are there differences in the measurement accuracy of the instruments used?


2. Are there differences in the methods used by different operators?
3. Is the process affected by the environment? If so, have there been any changes?
4. Is the process affected by tool wear?
5. Were any untrained workers involved in the process?
6. Has there been any change in the source of the raw materials?
7. Is the process affected by operator fatigue?
8. Has there been any change in maintenance procedures?
9. Is the equipment being adjusted too frequently?
10. Did samples come from different shifts, operators, or machines?

It is advisable to develop a checklist for each process using these common questions as a guide.

Process Capability
Control limits are established as a function of the averages—in other words, control limits are for averages.
Specifications, on the other hand, are the permissible variation in the size of the part and are, therefore, for indi-
vidual values. The specification or tolerance limits are established by design engineers to meet a particular
function. Figure 15-23 shows that the location of the specifications is optional and is not related to any of the
other features in the figure. The control limits, process spread (process capability), distribution of averages, and
distribution of individual values are interdependent. They are determined by the process, whereas the specifi-
cations have an optional location. Control charts cannot determine if the process is meeting specifications.

4. For more information, see A. M. Hurwitz and M. Mather, "A Very Simple Set of Process Control Rules," Quality Engineering 5, no. 1 (1992–1993): 21–29.

The true process capability cannot be determined until the X̄ and R charts have achieved the optimal qual-
ity improvement without a substantial investment for new equipment or equipment modification. When the
process is in statistical control, process capability is equal to 6σ, where σ = R0 /d2 and d2 is a factor from
Appendix Table A. In the example problem, it is

6σ = 6(R0 /d2) = 6(0.079/2.059) = 0.230


It is frequently necessary to obtain the process capability by a quick method rather than by using the X̄ and
R charts. This method assumes the process is stable or in statistical control, which may or may not be the case.
The procedure is as follows:

1. Take 25 subgroups of size 4, for a total of 100 measurements.


2. Calculate the range, R, for each subgroup.
3. Calculate the average range: R̄ = ΣR/g.
4. Calculate the estimate of the population standard deviation: σ = R̄/d2, where d2 is obtained from
Appendix Table A and is 2.059 for n = 4.


5. The process capability will equal 6σ.

Remember that this technique does not give the true process capability and should be used only if cir-
cumstances require its use. Also, more than 25 subgroups can be used to improve accuracy.
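A minimal Python sketch of this quick procedure is shown below; the subgroup data are randomly generated stand-ins (in practice they would be 100 actual measurements), and d2 = 2.059 for n = 4 is taken from Appendix Table A.

    import random

    random.seed(7)

    # Stand-in data: 25 subgroups of size 4 (replace with real measurements).
    subgroups = [[random.gauss(6.40, 0.04) for _ in range(4)] for _ in range(25)]

    d2 = 2.059                                        # Appendix Table A factor for n = 4

    ranges = [max(s) - min(s) for s in subgroups]     # step 2: range of each subgroup
    r_bar = sum(ranges) / len(ranges)                 # step 3: average range
    sigma = r_bar / d2                                # step 4: estimated population standard deviation
    capability = 6 * sigma                            # step 5: process capability

    print(f"R-bar = {r_bar:.4f}  sigma = {sigma:.4f}  process capability (6 sigma) = {capability:.3f}")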

Figure 15-23  Relationship of Limits, Specifications, and Distribution (the distribution of averages with control limits at ±3σX̄ and the distribution of individual values with process capability ±3σ share the same center μ; the upper and lower specifications, USL and LSL, have optional locations)


The relationship of process capability and specifications is shown in Figure 15-24. Tolerance is the differ-
ence between the upper specification limit (USL) and the lower specification limit (LSL). Process capability
and the tolerance are combined to form a capability index, defined as

    Cp = (USL − LSL)/6σ

where USL − LSL = upper specification − lower specification, or tolerance
      Cp = capability index
      6σ = process capability

If the capability index is greater than 1.00, the process is capable of meeting the specifications; if the index
is less than 1.00, the process is not capable of meeting the specifications. Because processes are continually
shifting back and forth, a Cp value of 1.33 has become a de facto standard, and some organizations are using
a 2.00 value. Using the capability index concept, we can measure quality, provided the process is centered.
The larger the capability index, the better the quality. We should strive to make the capability index as large
as possible. This result is accomplished by having realistic specifications and continual striving to improve
the process capability.
The capability index does not measure process performance in terms of the nominal or target value. This
measure is accomplished using Cpk, which is

    Cpk = Min{(USL − X̄) or (X̄ − LSL)}/3σ

A Cpk value of 1.00 is the de facto standard, with some organizations using a value of 1.33. Figure 15-25
illustrates Cp and Cpk values for processes that are centered and also off center by 1σ.
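Both indices reduce to a few lines of arithmetic once the process average and standard deviation are known. The sketch below uses the keyway example values X̄0 = 6.40 mm and σ = R0/d2 = 0.079/2.059; the specification limits of 6.23 and 6.53 mm are assumed here purely for illustration and are not given in the text.

    def capability_indices(usl, lsl, process_average, sigma):
        """Return (Cp, Cpk) for a process with the given average and standard deviation."""
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - process_average, process_average - lsl) / (3 * sigma)
        return cp, cpk

    sigma = 0.079 / 2.059                                # sigma = R0/d2 from the keyway example
    cp, cpk = capability_indices(usl=6.53, lsl=6.23,     # assumed specification limits
                                 process_average=6.40, sigma=sigma)
    print(f"Cp = {cp:.2f}  Cpk = {cpk:.2f}")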
Comments concerning Cp and Cpk are as follows:

1. The Cp value does not change as the process center changes.


2. Cp = Cpk when the process is centered.
Figure 15-24  Relationship of Process Capability to Tolerance (histogram of hole location, 13.09 to 13.27 mm, with the nominal value, the tolerance between LSL and USL, and the process capability indicated)


Figure 15-25  Cp and Cpk Values for Three Different Situations (Case I: Cp = 8σ/6σ = 1.33, with Cpk = 1.33 when centered and 1.00 when off center by 1σ; Case II: Cp = 6σ/6σ = 1.00, with Cpk = 1.00 centered and 0.67 off center; Case III: Cp = 4σ/6σ = 0.67, with Cpk = 0.67 centered and 0.33 off center)

3. Cpk is always equal to or less than Cp.


4. A Cpk value greater than 1.00 indicates the process conforms to specifications.
5. A Cpk value less than 1.00 indicates that the process does not conform to specifications.
6. A Cp value less than 1.00 indicates that the process is not capable.
7. A Cpk value of zero indicates the average is equal to one of the specification limits.
8. A negative Cpk value indicates that the average is outside the specifications.

Quality professionals will use these eight items to improve the process. For example, if a Cp value is less than
one, then corrective action must occur. Initially 100% inspection is necessary to eliminate nonconformities. One
solution would be to increase the tolerance of the specifications. Another would be to work on the process to
reduce the standard deviation or variability.

Process Performance
Process capability indices Cp and Cpk are calculated using the standard deviation from control charts as σ = R̄/d2.
This standard deviation is a measure of variation within the subgroups. It is expected that the subgroup size is
selected such that there is no opportunity for any assignable causes to occur and, therefore, only random or
"inherent" variation is present within the subgroups. Thus, σ = R̄/d2 represents random variation only, and σ
is called “within standard deviation”. Capability indices Cp and Cpk calculated using the within standard devi-
ation are referred to as “Process Capability Within”. Strictly speaking, this is the best or potential capability
of the process that can be expected.
In reality, some assignable causes are usually present. These are investigated and corrective action is taken
so that the causes do not recur. In addition to Cp and Cpk, the SPC manual from the Automotive Industry Action
Group (AIAG) suggests the process performance indices Pp and Ppk. These indices are calculated using the
standard deviation

    s = √[ Σ(xi − x̄)²/(n − 1) ]    (summation over i = 1 to n)

This is sometimes referred to as the "overall" standard deviation. The overall standard deviation includes not only
random variation but also variation between the subgroups and variation due to assignable causes, if
any. Pp and Ppk are called process performance indices and can be considered a more realistic representation
of what the customer will experience.
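A minimal sketch of the Pp and Ppk calculation follows; the pooled individual readings and the specification limits are placeholders, and the formulas simply mirror Cp and Cpk with the overall standard deviation s in place of R̄/d2.

    import statistics

    def performance_indices(measurements, usl, lsl):
        """Return (Pp, Ppk) using the overall (sample) standard deviation s."""
        x_bar = statistics.mean(measurements)
        s = statistics.stdev(measurements)    # sqrt( sum((x - x_bar)**2) / (n - 1) )
        pp = (usl - lsl) / (6 * s)
        ppk = min(usl - x_bar, x_bar - lsl) / (3 * s)
        return pp, ppk

    # Placeholder data: individual readings pooled across all subgroups.
    readings = [6.35, 6.40, 6.32, 6.37, 6.46, 6.37, 6.36, 6.41, 6.34, 6.40, 6.34, 6.36]
    pp, ppk = performance_indices(readings, usl=6.53, lsl=6.23)
    print(f"Pp = {pp:.2f}  Ppk = {ppk:.2f}")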

Different Control Charts for Variables


Although most of the quality control activity for variables is concerned with the X̄ and R charts, there are other
charts that find application in some situations. These charts are described in Table 15-5.

Control Charts for Attributes


An attribute, as defined in quality, refers to those quality characteristics that conform to specifications or do
not conform to specifications. There are two types:

1. Where measurements are not possible, for example, visually inspected items such as color, missing
parts, scratches, and damage.
2. Where measurements can be made but are not made because of time, cost, or need. In other words,
although the diameter of a hole can be measured with an inside micrometer, it may be more convenient to use
a “go–no go” gauge and determine if it conforms or does not conform to specifications.

Where an attribute does not conform to specifications, various descriptive terms are used. A nonconformity
is a departure of a quality characteristic from its intended level or state that occurs with a severity sufficient to
cause an associated product or service not to meet a specification requirement. The definition of a defect is
similar, except it is concerned with satisfying intended normal or reasonably foreseeable usage requirements.
Defect is appropriate for use when evaluation is in terms of usage, and nonconformity is appropriate for con-
formance to specifications.

TABLE 15-5
Different Control Charts for Variables

Type | Central Line | Control Limits | Comments
X̄ and s | X̿ and s̄ | UCLX̄ = X̿ + A3s̄, LCLX̄ = X̿ − A3s̄; UCLs = B4s̄, LCLs = B3s̄ | Use when more sensitivity is desired than R; when n > 10; and when data are collected automatically.
Moving average, MX̄, and moving range, MR | X̿ and R̄ | UCLX̄ = X̿ + A2R̄, LCLX̄ = X̿ − A2R̄; UCLR = D4R̄, LCLR = D3R̄ | Use when only one observation is possible at a time. Data needn't be normal.
X and moving R | X̄ and R̄ | UCLX = X̄ + 2.660R̄, LCLX = X̄ − 2.660R̄; UCLR = 3.267R̄, LCLR = (0)R̄ | Use when only one observation is possible at a time and the data are normal. Equations are based on a moving range of two.
Median and range | MdMd and RMd | UCLMd = MdMd + A5RMd, LCLMd = MdMd − A5RMd; UCLR = D6RMd, LCLR = D5RMd | Use when the process is in a maintenance mode. Benefits are less arithmetic and simplicity.
The term nonconforming unit is used to describe a unit of product or service containing at least one non-
conformity. Defective is analogous to defect and is appropriate for use when a unit of product or service is
evaluated in terms of usage rather than conformance to specifications.
In this section we are using the terms nonconformity and nonconforming unit. This practice avoids the con-
fusion and misunderstanding that occurs with defect and defective in product-liability lawsuits.
Variable control charts are an excellent means for controlling quality and subsequently improving it;
however, they do have limitations. One obvious limitation is that these charts cannot be used for quality
characteristics that are attributes. The converse is not true, because a variable can be changed to an attrib-
ute by stating that it conforms or does not conform to specifications. In other words, nonconformities such
as missing parts, incorrect color, and so on, are not measurable, and a variable control chart is not appli-
cable.
Another limitation concerns the fact that there are many variables in a manufacturing entity. Even a small

manufacturing plant could have as many as 1,000 variable quality characteristics. Because X̄ and R charts are
needed for each characteristic, 1,000 charts would be required. Clearly, this would be too expensive and
impractical. A control chart for attributes can minimize this limitation by providing overall quality informa-
tion at a fraction of the cost.
There are two different groups of control charts for attributes. One group of charts is for nonconforming
units. A proportion, p, chart shows the proportion nonconforming in a sample or subgroup. The
proportion is expressed as a fraction or a percent. Another chart in the group is for number nonconforming, np.
Another group of charts is for nonconformities. A c chart shows the count of nonconformities in an
inspected unit such as an automobile, bolt of cloth, or roll of paper. Another closely-related chart is the u chart,
which is for the count of nonconformities per unit.
Much of the information on control charts for attributes is similar to that already given in Variable Control
Charts. Also see the information on State of Control.

Objectives of the Chart


The objectives of attribute charts are to

1. Determine the average quality level. Knowledge of the quality average is essential as a benchmark. This
information provides the process capability in terms of attributes.
2. Bring to the attention of management any changes in the average. Changes, either increasing or decreas-
ing, become significant once the average quality is known.
3. Improve the product quality. In this regard, an attribute chart can motivate operating and management
personnel to initiate ideas for quality improvement. The chart will tell whether the idea is an appropriate or
inappropriate one. A continual and relentless effort must be made to improve the quality.
4. Evaluate the quality performance of operating and management personnel. Supervisors should be eval-
uated by a chart for nonconforming units. One chart should be used to evaluate the chief executive officer
(CEO). Other functional areas, such as engineering, sales, finance, etc., may find a chart for nonconformities
more applicable for evaluation purposes.
5. Suggest places to use X̄ and R charts. Even though the cost of computing and charting X̄ and R charts is
more than that of charts for attributes, they are much more sensitive to variations and are more helpful in diagnos-
ing causes. In other words, the attribute chart suggests the source of difficulty, and X̄ and R charts find the cause.
6. Determine acceptance criteria of a product before shipment to the customer. Knowledge of attributes
provides management with information on whether or not to release an order.

Use of the Chart


The general procedures that apply to variable control charts also apply to the p chart. The first step in the pro-
cedure is to determine the use of the control chart. The p chart is used for data that consist of the proportion
of the number of occurrences of an event to the total number of occurrences. It is used in quality control to
report the proportion nonconforming in a product, quality characteristic, or group of quality characteristics.
As such, the proportion nonconforming is the ratio of the number nonconforming in a sample or subgroup to
the total number in the sample or subgroup. In symbolic terms, the equation is

    p = np/n
where p = proportion (fraction or percent) nonconforming in the sample or subgroup
n = number in the sample or subgroup
np = number nonconforming in the sample or subgroup

The p chart is an extremely versatile control chart. It can be used to control one quality characteristic, as

is done with X̄ and R charts; to control a group of quality characteristics of the same type or of the same
part; or to control the entire product. The p chart can be established to measure the quality produced by a
work center, by a department, by a shift, or by an entire plant. It is frequently used to report the perform-
ance of an operator, group of operators, or management as a means of evaluating their quality performance.
A hierarchy of utilization exists so that data collected for one chart can also be used on a more all-inclusive
chart. The use for the chart or charts will be based on securing the greatest benefit for a minimum of cost.

Subgroup Size
The second step is to determine the size of the subgroup. The subgroup size of the p chart can be either vari-
able or constant. A constant subgroup size is preferred; however, there may be many situations, such as
changes in mix and 100% automated inspection, where the subgroup size changes.
If a part has a proportion nonconforming, p, of 0.001 and a subgroup size, n, of 1000, then the average num-
ber nonconforming, np, would be one per subgroup. This situation would not make a good chart, since a large
number of values, posted to the chart, would be zero. If a part has a proportion nonconforming of 0.15 and a sub-
group size of 50, the average number of nonconforming units would be 7.5, which would make a good chart.
Therefore, the selection of the subgroup size requires some preliminary observations to obtain a rough
idea of the proportion nonconforming and some judgment as to the average number of nonconforming units
that will make an adequate graphical chart. A minimum size of 50 is suggested as a starting point. Inspection
can be either by audit or on-line. Audits are usually done in a laboratory under optimal conditions; on-line
inspection provides immediate feedback for corrective action.

Data Collection
The third step requires data to be collected for at least 25 subgroups, or the data may be obtained from his-
torical records. Perhaps the best source is from a check sheet designed by a project team. Table 15-6 gives the
inspection results from the motor department for the blower motor in an electric hair dryer. For each subgroup,
the proportion nonconforming is calculated. The quality technician reported that subgroup 19 had an abnor-
mally large number of nonconforming units, owing to faulty contacts.

TABLE 15-6
Inspection Results of Hair Dryer Blower Motor, Motor Department, May

Subgroup    Number         Number             Proportion
Number      Inspected n    Nonconforming np   Nonconforming p
  1         300            12                 0.040
  2         300             3                 0.010
  3         300             9                 0.030
  ·           ·              ·                   ·
 19         300            16                 0.053
 20         300             2                 0.007
 21         300             5                 0.017
 22         300             6                 0.020
 23         300             0                 0.0
 24         300             3                 0.010
 25         300             2                 0.007
Total      7500           138

Trial Central Lines and Control Limits


The fourth step is the calculation of the trial central line and control limits. The average proportion noncon-
forming, p̄, is the central line, and the control limits are established at ±3σ. The equations are

    p̄ = Σnp/Σn

    UCL = p̄ + 3√(p̄(1 − p̄)/n)        LCL = p̄ − 3√(p̄(1 − p̄)/n)

where p̄ = average proportion nonconforming for many subgroups
      n = number inspected in a subgroup

Calculations for the central line and the trial control limits using the data on the electric hair dryer are as
follows:

    p̄ = Σnp/Σn = 138/7500 = 0.018

    UCL = p̄ + 3√(p̄(1 − p̄)/n)                  LCL = p̄ − 3√(p̄(1 − p̄)/n)
        = 0.018 + 3√(0.018(1 − 0.018)/300)         = 0.018 − 3√(0.018(1 − 0.018)/300)
        = 0.041                                    = −0.005, or 0.0

Calculations for the lower control limit resulted in a negative value, which is a theoretical result. In prac-
tice, a negative proportion nonconforming would be impossible. Therefore, the lower control limit value
of −0.005 is changed to zero.
When the lower control limit is positive, it may in some cases be changed to zero. If the p chart is to be
viewed by operating personnel, it would be difficult to explain why a proportion nonconforming that is below
the lower control limit is out of control. In other words, performance of exceptionally good quality would be
classified as out of control. To avoid the need to explain this situation to operating personnel, the lower con-
trol limit is left off the chart. When the p chart is to be used by quality control personnel and by management,
a positive lower control limit is left unchanged. In this manner, exceptionally good performance (below the
lower control limit) will be treated as an out-of-control situation and be investigated for an assignable cause.
It is hoped that the assignable
cause will indicate how the situation can be repeated.
The central line, p̄, and the control limits are shown in Figure 15-26; the proportion nonconforming, p,
from Table 15-6 is also posted to that chart. This chart is used to determine if the process is stable and is not
posted. Like the X̄ and R charts, the central line and control limits were determined from the data.
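A minimal sketch of the trial-limit calculation, using the totals from Table 15-6 (138 nonconforming units in 7,500 inspected, with a constant subgroup size of 300), is given below.

    import math

    def p_chart_limits(total_nonconforming, total_inspected, subgroup_size):
        """Return (p_bar, UCL, LCL) for a p chart with a constant subgroup size."""
        p_bar = total_nonconforming / total_inspected
        spread = 3 * math.sqrt(p_bar * (1 - p_bar) / subgroup_size)
        ucl = p_bar + spread
        lcl = max(p_bar - spread, 0.0)        # a negative lower limit is replaced by zero
        return p_bar, ucl, lcl

    p_bar, ucl, lcl = p_chart_limits(total_nonconforming=138,
                                     total_inspected=7500,
                                     subgroup_size=300)
    print(f"p-bar = {p_bar:.3f}  UCL = {ucl:.3f}  LCL = {lcl:.3f}")
    # Prints p-bar = 0.018, UCL = 0.042, LCL = 0.000; the hand calculation above
    # rounds p-bar to 0.018 before computing the limits and therefore shows 0.041.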

Revised Central Line and Control Limits


The fifth step is completed by discarding any out-of-control points that have assignable causes and recalcu-
lating the central line
and control limits. The equations are the same except p0, the standard or reference value, is substituted for p̄.
Figure 15-26  p Chart Illustrating the Trial Central Line and Control Limits Using the Data of Table 15-6 (p̄ = 0.018, UCL = 0.041, LCL = 0; subgroup 19, at p = 0.053, plots above the UCL)

Most industrial processes, however, are not in control when first analyzed, and this fact is illustrated in
Figure 15-26 by subgroup 19, which is above the upper control limit and, therefore, is out of control.
Because subgroup 19 has an assignable cause, it can be discarded from the data, and a new p̄ can be com-
puted with all of the subgroups except 19. The value of p0 is 0.017, which makes the UCL = 0.039 and the
LCL = −0.005, or 0.0.
The revised control limits and the central line are shown in Figure 15-27. This chart, without the plotted points,
is posted in an appropriate place and the proportion nonconforming, p, for each subgroup is plotted as it occurs.

Achieving the Objective


Whereas the first five steps are planning, the last step involves action and leads to the achievement of the
objective. The revised control limits were based on data collected in May. Some representative values of
inspection results for the month of June are shown in Figure 15-27. Analysis of the June results shows that
the quality improved. This improvement is expected, because the posting of a quality control chart usually
results in improved quality. Using the June data, a better estimate of the proportion nonconforming is
obtained. The new value (p0 = 0.014) is used to obtain the UCL of 0.036.
During the latter part of June and the entire month of July, various quality improvement ideas generated

by a project team are tested. These ideas are new shellac, change in wire size, stronger spring, X̄ and R charts
on the armature, and so on. In testing ideas, there are three criteria: a minimum of 25 subgroups are required, the
25 subgroups can be compressed in time as long as no sampling bias occurs, and only one idea can be tested at a
time. The control chart will tell whether the idea improves the quality, reduces the quality, or has no effect on the
quality. The control chart should be located in a conspicuous place so operating personnel can view it.

Figure 15-27  Continuing Use of the p Chart for Representative Values of the Proportion Nonconforming, p (June: p0 = 0.017, UCL = 0.039; July: p0 = 0.014, UCL = 0.036; August: p0 = 0.010, UCL = 0.027; LCL = 0)
Data from July are used to determine the central line and control limits for August. The pattern of variation
for August indicates that no further improvement resulted. However, a 41% improvement occurred from June
(0.017) to August (0.010). At this point, considerable improvement was obtained from testing the ideas of the
project team. Although this improvement is very good, the relentless pursuit of quality improvement must con-
tinue—1 out of every 100 is still a nonconforming unit. Perhaps a detailed failure analysis or technical assistance
from product engineering will lead to additional ideas that can be evaluated. A new project team may help.
Quality improvement is never finished. Efforts may be redirected to other areas based on need and/or
resources available.

Like X̄ and R charts, the p chart is most effective if it is posted where operating and quality control per-
sonnel can view it. Also, like X̄ and R charts, the control limits are three standard deviations from the cen-
tral value. Therefore, approximately 99% of the plotted points will fall between the upper and lower control
limits.
A control chart for subgroup values of p will aid in disclosing the occasional presence of assignable causes
of variation in the process. The elimination of these assignable causes will lower p0 and, therefore, have a
positive effect on spoilage, production efficiency, and cost per unit. A p chart will also indicate long-range
trends in the quality, which will help to evaluate changes in personnel, methods, equipment, tooling, materi-
als, and inspection techniques.

TABLE 15-7
Different Types of Control Charts for Attributes*

Type | Central Line | Control Limits | Comments
p | p̄ | UCL = p̄ + 3√(p̄(1 − p̄)/n), LCL = p̄ − 3√(p̄(1 − p̄)/n) | Use for nonconforming units with constant or variable sample size.
np | np̄ | UCL = np̄ + 3√(np̄(1 − p̄)), LCL = np̄ − 3√(np̄(1 − p̄)) | Use for nonconforming units, where np is the number nonconforming. The sample size must be constant.
c | c̄ | UCL = c̄ + 3√c̄, LCL = c̄ − 3√c̄ | Use for nonconformities within a unit where c is the count of nonconformities. The sample size is one inspected unit, i.e., a case of 24 cans.
u | ū | UCL = ū + 3√(ū/n), LCL = ū − 3√(ū/n) | Use for nonconformities per unit where u is the count of nonconformities per unit. The sample size can vary.

*For more information see Quality Control, 6th ed., by Dale H. Besterfield (Upper Saddle River, NJ: Prentice Hall, 2001).
The process capability is the central line for all attribute charts. Management is responsible for the capability. If
the value of the central line is not satisfactory, then management must initiate the procedures and provide the
resources to take the necessary corrective action. As long as operating personnel (operators, first-line supervisors,
and maintenance workers) are keeping the plotted points within the control limits, they are doing what the process
is capable of doing. When the plotted point is outside the control limit, operating personnel are usually responsible.
A plotted point below the lower control limit is due to exceptionally good quality. It should be investigated
to determine the assignable cause, so that if it is not due to an inspection error, it can be repeated.
Additional types of charts for attributes are shown in Table 15-7, with comments concerning their utilization.
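As a companion to Table 15-7, the sketch below computes the 3σ limits for the np, c, and u charts; the input values are placeholders chosen only to show the arithmetic.

    import math

    def np_chart_limits(p_bar, n):
        """np chart: number nonconforming, constant sample size n."""
        center = n * p_bar
        spread = 3 * math.sqrt(n * p_bar * (1 - p_bar))
        return center, center + spread, max(center - spread, 0.0)

    def c_chart_limits(c_bar):
        """c chart: count of nonconformities in one inspected unit."""
        spread = 3 * math.sqrt(c_bar)
        return c_bar, c_bar + spread, max(c_bar - spread, 0.0)

    def u_chart_limits(u_bar, n):
        """u chart: nonconformities per unit; the sample size n can vary."""
        spread = 3 * math.sqrt(u_bar / n)
        return u_bar, u_bar + spread, max(u_bar - spread, 0.0)

    # Placeholder values for illustration only.
    print("np chart:", np_chart_limits(p_bar=0.018, n=300))
    print("c chart: ", c_chart_limits(c_bar=4.0))
    print("u chart: ", u_chart_limits(u_bar=1.2, n=20))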

Measurement System Analysis (MSA)5


Importance of Measurement
Measurement "converts" unknown existing information into a usable value. It helps us make decisions. For
example, when doctors measure blood pressure, they often want to decide about the medication or line of
treatment. When a cricket umpire decides whether the batsman is out or not, the decision affects the outcome
of the match. When a manufactured part is inspected, it is decided whether it should be accepted or rejected.
Measurement decisions are associated with risks, as there is a possibility of measurement error. When a good
part is rejected, it is called a producer's risk or α-risk. On the other hand, when a bad part is accepted, it is a
consumer's risk or β-risk.
In all of these cases, the impact of wrong measurement on decisions is quite evident. We often do not real-
ize the way our decisions depend on the results of the measurement system, nor do we realize the conse-
quences of incorrect results from a measurement system. Many of us may have more than one clock at home,
each showing a different time! For domestic timekeeping, the criticality is low, but imagine its criticality in a
project like a satellite launch at the Indian Space Research Organisation (ISRO) or NASA! The least count of a
wristwatch is normally one second, which is adequate for day-to-day activity. The timing clocks or event meters
used in sports require much higher accuracy and precision, so a wristwatch is not an acceptable measurement
system there; in fact, digital frame-capture technology has now taken over from conventional timing instruments.
Thus, we need to use a measurement system that is appropriate for the objective and is able to discern the
process variation so that correct decisions can be taken.

Where Do We Use Measurement Systems?


Measurement systems are used for the following purposes:
• Product classification and/or acceptance.
• Process adjustment.
• Process control.
• Analysis of process variation and assessment of process capability.
Measurement systems are not limited to the manufacturing and shop-floor environment. Table 15-8 shows
some examples that illustrate the wide variety of measurement systems in use.
In improvement projects, it is essential that the measurement system is assessed before evaluating the
process capability. The procedures used for variable and attribute data are different. However, measurement
system analysis should be performed on both types. Before we study these procedures, let us take a look at
some of the basic terms used in MSA.

5. This subtopic on MSA is adapted with permission from "Six Sigma for Business Excellence" by Hemant Urdhwareshe, Pearson Education India.

TABLE 15-8
Examples of Measurement Systems

Type of Measuring System | Type of Data | Consequences of Measurement Variation
Watches keep time. | Variable | We may miss appointments, trains, buses, etc.
Stop watches of various types to decide the winner in sports, process evaluation. | Variable | Wrong participant may be declared the winner.
Measuring tape of a tailor. | Variable | Clothes may not fit well.
Engineering graduation examination, MBA entrance examination. | Variable | Students may be graded wrongly; deserving candidates may not be selected for MBA.
Interview. | Discrete binary (selected or not selected) | Inappropriate candidate may get selected or appropriate candidate may get rejected.
Cricket umpire. | Discrete binary (out or not out) | Batsman may be given out when he is actually not, and vice versa.
Blood pressure (BP) measurement. | Variable | Medicines can be wrongly prescribed.
Stress test for heart fitness. | Discrete binary (test positive or negative) | Patients suffering from heart ailments may be declared fit, and vice versa.
Counting parts for inventory. | Discrete | Wrong purchase order may be issued, production disruption may happen, incorrect profits (or losses) may appear, etc.
Fuel gauge in a car. | Variable continuous | We may get stuck while travelling as it may show a wrong indication of fuel.

Measurement Terminology
Accuracy is the degree of conformity of a measured quantity to its actual or true value. For example, a watch
showing time equal to standard time can be considered accurate.
Precision is the degree to which repeated measurements show the same or similar results. Figure 15-28 clar-
ifies the difference between accuracy and precision.
Bias is the difference between the measured value and the master (true) value. To estimate bias, the same part needs to be meas-
ured a number of times; the difference between the average of these measurements and the true value is the bias.
Calibration helps in minimizing bias during usage. For example, if a person wants to know his or her
own weight, he or she can take a few readings. Suppose the average of five such readings on a home scale is 55 kg. The person then
goes to a lab where the weighing scales are regularly calibrated and finds the weight to be 56.5 kg. The
bias of the home scale is 56.5 − 55 = 1.5 kg.
Figure 15-28  Accuracy and Precision (three targets: accurate but not precise, precise but not accurate, accurate and precise)

Repeatability is the inherent variation due to the measuring equipment. If the same appraiser measures
the same part a number of times, the closeness in the readings is a measure of repeatability. Traditionally, this
is referred to as “Within Appraiser Variation”. It is also known as equipment variation (EV).
Linearity is the change in the bias over the operating range. It is a systematic error component of the meas-
urement system. In many measuring systems, the error tends to increase with larger measurements, for exam-
ple, in pressure gauges, dial gauges, and weighing scales.
Stability is the measurement variation over time. We should calibrate a gauge or an instrument to ensure
its stability. This is sometimes also called drift. Periodic calibration of measuring equipment is performed to
assess stability.
Reproducibility is the variation in the average of measurements made by different appraisers using the same
measuring instrument when measuring the identical characteristic on the same part. Reproducibility has
traditionally been referred to as "between appraiser" or appraiser variation (AV).

Process and Measurement Variation


The objective in a number of situations is to measure the process variation (PV). But what we measure in reality is the
total of the process and the measurement variation. Ideally, we would have no measurement variation; however,
this is not possible. Thus, it is desirable that measurement variation be a very small portion of the observed variation.
Measurement variation can be seen as a component of the observed variation, as shown in Figure 15-29.

Figure 15-29  Measurement, Process and Observed Variation: the observed variation is composed of the process variation and the measurement variation, and the measurement variation comprises repeatability and reproducibility (Reproduced with permission from Institute of Quality and Reliability, Pune)
In a typical measurement system analysis (MSA), we use statistical methods to estimate how much of the
total variation (TV) is due to the measurement system. An ideal measurement system should not have any
variation. However, this is impossible, and we have to be satisfied with a measurement system whose variation
is less than 10% of the process variation. As the portion of variation due to the measurement system increases, the value or
utility of the measurement system decreases. If this proportion is more than 30%, the measurement
system is unacceptable.

Repeatability and Reproducibility (R & R)


• Let σm be the standard deviation of the measurement variation and σObserved be the standard deviation of
the total observed variation.
• The ratio of the measurement variation to the total observed variation, that is, σm/σObserved, is called gauge repeata-
bility and reproducibility (GRR). GRR is usually expressed in percent.

It is customary to quantify the R&R value as the ratio of the measurement standard deviation to the total standard
deviation, expressed as a percentage. This analysis method is called the "% study variation method". The MSA manual published
by AIAG specifies the following norms.

• If GRR is less than 10%, the measurement system is acceptable.
• If GRR is between 10 and 30%, the equipment may be accepted based upon the importance of the application, the cost
of the measurement device, the cost of repair, etc.
• The measurement system is not acceptable for GRR beyond 30%.

In a typical measurement system study, GRR is estimated. The stability, bias, and linearity errors are to be
addressed during the selection and calibration of the instrument.
Please note that in the % study variation method, we are using ratios of standard deviations. The alternative
to this is the percent contribution method, in which we compare ratios of variances. The norms are easy to
relate: for example, when GRR is 10%, the ratio σm/σObserved is 0.1, and thus

    σm²/σObserved² = (σm/σObserved)² = 0.1² = 0.01, or 1%.

Norms for both methods are summarized in Table 15-9.

TABLE 15-9
Acceptance Norms for R & R

R & R with Study Variation Method | R & R with % Variance Contribution Method | Decision Guidelines
<10% | <1% | Acceptable measurement system.
Between 10 and 30% | Between 1 and 9% | May be acceptable based upon importance of application, cost of measurement device, cost of repair, etc.
>30% | >10% | Unacceptable measurement system. Every effort should be made to improve the system.
Often, when the process variation is very small, we may like to compare the measurement variation with
the tolerance of the component being measured. This is called the precision to tolerance ratio, or P/T ratio. Pre-
cision is the 6σ band that includes 99.73% of the measurements. This is compared with the tolerance. Thus,

    P/T ratio = 6σm / Tolerance

The AIAG acceptance norms for the P/T ratio are the same as those for the study variation method shown in Table 15-9.

NUMBER OF DISTINCT CATEGORIES (NDC)

Another norm used in variable measurement systems is the number of distinct categories (NDC). This number
represents the number of groups within our process data that the measurement system can discern or dis-
criminate.

    NDC = (σp/σm) × 1.41

where σp is the process standard deviation and σm is the measurement standard deviation.

Imagine that we measured ten different parts, and NDC = 4. This means that some of those ten parts are not dif-
ferent enough to be discerned as being different by the measurement system. If we want to distinguish a higher
number of distinct categories, we need a more precise gauge. The AIAG MSA manual suggests that when the number
of categories is less than 2, the measurement system is of no value for controlling the process, since one part can-
not be distinguished from another. When the number of categories is 2, the data can be divided into two groups,
say high and low. When the number of categories is 3, the data can be divided into three groups, say low, middle, and
high. As per the AIAG recommendations, NDC must be 5 or more for an acceptable measurement system.
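The acceptance calculations described above amount to a few ratios. The sketch below, with placeholder standard deviations and tolerance, computes %GRR by the study variation method, the corresponding % contribution, the P/T ratio, and NDC; it assumes the observed variation is the combination of process and measurement variation, as in Figure 15-29.

    import math

    def msa_summary(sigma_m, sigma_observed, tolerance):
        """Return %GRR (study variation), % contribution, P/T ratio (%), and NDC."""
        grr_pct = 100 * sigma_m / sigma_observed
        contribution_pct = 100 * (sigma_m / sigma_observed) ** 2
        pt_pct = 100 * 6 * sigma_m / tolerance
        sigma_p = math.sqrt(sigma_observed ** 2 - sigma_m ** 2)   # process share of the observed variation
        ndc = int(1.41 * sigma_p / sigma_m)                       # NDC is normally truncated to an integer
        return grr_pct, contribution_pct, pt_pct, ndc

    # Placeholder values for illustration only.
    grr, contrib, pt, ndc = msa_summary(sigma_m=0.05, sigma_observed=0.20, tolerance=1.2)
    print(f"%GRR = {grr:.1f}%  %contribution = {contrib:.1f}%  P/T = {pt:.1f}%  NDC = {ndc}")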

PROCEDURE FOR GAUGE R & R STUDY FOR MEASUREMENT SYSTEMS

1. Decide the part and the measuring equipment.


2. Establish and document method of measurement.
3. Train appraisers (operators) for measurement.
4. Select 2 or 3 “Appraisers”.
5. Select 5 to 10 parts. The parts selected should represent process variation.
6. Each part must be measured by each appraiser at least twice, preferably thrice.
7. Measurements must be taken in random order, without reference to the other readings, so that each measure-
ment can be considered "independent".
8. Record, analyze and interpret the results.

COLLECTING DATA

Typically, in a gauge R & R study, 10 parts are measured by three appraisers. Each appraiser measures each part
thrice. Thus, we have 9 measurements of each part, with 90 measurements in total. The data structure will be as
shown in Figure 15-30. When we compare the measurements of the same part by the same appraiser, we get an estimate
of repeatability; we may call this measurement variation "within" appraiser. If we compare the average measurements
by each appraiser for the same part, we can estimate the variation "between" appraisers, i.e., reproducibility.

ANALYSIS OF DATA

After collecting the data, we can analyze it by one of two methods:
1. Control chart method.
2. ANOVA method.

Figure 15-30 Typical Crossed R & R Study Data Structure: Three Appraisers (A, B and C) Each Measure 10 Parts
(P1 to P10) Three Times, Giving a Total of 90 Measurements.* In the grid, an entry such as "A 5 2" denotes
appraiser A's second trial on part 5; the three repeat trials by one appraiser on one part provide the estimate
of repeatability.

We will briefly discuss only the control chart method, using the example from the measurement system analy-
sis (MSA) manual by the Automotive Industry Action Group (AIAG). Refer to Figure 15-31 for the discussion. In
this procedure, the averages and the ranges are calculated for each appraiser for each part. The magnitude of the
range values relates to the repeatability, or equipment variation (EV). If we calculate the average of all
these range values, we get "R-double-bar". We can convert this into the standard deviation σrepeatability using the
statistical constant K1. The value of K1 depends upon the number of trials (3 here) by each appraiser for each
part. For three trials, K1 = 0.5908. Similarly, we can calculate the range of the part averages, where each part
average is the mean of the nine measurements of that part. This is the range of part variation, Rpart. We can use
the constant K3 to convert this range value to the standard deviation σp. For ten parts, the value of K3 is 0.3146.
Using the range of the appraiser averages (X-bar-Diff), the constant K2 and EV, we can also calculate the standard
deviation due to reproducibility, that is, the variation between appraisers (AV). An illustration of this
procedure is shown in Figure 15-31. For complete details of the procedure, refer to the AIAG MSA manual.
%GRR in this example is 26.68%. This is between 10 and 30%, and therefore the system is marginally acceptable;
we can conditionally accept the equipment with improvement actions planned. The precision-to-tolerance (P/T)
ratio is calculated using a tolerance of 5 units, and the number of distinct categories is also shown in the figure.
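The arithmetic behind Figure 15-31 can be reproduced in a few lines. The sketch below starts from the summary statistics shown in the figure (R-double-bar, R-part, X-bar-Diff) and the constants for three trials, three appraisers and ten parts, then applies the control chart method formulas; the variable names are ours, and the last decimals may differ slightly from the figure because rounded inputs are used.

```python
import math

# Summary statistics read from the AIAG example in Figure 15-31
r_double_bar = 0.3417    # average of the appraisers' average ranges (repeatability)
r_part       = 3.5111    # range of the 10 part averages
x_bar_diff   = 0.4447    # range of the 3 appraiser averages
tolerance    = 5.0       # tolerance used for the P/T ratio
n_parts, n_trials = 10, 3
K1, K2, K3 = 0.5908, 0.5231, 0.3146   # constants for 3 trials, 3 appraisers, 10 parts

EV  = r_double_bar * K1                           # equipment variation (repeatability)
AV  = math.sqrt((x_bar_diff * K2) ** 2            # appraiser variation (reproducibility)
                - EV ** 2 / (n_parts * n_trials))
GRR = math.sqrt(EV ** 2 + AV ** 2)                # gauge repeatability and reproducibility
PV  = r_part * K3                                 # part variation
TV  = math.sqrt(GRR ** 2 + PV ** 2)               # total variation

print(f"EV={EV:.4f} AV={AV:.4f} GRR={GRR:.4f} PV={PV:.4f} TV={TV:.4f}")
print(f"%GRR = {100 * GRR / TV:.1f}%")            # about 26.7% -> marginally acceptable
print(f"NDC  = {int(1.41 * PV / GRR)}")           # 5
print(f"P/T  = {6 * GRR / tolerance:.3f}")        # about 0.367
```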

Measurement Systems Analysis for Attribute Data


Quite often, we have to use a measurement system for attribute data. The simplest example is the decision of a
cricket umpire on whether a batsman is out or not. Typical examples of attribute data include:
• Crack testing result, i.e., crack or no crack.
• Leak test result.
• Visual inspection result.

* Data collection in R & R study (reproduced with permission from the Institute of Quality and Reliability, India,
www.world-class-quality.com).

MSA AIAG Example (Std. Dev. Multiplier = 6)

Appraiser A Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7 Part 8 Part 9 Part 10
Reading 1 0.29 –0.56 1.34 0.47 –0.8 0.02 0.59 –0.31 2.26 –1.36
Reading 2 0.41 –0.68 1.17 0.5 –0.92 –0.11 0.75 –0.2 1.99 –1.25
Reading 3 0.64 –0.58 1.27 0.64 –0.84 –0.21 0.66 –0.17 2.01 –1.31
Average 0.4467 –0.6067 1.2600 0.5367 –0.8533 –0.1000 0.6667 –0.2267 2.0867 –1.3067 X-bara 0.1903
Range 0.3500 0.1200 0.1700 0.1700 0.1200 0.2300 0.1600 0.1400 0.2700 0.1100 R-bara 0.1840

Appraiser B Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7 Part 8 Part 9 Part 10
Reading 1 0.08 –0.47 1.19 0.01 –0.56 –0.2 0.47 –0.63 1.8 –1.68
Reading 2 0.25 –1.22 0.94 1.03 –1.2 0.22 0.55 0.08 2.12 –1.62
Reading 3 0.07 –0.68 1.34 0.2 –1.28 0.06 0.83 –0.34 2.19 –1.5
Average 0.1333 –0.7900 1.1567 0.4133 –1.0133 0.0267 0.6167 –0.2967 2.0367 –1.6000 X-barb 0.0683
Range 0.1800 0.7500 0.4000 1.0200 0.7200 0.4200 0.3600 0.7100 0.3900 0.1800 R-barb 0.5130

Appraiser C Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7 Part 8 Part 9 Part 10
Reading 1 0.04 –1.38 0.88 0.14 –1.46 –0.29 0.02 –0.46 1.77 –1.49
Reading 2 –0.11 –1.13 1.09 0.2 –1.07 –0.67 0.01 –0.56 1.45 –1.77
Reading 3 –0.15 –0.96 0.67 0.11 –1.45 –0.49 0.21 –0.49 1.87 –2.16

Average –0.0733 –1.1567 0.8800 0.1500 –1.3267 –0.4833 0.0800 –0.5033 1.6967 –1.8067 X-barc –0.2543
Range 0.1900 0.4200 0.4200 0.0900 0.3900 0.3800 0.2000 0.1000 0.4200 0.6700 R-barc 0.3280

Part X-double-
Averages 0.1689 –0.8511 1.0989 0.3667 –1.0644 –0.1856 0.4544 –0.3422 1.9400 –1.5711 bar 0.0014

Difference between max and min of part averages–> R-part 3.5111


R-double
Average of ranges by each appraiser bar 0.3417

Difference between max and min of each appraiser average X-bar-Diff 0.4447

Constants (statistical multipliers):
Trials      K1          Appraisers      K2          Parts       K3
2           0.8862      2               0.7071      5           0.403
3           0.5908      3               0.5231      10          0.3146

Repeatability or Equipment Variation      EV = R-double-bar × K1                            0.20186
Part Variation                            PV = R-part × K3                                  1.10460
Reproducibility or Appraiser Variation    AV = sqrt((X-bar-Diff × K2)² − EV²/(n × r))       0.22967
                                          (n = number of parts, r = number of trials)
Gauge Repeatability and Reproducibility   GRR                                               0.30577
Total Variation                           TV                                                1.14613
%EV (100 × EV/TV)                                                                           17.612
%AV (100 × AV/TV)                                                                           20.038
%GRR (100 × GRR/TV)                                                                         26.678
%PV (100 × PV/TV)                                                                           96.376
Number of Distinct Categories (NDC)                                                         5
Precision to Tolerance (P/T) Ratio                                                          0.367

Figure 15-31 Illustration of R & R Study Calculations


(Reproduced with permission from the Institute of Quality and Reliability, Pune, www.world-class-quality.com)

TABLE 15-10
Attribute Agreement Analysis Layout

APPRAISER A APPRAISER B APPRAISER C


True
Part Standard Trial # 1 Trial #2 Trial #1 Trial #2 Trial #1 Trial #2
1 G G G NG G G G
2 G NG G G G G G
3 NG NG NG NG NG NG NG
4 G G G G NG G G
5 NG NG NG NG NG NG NG
6 NG NG NG NG NG NG NG
7 G G G NG G G G
8 NG NG NG NG NG NG NG
9 G G G G G G G
10 NG NG NG NG NG NG NG
11 G G G NG G G G
12 NG NG NG NG NG NG NG
13 NG NG NG NG NG NG NG
14 G NG G NG G G G
15 NG NG NG NG G NG NG
16 G G NG G G G NG
17 G G G G G G G
18 NG NG NG NG G NG NG
19 NG NG NG NG NG NG NG
20 G G G NG G G G

* Reproduced with permission from SigmaXL.

Attribute agreement analysis is used to assess such measurement systems. In this procedure, at least ten
good parts and ten bad parts should be inspected by three appraisers. We should also know the evaluation of each
part by an "expert" or "master". The collected data may look as shown in Table 15-10.
Once the evaluation is completed, we can analyze the data for the following:

• Agreement within appraisers.


• Agreement between appraisers.
• Agreement between appraisers and “true standard”.

A measure called Cohen's kappa is used to decide whether the measurement system is acceptable or not. Its
value can range from 0 to 1. A value of 1 shows perfect agreement, while a value of 0 indicates that the agree-
ment is no better than chance. For more details of the procedure, refer to the MSA manual by AIAG, which
considers a kappa value greater than 0.75 acceptable. Detailed calculations of Cohen's kappa
are beyond the scope of this book.
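Although the detailed calculations are outside the scope of the chapter, the core idea of Cohen's kappa is simply to compare the observed agreement with the agreement expected by chance. The sketch below computes kappa for two appraisers on a short, made-up series of good/no-good judgements; the data and function are illustrative only and are not taken from the AIAG example.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters judging the same items:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement: for each category, the product of the two raters' marginal proportions
    p_chance = sum((counts1[c] / n) * (counts2[c] / n) for c in set(rater1) | set(rater2))
    return (p_observed - p_chance) / (1 - p_chance)

# Made-up appraisals of ten parts (G = good, NG = not good)
appraiser_a = ["G", "G", "NG", "G", "NG", "NG", "G", "G", "NG", "G"]
appraiser_b = ["G", "G", "NG", "G", "NG", "G",  "G", "G", "NG", "NG"]
print(round(cohens_kappa(appraiser_a, appraiser_b), 2))   # 0.58 -> below the 0.75 guideline
```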

Scatter Diagrams
The simplest way to determine if a cause-and-effect relationship exists between two variables is to plot a scat-
ter diagram. Figure 15-32 shows the relationship between automotive speed and gas mileage.

Figure 15-32 Scatter Diagram of Gas Mileage (mi/gal) versus Speed (mi/h)

The figure shows that as speed increases, gas mileage decreases. Automotive speed is plotted on the x-axis and is the
independent variable. The independent variable is usually controllable. Gas mileage is on the y-axis and is the
dependent, or response, variable. Other examples of relationships are as follows:

Cutting speed and tool life.


Temperature and lipstick hardness.
Striking pressure and electrical current.
Temperature and percent foam in soft drinks.
Yield and concentration.
Training and errors.
Breakdowns and equipment age.
Accidents and years with the organization.

There are a few simple steps for constructing a scatter diagram. Data are collected as ordered pairs (x, y).
The automotive speed (cause) is controlled and the gas mileage (effect) is measured. Table 15-11 shows
resulting x, y paired data. The horizontal and vertical scales are constructed with the higher values on the right
for the x-axis and on the top for the y-axis. After the scales are labeled, the data are plotted. Using dotted lines,
the technique of plotting sample number 1 (30, 38) is illustrated in Figure 15-32. The x-value is 30, and the
y-value is 38. Sample numbers 2 through 16 are plotted, and the scatter diagram is complete. If two points are
identical, the technique illustrated at 60 mi/h can be used.
Once the scatter diagram is complete, the relationship or correlation between the two variables can be eval-
uated. Figure 15-33 shows different patterns and their interpretation. At (a), there is a positive correlation
between the two variables, because as x increases, y increases. At (b), there is a negative correlation between

TABLE 15-11

Data on Automotive Speed vs. Gas Mileage


Sample Speed Mileage Sample Speed Mileage
Number (mi/h) (mi/gal) Number (mi/h) (mi/gal)

1 30 38 9 50 26
2 30 35 10 50 29
3 35 35 11 55 32
4 35 30 12 55 21
5 40 33 13 60 22
6 40 28 14 60 22
7 45 32 15 65 18
8 45 29 16 65 24

Figure 15-33 Different Scatter Diagram Patterns: (a) positive correlation, (b) negative correlation,
(c) no correlation, (d) negative correlation may exist, (e) correlation by stratification, (f) curvilinear relationship



the two variables, because as x increases, y decreases. At (c), there is no correlation, and this pattern is some-
times referred to as a shotgun pattern. The patterns described in (a), (b), and (c) are easy to understand; how-
ever, those described in (d), (e), and (f) are more difficult. At (d), there may or may not be a relationship
between the two variables. There appears to be a negative relationship between x and y, but it is not too strong.
Further statistical analysis is needed to evaluate this pattern. At (e), we have stratified the data to represent
different causes for the same effect. Some examples are gas mileage with the wind versus against the wind,
two different suppliers of material, and two different machines. One cause is plotted with a small solid circle,
and the other cause is plotted with an open triangle. When the data are separated, we see that there is a strong
correlation. At (f), we have a curvilinear relationship rather than a linear one.
When all the plotted points fall on a straight line, we have a perfect correlation. Because of variations in
the experiment and measurement error, this perfect situation will rarely, if ever, occur.
It is sometimes desirable to fit a straight line to the data in order to write a prediction equation. For exam-
ple, we may wish to estimate the gas mileage at 42 mi/h. A line can be placed on the scatter diagram by sight
or mathematically using least squares analysis. In either approach, the idea is to make the deviation of the
points on each side of the line equal. Where the line is extended beyond the data, a dashed line is used, because
there are no data in that area.
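As an illustration of the mathematical approach, the sketch below fits a least squares line to the speed and mileage pairs of Table 15-11 and uses it to predict the mileage at 42 mi/h. This is our own worked example of the standard least squares formulas, not a calculation taken from the original text.

```python
# Speed (mi/h) and gas mileage (mi/gal) pairs from Table 15-11
speed   = [30, 30, 35, 35, 40, 40, 45, 45, 50, 50, 55, 55, 60, 60, 65, 65]
mileage = [38, 35, 35, 30, 33, 28, 32, 29, 26, 29, 32, 21, 22, 22, 18, 24]

n = len(speed)
x_bar = sum(speed) / n
y_bar = sum(mileage) / n

# Least squares slope and intercept: minimize the squared vertical deviations from the line
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(speed, mileage))
         / sum((x - x_bar) ** 2 for x in speed))
intercept = y_bar - slope * x_bar

print(f"mileage = {intercept:.1f} + ({slope:.3f}) * speed")
print(f"predicted mileage at 42 mi/h: {intercept + slope * 42:.1f} mi/gal")   # about 30.7
```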

TQM Exemplary Organization6


Founded in 1900, Granite Rock Company employs 400 people and produces rock, sand, and gravel aggre-
gates; ready-mix concrete; asphalt; road treatments; and recycled road-base material. It also retails building
materials made by other manufacturers and runs a highway-paving operation.
Since 1980, the regional supplier to commercial and residential builders and highway construction com-
panies has increased its market share significantly. Productivity also has increased, with revenue earned per
employee rising to about 30% above the national industry average. Most of the improvement has been real-
ized since 1985, when Granite Rock started its Total Quality Program. The program stresses customer satis-
faction with teams carrying out quality improvement projects. In 1991, nearly all workers took part in at least
one of the company’s 100-plus quality teams.
In 1991, Granite Rock employees averaged 37 hours of training at an average cost of $1,697 per employee,
13 times more than the construction-industry average. Many employees are trained in statistical process
control, root-cause analysis, and other quality-assurance and problem-solving methods.
Applying statistical process control to all product lines has helped the company reduce variable costs and
produce materials that exceed customer specifications and industry- and government-set standards. For ex-
ample, Granite Rock’s concrete products consistently exceed the industry performance specifications by 100
times. Granite Rock’s record for delivering concrete on time has risen from less than 70% in 1988 to 93.5%
in 1991. The reliability of several key processes has reached the six-sigma level, which corresponds to a
nonconforming rate of 3.4 per million.
Charts for each product line help executives assess Granite Rock’s performance relative to competitors on
key product and service attributes, ranked according to customer priorities. Ultimate customer satisfaction is
assured through a system where customers can choose not to pay for a product or service that doesn’t meet

6 Malcolm Baldrige National Quality Award, 1992 Small Business Category Recipient, NIST/Baldrige Homepage, Internet.

expectations; however, dissatisfaction is rare. Costs incurred in resolving complaints are equivalent to 0.2%
of sales, as compared with the industry average of 2%.

Summary
We have seen some basic and simple but quite useful tools for solving problems. These tools include Pareto
charts, cause-and-effect diagrams, checksheets and histograms, process flow diagrams, run charts, control
charts and scatter plots. A Pareto chart is useful for identifying the vital few causes or elements and for
prioritizing them. Process flow charts are useful for visualizing trouble spots and improvement opportunities in
the process, if any. Histograms and checksheets can make variation visible. Scatter plots can be drawn when it is
required to understand whether two variables are related to each other or not.
Mean, median, and mode are measures of central tendency, while range and standard deviation are meas-
ures of dispersion or variation.
Statistical control charts are powerful tools for assessing the stability of processes and detecting the presence
of assignable cause(s), if any. Control charts for subgroups, such as the X-bar and R and X-bar and s charts, are
more sensitive and are therefore preferred over control charts for individuals. Control charts for attribute data
include charts for defectives and charts for defects.
It is necessary to validate the measurement system(s) for the critical characteristics. Statistical procedures
are available to analyze and quantify measurement system uncertainty. A measurement system is considered
acceptable if R & R is less than 10% of the observed variation. In the case of attribute data, agreement analysis
can be performed. Cohen's kappa value indicates the extent of agreement within and between appraisers.
The process capability index Cp quantifies the relationship between the specification limits and the standard
deviation. Cpk additionally considers the effect of centring.
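As a reminder of how the two indices differ, a minimal sketch using the standard formulas and illustrative numbers of our own choosing:

```python
def cp(usl: float, lsl: float, sigma: float) -> float:
    """Cp compares the specification width with the 6-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl: float, lsl: float, mean: float, sigma: float) -> float:
    """Cpk additionally penalizes a process that is not centred within the specification."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Illustrative values: specification 9.7 to 10.3 mm, process sigma 0.08 mm
print(round(cp(10.3, 9.7, 0.08), 2))           # 1.25
print(round(cpk(10.3, 9.7, 10.0, 0.08), 2))    # 1.25 when the process is centred
print(round(cpk(10.3, 9.7, 10.1, 0.08), 2))    # 0.83 when the mean shifts to 10.1
```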

Exercises
1. A major record-of-the-month club collected data on the reasons for returned shipments during a quar-
ter. Results are: wrong selection, 50,000; refused, 195,000; wrong address, 68,000; order canceled,
5,000; and other, 15,000. Construct a Pareto diagram.

2. Form a project team of six or seven people, elect a leader, and construct a cause-and-effect diagram
for bad coffee from a 22-cup coffee maker used in the office.

3. Design a check sheet for the maintenance of a piece of equipment such as a gas furnace, laboratory
scale, or typewriter.

4. Construct a flow diagram for the manufacture of a product or the providing of a service.

5. An organization that fills bottles of shampoo tries to maintain a specific weight of the product. The
table gives the weight of 110 bottles that were checked at random intervals. Make a tally of these
weights and construct a frequency histogram. (Weight is in kilograms.)

6.00 5.98 6.01 6.01 5.97 5.99 5.98 6.01 5.99 5.98 5.96
5.98 5.99 5.99 6.03 5.99 6.01 5.98 5.99 5.97 6.01 5.98
5.97 6.01 6.00 5.96 6.00 5.97 5.95 5.99 5.99 6.01 5.98
6.01 6.03 6.01 5.99 5.99 6.02 6.00 5.98 6.01 5.98 5.99
6.00 5.98 6.05 6.00 6.00 5.98 5.99 6.00 5.97 6.00 6.00
6.00 5.98 6.00 5.94 5.99 6.02 6.00 5.98 6.02 6.01 6.00
5.97 6.01 6.04 6.02 6.01 5.97 5.99 6.02 5.99 6.02 5.99
6.02 5.99 6.01 5.98 5.99 6.00 6.02 5.99 6.02 5.95 6.02
5.96 5.99 6.00 6.00 6.01 5.99 5.96 6.01 6.00 6.01 5.98
6.00 5.99 5.98 5.99 6.03 5.99 6.02 5.98 6.02 6.02 5.97

6. Determine the average, median, mode, range, and standard deviation for each group of numbers.

(a) 50, 45, 55, 55, 45, 50, 55, 45, 55


(b) 89, 87, 88, 83, 86, 82, 84
(c) 11, 17, 14, 12, 12, 14, 14, 15, 17, 17
(d) 16, 25, 18, 17, 16, 21, 14
(e) 45, 39, 42, 42, 43


7. Control charts for X-bar and R are to be established for a certain part dimension, measured in millimeters.
Data were collected in subgroup sizes of 6 and are given below. Determine the trial central line and
control limits. Assume assignable causes and revise the central line and limits.

Subgroup                                Subgroup
Number        X-bar        R            Number        X-bar        R

1 20.35 0.34 14 20.41 0.36


2 20.40 0.36 15 20.45 0.34
3 20.36 0.32 16 20.34 0.36
4 20.65 0.36 17 20.36 0.37
5 20.20 0.36 18 20.42 0.73
6 20.40 0.35 19 20.50 0.38
7 20.43 0.31 20 20.31 0.35
8 20.37 0.34 21 20.39 0.38
9 20.48 0.30 22 20.39 0.33
10 20.42 0.37 23 20.40 0.32
11 20.39 0.29 24 20.41 0.34
12 20.38 0.30 25 20.40 0.30
13 20.40 0.33

8. The following table gives the average and range in kilograms for tensile tests on an improved plastic
cord. The subgroup size is 4. Determine the trial central line and control limits. If any points are out
of control, assume assignable causes, and determine the revised limits and central line.

Subgroup                                Subgroup
Number        X-bar        R            Number        X-bar        R

1 476 32 14 482 22
2 466 24 15 506 23
3 484 32 16 496 23
4 466 26 17 478 25
5 470 24 18 484 24
6 494 24 19 506 23
7 486 28 20 476 25
8 496 23 21 485 29
9 488 24 22 490 25
10 482 26 23 463 22
11 498 25 24 469 27
12 464 24 25 474 22
13 484 24

9. Assume that the data in Exercise 7 are for a subgroup size of 4. Determine the process capability.

10. Determine the process capability for Exercise 8.

11. Determine the capability index before (σ0 = 0.038) and after (σ0 = 0.030) improvement for the chapter
example problem using specifications of 6.40 ± 0.15 mm.

12. What is the Cpk value after improvement for Exercise 11 when the process center is 6.40? When the
process center is 6.30? Explain.

13. The Get-Well Hospital has completed a quality improvement project on the time to admit a patient
using X-bar and R charts. They now wish to monitor the activity using median and range charts. Deter-
mine the central line and control limits with the latest data in minutes, as given here.

Observation Observation
Subgroup Subgroup
Number X1 X2 X3 Number X1 X2 X3

1 6.0 5.8 6.1 13 6.1 6.9 7.4


2 5.2 6.4 6.9 14 6.2 5.2 6.8
3 5.5 5.8 5.2 15 4.9 6.6 6.6
4 5.0 5.7 6.5 16 7.0 6.4 6.1
5 6.7 6.5 5.5 17 5.4 6.5 6.7


6 5.8 5.2 5.0 18 6.6 7.0 6.8
7 5.6 5.1 5.2 19 4.7 6.2 7.1
8 6.0 5.8 6.0 20 6.7 5.4 6.7
9 5.5 4.9 5.7 21 6.8 6.5 5.2
10 4.3 6.4 6.3 22 5.9 6.4 6.0
11 6.2 6.9 5.0 23 6.7 6.3 4.6
12 6.7 7.1 6.2 24 7.4 6.8 6.3

14. The viscosity of a liquid is checked every half hour during one three-shift day. What does the run chart
indicate? Data are 39, 42, 38, 37, 41, 40, 36, 35, 37, 36, 39, 34, 38, 36, 32, 37, 35, 34, 33, 35, 32, 38,
34, 37, 35, 35, 34, 31, 33, 35, 32, 36, 31, 29, 33, 32, 31, 30, 32, and 29.

15. Determine the trial central line and control limits for a p chart using the following data, which are for
the payment of dental insurance claims. Plot the values on graph paper and determine if the process is
stable. If there are any out-of-control points, assume an assignable cause and determine the revised
central line and control limits.

Number Number
Subgroup Number Noncon- Subgroup Number Noncon-
Number Inspected forming Number Inspected forming

1 300 3 14 300 6
2 300 6 15 300 7
3 300 4 16 300 4
4 300 6 17 300 5
5 300 20 18 300 7
6 300 2 19 300 5
7 300 6 20 300 0
8 300 7 21 300 2
9 300 3 22 300 3
10 300 0 23 300 6
11 300 6 24 300 1
12 300 9 25 300 8
13 300 5

16. Determine the trial limits and revised control limits for a u chart using the data in the table for the sur-
face finish of rolls of white paper. Assume any out-of-control points have assignable causes.

Total Total
Lot Sample Noncon- Lot Sample Noncon-
Number Size formities Number Size formities

1 10 45 15 10 48
2 10 51 16 11 35
3 10 36 17 10 39
4 9 48 18 10 29
5 10 42 19 10 37
6 10 5 20 10 33
7 10 33 21 10 15
8 8 27 22 10 33
9 8 31 23 11 27
10 8 22 24 10 23
11 12 25 25 10 25
12 12 35 26 10 41
13 12 32 27 9 37
14 10 43 28 10 28

17. An np chart is to be established on a painting process that is in statistical control. If 35 pieces are
to be inspected every 4 hours, and the fraction nonconforming is 0.06, determine the central line and
control limits.

18. A quality technician has collected data on the count of rivet nonconformities in four-meter travel
trailers. After 30 trailers, the total count of nonconformities is 316. Trial control limits have been deter-
mined and a comparison with the data shows no out-of-control points. What is the recommendation
for the central line and the revised control limits for a count of nonconformities chart?

19. By means of a scatter diagram, determine if a relationship exists between product temperatures and
percent foam for a soft drink.

Product Product
Day Temperature °F Foam % Day Temperature °F Foam %

1 36 15 11 44 32
2 38 19 12 42 33
3 37 21 13 38 20
4 44 30 14 41 27
5 46 36 15 45 35
6 39 20 16 49 38
7 41 25 17 50 40
8 47 36 18 48 42
9 39 22 19 46 40
10 40 23 20 41 30

20. Approximately what area is covered under the normal distribution curve between +/– 3 standard
deviations?
(a) 95.40%
(b) 88.00%
(c) 99.73%
(d) 68.0 %

21. To calculate the performance index Pp, which two of the following must be known?
I Specification limits
II Standard deviation
III Bias
IV The process mean

(a) I and II only.


(b) I and III only.
(c) II and III only.
(d) II and IV only.

22. In an MSA, R&R is 38% and reproducibility component is predominant. Which of the following
actions is most appropriate?
(a) Procure better equipment.
(b) Calibrate the gauge immediately.
(c) Train the operators.
(d) Conduct a stability study to confirm that gauge performance is sustainable.

23. Which of the following statements cannot be true?


(a) Cp=1.2, Cpk=1.01
(b) Cp=2.3, Cpk=2.3
(c) Cp=1.5, Cpk=1.75
(d) Cp=0.5, Cpk=0.1

24. A quality engineer wants to chart the number of parts rejected every day. The daily production rate fluctuates
between 1000 and 1200 parts. The engineer should use
(a) X-bar and range charts
(b) U-charts
(c) C-charts
(d) p-charts

25. A process is monitored using X-bar and range charts with a subgroup size of 5. The chart for averages
shows control limits at 105 and 95. The process standard deviation for individuals is:

(a) 1.66
(b) 3.72
(c) 5
(d) 2.88
26. When a measuring instrument is calibrated, our objective is to reduce

(a) Accuracy
(b) Bias
(c) Repeatability
(d) Reproducibility
27. In R and R study, measurement variation due to equipment is called

(a) Bias
(b) Reproducibility
(c) Linearity
(d) Repeatability
28. The primary objective of control charts is

(a) To evaluate process mean and spread with reference to specification limits.
(b) To assess the process for stability.
(c) To stop the process when a defect is observed.
(d) To ensure that the process is set correctly at the mean of specifications.
29. Which of the following scatter patterns has the strongest linear correlation?

(a)          (b)          (c)          (d)
