MSA Changes (English)
Methodology
The changes are highlighted with reference to version 3 of the MSA manual. The entire text
of the new standard is not included, but it is hoped that this document will enable the user to
quickly identify the modified sections of the standard and consider the implications.
The Changes
General:
There are many changes to formatting and to the method of cross-referencing diagrams, etc.,
which are not mentioned here as they do not affect the intent of the document.
Page 8: ‘Sensitivity’
OEM has now been clarified to mean ‘Original Equipment Manufacturer’.
Page 10: ‘Traceability’
A new (quite lengthy) section has been added on Calibration Systems. This makes reference
to a definition of a calibration system and emphasises the need for traceability to reference
standards. It also refers to ‘internal calibration laboratories’, identifying the need for a
laboratory scope defining the calibrations that are capable of being performed by such
laboratories. Reference is also made to ‘Measurement Assurance Programs’ to verify the
acceptability of measurement processes used throughout the calibration system.
Page 17: Effect on Product Decisions
Paragraph added under note at foot of page stating ‘Risk is the chance of making a decision
which will be detrimental to an individual or process’.
Page 45: ‘Discrimination’
A paragraph has been added to the end of this page referring to the use of a normal
probability plot to identify a lack of discrimination of a gage.
Page 46: ‘Discrimination’
‘Only qualified, technical personnel familiar with the measurement system and process
should make and document such decisions (i.e. where resolution issues may be the best that
technology allows)’ has been deleted.
Page 73: Chapter II – Section C ‘Preparation for a Measurement System Study’
Additional sub-paragraph (c) added under section 2) as ‘customer requirements’.
New section added after the first introductory paragraph entitled ‘Assembly or fixture error’.
This addresses situations where the gage is improperly designed or assembled to ensure
that the problem is corrected prior to running the measurement evaluation.
New paragraph inserted in the section on ‘Width Error – Acceptability criteria’ after the first
paragraph. This states that when evaluating the measurement system it can be useful to set
priorities on which to focus. It goes on to say that where SPC is being applied and the
process is stable, it can be considered acceptable for use and does not require separate
re-evaluation. The quoted text concludes: ‘In either case the process is producing acceptable
product. In case 2) the existence of non-conforming product or an out of control condition
could be a false alarm.’
Clarification is also made to the section on ‘Acceptability Criteria – Width Error’ concerning
the ‘general rule of thumb for measurement system acceptability’ as follows:
For situations with under 10% error, it is made clear that this situation is
recommended, especially when trying to classify parts or when heightened process control is
required.
For percentage GRR values of 10 to 30 percent, the statement has been changed
from ‘may be acceptable based on the importance of application, cost of repair etc’ to ‘may
be acceptable for some applications’, and ‘Decision should be based on, for example,
importance of application measurement, cost of measurement device, cost of rework or
repair. Should be approved by the customer’ has been added.
For over 30%, the sentence ‘This condition may be addressed by the use of an
appropriate measurement strategy, for example using the average result of several readings
of the same part characteristic in order to reduce the final measurement variation’ has been
added.
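As a minimal sketch of the averaging strategy mentioned above (assuming the GRR is dominated by repeatability, since averaging n independent readings of the same characteristic reduces the repeatability standard deviation by a factor of √n; the numbers are invented for illustration):

```python
import math

def grr_after_averaging(grr_percent: float, n_readings: int) -> float:
    """Approximate %GRR after averaging n independent readings of the
    same part characteristic. Assumes the %GRR is dominated by
    repeatability, whose standard deviation shrinks by sqrt(n)."""
    return grr_percent / math.sqrt(n_readings)

# e.g. a 30% GRR gage, averaging 4 readings per characteristic:
print(grr_after_averaging(30.0, 4))  # 15.0 -> back inside the 10-30% band
```

This is why the averaging option can move a marginal system into an acceptable band, though customer approval considerations still apply as noted above.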
Page 85: Guidelines for Determining Bias – Independent Sample Method, Conducting
the Study
An introductory paragraph has been added explaining the methodology behind the
hypothesis test used to evaluate measurement system bias.
Page 85: Analysis of Results – Graphical
An additional step has been added ‘3)’ stating that the bias of each reading should be
determined (i.e. better explaining the process).
Page 85: Analysis of Results – Numerical
Clarification is made that the average bias of the readings should be computed (as opposed
to the average value of the individual readings). The formula given has been amended to
show this.
The method for calculation of the repeatability standard deviation has been changed to
calculate the actual (sample) standard deviation from all of the data as opposed to the
Range/d*2 method as previously specified.
Page 86:
An additional section ‘7)’ has been added after the first paragraph on this page.
This specifies that the repeatability should be evaluated as to its acceptability by calculating
the %EV using the formula 100[EV/TV] or 100[σrepeatability/TV], where the total variation is
based on the expected process variation (preferred) or the specification (tolerance) range
divided by 6. Reference is made to the evaluation criteria given in Chapter II Section D.
Interestingly an editorial note has been left in stating ‘what specifically are we supposed to
look at in Section D that links to EV?’ It is hoped that some explanation will be forthcoming
from AIAG in due course!
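The revised calculation can be sketched as follows (a hypothetical illustration: the readings and tolerance are invented, and the sample standard deviation replaces the former Range/d2* estimate as described above):

```python
import statistics

def percent_ev(readings, total_variation):
    """%EV = 100 * (EV / TV), with EV taken as the sample standard
    deviation of all repeated readings (not the Range/d2* estimate)."""
    ev = statistics.stdev(readings)  # sample standard deviation
    return 100.0 * ev / total_variation

# Hypothetical repeated readings of one reference part:
readings = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]
# TV from the specification (tolerance) range divided by 6, tolerance = 0.6:
tv = 0.6 / 6
print(round(percent_ev(readings, tv), 1))  # 20.0
```

The same `percent_ev` value would then be compared against the acceptability bands (under 10%, 10–30%, over 30%) discussed earlier.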
Page 86: Determine the t statistic for the bias
The sentence ‘bias = observed average measurement – reference value’ has been deleted
along with the formula t = bias/σb. The following formula has been added:
tstatistic = tbias = ave bias/σb
Page 86 – ‘Bias is acceptable at the α level if zero falls within the 1 - α confidence bounds
around the bias value’ has been reworded to include an additional criterion, i.e. that the
‘p-value associated with tbias is more* than α’. No further explanation is given.
*Note: this was changed from ‘less’ in the MSA subsequently issued errata sheet.
The value of υ (representing the number of degrees of freedom) is now determined as n–1
(as opposed to looking up g and m values from Appendix C as previously specified).
Page 86 Note 36 at the foot of the page has been deleted (as this refers to the use of the
average range method for estimation of standard deviation).
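Putting the revised pieces together, a minimal sketch of the bias study (the readings are invented; the critical value t₀.₉₇₅,₉ ≈ 2.262 is taken from a t table for ν = n − 1 = 9 rather than computed):

```python
import math
import statistics

def bias_study(readings, reference, t_crit):
    """t = (average bias) / sigma_b, with sigma_b = sigma_r / sqrt(n),
    sigma_r the sample standard deviation of all readings, and
    degrees of freedom nu = n - 1 (no Appendix C g/m lookup)."""
    n = len(readings)
    biases = [r - reference for r in readings]
    avg_bias = statistics.mean(biases)
    sigma_r = statistics.stdev(readings)
    sigma_b = sigma_r / math.sqrt(n)
    t = avg_bias / sigma_b
    lower = avg_bias - t_crit * sigma_b
    upper = avg_bias + t_crit * sigma_b
    # Bias is acceptable at level alpha if zero falls within the bounds:
    return t, (lower, upper), lower <= 0.0 <= upper

readings = [6.0, 6.1, 5.9, 6.0, 6.2, 5.8, 6.1, 5.9, 6.0, 6.0]
t, ci, acceptable = bias_study(readings, reference=6.0, t_crit=2.262)
print(round(t, 3), acceptable)  # average bias is zero in this example
```

A statistics package would add the p-value criterion mentioned above; this sketch only checks the confidence-bound condition.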
Page 87: Following ‘Figure 10: Bias Study – Histogram of Bias Study’
A new histogram pictorial has been added to show the expected variation in the average
values, and additional explanation added. A numeric illustration has also been added to
compare %EV with expected process variation.
Page 87: Final paragraph
The values assigned to the confidence interval of the bias have been changed to (-0.1107,
0.1349), reflecting the change in method of calculation.
expect to have as low a Ppk as 1.0, nor do they accept a process at that low of a
performance level. It may make more sense to compare the measurement variation to a
target performance tool which meets the customer requirement. To use this option, use the
following GRR analysis: TV = (USL – LSL)/(6Pp) and PV = √[(TV)² – (GRR)²]
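A quick numeric check of this option (the specification limits, Pp and GRR values are invented for illustration):

```python
import math

def tv_from_pp(usl, lsl, pp):
    """Total variation from a target performance: TV = (USL - LSL) / (6 * Pp)."""
    return (usl - lsl) / (6.0 * pp)

def pv_from_tv(tv, grr):
    """Part variation recovered from TV and GRR: PV = sqrt(TV^2 - GRR^2)."""
    return math.sqrt(tv ** 2 - grr ** 2)

tv = tv_from_pp(usl=10.0, lsl=4.0, pp=1.0)  # (10 - 4) / 6 = 1.0
pv = pv_from_tv(tv, grr=0.6)                # sqrt(1.0 - 0.36) = 0.8
print(tv, pv)
```

Note that with this option TV is anchored to the target performance (Pp) rather than to the observed process variation.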
Page 117: End of section on Analysis of Results - Numerical
The sentence ‘In addition, the ndc is truncated to the integer and ought to be greater than or
equal to 5’ has been changed to ‘For analysis, the ndc is the maximum of 1 or the calculated
value truncated to the integer. The result should be greater than or equal to 5’
Some text has been added commenting on the fact that some computer programs may
round the truncated results, resulting in differences in final reports. The basis for calculation
of ndc when using the Pp approach is also given, with PV² = (TV)² – (GRR)² and
ndc = 1.41(PV/GRR), giving ndc = 1.41√[(TV)² – (GRR)²]/GRR.
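The revised ndc rule and its Pp-based form can be sketched as follows (numbers invented for illustration):

```python
import math

def ndc(pv, grr):
    """ndc = max(1, trunc(1.41 * PV / GRR)); the result should be >= 5."""
    return max(1, math.trunc(1.41 * pv / grr))

def ndc_from_tv(tv, grr):
    """Equivalent Pp-approach form: PV^2 = TV^2 - GRR^2, so
    ndc = 1.41 * sqrt(TV^2 - GRR^2) / GRR (before truncation)."""
    return max(1, math.trunc(1.41 * math.sqrt(tv ** 2 - grr ** 2) / grr))

print(ndc(2.0, 0.5))  # 1.41 * 4 = 5.64, truncated to 5
print(ndc(0.1, 0.5))  # would truncate to 0, so floored at 1
```

The floor at 1 reflects the new wording that ndc is the maximum of 1 or the truncated value, while acceptability still requires the result to be at least 5.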
Page 121:
The ndc value is now rounded up to 5, instead of being rounded down to 4.
Page 126: Chapter III – Section C Attribute Measurement Systems Study
Possible approaches.
The Upper Specification Limit (USL) in Figure 28 has been changed from 0.545 to 0.55. A
sentence has been added after the first paragraph stating that ‘As for all gages, this attribute
study will have ‘gray’ areas where wrong decisions can be made’.
Page 126:
The paragraph after Figure 29 has been modified stating that ‘since this has not been
documented by the team, it needs to study the measurement system. However, to address
the areas of risk around the specification limits, the team chose approximately 25% of the
parts at or close to the lower specification limit and 25% of the parts at the upper specification
limit. In some cases where it is difficult to make such parts, the team may decide to use a
lower percentage, recognising that this may increase the variability of the results. If it is not
possible to make parts close to the specification limits, the team should reconsider the use of
attribute gaging for this process. As appropriate for each characteristic, the parts should be
independently measured with a variable gage with acceptable variation (e.g. a CMM). When
measuring a true attribute that cannot be measured with a variable gage use other means
such as experts to determine which samples are good or defective’.
Page 128: Hypothesis Test Analysis – Cross-Tab Method
A fairly lengthy introduction to the principles by which the cross-tabulation method is
calculated has been added after the first paragraph. This states ‘The cross-tabulation
process analyses distribution data for two or more categorical values. The results, presented
in a matrix format, form a contingency table that illustrates the interdependence between
variables’. The section goes on to explain the steps undertaken comparing the results of
pairs of evaluators for each part.
A note has been added at the foot of the page stating that cross-tabulation functions are
available in many statistical analysis programs and in spreadsheet pivot table functions.
The analysis of the process continues with estimation of the expected data distribution. The
following has been added: ‘What is the probability that an observer pair will agree or disagree
on an observation purely by chance? In 150 observations, Observer A rejected the part 50
times and observer B rejected the part 47 times.
PA0 = 50/150 = 0.333
PB0 = 47/150 = 0.313
Since the two observers are independent, the probability that they will agree that the part is
bad is given by P(A0∩B0) = PA0 × PB0 = 0.104
The expected number of times observer A and observer B agree the part is bad is estimated
by multiplying the combined probability by the number of observations:
150 × PA0 × PB0 = 150 × (50/150) × (47/150) = 15.7
The team made similar estimations of each category pair for each observer pair to complete
the following table…’ The table is provided as per the original document.
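The expected-count step quoted above can be reproduced for any cell directly from the marginal totals (the standard contingency-table calculation: expected = row total × column total / grand total):

```python
def expected_count(row_total, col_total, grand_total):
    """Expected cell count under independence:
    N * (row_total / N) * (col_total / N) = row_total * col_total / N."""
    return row_total * col_total / grand_total

# Observer A rejected 50 of 150 parts, Observer B rejected 47 of 150:
print(round(expected_count(50, 47, 150), 1))  # -> 15.7
```

Repeating this for each accept/reject category pair and each observer pair fills in the full table of expected counts against which the observed counts are compared.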
Page 129:
The first sentence of the first paragraph has been deleted.
Page 132:
New section entitled ‘Sample Size’ added before ‘concerns’ at the foot at the page.
This section addresses the selection of sample size in the study. This states that ‘a sufficient
number of samples should be selected to cover the expected operating range. With attribute
measurement systems the area(s) of interest are the Type II areas. If the process capability
is good, then as the process capability improves, the required random sample for the
attribute study should become larger. In the example below, the indices were Pp, Ppk = 0.5
(i.e. an expected process performance of approximately 13% non-conformance), and the
sample selected was 50%. An alternate approach to large samples is a ‘salted sample’,
where parts are selected specifically from the Type II areas to augment a random sample to
ensure that the effect of appraiser variability is seen’.
a homogenous set of parts (small between-part variation) can be found to represent a
single part;
the shelf life of the characteristic (property) is known and extends beyond the
expected duration of the study, i.e. the measured characteristic does not change over the
expected period of use;
the dynamic (changing) properties can be stabilized’.
The information contained in this document has been checked for accuracy to the
best of our ability. However, we cannot guarantee that there are no other changes
which we have not identified.