A Tutorial Introduction To Research On Analog and Mixed-Signal Circuit Testing
I. INTRODUCTION
HISTORICALLY, electronic circuits were almost exclusively analog and were designed with discrete components. The components were mounted on printed circuit
boards and tested with a bed of nails tester, allowing access
to all input and output voltages of components. Since the
components of an electronic system could be individually
tested, speed in identifying the cause of failures was more
of a problem. Testing research focused on the development of
methods to rapidly diagnose component failures and assembly
errors during field servicing of weapons, navigation, and
communication systems.
The advent of integrated circuit (IC) technology and the
scaling of transistor sizes have allowed the development of
much larger electronic systems. Digital design techniques
have become predominant because of their reliability and
lower power consumption. However, although large electronic
systems can be constructed almost entirely with digital techniques, many systems still have analog components. This is
because signals emanating from storage media, transmission
media, and physical sensors are often fundamentally analog.
Moreover, digital systems may have to output analog signals to
actuators, displays, and transmission media. Clearly, the need
for analog interface functions like filters, analog-to-digital converters (ADCs), phase-locked loops, etc., is inherent in such
systems. The design of these interface functions as integrated
circuits has reduced their size and cost, but in turn, for testing purposes, access to nodes is limited to primary inputs and outputs.
where $y_i$ is the yield of the test in the $i$th position. Average test time is then

$$\text{Average Test Time} = \sum_{i=1}^{N} t_i \prod_{j=1}^{i-1} y_j$$

where $t_i$ is the time required by the test in the $i$th position, since the test in position $i$ is applied only if the tests in positions 1 to $i-1$ have all been passed. Hence, minimizing production testing time involves gathering pass/fail data for each of the circuit specifications using a sample of fabricated chips in order to calculate $y_i$, the yield of the test in the $i$th position, given the previous tests in positions 1 to $i-1$. Then Dijkstra's algorithm can be used to optimize the order [14]. Specifically, the test selection problem is formulated as a shortest path problem in a directed graph, where the computational complexity is dominated by the number of possible subsets of the test set, $2^N$. In order to cut the computational cost by avoiding the evaluation of all $2^N$ possible subsets of the test set, two heuristic approaches to test ordering have also been proposed [15].
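As an illustration of the quantities involved, the following Python sketch estimates average test time for a given ordering from historical pass/fail data and orders tests with a simple greedy heuristic. The data, the per-test times, and the greedy rule are illustrative assumptions; this is not the shortest-path formulation of [14] or the heuristics of [15].

```python
import numpy as np

def average_test_time(order, times, passed):
    """Average per-chip test time for a given test order, assuming testing
    stops at the first failed test. `passed` is a (chips x tests) boolean
    matrix of historical pass/fail data; `times` holds per-test times."""
    total = 0.0
    reach = np.ones(passed.shape[0], dtype=bool)  # chips still being tested
    for t in order:
        total += times[t] * reach.mean()          # fraction of chips reaching this test
        reach &= passed[:, t]                     # only passing chips continue
    return total

def greedy_order(times, passed):
    """Greedy heuristic: repeatedly pick the test with the lowest
    time / (1 - conditional yield) ratio among chips still in the flow.
    A simple stand-in for the optimal ordering methods of [14], [15]."""
    remaining = list(range(passed.shape[1]))
    reach = np.ones(passed.shape[0], dtype=bool)
    order = []
    while remaining:
        def score(t):
            y = passed[reach][:, t].mean() if reach.any() else 1.0
            return times[t] / max(1.0 - y, 1e-9)  # cheap, high-dropout tests first
        best = min(remaining, key=score)
        order.append(best)
        remaining.remove(best)
        reach &= passed[:, best]
    return order

# Hypothetical data: 1000 chips, 4 specification tests.
rng = np.random.default_rng(0)
passed = rng.random((1000, 4)) < np.array([0.99, 0.95, 0.999, 0.90])
times = np.array([2.0, 0.5, 1.0, 3.0])  # seconds per test
order = greedy_order(times, passed)
print(order, average_test_time(order, times, passed))
```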
B. Selecting a Subset of Specification Tests
The easiest way to reduce the number of tests in a test set
is to drop the tests that are never failed. It is possible that a
test will not be failed if it is designed to detect a processing
problem that has not yet occurred. Moreover, it is likely that some tests will never be failed when there are many more circuit specifications that have to be measured than independent sources of variability in the manufacturing process. It turns out that the order in which tests are performed will influence which tests are never failed. For example, a redundant test, placed early in a test set, may detect some faulty circuits which could otherwise be detected by a combination of tests performed later. For instance, a power supply short will almost certainly be detected by a system performance test, and therefore an explicit test for such a short would be redundant. Hence,
as there is a tradeoff between minimizing production test time,
achieved by optimally ordering a test set, and maximizing
failure information, there is also a tradeoff between achieving
minimal production test time through eliminating tests and
maximizing failure information. Clearly, failure information
is more important early in the product cycle, while reducing
test time is more important for mature products. And, if a
group of tests is assigned a single failure bin, redundant tests
from that group can be eliminated at little cost.
In [15] and [16], an algorithm has been developed that orders tests so that the number of tests that have no dropout is maximized. In other words, this algorithm identifies and maximizes the number of redundant tests. It uses historical pass/fail data to identify those tests that detect some faulty circuits which are detected by no other test. If the sample size is large enough, all necessary tests will detect some such faulty circuits that are detected by no other test. In this way redundant tests are identified. On the other hand, if the sample size is too small, some necessary tests may be wrongly identified as redundant, resulting in reduced fault coverage, defined as the probability of correctly rejecting a faulty circuit. This may occur if some process corners have not yet been exercised.
And in fact, for circuits with high yield, large sample sizes of nonoptimally tested circuits are needed in order to achieve a given level of fault coverage with high confidence (Table I).
TABLE I
SAMPLE SIZE AS A FUNCTION OF YIELD AND FAULT COVERAGE AT 95% CONFIDENCE
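The redundancy analysis described above can be illustrated with a small sketch: from a chips-by-tests pass/fail matrix, a test is kept as necessary if it is the only test that rejects some circuit, and the remaining tests become candidates for elimination. This is a simplified stand-in for the algorithm of [15] and [16], and the data are invented.

```python
import numpy as np

def redundant_test_candidates(passed):
    """Flag tests that never uniquely detect a faulty circuit (a simplified
    version of the redundancy analysis described above, not the exact
    algorithm of [15], [16]). `passed` is a (chips x tests) boolean matrix."""
    failed = ~passed
    necessary = np.zeros(passed.shape[1], dtype=bool)
    for chip in range(passed.shape[0]):
        detecting = np.flatnonzero(failed[chip])
        if len(detecting) == 1:          # only one test catches this chip
            necessary[detecting[0]] = True
    return np.flatnonzero(~necessary)    # indices of candidate redundant tests

# Hypothetical data: 5 chips x 3 tests (True = passed).
passed = np.array([[True,  True,  True ],
                   [False, False, True ],   # caught by tests 0 and 1
                   [True,  False, True ],   # caught only by test 1
                   [True,  True,  False],   # caught only by test 2
                   [False, False, False]])  # caught by all tests
print(redundant_test_candidates(passed))    # -> [0]: test 0 never uniquely detects
```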
Then, if the selected measurements are related to the parameter deviations by $\mathbf{m}_1 = \mathbf{B}_1 \Delta\mathbf{p}$ and $\mathbf{B}_1$ has rank $n$, the measurements $\mathbf{m}_1$ can be used to predict the parameter deviations,

$$\Delta\mathbf{p} = (\mathbf{B}_1^T \mathbf{B}_1)^{-1} \mathbf{B}_1^T \mathbf{m}_1,$$

where $\mathbf{B}_1^T$ is the transpose of $\mathbf{B}_1$. And the remaining measurements are predicted as follows:

$$\mathbf{m}_2 = \mathbf{B}_2 \Delta\mathbf{p} = \mathbf{B}_2 (\mathbf{B}_1^T \mathbf{B}_1)^{-1} \mathbf{B}_1^T \mathbf{m}_1.$$
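A small NumPy sketch of this prediction step follows, assuming a linearized measurement model m = B Δp; the matrices B1 and B2 and the split into retained and dropped tests are illustrative, not taken from the original.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linearized model: measurements = B @ parameter_deviations.
n_params, n_meas = 3, 6
B = rng.standard_normal((n_meas, n_params))      # full sensitivity matrix
B1, B2 = B[:4], B[4:]                            # 4 retained tests, 2 dropped tests

delta_p_true = rng.standard_normal(n_params)     # unknown process deviations
m1 = B1 @ delta_p_true                           # what the retained tests measure

# Predict parameter deviations from the retained measurements
# (least-squares solution; requires B1 to have full column rank).
delta_p_hat = np.linalg.solve(B1.T @ B1, B1.T @ m1)

# Predict the measurements of the dropped tests.
m2_hat = B2 @ delta_p_hat
print(np.allclose(m2_hat, B2 @ delta_p_true))    # True in this noise-free sketch
```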
Fig. 4. The map between the parameter space and the measurement space. Parameters and measurements not in the acceptability region correspond to faulty circuits [15].
Several authors use the same methodology for the simulation of parametric faults [34], [35], [39]. This corresponds
to a parametric fault model involving only local variations
in geometries due to defects. Such a fault model does not
include global parametric variations resulting from imperfect
process control. Given such a local parametric fault model,
in order to simulate a fault, a circuit parameter is set to an
out-of-tolerance value and the resulting circuit is simulated.
But how much out of tolerance should the parametric faults
be? Models of defect size frequency indicate that small defects
are much more likely than large defects [40]. And very small
defects result in only minor changes in circuit performances.
Hence, such small defects may not cause a circuit to fail specifications. Clearly, the definition of a parametric fault needs to be related to the circuit specifications, and specifically, defining a parametric fault involves determining parameter limits such that a circuit fails specifications, which may or may not coincide with parameter tolerances.
Similarly, for global parametric faults, which result from
imperfect control in manufacturing, parameters closer to nominal values are much more likely than parameters which are
far from nominal, while parameter values that are far from
nominal are much more likely to cause a circuit to fail
specifications. And, as with local parametric faults, circuits with parameter values that are close to tolerance limits may not fail specifications, and consequently may not be faulty.
Hence, also for global parametric faults, determining if a parameter deviation results in a parametric fault involves determining the map between the random variables describing the manufacturing process and the circuit performances (Fig. 4). The parametric fault coverage of a test set can then be expressed as the ratio

$$F = \frac{\int_{A^c \cap D} f(\mathbf{p})\, d\mathbf{p}}{\int_{A^c} f(\mathbf{p})\, d\mathbf{p}}$$

where $f(\mathbf{p})$ is the probability density function of parameters modeling the manufacturing process, $A^c$ is the complement of the acceptability region, i.e., the set of all parametric faults, and $D$ is the set of parameters which correspond to circuits that fail a given test set.
A straightforward way to evaluate the integrals in the above equation is to use Monte Carlo analysis, where a sample of parameters drawn from the probability density function describing the manufacturing process is simulated. For each sample point $\mathbf{p}$, it is first determined whether $\mathbf{p}$ is a fault, i.e., whether $\mathbf{p} \in A^c$, and if so, it is then determined whether the fault is detected, i.e., whether $\mathbf{p} \in D$. The evaluation of whether or not $\mathbf{p}$ is a fault and whether or not it is detected may be determined directly, using circuit simulation, or based on regression models of circuit performances defining parametric faults. However, applying the Monte Carlo algorithm directly by simulating a circuit with a sample of parameters representing the manufacturing process may not lead to accurate results. Specifically, if a small sample size is used, results will be statistically inaccurate. In fact, unless a test is highly inaccurate, it may be hard to find a sample of faulty parameters which is not detected by the test set, i.e., a sample with $\mathbf{p} \in A^c$ and $\mathbf{p} \notin D$. Alternatively, if the sample size is large, the computational cost of simulating just the blocks, i.e., op amps, if not the whole system, hundreds of times can be very high, unless very inaccurate simulation models are used.
Importance sampling can reduce this cost of simulation [48].
Nevertheless, when applying the Monte Carlo algorithm, the
use of regression models of block performances as a function
of process parameters and a hierarchical simulation strategy
reduces the computational cost most effectively with accurate
results [33], [41], [42], [46]. In other words, a limited set
of simulations is performed to construct regression models,
and then the regression models are used to evaluate if tests
are passed or failed for the much larger random sample of
parameters representing manufacturing process variations. In
this case, the most significant sources of inaccuracy will come
from a combination of the accuracy of circuit simulation,
which is used to construct the regression models, and the
ability of the regression models to mimic the simulator.
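The following sketch illustrates the regression-model-based Monte Carlo estimate described above; the process distribution, the single performance model, and the specification and test limits are all invented for illustration and do not correspond to any circuit in the references.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative regression model of one circuit performance (e.g., a cutoff
# frequency) as a function of two process parameters; in practice these
# coefficients would be fitted to a limited set of circuit simulations.
def performance(p):
    return 1.0 + 0.8 * p[:, 0] - 0.5 * p[:, 1] + 0.3 * p[:, 0] * p[:, 1]

SPEC_LO, SPEC_HI = 0.2, 1.8     # specification limits (define faulty circuits)
TEST_LO, TEST_HI = 0.25, 1.75   # limits actually applied by the test (assumed)

# Sample process parameters from the assumed manufacturing distribution.
p = rng.multivariate_normal(mean=[0, 0], cov=[[0.25, 0.1], [0.1, 0.25]], size=200_000)
perf = performance(p)

faulty = (perf < SPEC_LO) | (perf > SPEC_HI)        # p in A^c
detected = (perf < TEST_LO) | (perf > TEST_HI)      # p in D

fault_coverage = (faulty & detected).sum() / max(faulty.sum(), 1)
good_rejected = (~faulty & detected).sum() / max((~faulty).sum(), 1)
print(f"fault coverage ~ {fault_coverage:.3f}, good circuits failed ~ {good_rejected:.4f}")
```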
If all circuit specifications are tested, the parametric fault coverage may still not be 100%, due to measurement noise [15]. Moreover, if tests other than the specification tests are used, as proposed in [28], [39], [47], and [49], there may be a systematic loss of fault coverage (Fig. 5). Similarly, good circuits may fail tests due to measurement noise and if tests other than the specification tests are used. Yield coverage has been proposed as a parameter to quantify the problem of discarding good circuits [47].
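By analogy with the fault coverage expression above, yield coverage can be written as the probability that an acceptable circuit passes the applied test set (the exact formulation and notation in [47] may differ):

$$Y_C = \frac{\int_{A \cap D^c} f(\mathbf{p})\, d\mathbf{p}}{\int_{A} f(\mathbf{p})\, d\mathbf{p}}$$

where $A$ is the acceptability region and $D^c$, the complement of $D$, is the set of parameters corresponding to circuits that pass the test set.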
Fig. 6. Test results for the Class AB amplifier and the measured performances for the three devices that failed specification tests but passed the proposed test set for catastrophic faults [50].
Sachdev [50] investigated generating test sets for just catastrophic faults. The test set he proposed for a Class AB amplifier was derived based on realistic catastrophic faults and demonstrated high catastrophic fault coverage of modeled faults by simple stimuli, i.e., simple dc, ac, and transient stimuli. This test program was then appended to the existing conventional (specification-based) test program, in order to judge its effectiveness in a production test environment. The results are shown in Fig. 6. As can be seen from the figure, the yield of the device was very high (99.5%), and the fault coverage of the proposed test set was only 73%. The performances of the three devices which passed tests for catastrophic faults but failed specification tests are also shown in Fig. 6. Because the proposed test set was designed to detect catastrophic faults and because distributions of both local and global parametric faults have higher frequencies of parameter values that correspond to circuit performances close to specification limits, it appears from these results that these three devices failed due to parametric faults.
Because the sample of failed devices was so small, Sachdev [50] followed up this experiment with a larger one using the same Class AB amplifier. The results of this second experiment are shown in Fig. 7. It can be seen that the fault coverage of the test set designed solely for catastrophic faults was 87%. The 433 circuits that failed the proposed test set but had passed the conventional test set were then retested by the conventional method. Of these 433 circuits, 51 passed the conventional test set, indicating that measurement results are very close to specification limits, causing the circuit to pass or fail based on noise levels. The remaining 382 circuits mostly failed specifications on the input offset voltage, total harmonic distortion, and the signal-to-noise ratio.
TABLE II
... FOR A HIGH-PASS FILTER BLOCK [33]
Fig. 8. Decision boundaries between the good circuit and three faulty circuits for measurement
Other approaches reuse existing on-chip hardware and consequently reduce the area overhead needed for the on-chip generation of test signals and analysis of test results. In particular, such circuits have DACs and ADCs which may be used for testing through reconfiguring the connections between blocks. By taking advantage of ADCs and DACs, which are already part of a design, mixed-signal circuits can be tested with digital testers, components may be tested in parallel, and testing is more easily performed in the field.
A common architecture for a mixed-signal circuit is shown
in Fig. 16. This architecture assumes that a mixed-signal
circuit is composed of analog input components, connected to
a large digital section by an ADC, which in turn is connected to
analog output components by a DAC. Given the on-chip DAC,
a digital test stimulus may be implemented on-chip, in order
to test the analog output block. Specifically, Ohletz [49] has proposed a pseudorandom piecewise-constant input signal with different amplitudes, generated with a linear feedback shift register (LFSR), the DAC, and an output amplifier (Fig. 17).
Alternatively, input stimuli could come from a ROM or DSP circuitry, rather than an LFSR. All of these approaches keep the generation of the analog test stimulus on-chip.
The digitized response of the analog blocks may be captured in built-in logic block observation (BILBO) registers [94], which are also used for digital test. The signals stored in the BILBO registers may then be fed to a multiple input signature register (MISR), which performs the task of on-chip data compaction using signature analysis [49]. Hence, the analog test results are evaluated in the digital domain, using the same techniques as used for on-chip evaluation of the digital response. Nevertheless, signature analysis is not the only way in which the digitized response from an analog block can be analyzed. The response could be compared against a known good response, stored in a ROM, before compaction, or postprocessing may be done based on the functional characteristics of the analog blocks. Specifically, in [39], given a pseudorandom piecewise-constant input signal, circuitry for computing the auto-correlation and cross-correlation of the impulse response is proposed.
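The flavor of this approach can be sketched in a few lines of Python: an LFSR produces a pseudorandom bit stream, the bits are held to form a piecewise-constant two-level stimulus, and the cross-correlation between stimulus and sampled response yields an estimate related to the impulse response of a linear block, which can be compared against stored fault-free values. The LFSR polynomial, the first-order filter standing in for the analog block, and the signal levels are illustrative assumptions, not details taken from [39] or [49].

```python
import numpy as np

def lfsr_bits(n, seed=0xACE1):
    """Bits from a 16-bit Fibonacci LFSR (x^16 + x^14 + x^13 + x^11 + 1)."""
    state, out = seed, []
    for _ in range(n):
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state & 1)
    return np.array(out)

# Pseudorandom piecewise-constant stimulus: each bit held for `hold` samples,
# mapped to two amplitude levels (as would be produced through the on-chip DAC).
hold = 8
stimulus = np.repeat(np.where(lfsr_bits(512) > 0, 1.0, -1.0), hold)

# Stand-in for the analog block under test: a simple first-order low-pass filter.
alpha = 0.1
response = np.zeros_like(stimulus)
for i in range(1, len(stimulus)):
    response[i] = response[i - 1] + alpha * (stimulus[i] - response[i - 1])

# Cross-correlation between stimulus and response; for a wide-band pseudorandom
# input this is related to the block's impulse response and can be compared
# against stored fault-free values.
lags = np.arange(0, 50)
xcorr = np.array([np.dot(stimulus[:-k or None], response[k:]) for k in lags]) / len(stimulus)
print(xcorr[:5])
```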
In both [39] and [49], the effectiveness of pseudorandom
inputs in detecting catastrophic faults in analog components
has been demonstrated. However, the effectiveness of such
approaches in detecting both local and global parametric faults
still needs to be determined. Because the circuit performances that are tested are different from the circuit specifications, there may be significant systematic losses in fault coverage and/or yield coverage for parametric faults.
In [95], a BIST circuit is proposed for looking at abnormal
changes in the power supply current. The proposed circuit
involves an upper limit detector, for detecting an abnormally
high power supply current, a lower limit detector, for detecting
an abnormally low power supply current, and some logic to
signal if there is a fault. The idea behind this approach is that
faults will either increase or decrease the power supply current
compared to the fault-free circuit. When using this power supply current monitor to test ADCs, the input voltage is varied,
so that all states of the ADC are exercised. A reasonable
fault coverage of catastrophic faults has been demonstrated by
simulation. Nevertheless, as with using random inputs to test
analog blocks, the effectiveness of this approach in detecting
local and global parametric faults is still unknown.
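The pass/fail decision made by such a monitor amounts to a window comparison, as in the toy sketch below; the window limits and current samples are invented for illustration, and a real monitor would implement the limits with analog detectors rather than software.

```python
from typing import Iterable

# Illustrative window around the fault-free supply current (in mA).
I_LOWER_MA = 1.5
I_UPPER_MA = 4.0

def supply_current_bist(samples_ma: Iterable[float]) -> bool:
    """Return True (fault flagged) if any supply-current sample taken while
    exercising the ADC states falls outside the [lower, upper] window."""
    return any(i < I_LOWER_MA or i > I_UPPER_MA for i in samples_ma)

# Hypothetical measurements over a ramp input that exercises all ADC states.
good_device = [2.1, 2.3, 2.2, 2.4, 2.0, 2.2]
bridging_defect = [2.1, 6.8, 2.2, 2.4, 2.0, 2.2]   # abnormally high current
open_defect = [2.1, 0.4, 2.2, 2.4, 2.0, 2.2]        # abnormally low current

print(supply_current_bist(good_device))      # False
print(supply_current_bist(bridging_defect))  # True
print(supply_current_bist(open_defect))      # True
```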
A more traditional signal generator is proposed for BIST
of ADCs in [4] and [96]. Specifically, tests are designed to measure the signal-to-noise ratio, gain tracking, and the frequency response of a sigma-delta ADC (Fig. 19). The stimulus is a precise multitone oscillator designed for an uncalibrated environment [97], [98]. The design of the oscillator is fully digital, except for an imprecise low-pass filter.