
DFT FAQs

Topics:

1. Digital circuits

2. Basics of DFT

3. Scan/Compressor implementation

4. ATPG

5. Static/DC faults

6. Bridging faults

7. TDF
8. IDDQ faults

9. Path delay faults

10. SDD (small delay defect)

11. GE (gate exhaustive)

12. BIST

13. FV

14. Simulations

15. JTAG

16. STA

17. QDRC

18. SpyGlass

19. Post-Silicon

1. Digital circuits

1. What is CABAC decoder?


2. What is the meta-stability state?

Ans: Any circuit in an unknown state is said to be in a metastable state. It is an unstable state in which the system is not able to settle into a stable '0' or '1' logic level within the time required for proper circuit operation, so the circuit behaves in unpredictable ways and this can lead to system failure. Metastable states are an inherent feature of asynchronous digital systems.

In electronics, the flip-flop is a device that is susceptible to metastability. It has two well-defined stable states, '0' and '1', but under certain conditions, such as setup or hold time violations, it can hover between them for more than a clock cycle. This condition is known as metastability.

The most common cause of metastability is a setup or hold time violation at a flip-flop. From the setup window to the hold window (the capture window), the data input of the flip-flop should remain in a stable logic state. A change of the data input within that window has some probability of driving the flop into metastability.

In a typical scenario where data travels from the output of a source flop to the input of a target flop, metastability is caused by either:

a) The target flop running at a different frequency than the source flop, in which case the setup and hold times of the target flop will eventually be violated if the timing between these two flops is not met, or

b) The target and source clocks having the same frequency, but with a phase alignment that causes the data to arrive at the target flop during its setup and hold window. This can be caused by fixed overhead and variations in logic delay on the worst-case path between the two flops, variations in clock arrival times (clock skew), or other effects.

3. How to make a flop using 2 latches?


A master–slave negative edge triggered D flip-flop is created by connecting two gated D latches in
series, and inverting the enable input to one of them. It is called master–slave because the second
latch in the series only changes in response to a change in the first (master) latch.
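A minimal Verilog sketch of this construction (module and port names here are illustrative, not taken from the text above):

module d_latch (input d, en, output reg q);
  // Transparent latch: q follows d while en is high
  always @(*) if (en) q = d;
endmodule

// Negative-edge-triggered master-slave D flip-flop from two latches:
// the master is transparent while clk is high, the slave while clk is low,
// so the output only updates on the falling edge of clk.
module ms_dff (input d, clk, output q);
  wire qm;
  d_latch master (.d(d),  .en(clk),  .q(qm));
  d_latch slave  (.d(qm), .en(~clk), .q(q));
endmodule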

4. What are the advantages and disadvantages of using synchronous reset?


Ans: Advantages: No metastability problem (provided recovery and removal times for the reset are taken care of); simulation is easy.
Disadvantages: Synchronous reset is slow; implementing a synchronous reset requires more gates than an asynchronous reset design; and an active clock is essential for a synchronous reset design, so you can expect more power consumption.

5. What are the advantages and disadvantages of using Asynchronous reset?

Ans: Advantages: Implementing an asynchronous reset requires fewer gates than a synchronous reset design, asynchronous reset is fast, and a clocking scheme is not necessary for an asynchronous reset; hence the design consumes less power.
Disadvantages: Metastability problems are the main concern of an asynchronous reset scheme, and static timing analysis and DFT become more difficult due to the asynchronous reset.
6. What is the effect of the above on coverage? How to get coverage on them?

Ans: If we don’t declare them as clocks then we lose coverage on those pins. So we generally declare set
and reset pins as clocks to get coverage on them.

7. How do you design a divide-by-2 clock and how will you test it?

Ans: We can use a D flip-flop to get a divide-by-2 clock. The input clock is connected to the clock pin of the D flip-flop, and the ~Q (NOT Q) output is fed back to the D pin. The Q output of the flop is then the input clock divided by 2.

We can write a test bench to verify the design, as in the sketch below.
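A minimal Verilog sketch of the divide-by-2 circuit (the reset is added only for simulation convenience and is an assumption, not part of the answer above):

module clk_div2 (input clk_in, rst_n, output reg clk_out);
  // ~Q fed back to D: clk_out toggles on every rising edge of clk_in,
  // so clk_out runs at half the input frequency.
  always @(posedge clk_in or negedge rst_n)
    if (!rst_n) clk_out <= 1'b0;
    else        clk_out <= ~clk_out;
endmodule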

8. How do you design a divide-by-3 counter?

Ans: Use two mod-3 counters, one built with positive-edge flops and the other with negative-edge flops. A combination of the outputs of these counters gives a divide-by-3 clock with 50% duty cycle, as sketched below.
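A minimal Verilog sketch of this scheme (assuming mod-3 counters and an OR of their terminal-count flags; names are illustrative):

module clk_div3 (input clk_in, rst_n, output clk_out);
  reg [1:0] cnt_p, cnt_n;

  // mod-3 counter on the rising edge
  always @(posedge clk_in or negedge rst_n)
    if (!rst_n) cnt_p <= 2'd0;
    else        cnt_p <= (cnt_p == 2'd2) ? 2'd0 : cnt_p + 2'd1;

  // mod-3 counter on the falling edge
  always @(negedge clk_in or negedge rst_n)
    if (!rst_n) cnt_n <= 2'd0;
    else        cnt_n <= (cnt_n == 2'd2) ? 2'd0 : cnt_n + 2'd1;

  // ORing the two terminal-count flags stretches the high phase to
  // 1.5 input cycles, giving a divide-by-3 output with 50% duty cycle.
  assign clk_out = (cnt_p == 2'd2) | (cnt_n == 2'd2);
endmodule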

9. How will you check whether a number has a 1 in the 3rd bit position from the LSB in a 32-bit register?

Ans: Do a bitwise AND between the 32-bit number and the mask 4
(32'b0000_0000_0000_0000_0000_0000_0000_0100) and check whether the result is non-zero, which is equivalent to simply reading bit index 2 of the register. Alternatively, the masked result can be fed to a 32-to-5 encoder to report the bit position. A small sketch follows.
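A minimal Verilog sketch of the check (module and signal names are illustrative):

module bit3_check (input [31:0] num, output bit3_is_one);
  // AND with the mask 32'h0000_0004 and reduce with OR;
  // this is equivalent to simply reading num[2].
  assign bit3_is_one = |(num & 32'h0000_0004);
endmodule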

10. What is clock feed through?

Ans: It is a special case of capacitive coupling. In an inverter, for example, there is coupling between the input and output through Cgd, which results in an overshoot on the output voltage. Clock feedthrough is the accumulation of a small charge on the source of a MOS switch after the switch has been turned off, due to the parasitic capacitance between the gate and the source of the transistor. It can be reduced by using a split-gate MOS transistor and continuously biasing one of the gates of the split-gate transistor. Because of this effect, there can be an overshoot in the potential when the MOSFET is switched off.

11. What do you mean by critical path?

Ans: The path through the logic which determines the ultimate speed of the structure is called the
critical path.

The path which has the maximum delays or the longest path in the design.

12. What is the difference between latch-based and flip-flop-based designs?

Ans: a) Latches are level-sensitive and flip-flops are edge-sensitive.

b) The key difference between latch-based and flop-based design is that a latch allows time borrowing, which a traditional flop does not. That makes latch-based design more efficient, but at the same time it is more complicated and has more min-timing (race) issues, and its STA with time borrowing in deep pipelining can be quite complex. Latches occupy less area than flip-flops.

13. What do local skew, global skew, and useful skew mean?

Ans: Local skew: the difference between the clock arrival time at the launching flop and at the capturing flop of a timing path.

Global skew: the difference between the earliest-arriving and latest-arriving clock among the flip-flops of the same clock domain.

Useful skew: deliberately delaying the clock path of the capturing flip-flop. This helps meet the setup requirement of the launch-to-capture timing path, but the hold requirement still has to be met for the design.

14. What are the various design constraints used while performing synthesis for a design?

Ans:

 Create the clocks (frequency, duty-cycle).
 Define the transition-time requirements for the input ports.
 Specify the load values for the output ports.
 For the inputs and outputs, specify the delay values (input delay and output delay) that are already consumed by the neighbouring chip.
 Specify the case settings (e.g., on a mux select) to report timing for specific paths.
 Specify the false paths in the design.
 Specify the multi-cycle paths in the design.
 Specify the clock-uncertainty values (for jitter and the setup/hold margins).

15. Difference between onehot and binary encoding?

Ans: Common classifications used to describe the state encoding of an FSM are Binary (or highly
encoded) and One hot. A binary-encoded FSM design only requires as many flip-flops as are needed to
uniquely encode the number of states in the state machine. The actual number of flip-flops required is
equal to the ceiling of the log-base-2 of the number of states in the FSM.
A onehot FSM design requires a flip-flop for each state in the design and only one flip-flop (the flip-flop
representing the current or "hot" state) is set at a time in a one hot FSM design. For a state machine
with 9 to 16 states, a binary FSM only requires 4 flip-flops while a onehot FSM requires a flip-flop for each
state in the design. FPGA vendors frequently recommend using a onehot state encoding style because
flip-flops are plentiful in an FPGA and the combinational logic required to implement a onehot FSM
design is typically smaller than most binary encoding styles. Since FPGA performance is typically related to the combinational logic size of the FPGA design, onehot FSMs typically run faster than a binary-encoded FSM, which has larger combinational logic blocks.
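A small Verilog sketch contrasting the two encodings for a hypothetical 4-state FSM (the state names and the trivial transition logic are made up for illustration):

module fsm_encoding_demo (input clk, rst_n, go, output done);
  // Binary encoding: ceil(log2(4)) = 2 state flops would be enough
  localparam [1:0] B_IDLE = 2'd0, B_RUN = 2'd1, B_WAIT = 2'd2, B_DONE = 2'd3;
  // One-hot encoding: one flop per state, exactly one bit set at a time
  localparam [3:0] H_IDLE = 4'b0001, H_RUN  = 4'b0010,
                   H_WAIT = 4'b0100, H_DONE = 4'b1000;

  reg [3:0] state; // one-hot state register (4 flops instead of 2)
  always @(posedge clk or negedge rst_n)
    if (!rst_n) state <= H_IDLE;
    else case (state)
      H_IDLE:  state <= go ? H_RUN : H_IDLE;
      H_RUN:   state <= H_WAIT;
      H_WAIT:  state <= H_DONE;
      default: state <= H_IDLE;
    endcase

  // Output decode in one-hot style is a single bit test
  assign done = state[3];
endmodule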

16. What is a false path? How is it determined in a circuit? What is the effect of a false path in a circuit?
Ans: By timing all the paths in the circuit the timing analyzer can determine all the critical paths in the
circuit. However, the circuit may have false paths, which are the paths in the circuit which are never
exercised during normal circuit operation for any set of inputs. An example of a false path is shown in
figure below. The path going from the A input of the first MUX through the combinational logic and out through the B input of the second MUX is a false path. This path can never be activated, since if the A input of the first MUX is selected, then the Sel line will also select the A input of the second MUX. STA
(Static Timing Analysis) tools are able to identify simple false paths; however they are not able to
identify all the false paths and sometimes report false paths as critical paths. Removal of false paths
makes circuit testable and its timing performance predictable.

17. What is Clock Gating?


Ans: Clock gating is one of the power-saving techniques used on many synchronous circuits including the
Pentium 4 processor. To save power, clock gating refers to adding additional logic to a circuit to prune
the clock tree, thus disabling portions of the circuitry where flip flops do not change state. Although
asynchronous circuits by definition do not have a "clock", the term "perfect clock gating" is used to
illustrate how various clock gating techniques are simply approximations of the data-dependent
behavior exhibited by asynchronous circuitry, and that as the granularity on which you gate the clock of
a synchronous circuit approaches zero, the power consumption of that circuit approaches that of an
asynchronous circuit.

18. What is physical verification?


Ans: Physical verification of the design involves DRC (design rule check), LVS (layout versus schematic) check, XOR checks, ERC (electrical rule check), and antenna checks.

19. Give 5 important Design techniques you would follow when doing a Layout for Digital Circuits?
Ans:
 In digital design, decide the height of the standard cells you want to lay out. It depends on how big your transistors will be. Have a reasonable width for the VDD and GND metal paths. Maintaining a uniform height for all the cells is very important, since this helps the place-and-route tool and, if you want to do manual connection of the blocks, it saves a lot of area.
 Use one metal in one direction only; this does not apply to metal 1. Say you are using metal 2 for horizontal connections; then use metal 3 for vertical connections, metal 4 for horizontal, metal 5 for vertical, and so on.
 Place as many substrate contacts as possible in the empty spaces of the layout.
 Do not use poly over long distances, as it has a huge resistance, unless you have no other choice.
 Use fingered transistors as and when you feel necessary. Try to maintain symmetry in your design. Try to implement the design in a bit-sliced manner.
21. Why are most interrupts active low?

Ans: This explains why most signals are active low:

If you consider the transistor level of a module, the output capacitance gets charged or discharged on low-to-high and high-to-low transitions respectively. A high-to-low transition depends on the pull-down device, and it is relatively easy for the output capacitance to discharge through the pull-down than to charge through the pull-up. Hence people prefer using active-low signals.
22. Give two ways of converting a two-input NAND gate to an inverter?
Ans:
(a) Short the two inputs of the NAND gate together and apply the single input to them.
(b) Tie one input to logic '1' and apply the input signal to the other input.

23. How can you convert an SR Flip-flop to a JK Flip-flop?


Ans:
By adding feedback: AND the external J input with ~Q to form S, and the external K input with Q to form R. The J and K inputs then behave like those of a JK flip-flop.

24. How can you convert the JK Flip-flop to a D Flip-flop?


Ans:
By connecting the D input directly to J and to K through an inverter.

25. What is Race-around problem? How can you rectify it?


Ans:
If the clock pulse remains in the 1 state while both J and K are equal to 1, the output will complement again and keep complementing until the pulse goes back to 0; this is called the race-around problem. To avoid this undesirable operation, the clock pulse must have a duration shorter than the propagation delay of the flip-flop. This is restrictive, so the alternative is a master-slave or edge-triggered construction.

2. Basics of DFT

1. What is DFT? What is the necessity of DFT in any chip? Basic advantages?

Ans: Design For Testability is a set of techniques for detecting manufacturing defects that occur during production. DFT is necessary for any chip because, if a defect occurs during manufacturing, we need to be able to detect it and screen out the defective parts.

2. What is the difference between DFT and verification?

Ans: In DFT we do structural tests, whereas in verification the functionality of the chip is tested.

3. Can you explain the normal flow of DFT?

Ans: After getting gate level netlist from the synthesis team (DC netlist) the DFT flow will start.

1) MBIST RTL generation for the memories.
2) JTAG RTL generation and hook-up.
3) Synthesizing the MBIST and JTAG RTL and integrating with the DC netlist.
4) Scan insertion and integration with the MBIST netlist.
5) FV on pre- and post-DFT netlists.
6) Handoff to the PD team.
7) ATPG pattern generation and coverage analysis.
8) Scan chain validation on post-PD netlists.
9) ATPG pattern generation on the final tapeout netlist.
10) Pattern verification and handoff to the tester.

4. Explain the test cost?

Ans: Test cost depends on the test time and the number of chips we can test at a time on the tester.

The length of the longest chain and how well the chains are balanced also need to be considered. Test time can be reduced by reducing the number of patterns, and we can test more chips at a time by reducing the number of pins used per chip.

5. What is meant by test plan?

Ans: Test plan is the document which is prepared before any execution of the DFT project by the DFT
Lead. This document gives us the information about the

a) List of blocks
b) Overall architecture
c) How it interlinks with other blocks
d) DFT schemes that are going to be implemented
e) Clocking diagram
f) Reset structure
g) Type of DFT architecture planned
h) Clocking frequency; whether scan ports are shared or standalone; types of faults targeted and their target frequency
i) Limitations, if any, for the implementation
j) List of DFT-related ports
k) List of blocks for scan
l) Any third-party IPs

Planning the following things

Number of test clocks, number of scan-ins and scan-outs, maximum length of the scan chains, target shift frequency, types of faults targeted, and tools and their versions.

6. Explain ASIC flow?

Ans:
We need to give more explanation of the DFT and PD steps.
7. What kind of projects you have done ?
8. What are the tools used?

Ans: DFT Compiler, Design vision, Tetramax, Conformal LEC, ET, modelsim, PTSI

9. IP cores used in the design and their specs?


10. What is the biggest challenge for you in your DFT experience?
11. What is a burn-in test? What kind of patterns are used for this test (capture or integrity)? Why can't we loop scan integrity patterns?

12. From top level how to test subchips?

Ans: To test sub-chips from the top, we need to enable that particular core by programming the core-test control module and provide the tool with the instance name of that particular core, so that it adds faults on that instance. We have to add no-faults on the other cores which we are not targeting.

13. What kind of inputs should be given to designer, so that design is DFT friendly?

Ans: All the clocks, sets, and resets should be controllable from the top level.

14. How do you design a logic which is DFT friendly?

Ans: 1. Need to have control on clocks of each design flop from primary input.
2. Need to disable tri-state bus contention
3. Need to minimize redundant logic in design
4. Synchronous and asynchronous logic must be separated
15. What is IODFT? What was the speed?

16. Explain low power DFT?


Low-power DFT is the concept of controlling the power numbers during shift. It can be looked at from many angles; one is how much the functional logic toggles during shift, and how we limit the toggle rate by gating the clock enables.

17. Can we do functional testing at the wafer level? If yes, how?

18. Did you insert EDT in any of the designs?

19. On what basis EDT inserts lock-up latches?


Ans: At decompressor side, it considers the clock edges of both edt_clock and first scan cell in the
internal scan chain.
At compactor side, it considers clocking of last scan cell in the internal scan chain and the presence of
pipeline stages in compactor.
In bypass logic, it checks for clocking of the scan cells, which are going to be concatenated in one chain.
20. What are the EDT pins which can’t be shared with functional pins?
Ans: Except edt_clock all other EDT pins can be shared with functional pins.
21. In which cases TestKompress adds 2 latches?
Ans:TestKompress adds 2 latches in the case in which edt_clk is posedge triggered and internal chain
clock is negative edge triggered.

22. How do EDT patterns look during load/unload, i.e., how many shifts are required?

Ans: During load, the number of shifts = length of the longest scan chain + initialization cycles.
During unload, the number of shifts = length of the longest scan chain.

23. What is scan? Why do you insert scan in your design?

Ans: The gate-level netlist from the synthesis team contains flops that have test ports (SI tied to '0' and SO floating). We stitch these flops so that every SO port is connected to the SI port of another flop, and divide them into a number of chains. By doing this we get controllability and observability at each and every node. This process is called scan, and the chains are called scan chains. If a flop is in a scan chain, it means the clock and reset of that flop are under our control. A minimal sketch of a muxed-D scan flop is shown below.
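A minimal Verilog sketch of a muxed-D scan flop (names are illustrative):

module scan_dff (input clk, d, si, scan_en, output reg q);
  // Functional mode (scan_en = 0): capture d.
  // Shift mode (scan_en = 1): capture si, so chained flops form a
  // shift register from scan-in to scan-out (q of one flop drives
  // si of the next; the last q is the chain's scan-out).
  always @(posedge clk)
    q <= scan_en ? si : d;
endmodule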

24. Explain Scan Capture and Scan shift?


Basic steps:

1) Load the scan chains with known values.
2) Apply the primary inputs.
3) Observe the primary outputs.
4) Pulse the capture clock.
5) Unload the scan chains.

25. Is it possible to shift patterns into scan chains at maximum frequency of 100MHz? what are pros and
cons of scan shifting?

Ans: Generally no. The maximum shift frequency is limited by what the tester supports, and we normally do the shifting at a slow frequency.
While shifting, the whole design is active, and shifting at higher frequencies draws more current (IR drop) than the circuit may be able to bear. If the circuit can withstand that much current, then we can shift the patterns at 100 MHz as well.

26. Why we are using slow scan clock for shift? What is the typical scan clock frequency?

Ans: The chip may burn if we shift with a fast clock because of the high IR drop. A typical shift frequency is 25 MHz.

27. How many bits will be allocated for integer in verilog?

Ans: 32

28. What kind of inputs do you expect from the designers?

Ans: Clocks (Shift clocks and Capture clocks). OFF state of the clocks.
1. Resets and their OFF states
2. The instances to be made as non-scan.
3. Any pre-existing Scan-chains
4. Number of Scan chains to be stitched.
5. Whether the clocks and edges to be merged
6. Whether Scan-pins to be shared with functional pins, if yes with which pins.

29. Difference between defect, fault and failure

Ans: Defect is a physical imperfection, flaw that may lead to a fault. Fault is a representation of a defect
reflecting a physical condition that causes a circuit to fail to perform in a required manner.
Failure is a deviation in the performance of a circuit or system from its specified behavior and represents
an irreversible state of a component such that it must be repaired in order for it to perform the desired
action.

30. What is observability and controllability?

Ans: Controllability reflects the difficulty of setting a signal line to a required logic value from primary
inputs
Observability reflects the difficulty of propagating the logic value of the signal line to primary outputs.

31. What is serial and parallel loading?

Ans: Parallel patterns are forced in parallel (at the same instant of time) at the SI of each flop and measured at the SO. These patterns are used to simulate the patterns faster: only two cycles are required per pattern, one to force all the flops and one to capture.

Serial patterns are the ones used on the tester. They are serially shifted in, captured, and shifted out.

32. Difference between sequential and combinational ATPG?


Ans: The basic difference between combinational and sequential ATPG is that, during sequential ATPG,
one test vector may be insufficient to detect the target fault, because the excitation and propagation
conditions may necessitate some of the flip flop values to be specified at certain values.

33. What is a fault model?

Ans: A fault model is an engineering model of something that could go wrong in the construction or
operation of a piece of equipment. From the model, the designer or user can then predict the
consequences of this particular fault. Fault models can be used in almost all branches of engineering.

34. How many fault models are there?

Ans: There are 6 basic categories of fault models. They are stuck at faults, transition faults, open & short
faults, delay faults & cross talk, pattern sensitivity and coupling faults, analog faults.

35. What is the difference between logic transition fault and memory transition fault?

Ans: The transition fault model also called gross-delay fault model, is a special case of the gate-delay
fault model in which the fault is assumed to be of the same order of magnitude as the clock period. The
transition fault model is used to cover delay effects that are generated by localised defects and whose
sizes are of the order of magnitude of the clock cycle or of the test pattern period. In a memory transition fault, on the other hand, the cell fails to make a 0->1 or 1->0 transition correctly.

36. What is BIST?

Ans: The variety of testing challenges during wafer probe, wafer sort, pre-ship screening, incoming test of chips and boards, test of assembled boards, system test, periodic maintenance, and repair test makes testing with external ATPG patterns expensive and time-consuming. Incorporating BIST at the design stage is one solution: in logic BIST, the pattern generator and the output response analyzer for the functional circuitry are embedded in the chip or somewhere on the board. There are two general categories of BIST: 1. online BIST and 2. offline BIST. Online BIST is performed when the functional circuitry is in normal operational mode; it can be done either concurrently or non-concurrently. Offline BIST is performed when the functional circuitry is not in normal mode. This technique does not detect any real-time errors but is widely used in industry for testing the functional circuitry at the system, board, or chip level to ensure product quality. A toy sketch of the on-chip pattern generator and response compactor follows.
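A toy Verilog sketch of the on-chip pattern generator (LFSR) and response compactor (MISR) used in logic BIST; the 4-bit width and polynomial are illustrative only:

module lfsr4 (input clk, rst_n, output reg [3:0] q);
  // Fibonacci LFSR with taps 4 and 3 (x^4 + x^3 + 1), maximal length
  always @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 4'b0001;               // non-zero seed
    else        q <= {q[2:0], q[3] ^ q[2]};
endmodule

module misr4 (input clk, rst_n, input [3:0] resp, output reg [3:0] sig);
  // Same feedback structure with the circuit response XORed in each
  // cycle; the final value of sig is the signature that is compared
  // against the expected (golden) signature.
  always @(posedge clk or negedge rst_n)
    if (!rst_n) sig <= 4'b0;
    else        sig <= ({sig[2:0], sig[3] ^ sig[2]}) ^ resp;
endmodule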

3. Scan/Compressor implementation

1. What is compression? What are its advantages and disadvantages compared with uncompressed mode (e.g., pattern count, coverage, test time, etc.)?

Ans: The concept of compression comes into the picture when we have a very large number of patterns, the test time is long, or the chain length is long. Considering these factors, we implement a compression architecture, in which:
 The internal chain length is made smaller.
 The number of chains is increased.
 More than one compressor can be planned if there is more than one core.
Test compression involves compressing the amount of test data that must be stored on automatic test equipment (ATE) for testing with a deterministic test set. This is done by adding some additional on-chip hardware before the scan chains to decompress the test stimulus coming from the ATE, and after the scan chains to compress the response going to the ATE. Coverage will be slightly less in compression (LPC) mode, the pattern count is a little higher, and the test time is lower because of the reduced shift time.

2. Without scan design, is it possible to check manufacturing faults with small circuits?

We can check manufacturing faults using functional test cases, but the run time would be extremely long.

3. What is pipeline flop? How many pipe line flops are there in top? What is the advantage of it? Is
there any extra option we use to insert a pipeline flop while insertion?

The concept of pipelining comes into the picture where there is a possibility of a timing issue: data coming from the scan-in pin to the first flop may have some delay, so by adding pipeline flops we synchronize the data to the required shift speed.
A pipeline flop is inserted between the scan-in pin and the scan-in of the first flop in the chain, and between the scan-out of the last flop in the chain and the scan-out pin, to reduce the delay on these long wires. The number of pipeline flops required depends on the delay from the scan-in pin to the first flop and from the last flop to the scan-out pin. We have to specify the scan clock with which the pipeline registers operate and the number of pipeline registers to be used.
4. What is the clock connected to the pipeline flop?
There is a dedicated pipeline clock whose frequency is the same as the shift clock.
5. What is X-tolerance and X-blocking?
X-blocking is where we insert masking logic between the stump outputs and the inputs of the compactor logic to block X values shifted out from internal stumps, so that they do not corrupt the scan data outputs.
6. What is clock mixing and edge mixing? How you will achieve clock mixing and edge mixing?

Clock mixing:

When we want to merge the scan chains of two different clock domains, we use the concept of clock mixing. This is done by adding a lock-up latch between them.

Edge mixing:

When we have flops of both edges and want to put them in a single scan chain, we do edge mixing. This can be done in two ways: 1) negative-edge flops are placed first, followed by positive-edge flops, or 2) positive-edge flops first, then a lock-up latch, followed by the negative-edge flops.

Ans: If a scan chain operates with more than one shift clock, it is called clock mixing. A single scan chain containing both positive- and negative-edge flops is called edge mixing. By adding a latch between the edge-mixed or clock-mixed flops, we can achieve this with no data loss.
7. How do you decide No.of scan chains and scan chain length?

Ans: Based on compression factor.

The number of scan chains depends on:

 How many channels the tester supports.
 The grouping of clocks.
 Design-specific requirements (e.g., keeping a particular domain in a separate chain).

The scan chain length depends on:

 Factors such as clock domain grouping.
 Tester-supported constraints.
 The maximum chain length the tester supports.

8. How do you decide the No.of test clocks?


The number of test clocks can be determined by the number of functional clocks that control a block.

9. What is scan chain balancing? Why it is required?

Ans: Maintaining the same length for all stumps is called scan chain balancing. It is useful in reducing test cost.

Scan chain balancing is the concept of keeping an equal (or approximately equal) number of cells in each chain so that the chains are balanced.

This is required because, if there is no uniformity in the number of scan cells, one chain will take more shift cycles than a chain with fewer cells, so the test time (which is set by the longest chain) increases.

Test cost is directly proportional to test time.

10. What are the considerations for scan?


i. Number of scan inputs and scan outputs.
ii. Dedicated scan ports or shared scan ports.
iii. Compression factor.
iv. Number of shift clocks.
v. Capture methodology is LOC or LOS.
vi. On chip clock generator or off chip clock.
vii. Clock mixing is allowed or not.
11. What are the different approaches of scan?

Ans: Top down approach and Bottom-Up approach.


Top-down approach:

Scan insertion is done at the top level, i.e., the scan ports are defined at the chip top level and the scan stitching is done there.

Bottom-up approach:

Scan is inserted in each block, and the block-level chains are then stitched together at the top level.

Types of scan: full scan, partial scan, partition scan.

12. What is 8x, 16x, 64x compression? Which one are you using? Compare pattern counts and their advantages and disadvantages.
13. How is the compressor architecture of Synopsys for 8x?
14. What is scan protocol? What it contains?

Ans: The scan protocol file has the information about the scan chains and their shift-enable (SE) and test clock signals. It also has the initialization (setup) details required before entering test mode.

15. What is scandef file? What issues did you address in scandef file?
Scandef is one of the optional outputs of the scan insertion tool. It contains the order of the scan chains in a defined format; this format is used for scan chain re-ordering by the physical design team.

16. If the design goes through another optimization and goes out of sync with the scandef, how do you handle it?
17. What are the different types of scan flops? Explain the functionality of each, and which is better to use?

Ans: Muxed-D scan flop, clocked scan flop, LSSD flop.

18. What is the advantage of LSSD over mux-DFF and clocked scan?

Ans: We can avoid hold time violations using LSSD scan design method.

19. Difference between compression and LBIST?

Compression:
 Scan chain inputs and outputs need to be provided and observed through the chip's scan-in/scan-out pins.
 The pass/fail bit location can be predicted from the scan outs.
 No ROM is required, as the outputs of the stumps are observed directly on the scan-out pins.
 Easy to debug.

LBIST:
 An internal BIST controller takes care of providing the inputs and analyzing the outputs based on a seed value.
 The failure location cannot be predicted, as LBIST gives only pass/fail information, collectively called a signature (basically an XOR of the outputs).
 Requires a ROM holding the expected-value (signature) information, which increases the logic area.
 Difficult to debug.
20. What is compression factor for any design, how you will calculate?

Ans: compression ratio = (number of LPC chains) / (1.2 * number of external chains). It is good if it is < 30.
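For example (illustrative numbers): with 96 internal LPC chains driven through 4 external scan channels, the ratio is 96 / (1.2 * 4) = 20, which is within the < 30 guideline.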

21. How do you achieve optimal compression ratio? Which is the compression tool used in the project?

Ans: We can get the optimal compression ratio by increasing the scan chain length, so that the number of LPC chains reduces. I used the DFTC MAX tool for the compression.

22. Discuss diff types of compression technologies and its dis. adv and adv?

Ans: combinational XOR compression logic (DFTC MAX), sequential compression logic (EDT).

23. What is hierarchical compression flow and its adv?

Ans: If a core has a large number of flops (e.g., more than 2 lakh, i.e., 200,000), then we may end up with either a very high compression ratio or a very long scan chain. To avoid these kinds of scenarios, hierarchical compression is very useful: first we compress each sub-core within the core, and then we compress the overall core.

24. Mention few DRC's at scan implementation phase?

Ans: D1 - clock is not controllable D2 – reset is not controllable D3-set is not controllable.

25. What is compression Ratio and vector to flop count? Explain the significance?

Ans: V2F count = (max LPC chain length * number of patterns) / total flop count. The V2F ratio tells us how many patterns are required to test a single flop; a value of less than 10 is considered good.
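For example (illustrative numbers): with a maximum LPC chain length of 200, 5,000 patterns, and 500,000 total flops, V2F = (200 * 5000) / 500,000 = 2, comfortably below the guideline of 10.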

26. How do you define number of scan chains and compression ratio of the chip?
The number of scan chains in uncompressed mode is determined by the tester (the number of pins it can support for scan input and scan output). The compression ratio is determined by the tester memory available.
27. How do you handle multiple clocks during shifting?

Ans: We merge the clock domains during scan insertion to get balanced scan chains, so lock-up latches are added at the clock domain crossings, provided the skew of the second clock is less than the clock-to-Q delay of the first flop plus the data path delay. This is done to avoid possible data jumping during shift.

28. How many shift clocks do you have and how will you handle them?

Ans: There are 15 shift clocks at the top level. These come from 15 different TCKs, go through CRCs, CGCs, and CXCs, and feed the clocks of the core level. We can merge the clocks in a scan chain provided they come from the same TCK.

29. If you have design with single shift clock domain and multiple capture clocks, do you need to insert
lock-up latch at clock domain crossings?
Ans: Yes. Even though there is only one shift clock, there are different clock domains, and we merge the clock domains during scan stitching to get balanced chains. There might be skew between different clock domains, which could cause data jumping during shift.

30. Why do we need lockup latch?

Ans: We need a lock-up latch between clock-mixed or edge-mixed flops in a single chain to avoid hold time violations, as sketched below.
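A minimal Verilog sketch of a lock-up latch between the last flop of one clock domain and the first flop of the next (only the scan path is shown; names are illustrative):

module lockup_example (input clkA, clkB, scan_en, d_a, si, output so);
  reg ff_a, ff_b, lat;

  // Last scan flop of the clkA domain
  always @(posedge clkA) ff_a <= scan_en ? si : d_a;

  // Lock-up latch: transparent while clkA is low, so it holds the value
  // launched by ff_a for the second half of the cycle and gives ff_b
  // extra hold margin even if clkB arrives late (skewed) w.r.t. clkA.
  always @(*) if (!clkA) lat = ff_a;

  // First scan flop of the clkB domain (only its scan-input path is shown)
  always @(posedge clkB) ff_b <= lat;

  assign so = ff_b;
endmodule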

31. A lock-up latch is in your design, but the skew of the first clock is more than that of the second clock; will the design work?

No it will not work.

32. If we can not insert lockup latches because of area constraint, what will you do?

Ans: we can delay the clock.

33. Is there any alternative for lock_up latch?

Lock up flop.

34. Explain why we can't put positive flops first and then negedge flops?

Because by the end of the scan cycle the dataout of both posedge flop and negedge flop will have
same value, which means that scan shift doesn’t happen properly.

35. How did you take care of the negedge flops?

Ans: Negative edge flops are stitched at the beginning of scan chain, followed by positive edge flops.

36. What are the issues you faced? Explain with diagram?

37. Give different solutions, if your design has both posedge flops and negedge flops?

Ans: 1.Negative edge flops are stitched at the beginning of scan chain, followed by positive edge flops.

2. If pos-edge flops are stitched before neg-edge flop then a lockup latch is required between them.

38. Suppose you have a design with 4 different clock domains. In this case how do you insert scan chains? Do you go for multiple scan chains (one scan chain per domain), or a single scan chain that covers all the modules? Discuss the advantages and disadvantages.

Ans: I will go for a single scan chain per domain. If we stitch a single scan chain across all the different clocks, then we may lose data or the flops may capture wrong values, and timing closure becomes difficult.

39. Do you have dedicated pins for scan chains or shared with functional pins?

Ans: Dedicated pins can be used for the scan chains; in our design the scan pins are shared with functional pads (mostly GPIO pads).
40. How do you share functional pins for testing?

Ans: No modification is required for input pads, as the existing fanout can be used. For output pads, test modes are defined with muxes on the data and enable paths.

41. How do you go ahead in scan insertion if there are negative edge flops?

Ans: Negative edge flops are stitched at the beginning of scan chain, followed by positive edge flops.

42. How did you mix negedge and posedge flops?

Ans: Negative edge flops are stitched at the beginning of scan chain, followed by positive edge flops.

43. How top level stitching of scan chains was done?

We read the CTLs of all the sub-modules so that the sub-modules become part of the top-level scan chains, and the glue logic at the top is also included in the scan chains.

44. If you have done scan and ATPG at the module level, how is it done at the top level, and which clock is used?

Each clock at the module level is connected to some CRC or CXC at the top level, so during the initialization sequence we need to enable the seq_en for these clock root cells so that their respective clocks reach the modules.

45. What is a scan router? Explain with block diagram?

46. Inserted scan on RTL or netlist?

Ans: Netlist.

47. Do you know DC flow for scan insertion

Ans: YES.

48. Do you know magma flow

49. Do you know STA

50. What was the frequency for shift out MISR data?

4. ATPG Related:

1. What is meant by defect and fault?


Ans: A defect is a flaw or physical imperfection that may lead to a fault. A fault is an engineering model of a physical defect.
2. What is ATPG? What is ATPG efficiency?
Ans: Automatic Test Pattern Generation is an electronic design automation method/technology used to find an input (or test) sequence that, when applied to a digital circuit, enables automatic test equipment to distinguish between correct circuit behaviour and faulty circuit behaviour caused by defects. The generated patterns are used to test semiconductor devices after manufacture and, in some cases, to assist with determining the cause of failure (failure analysis).
The effectiveness of ATPG is measured by the number of modeled faults it detects and the number of generated patterns. These metrics generally indicate test quality (higher with more faults detected) and test application time (higher with more patterns).
ATPG efficiency is influenced by the fault model under consideration, the type of circuit under test (full scan, synchronous sequential, or asynchronous sequential), the level of abstraction used to represent the circuit under test (gate, register-transfer, switch), and the required test quality.
3. What is the difference between sequential ATPG and combinational ATPG?
Ans: The combinational ATPG method allows testing the individual nodes (or flip-flops) of the logic circuit without being concerned with the operation of the overall circuit. During test, the so-called scan mode is enabled, forcing all FFs to be connected in a simplified fashion, effectively bypassing their functional interconnections. This allows a relatively simple vector set to quickly test all the comprising FFs, as well as to trace failures to specific FFs.
Sequential ATPG searches for a sequence of vectors to detect a particular fault through the space of all possible vector sequences. Avani: combinational ATPG tests the combinational logic between the flops. To test this combinational logic, the sequential elements are connected back-to-back and the test vectors are shifted in; after shifting, the combinational logic is exercised and the responses are shifted out again. Combinational ATPG uses a single pulse in the capture cycle. Sequential ATPG is basically for faults which cannot be covered by a single capture pulse; to detect faults around non-scan cells, we need multiple capture cycles, which is handled by sequential ATPG.
4. What are the different algorithms in sequential ATPG and combinational ATPG ?which one currently
are you following?

Ans: 1. D-Algorithm (Avani: Single Stuck-At (SSA) and Multiple Stuck-At (MSA))
2. Path-Oriented Decision Making (PODEM)
3. Fanout-Oriented (FAN) algorithm
4. Pseudorandom test generation
Avani: Currently we are using the D-Algorithm, because this algorithm is easy to understand and very helpful for tracing the vectors back to the primary inputs.
5. In a circuit where the B input of an AND gate is connected to the output of a NOT gate, the input of the NOT gate is tied to the A input of the AND gate, and the output of the AND gate is Y: under which category of fault grouping does a stuck-at-1 fault at the B pin of the AND gate come?

Ans: Avani: AP faults


6. For AND gate suppose if B input of AND gate got tied to 1 and tell me the fault groupings at pins A,
B, Y ?
Ans: A – detectable(DT) , B – undetectable tied(UT) , Y- detectable(DT)
7. How do you get contention-free patterns during ATPG? What does the tool do?
Ans: The patterns generated might cause a contention after shifting. We can use the command “set bus
contention check on –atpg” . The tool rejects the patterns which might cause a possible contention.
8. Do you know about n-detect ?
Ans: The main concept behind N-detect is that instead of asking the ATPG tool to detect every fault at
least once, we ask the tool to detect every fault N different ways (where N is an integer).
The theory behind this is that even though single stuck-at faults do not accurately model real defects on
CMOS chips, patterns generated with stuck-at fault models do have very good properties that are
needed to detect almost any type of "logical" failure, namely fault sensitization and propagation to
observable flip-flops or outputs. So when we try to detect stuck-at faults in several different ways, we
are just trying to increase the chance for a real defect to be sensitized and detected.
The main drawback of N-detect is pattern explosion. Even with N being as small as 3 or 5, test pattern
size may grow very large. Also, there is no easy way to correlate N-detect patterns directly with test
quality.
Some other ATPG methods have been proposed recently to improve test quality with fewer patterns
than N-detect, such as Gate Exhaustive testing. Some of these new methods, however, are not
supported by most ATPG tools
9. How you take care of bidi contention?
Ans: All the Bidis were made as outputs during shifting. This was done by adding test logic on the
Bidi_Enable signal of the Bidis. The test logic was controlled using Scan_Enable pin. An OR gate was
added as test logic. The inputs to the OR gates were Functional Bidi_Enable and Scan_Enable. During
shifting the Scan_Enable signal will be ‘1’ so the Bidi acts as output and during capture Scan_Enable will
be ‘0’, so the functional Bidi_Enable determines whether the Bidi should act as input or output.
Also we will force all the Bidis to “Z” in the load_unload procedure of the test procedure file
10. How do you handle contention on Bidi pins?
Ans: We can add test logic to avoid contention on bidi pins during ATPG. Sometimes scan flops may control the enable signals of bidi pins, so contention may be caused during shifting of the scan chain. Another
reason is forcing improper values onto bidi pins from tester. This problem can be solved by adding test
logic on bidi enable pins during shifting and forcing Z values onto bidi pins during testing. Typically test
logic is a single OR gate connected to enable signal of bidi pin with inputs of the OR gate as scan_en and
design signal
11. How did you check bus contention during ATPG?
Ans: During ATPG, bus contention can be checked by using “set contention check on” command. We can
use two switches -atpg and –catpg with capture argument to check for contention during shifting and
after capture respectively.
12. What is the use of edt_update signal?
Ans: The edt_update signal is used to reset the flops in the decompressor logic and to transfer the mask data from the mask shift register into the mask hold register.
13. Did you write test procedure file for TestKompress?
Ans: Avani: Yes. The test procedure file is almost the same as for normal scan (without compression); the only change is that the compressor structures are defined in the procedure file.
14. How will you pulse your test clock and capture clock?
Ans: The test clock must be pulsed during shifting and capture clock must be pulsed during capture cycle
for all the patterns. Usually test clock is controlled by tester and capture clock might come from top
level or we can use internally generated clock for capture operation.Both test clock(shift clock) and
capture clock come from chip level IO for static testing.For transition delay fault testing,capture clock is
generated from internal logic.
15. Lock-up is used to solve problem during shift or capture?
Ans: During shift. Avani: when two different clock domains interact with each other, data may be missed because of the different clock edges (negative or positive), so a lock-up latch is required to keep the data transfer in sync.
16. What is an R15 violation? Will it affect coverage? Will it cause simulation failure?
17. What are S22 and S29 violations? Are you ignoring any S22 or S29? Why?
Ans: Avani: violation S22: Multiple clocks (c1 c2) were used to shift scan chain chain_name
First example: LE to LE devices, LOCKUP latch suggested.

All master gates of scan chain cells in the same scan chain should use a single shift clock. This rule is a
safeguard to try to avoid a situation where chip tester clock skew results in tester pattern failures during
the shifting of the scan chains. If a single clock is used then no clock skew due to the tester can occur. If
more than one clock is used then any skew can result in the rejection at the tester of potentially good
devices.
C1 and C2 are the clock port names involved, chain_name is the symbolic name of the scan chain as
defined in the DRC procedure file used with the run_drc command.
This check is performed by inspecting the clock sources of the state elements for all scan cells of a scan
chain without regard to the polarity of the clocks or the phase relationship between them.
Second example: TE to LE devices.

A rule violation occurs when a scan chain uses more than one clock to shift data. A potential problem
can exist, but further investigation by the user of the clock polarities involved is necessary. Only the first
occurrence of an S22 violation on each scan chain is identified so investigating one violation does not
mean there are not others on the same scan chain.
Note that the S22 message does not get reported on clock domain crossings that have been identified
with a DSLAVE lockup latch. This is only reported on clock domain crossings that have direct paths
between two scan Master flops.
S29: Dependant slave I1 (G1) can not hold same value as master I2 (G2)
A dependent slave must always carry the same value as its master. I1 and I2 are instance pathnames. G1
and G2 are the corresponding gate ID numbers.
If this rule is violated, the dependent slave might have a value different than its master either at the end
of the shift procedure or after a capture clock. This might lead to the generation of ATPG patterns that
fail in simulation.
TetraMAX performs this check using shift data. It considers the first device to be loaded during shift as
the master and the second device loaded during shift as the dependent slave.
18. Draw the clock-gating cell. In which mode do you use which enable?
Ans: Avani: A clock-gating cell propagates the clock signal to downstream logic only when the enable signal is asserted. During scan shift, if the enable signal is controlled by one or more scan flops, the shifting test data values cause the clock-gating signal to toggle. As a result, the clock signal does not reliably propagate through the clock-gating cell to the downstream scan flops during scan shift, resulting in scan shift violations.

The figure below shows an example where the enable signal of an integrated clock-gating cell is driven by a scan flop. The scan chain path is highlighted in blue. During scan shift, the clock-gating enable signal driven by FEG toggles, interrupting the scan shift clocks needed for FF1 and FF2.
To remedy this, most clock-gating cells have a test pin that forces the clock signal to pass through when it is asserted. This ensures that downstream scan cells reliably receive the clock signal and shift values along the scan chain.

The figure below shows a test-aware clock-gating cell with its test pin hooked up to the global scan-enable signal. During scan shift, FF1 and FF2 receive the clock signal and successfully shift data.
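A minimal Verilog sketch of such a test-aware clock-gating cell (assuming the usual latch-and-AND structure; names are illustrative):

module icg_cell (input clk, enable, scan_en, output gclk);
  reg en_lat;
  // Latch the (functional OR scan) enable while the clock is low so that
  // gclk cannot glitch; during scan shift (scan_en = 1) the clock always
  // propagates regardless of the functional enable.
  always @(*) if (!clk) en_lat = enable | scan_en;
  assign gclk = clk & en_lat;
endmodule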

19. If you have only one shift clock in your design, why do you need lock-up latches?
Ans: Avani: If you have only one shift clock in the design, then there is no need for lock-up latches, as there is no differing clock domain (negative or positive edge) during shift.
20. What is redundant logic?
Ans: Logic that has no controllability, i.e., logic that is never activated or exercised during operation, is called redundant logic.
21. How do you create controllability on those points?
Ans: By inserting test points.
22. How you control tri-state buses during DFT?
Ans: By disabling tri-state drivers of a bus except one of its drivers can resolve tri-state bus contention
during scan shifting process. This can be done by adding test logic with scan_en pin.
23. What are the issues you have come across while doing the ATPG verification?
Ans: Scan chain blockages (S1), clock is not controllable (C1), S22 (clock mixing), S29 (edge mixing), S19, S5 (gate traced twice during scan chain tracing), Z7 (cannot set all buses to a non-contending state), Z8 (unable to prevent contention for the bus).
24. After ATPG and Hands off, any issues on Silicon? What feedback did you receive?
Ans: We can expect feedback such as: the silicon is failing at these particular time instants and on these scan outs. Based on that, we re-test the chip in uncompressed mode to find out which flop is causing the failure.

25. Which types of patterns you generated and format of that patterns?
Ans: Verilog. Avani: patterns come in different formats such as STIL, Verilog (parallel and serial), and binary. We are using Verilog (serial) patterns and the STIL format.
26. How do you modify the design if it has combinational and sequential feedback loops?

Ans: Combinational feedback loops can be broken by inserting multiplexers. Avani: for sequential feedback loops, we break the feedback path and add a controllable flop there.
27. What is the difference between structural and functional vectors?

Ans: Functional vectors are the ones which test the critical functional logic.

Structural vectors are the vectors which target manufacturing defects.

28. What is the major problem faced in DFT with tri-state buffers and how is it resolved?
Ans: Tri-state buffers must be properly controlled in test mode; otherwise they can cause bus contention. To handle this problem we insert muxes, and the select lines of these muxes are controlled by JDRs to avoid contention.
29. How can the existing pattern be used without generating the entire set for Rev or ECO?(Resolve
difference)
30. IP level patterns were imported to top?
31. Which issues did you face during ATPG simulation? Why is ATPG simulation required?
Ans: The issues can be X mismatches or 0/1 mismatches. X mismatches could be because of improper compilation of models. 0/1 mismatches could be because of differences between the custom library cell netlist versions used for pattern generation and the compiled versions, or because of improper SDF files.

5. STATIC or DC Related:
1. What is the algorithm used for pattern generation? Can you generate patterns for the below circuit?

Ans: D- Algorithm & No pattern can detect SA0 on B.


2. What are the defects covered under stuck-at faults?
Ans: Stuck-at-zero and stuck-at-one.
3. For AND gate suppose A-input is connected to scan flop SFA and B-input is connected to scan flop
SFB and Y output of AND gate connected to scan flop SFY . And a scan chain is formed like SFA->SFB-
>SFY. So in this case how you will create transition from 0->1 on Y pin of AND gate?
Ans : First load the scan chain with 000 or 010 or 100, then force the func inputs such that D inputs of
SFA and SFB should get 11, which can capture a logic transition at Y.
4. What is the difference between stuck-at fault model and transition fault model
Ans: The stuck-at fault model detects stuck-at faults (stuck-at-0 and stuck-at-1), while the transition fault model detects slow-to-rise and slow-to-fall faults.

5. While doing stuck-at faults, is the entire design "ON"?


6. What is the state of Bidi pins during scan shifting?
Ans: All the Bidis were made as outputs during shifting. This was done by adding test logic on the
Bidi_Enable signal of the Bidis. The test logic was controlled using Scan_Enable pin. An OR gate was
added as test logic. The inputs to the OR gates were Functional Bidi_Enable and Scan_Enable. During
shifting the Scan_Enable signal will be ‘1’ so the Bidi acts as output and during capture Scan_Enable will
be ‘0’, so the functional Bidi_Enable determines whether the Bidi should act as input or output.

Also we will force all the Bidis to “Z” in the load_unload procedure of the test procedure file.
7. How did you control IOs during ATPG?
Ans: To prevent contention during ATPG, we are forcing high impedance values onto bidi pins in test
procedure file.

FAULTS AND COVERAGE Related:

1. How will you test faults on the reset and clock pins of scan flops?
Ans: By toggling the set and reset pins, declaring them as clocks.
2. Explain the fault modeling flow ?
3. What is fault collapsing? and how many types of fault collapsing methods?
4. In Equivalence fault collapsing and Dominance fault collapsing methods which method is good and
why?
5. What is test coverage and fault coverage? Which one does the industry generally use?
Ans: Test coverage gives the most meaningful measure of test pattern quality. Test coverage is defined as the percentage of detected faults over detectable faults:
Test coverage = [(DT + (PT * PT_credit)) / (All faults - UD - (AN * AU_credit))] * 100
Fault coverage is defined as the percentage of detected faults out of all faults:
Fault coverage = [(DT + (PT * PT_credit)) / All faults] * 100
In general, the industry uses test coverage.
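For example (illustrative numbers, with PT_credit = 0.5 and AU_credit = 0): if there are 1,000,000 total faults with DT = 950,000, PT = 10,000, UD = 20,000 and AN = 5,000, then test coverage = (950,000 + 5,000) / (1,000,000 - 20,000) = 97.45%, while fault coverage = 955,000 / 1,000,000 = 95.5%.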
6. Coverage is calculated at shift phase or capture phase?
Ans:At capture phase.
7. What is a drc violation ? Explain the violations you faced?how did you fix them?
Ans: DRC stands for Design Rule Checking. We have a set of design rules to follow, and violating these rules is called a DRC violation.
1. Pre-DFT DRC (can be done using SpyGlass at the RTL stage, or by DFTC MAX or TetraMAX before implementation).
Ex: clk/reset are not controllable.
2. Post-DFT DRC (done by TetraMAX after scan implementation and on the PD netlists).
Ex: rules C, R, S, V, B, X, Z, etc.
8. What are ram sequential patterns?
Ans: To propagate faults through RAMs, FastScan generates RAM sequential patterns. These patterns require multiple loads to control the data, address, and control signals of the RAMs. During RAM sequential patterns, data is written into the memories and read back.
9. What are ATPG RAM sequential tests?
10. What are the different techniques to test RAM shadow logic?
11. What are equivalence faults? In a CTS tree how many faults will there be? How many patterns are required to test a CTS tree buffer?
12. Difference between Named Capture Procedures and Clock Procedures?
13. What is observability and controllability?
Ans:The ability to set some circuit nodes to a certain states or logic values is called controllability.
The ability to observe the state or logic values of internal nodes is called observability.
14. Usually we aim for >99% fault coverage for stuck-at faults. What makes it so difficult for delay tests that we settle for close to 85% fault coverage for them?
15. What was the coverage improvement you got by changing constraints on some of the pins?
16. Explain the wrapper which was put in order to improve the coverage?
17. Did you used functional patterns for getting coverage?
18. How gs50 memory wrappers were connected to logic chain in DFTC?
19. What is a not-controllable (NC) fault and a not-observable (NO) fault? Give an example.
Ans: The NC fault class indicates that no pattern has yet been found that would control the fault site to the state necessary for fault detection. This is the initial default class for all faults.
The NO fault class indicates that, although the fault site is controllable, no pattern has yet been found to observe the fault, so credit cannot be given for detection.
20. What is test point insertion?
Ans: To improve the observability and controllability at particular nodes, we add flops in the test path. These are called test points.
21. Which clock will you use for the TPI? Why?
Ans: While inserting test points, we add flops into the scan chains, so we use a single (shadow) clock which is synchronous with all the other test clocks. A sketch of control and observe test points is shown below.
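A minimal Verilog sketch of a control point and an observe point (the structure shown is one common style and the names are illustrative):

module test_points (input tp_clk, test_mode, scan_en, si,
                    input func_sig, hard_to_observe,
                    output controlled_sig, output reg obs_q);
  reg ctrl_q;

  // Control point: a scan-loadable flop whose value can force the
  // hard-to-control node to 1 (through the OR gate) in test mode.
  always @(posedge tp_clk) ctrl_q <= scan_en ? si : ctrl_q;
  assign controlled_sig = func_sig | (test_mode & ctrl_q);

  // Observe point: an extra scan flop that samples the hard-to-observe
  // node during capture and makes it visible through the scan chain.
  always @(posedge tp_clk) obs_q <= scan_en ? ctrl_q : hard_to_observe;
endmodule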
22. What techniques do you follow to improve the test coverage?
Ans: We need to analyze the AU faults and also the violations which affect test coverage.
1. If there are any unnecessary constraints, we have to remove them.
2. Toggle RESET signals, i.e., define RESET signals as clocks.
3. If there are any non-transparent latches (D5 violation), they might obstruct the
propagation of faults. We can make them transparent by adding test logic.
4. If there are any macros such as memories in the design, we cannot get coverage on the
logic around the macro, because the macro makes the surrounding faults either uncontrollable or
unobservable. We can get coverage on this logic by adding bypass logic, which makes those faults
controllable and observable (see the sketch below).
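A minimal, illustrative-only sketch of such memory bypass logic (module and signal names are assumptions): in
test mode, the logic downstream of the memory sees the memory's data inputs instead of its outputs, so the
shadow logic becomes controllable and observable.

module mem_bypass #(parameter W = 32) (
  input  wire         test_mode, // asserted during ATPG
  input  wire [W-1:0] mem_din,   // data being written toward the memory
  input  wire [W-1:0] mem_dout,  // data read from the memory macro
  output wire [W-1:0] dout       // what the downstream (shadow) logic sees
);
  // Bypass the macro in test mode so faults around it can be detected.
  assign dout = test_mode ? mem_din : mem_dout;
endmodule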
23. How do you achieve high fault coverage?
Ans: Fault coverage increases as more faults are moved into the DT class.
24. What are ND, AU faults? How will they affect coverage?
Ans: ND faults are Not Detected and AU faults are ATPG Untestable. If the ND and AU fault counts are high,
fewer faults are detected, so the coverage decreases. If AU and ND faults can be converted to DT, the
coverage improves.
25. What is a fault? How many fault models are there? Explain.
26. What are the different types of fault classes? (How many types of fault groupings are there?)

Ans: DT, PT, UD, AU and ND are the different fault groupings.

The fault classes under each group are as follows:

DT - Detected
DR - Detected Robustly
DS - Detected by Simulation
DI - Detected by Implication
PT - Possibly Detected
AP - ATPG Untestable Possibly Detected
NP - Not analyzed, Possibly Detected
UD - Undetectable
UU - Undetectable Unused
UO - Undetectable Unobservable
UT - Undetectable Tied
UB - Undetectable Blocked
UR - Undetectable Redundant
AU - ATPG Untestable
AN - ATPG Untestable Not-Detected
AX - ATPG Untestable Timing Exceptions
ND- Not Detected
NC - Not Controlled
NO - Not Observed

27. If you have 10,000 AN faults, how will you analyze them?

6.Bridging Fault Related :

1. Explain the term Bridging Faults.


Ans: A short between two elements is commonly referred to as a bridging fault. These elements can be
transistor terminals or connections between transistors and gates. The case of an element being shorted
to power (VDD) or ground (VSS) is equivalent to the stuck-at fault model; however, when two signal
wires are shorted together, bridging fault models are required.
2. What are the inputs required to generate the bridging faults?
3. What are the types of Bridging Faults?
Ans:
a) Wired AND/Wired OR
b) Dominant AND/Dominant OR
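As an illustrative example (hypothetical nets): suppose net A currently carries 1 and net B carries 0, and the
two are bridged. Under the wired-AND model both nets resolve to 0; under wired-OR both resolve to 1; under
A-dominant bridging B is forced to A's value (1); under B-dominant bridging A is forced to B's value (0). A test
must drive the two bridged nets to opposite values and observe the resulting change.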
4. How are bridging faults detected?
Ans: These faults can be detected by IDDQ testing. Although bridging faults can be modeled at the gate level,
practical selection of potential bridging fault sites requires physical design information. One way of determining
likely bridging fault sites is to extract the capacitance between wires from the physical design after layout
and routing. This provides an accurate determination of which wires are adjacent and, therefore,
likely to sustain bridging faults.

5. What are the disadvantages of bridging fault model?


• ATPG algorithms are more complex. Testing requires setting the two bridged nodes to opposite
values and observing the effect.
• It requires a lower-level circuit description for bridging faults within logic elements.

7.TDF OR AC Related :

1. What is at-speed?
Ans: Testing at the actual functional clock frequency of the design is called at-speed testing.
2. What are the different types of transition pattern simulation?
Ans: LOS (launch-on-shift) and LOC (launch-on-capture).

3. What is transition fault and how is At-Speed done?


Ans: Delay faults cause errors in the functioning of a circuit based on its timing. They are caused by the
finite rise and fall times of the signals in the gates, as well as the propagation delay of the interconnects
between the gates. The faults caused by the rise and fall times are called transition delay faults. Because of
the finite time it takes for a change at the input of a gate to show up at its output, faults may arise if the
signals are not given enough time to settle.
4. How do you make scan enable operate at-speed?
Ans: For LOS, scan enable must de-assert within one at-speed cycle after the last shift, so it is typically
regenerated/pipelined by logic clocked from the PLL (at-speed) clock, and care must be taken with its routing
so that it reaches the scan-enable pin of every flop in time (i.e., it is treated like a timing-critical clock).
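A minimal sketch of one common approach (illustrative only; signal names are assumptions): regenerate the
scan enable locally through a flop clocked by the at-speed clock, so the de-assertion edge seen by the scan flops
is launched by the fast clock rather than driven directly from the tester.

module se_pipeline (
  input  wire fast_clk,     // at-speed (PLL) clock
  input  wire scan_en_in,   // slow, global scan enable from the tester
  output reg  scan_en_out   // locally regenerated scan enable for nearby flops
);
  // The retimed scan enable de-asserts synchronously to fast_clk, easing the
  // timing-critical distribution of the global scan enable for LOS.
  always @(posedge fast_clk)
    scan_en_out <= scan_en_in;
endmodule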
5. How is a logic transition fault different from a memory transition fault?
Ans: A logic transition fault targets delay defects at each particular node.
In a memory, each cell stores one bit of information. A memory transition fault targets a cell that
fails to transition from 1 to 0 or from 0 to 1.

What is LOS and LOC? Which one gives more coverage? Why? Which one has more patterns?
Ans:

LOS
Advantages:
1. Combinational ATPG
2. Higher coverage
3. Fewer patterns
4. Faster run time
Disadvantages:
1. Must de-assert SE quickly, so scan enable must be routed as a timing-critical clock
2. Can create non-functional patterns, so it is possible to cause over-testing

LOC
Advantages:
1. Fewer requirements on the scan control logic
2. Shifting can be done at any frequency
3. Scan enable does not have to be routed as a critical clock or pipelined
Disadvantages:
1. Sequential ATPG (at least 2 system clock cycles)
2. Medium coverage, more patterns and longer run time

6. Any issues seen in LOS and LOC pattern simulation (at-speed)?


7. What was the method employed during TFT: launch-on-shift or launch-on-capture?
Ans: We need to program registers so that the logic that generates the at-speed clock pulses:
a. has the proper offset (dead time between de-assertion of SE and the at-speed clock pulses),
b. programs the PLLs to generate the proper clock periods,
c. programs the clock generation logic for the number of pulses to generate.
8. How many clock domains are in design? How many modules are there in design?
Ans: 123 clock domains and 29 modules.
9. Were the clock domains asyn or sync?
10. Will you do interdomain testing?If not,why?Two domains will talk to each other in the design?
11. How do you generate the patterns for each clock domain?How did you control the clocking(enabling
one clock at a time)
12. For TDF, were the patterns generated for LPC or LPU?
13. What is the maximum freq at which design is operating?
Ans: 740MHz.
14. How did you take care of flops in lower-frequency domains when you do at-speed testing?
Ans: We need to mask them from capturing values.
15. What kind of mode used in TFT testing ?
Ans: LOC mode.
16. What is the difference between L-o-C and L-o-S atspeed pattern generation styles , dis-adv and adv.?
Ans: 1. In LOS the launch happens through the shift path and capture is through functional path. In
LOC both launch and capture are through functional path.
2. Because in LOS the launch happens through shift path, the coverage is more. But in case of LOC
both launch and capture happen through functional path, so we have less control. Because of this the
coverage in LOC is less compared to LOS.
3. Whatever the coverage reported in LOC is the true coverage, as it considers only functional
paths. But a part of the coverage reported by LOS is not the true coverage.
4. LOC pattern count is more compared to LOS
5. In LOS we need to toggle the Scan Enable signal at-speed.
Disadvantages of LOS over LOC:
1. We need to toggle the Scan Enable signal at-speed. As the Scan Enable signal is routed to all the scan
flops in the design, we need to route it like a clock. This is an overhead, because we are spending clock-like
routing resources on a signal that is used only during testing of the chip.
On the other hand in LOC we can have enough number of dead cycles between the last shift and the
capture pulses (i.e., launch and capture pulses). So there is no need of at-speed toggling of Scan Enable
signal.
2. Another disadvantage associated with LOS is reporting of the false coverage

17. What was the method employed during the TFT (Transition Fault Testing): LOC or LOS?
18. Is there any other method than LOC and LOS for at-speed testing?
Ans: Launch on extra shift (LOES).
19. Compare the transition delay model versus the path delay model.
Ans: The transition delay model targets a logic transition at a particular node of interest, whereas the path delay
fault (PDF) model targets a delay defect at a node and its impact on a critical timing path.
20. How you will generate atspeed pulses for atspeed testing ?
Ans: By programming TCG block.
21. Why is the path delay model used when we already have the transition delay model?
Ans: The path delay model targets delay defects and their impact on critical paths, whereas TDF targets
a logic transition at a node, which may not be observed along the timing-critical paths.
22. If you are to advise the test engineer, which delay model do you prefer and why?

Ans: TDF, since it effectively targets every node (like stuck-at) at the functional frequency.

23. What are non-scan cells? Reasons for non-scan cells? How do you generate patterns for non-scan cells?

Ans: Non-scan cells are the sequential cells which were not included in the scan chain, for example latches and
clock-gating cells. Even though they are sequential elements, they do not participate in the scan (shift)
operation. Faults around them are targeted using multiple capture cycles (sequential patterns).

24. Define the functionality of the OCC circuitry.


Ans: The OCC (on-chip clock controller) circuitry uses the on-chip clock source (e.g., the PLL), rather than
off-chip clocks from the tester, to generate the high-speed launch and capture clock pulses.

25. What required changes to do At-speed testing?

Ans: We need to program TCG’s such that at speed clocks will reach the flops which are supposed to
operate at particular frequency. We need to program TCR such that only one frequency domain is
enabled.

26. How you handle capture clock during TFT mode, how you get required number of at-speed capture
clock pulses?
Ans: We had a 2:1 Multiplexer. The two inputs to the Multiplexer are Functional clock and Test clock and
the select line to the multiplexer is connected to the top-level Scan_Enable signal. During shift
Scan_Enable is ‘1’ and Test clock is selected and during the capture the Scan_Enable goes to ‘0’ and
Functional capture clock is selected.
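A minimal, illustrative-only sketch of that 2:1 clock multiplexer (a real implementation would normally use a
glitch-free clock-mux cell from the library; names are assumptions):

module scan_clk_mux (
  input  wire func_clk,     // at-speed functional / PLL clock (capture)
  input  wire test_clk,     // slow tester clock (shift)
  input  wire scan_enable,  // 1 = shift, 0 = capture
  output wire scan_clk      // clock distributed to the scan flops
);
  // Select the shift clock while scan_enable is high and the functional clock
  // during capture. The switch must happen in the dead cycles between the
  // last shift and the capture pulses to avoid glitches.
  assign scan_clk = scan_enable ? test_clk : func_clk;
endmodule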

If we have internal clocks coming from PLLs, we can control the at-speed capture pulses using
TCG(Clock-leaker). We can design our TCG(clock-leaker) to generate two at-speed pulses during capture.

If the functional capture clock is coming from the primary input, we can control the at-speed capture
pulses from the tester itself. In the capture procedure of test procedure file, this clock is pulsed twice, to
give two at-speed pulses required for LOC.

27. Did you face any clock-domain-crossing issues during TFT pattern generation?
Ans: No. The clock-domain-crossing paths were handled as false paths during ATPG, so they did not cause
problems during TFT.

28. How did you get exact coverage numbers with multiple capture clocks during TFT?
Ans: During TFT, we enabled one capture clock at a time. We wrote separate capture procedures for
each clock. During pattern generation, enabled the 1st capture procedure and generated patterns, saved
patterns and the fault list. Then switched off all the capture procedures and enabled 2nd capture
procedure, and loaded the fault list saved earlier with the switches “retain” and “protect”. The switch
“retain” retains the fault class of all the faults in the fault list and “protect” protects the faults detected
earlier. Because of this we get incremental coverage, and the same fault is not detected more
than once. We then reset all the AU faults and start creating patterns with the 2nd capture procedure.

The same procedure is carried out for all the capture clocks in the design.

29. How do you reduce the number of patterns if we have both LOS and stuck-at patterns?
Ans: We can first generate the LOS patterns and save the patterns and the fault list. If a node can make a
transition within the given time, it also means that the node does not have a stuck-at fault. So we can create
stuck-at patterns to detect only those faults which were not detected by LOS. This technique is called fault
grading.
30. How do you handle data between flops during clock domain crossing (during design, not in DFT)?
Which clock do you use for synchronization?
Ans: We can add synchronizers at the clock-domain crossings. The synchronizer should be clocked by the
destination (sink) clock domain.
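A minimal two-flop synchronizer sketch, clocked by the destination domain as described above (illustrative
only; names are assumptions):

module sync_2ff (
  input  wire dst_clk,    // destination (sink) domain clock
  input  wire rst_n,
  input  wire async_in,   // signal arriving from the source clock domain
  output reg  sync_out    // synchronized version, safe to use in the dst_clk domain
);
  reg meta;
  always @(posedge dst_clk or negedge rst_n)
    if (!rst_n) begin
      meta     <= 1'b0;
      sync_out <= 1'b0;
    end else begin
      meta     <= async_in;  // first stage may go metastable
      sync_out <= meta;      // second stage gives it a full cycle to resolve
    end
endmodule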
31. You have two shift clocks in your design, clkA operating at 100 MHz and clkB operating at 200 MHz.
How you shift data between two flops when data is going from clkA to clkB during scan shift.?

32. Why does LOS give more coverage than LOC?


Ans: LOS also detects faults along non-functional paths as well as the scan path, hence its reported test
coverage is higher than the real (functional) coverage.
33. Why does TFT generate more patterns than stuck-at?
Ans: A transition fault can be modeled as two stuck-at faults, so testing a transition fault can be viewed as
testing two stuck-at faults. Since the two vectors are not correlated, they are generated independently, which
increases the pattern count.
34. How do you take care of false and multi cycle paths?
Ans: We can load false and multi cycle path information into FastScan during ATPG using “read sdc”
command. Hence it places X value in the fault locations associated with false and multi-cycle paths.
35. Brief about the different modules and clock domains in your project?
Ans: We have multiple frequency domains which were tested independently of the other domains, and we
program the TCGs such that at any instant of time only one frequency domain is tested.
36. Were there PLLs in your design? Did you test them?
Ans: Yes.
37. How will you test inter-domain paths in a multi-domain design? (Suppose you have a design with two clock
domains, domain-A at 100 MHz and domain-B at 50 MHz; how will you test the paths between
those domains?)

Ans: Generally they are set as false paths and are not tested. Otherwise the designers introduce
synchronizers so that there is no data slip, and we need to test these synchronizer blocks.

38. Latch – how is it used in DFT to synchronize two clock domains?
39. Did every clock domain meet timing, or were there any timing exceptions?
40. Suppose if you want to test path from domain-B to domain-A then how you create launch and
capture pulses ?
41. Data logger circuit running at slow rate (tester freq.) then there was 4 clock domains (I told before
that design was having 3 clock domains), how come there were 3 clock domains and how you
transition patterns generated.
42. Suppose in launch-on-shift the SE has a skew of about one clock cycle: say SE goes low at one flop at 0 ns
and at another flop at 10 ns, so there is a skew of 10 ns, which is equal to one clock period. How will you
take care of this problem?

Ans: We need to delay the clock path of the capturing domain, or change the placement and routing so that the
cell which is seeing more skew is moved nearer to the launch cell, or delay the clock if there is enough setup
margin to the next cell.

(Enough buffers are added still having this problem).


8.IDDQ Related :

1. Why do we call IDDQ faults pseudo stuck-at faults?

Ans: Traditional test approaches use scan as a method to access the inputs and outputs of internal
combinational logic, so that patterns can be applied directly to the inputs of the combinational logic and
the response can be captured from the output of the logic. Stuck-at patterns are generally used. Even
though high stuck-at coverage can easily be achieved using these methods, the final quality is
sometimes below the desired level. IDDQ is used as an economical technique to detect defects missed by
the stuck-at test.

IDDQ test generation and fault simulation are often based on the pseudo stuck-at model. In this
model, stuck-at faults on the inputs of logic gates are considered detected if the fault is sensitized and
the effect is propagated through the gate. The fault effect need not propagate any further.

2. What is the basic difference between IDDQ patterns and stuck-at patterns?

Ans: IDDQ is a cost-effective test method, indispensable for identifying defects which are indiscernible
to conventional functional tests.

• The applied pattern needs only to sensitize the node. This offers an immense computational
reduction (about 1:7) over conventional functional test pattern generation.
• It has been shown that the number of IDDQ measurements required to reach a fault coverage
greater than 90% is relatively small (a two-digit number of patterns).

3. What is the IDDQ fault model? Why do we need IDDQ testing?


Ans: For high-complexity, high-density digital circuits, many physical defects cannot be detected by logic
testing alone. So measuring the quiescent current (IDDQ testing) is now considered an important part of
testing CMOS circuits, because it can detect physical defects which do not change the output function but
induce a large quiescent current to flow in the presence of the fault.
IDDQ test generation is easier than logic test generation, since the IDDQ measurement point is globally
observable; unlike logic testing, propagation of the fault effect to an output is not necessary.
In IDDQ testing the quiescent current is measured for a variety of states of the static CMOS circuit. A CMOS
circuit uses very little power (practically nothing, just leakage current) in standby mode, i.e. CMOS circuits
with increased leakage current are defective. Using the IDDQ fault model we can detect these kinds of faults.
The quiescent supply current (IDDQ) is the current that flows when the device is in a stable (and
repeatable) state; it is equal to the leakage current for logic structures, and to the combination of leakage
and bias currents in the more general case.
4. Can we do IDDQ testing for 90 nm or smaller technologies? If not, why? What are the problems?
Ans: IDDQ testing can still be done at 90 nm and below (and at larger nodes), but it becomes harder because the
background leakage current increases with scaling, so separating defective devices from good ones may require
techniques such as delta-IDDQ or current-signature analysis.

5. What was coverage for iddq?


Ans: 77.16%(testmode)
127149329 - Total faults
100657664 - Tested
26491665 - untested
6. How many patterns generated? Why no of patterns are less?
Ans: 20 patterns.
7. Can we use functional patterns for IDDQ?
Ans: IDDQ is intended to complement, not replace, functional test, for several reasons:

• Neither conventional functional test nor IDDQ alone detects 100% of the defects.

• The IDDQ test is not always run at the maximum specified frequency, due to test method
constraints.

• The voltage and current requirements (Vil/Vih, Iol/Ioh, Vdd) are different in conventional functional
test than in IDDQ.

8. Explain IDDQ fault escapes in the presence of stuck-at faults, with an example.

Ans: IDDQ testing is based on the premise that CMOS circuits draw extremely low leakage current when no
transistors are switching. The ideal IDDQ-testable circuit is a fully complementary, fully static CMOS design.
Designs that are not ideal can still be IDDQ tested, but any deviation from ideal CMOS has a negative impact on
the maximum IDDQ test coverage that is possible for the design. IDDQ tests have proven effective in detecting
many faults that would otherwise remain undetected. In the quiescent state a good circuit typically draws less
than 10 microamps; any defect causing a higher IDDQ than the assumed threshold value can be detected by
monitoring IDDQ.

9. What is the diff between iddq and stuck at faults ?

Stuck-at faults:
- Used in IC manufacturing to increase the overall testability of a circuit.
- The area and design-time overhead are very high.
- Test generation takes more time than IDDQ.
- Test vector sets are big.

IDDQ faults:
- IDDQ is another test technique used in IC manufacturing. It covers manufacturing defects such as bridging
faults. The idea of this test is to measure the drain current through the chip while it is in the static state.
- The area and design-time overhead are very low.
- Test generation is fast.
- Test vector sets are small.

10. Explain about IDDQ faults?How the IDDQ coverage calculated?


11. What procedure you follow for improving the IDDQ coverage?
12. Did you generate IDDQ patterns in your design? What precautions did you take for that?
Ans: Yes, we generated the IDDQ patterns.
Precautions:
1. Constraints are proper.
2. Analog blocks should be in sleep mode.
3. Memories should be in sleep mode.

9.PDF - Path Delay Fault Related :

1.Have you tried path delay ATPG?


Ans: Path delay fault is an at-speed fault detection technique which targets the critical paths in the design.
A path delay fault is a delay defect that causes the cumulative delay of a combinational path to exceed some
specified duration. The combinational path begins at a primary input or a flip-flop, contains a connected chain
of gates, and ends at a primary output or a flip-flop.

2. Did you generate the paths?How?


Ans: The critical path list is usually obtained from the physical design/STA engineers, or it is provided by the
designers if they want a few specific paths to be targeted.

3. advantages and dis adv of path delay model?


Ans: Advantages:
1. Detects more delay faults; in the transition fault model, the delay of a faulty gate may be
compensated for by other faster gates in the path.
2. Can be used with a more aggressive statistical design philosophy.
Disadvantages:
1. Large number of possible paths in a circuit (exponential in the number of gates).
2. Algorithms for test generation are more complex and less well developed.
4. What is the difference between path delay and transition?
Ans:
Path delay: A path delay fault is considered when the delay of any path in the DUT exceeds a specified limit.
A path is defined as an ordered set of gates {g0, g1, ..., gn}, where g0 is either a primary input or the output of
a flip-flop and gn is either a primary output or the input of a flip-flop. A path delay fault specification consists
of a physical path and a transition that will be applied at the beginning of the path.
Transition delay: A transition delay fault assumes that the delay fault affects only one gate in the circuit. There
are two transition faults associated with each gate: a slow-to-rise fault and a slow-to-fall fault. The delay fault
can be observed independent of whether the transition propagates through a long or a short path to any
primary output.
5. What is a critical path? How do you generate patterns for them?
Ans: The combinational path with the longest delay in the design is known as the critical path. The reason for
selecting the longest paths is that delay defects on shorter paths might not be large enough to affect the design
performance; and if a defect on a shorter path is large enough to affect the circuit performance, one expects
that it would be detected by the other tests that precede the path delay fault testing.
6. How many paths did you test for path delay testing? How did you select those paths?
Ans: We test at least 200 paths per clock domain for path delay testing. These paths are selected based on
functional timing margins (slack).
7. What are the types of path delay tests?
Ans: Non-robust path delay test and robust path delay test.

10.Small Delay Defect:

1. What is small delay defect?


Ans: On-chip process variations are more pronounced in today’s manufacturing processes because of
the increased presence of systematic defects—stemming from complex interactions between layout,
mask manufacturing, and wafer processing—compared with previous process technologies. These
process variations tend to further skew the delay-failure distribution toward smaller delays, adding
enough incremental signal delay to adversely impact circuit timing in a higher percentage of devices. In
essence, for a given die size, the product yield of a 45-nm design can decrease sufficiently over that of a
90-nm design that manufacturers must boost the coverage of SDDs just to maintain about the same
DPPM levels observed for the 90-nm process.

2. Why doesn’t standard TD testing cover SDDs?


Ans: The traditional goal of ATPG tools has been to minimize run time and pattern count, not cover
SDDs. TD ATPG targets delay defects by generating one pattern to launch a transition through a delay
fault site—which may activate either a slow-to-rise or a slow-to-fall defect—and by generating a second
pattern to capture the response. During testing, if the signal doesn’t propagate to an end point (a
primary output or scan flop) in the at-speed cycle time, then incorrect data is captured. In this scenario,
the pattern sequence detects a delay defect through the activated path.

3. TD ATPG is effective for detecting delay defects of nominal and large size, but, it’s not effective in
detecting delay defects of relatively small size. – Justify
Ans: To minimize run time and pattern count, TD ATPG uses a “low-hanging fruit” approach to targeting
transition delay faults:
It targets them along the easiest sensitization and detection paths it can find, which often are
the shortest paths. To understand how this affects SDD coverage, consider the circuit in Figure 1, which
shows three possible detection paths for a single delay fault.
[Figure 1, not reproduced here: coverage of small delay defects depends on the fault's path of detection and
the amount of slack in that path; path 1 exhibits the minimum slack.]

TD ATPG typically generates a pattern sequence that targets the fault along the path that has
the largest timing slack, path 3. Notice this pattern sequence doesn’t cover smaller delay defects
associated with path 1 and path 2 that would have been covered by targeting the path with smallest
slack, path 1.

TD ATPG does manage, however, to detect some SDDs, either directly as targeted faults or indirectly as
bonus faults when targeting other faults. Even so, TD ATPG rarely detects delay faults along the longest
paths needed to detect defects of the smallest “size” (that is, delay).

In summary, TD ATPG is effective for detecting delay defects of nominal and large size, but because it
doesn’t explicitly target delay faults along the minimum-slack paths, it’s not effective in detecting delay
defects of relatively small size.
4. How are the patterns different for TD and SDD faults?
Ans: SDD ATPG is used to target faults having relatively small minimum slack along their minimum-slack paths,
while TD ATPG is used to target the remaining faults along their easiest-to-detect paths. Synopsys' TetraMAX
ATPG product uses a parameter called max_tmgn (the maximum timing margin) to set the cutoff slack level for
targeting faults at their minimum slack. Faults along paths with minimum slack less than or equal to max_tmgn
are targeted by the SDD ATPG algorithms, while the other faults are targeted by the TD ATPG algorithms.

11.GE (Gate Exhaustive) fault model:

1. What is the Gate Exhaustive fault model?


Ans: Gate-exhaustive fault models were proposed to exercise each gate completely and then observe the
resulting response at an observable output.
2. What is Gate Exhaustive testing?
Ans: A gate-exhaustive test set applies all possible input combinations to each gate in a combinational
circuit and observes the gate response at an observation point such as a primary output or a scan cell.
Gate-exhaustive test sets are more efficient than single stuck-at test sets in terms of the ability to detect
defective chips relative to test length. It has also been shown that test sets with higher gate-exhaustive
coverage have better test quality.
3. How is GET carried out?
Ans: GET is done using a pseudo-exhaustive testing technique. Pseudo-exhaustive testing partitions a circuit
into segments such that the number of inputs of every segment is significantly smaller than the number of
primary inputs of the circuit, and exhaustive testing is performed for each segment. Here the segment is
reduced to each gate in the CUT. The gates can be elementary gates (e.g., AND, OR, NAND, NOR, inverter),
complex gates (e.g., XOR, multiplexer, adder, etc.), or circuit segments. It does not use fault models to
generate test patterns.
4. What is Gate Exhaustive coverage?
Ans: The gate-exhaustive coverage (GEC) of a test set is defined as the ratio of the total number of distinct
observed input combinations of all gates in the circuit to the total number of distinct observable input
combinations of all gates in the circuit. In general, we may not be able to apply every possible input
combination to an internal gate and observe its response at an observation point.
5. What is the major problem in carrying out GET?
Ans: The number of test patterns.

12.Memory BIST Related:

1.What is BIST?Explain?
Ans:Technique in which a portion of a circuit on a chip, board, or system is used to test the digital logic
circuit itself. With BIST, circuits that generate test patterns and analyze the output responses of the
functional circuitry are embedded in the chip or elsewhere on the same board where the chip resides.
2. How did u do ?How many stages it has?
3. How you decide upon number of MBIST controllers in design?
Ans: The number of MBIST controllers mainly depends on how best we can group the memories. Memories
can be grouped together and controlled by a single MBIST controller if they satisfy the following
requirements:

1. The memories should belong to the same clock domain.
2. The memories should be compatible.
3. Physical architecture of the memory (width and depth).
4. The memories should be placed physically close together.
5. All the memories should be either RAMs or ROMs.
6. Whether the memories are single-port or dual-port.
7. All the memories should be either repairable or non-repairable.
4. How you develop access to MBIST controller from top level?
Ans: We access MBIST controller from top-level using JTAG. We added few instructions in JTAG which
were used to give inputs to MBIST controllers and collect response from MBIST controllers.
1. We need to target the Instruction register of the JTAG between TDI and TDO.
2. Instruction pertaining to the MBIST_Enable is loaded into the Instruction register.
3. When we go to update state of the JTAG state machine, this instruction makes
MBIST_Enable register to be targeted between TDI and TDO.
4. We load MBIST_Enable register. When we go to update state of the JTAG state machine,
the data loaded into MBIST_Enable register enables the corresponding MBIST
controllers.
5. Once the top-level MBIST_Done signal goes high, we will again target the instructions
MBIST_DONE and MBIST_FAIL to know the response of different MBIST controllers.
5. How you did at-speed testing of memories?
Ans: The MBIST controller was given the same clock as that of the memories controlled by it. This clock
was coming from the top-level and at-speed frequency was provided during testing.
6. Did you have any repairable memories in your design?
Ans: NO
7. How do you find which BIST controller is failing?
Ans: Using JTAG we targeted the MBIST_FAIL instruction, which placed the MBIST_FAIL register between TDI
and TDO. We scanned out the MBIST_FAIL register and found which controllers were failing.
8. How would you detect coverage of LBIST using ATPG tool ?
9. How did you generate MBIST logic ?
Ans: The memory model was given to us to generate MBIST controller.
The customer asked us to cover all the stuck, at-speed, coupling faults, and adjacent shorts. Based on
this we decided, which algorithms to use. Also we got inputs about the area and decided to use serial
controllers. There was a ROM in our design, so decided to add a separate MBIST controller for it.
Based on these inputs we generated MBIST controller using the tool MBISTArchitect.
Sai: We need to group similar memories (operated by the same clock, single-port or dual-port, repairable or
non-repairable, ROM or RAM) within the same hierarchy, and one controller can accommodate each such group.
10. What kind of memory algorithms did you use in MBIST?
Ans: We used March2, March13, GALPAT, Unique Address and Retention Checkerboard algorithms.
11. Explain march2 algorithm execution?
Ans: Usually march2 algorithm writes and reads either 0s or 1s into memory cells.

1. up – write0
2. up – read0, write1
3. up – read1, write0
4. down – read0, write1
5. down – read1, write0
6. down – read0
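A behavioral, illustrative-only Verilog sketch of that element order, run against a simple array model (this is not
a real SRAM model or the generated controller; DEPTH and WIDTH are arbitrary assumptions):

module march2_demo;
  parameter DEPTH = 16, WIDTH = 8;
  reg [WIDTH-1:0] mem [0:DEPTH-1];
  integer i, errors;

  // Compare a read value against the expected data and count mismatches.
  task check(input integer addr, input [WIDTH-1:0] exp);
    if (mem[addr] !== exp) begin
      errors = errors + 1;
      $display("FAIL addr=%0d exp=%h got=%h", addr, exp, mem[addr]);
    end
  endtask

  initial begin
    errors = 0;
    for (i = 0; i < DEPTH; i = i + 1) mem[i] = {WIDTH{1'b0}};                                       // 1. up: w0
    for (i = 0; i < DEPTH; i = i + 1) begin check(i, {WIDTH{1'b0}}); mem[i] = {WIDTH{1'b1}}; end    // 2. up: r0, w1
    for (i = 0; i < DEPTH; i = i + 1) begin check(i, {WIDTH{1'b1}}); mem[i] = {WIDTH{1'b0}}; end    // 3. up: r1, w0
    for (i = DEPTH-1; i >= 0; i = i - 1) begin check(i, {WIDTH{1'b0}}); mem[i] = {WIDTH{1'b1}}; end // 4. down: r0, w1
    for (i = DEPTH-1; i >= 0; i = i - 1) begin check(i, {WIDTH{1'b1}}); mem[i] = {WIDTH{1'b0}}; end // 5. down: r1, w0
    for (i = DEPTH-1; i >= 0; i = i - 1) check(i, {WIDTH{1'b0}});                                   // 6. down: r0
    $display("march2_demo finished, errors=%0d", errors);
  end
endmodule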
12. How do you test faults on adjacent locations ?
Ans: In order to detect faults on adjacent locations, we need to scan through memory locations in
ascending and descending order, that is we need to perform read/write operations on memory in both
the directions.
13. Whether BIST clock is coming from top-level or it is internal clock ?
Ans: The BIST clock is coming from top level in the design.
Sai: It should be coming from PLL present in top level.
14. What is the functional frequency of BIST controller and memories?
Ans: Both BIST controller and memories will work at same clock frequency, which is BIST clock.
15. Can you cover all the faults using a lower frequency for BIST than functional frequency of your
design?
Ans: We cannot cover at-speed (transition) faults using a lower frequency for BIST. We need to run the BIST
controller at the functional frequency to detect at-speed faults of the memory.
16. What are the algorithms that detects coupling faults?
Ans: March2 algorithm detects coupling faults.
In March2 algorithm we start writing from lower address to higher address. Once we have written till
the last location, we will come back to the first location, read it and compare it with the expected data.
If this location is coupled with any of the other locations which were written after this, the data in this
location would be corrupted. If the data read is fine, it means that there is no coupling fault.
March2 also writes and reads in the descending order, which help to find higher address locations
coupled with lower address locations.
17. How will you proceed with MBIST implementation?
Ans: The first thing to be considered is memory grouping. The memories can be grouped together and
controlled by single MBIST controller if it satisfies following requirements.
1. The memories should belong to the same clock domain.
2. All the memories should be compatible.
3. Physical architecture of the memory.
4. Memories should be placed physically close together.
5. All the memories should be either RAMs or ROMs.
6. Whether the memories are single port or dual-port
7. All the memories should be repairable or non-repairable.
Once memory grouping is done, we decide upon the algorithms to be used in each MBIST
controller, whether to generate a serial controller or parallel controller.
18. How do you decide about algorithms to be used?
Ans: This mainly depends on the faults to be detected and the type of the memories we are using.
Based on Faults:
To detect Stuck-at faults, at-speed faults, coupling faults we can go for March2.
To detect address decoder faults we can go for Unique Address algorithm.
Based on Type of Memories:
Whether the memory is a ROM? If the memory is a ROM we can choose ROM1 or ROM2 algorithm.
Whether the memory is a dual port memory? If so we can use port-interaction algorithm.
19. How can you use redundant row or columns in memory? How did you select these redundant
resources?
Ans: In a repairable memory, if faults are detected on any row or column we can use these redundant
rows and columns to repair the memory. When-ever the address pertaining to the faulty row or column
appears, we can divert it to particular redundant rows and columns using fuse blow.

20. What tool you used for MBIST ?


Ans: I used MentorGraphics MBISTArchitect for BIST generation and insertion.
21. What was your role in MBIST project?
Ans: My role is to generate BIST controllers at RTL level and verify them using verilog test bench.
22. What algorithm you used in BIST controller?
Ans: I have used march2 and unique address decoder algorithms in BIST controller.
23. How did you access your BIST controllers form top level?
Ans: I am using JTAG registers to control BIST operation from top level. At the start of the BIST, the
enable signals and reset signals of BIST controller are loaded into two separate JTAG data registers.
24. How did you read the response of MBSIT controllers?
Ans: The output responses of BIST controllers are captured into another set of JTAG data registers,
which holds test_done and fail signal information from each of the controller.
25. MBIST initialization using JTAG ?
Ans: We are loading enable signals of BIST controllers using JTAG at the start of the test to enable
controllers.
26. Does MBIST has reset, how did you control reset?
Ans: We have separate RESET register to control reset signals of memory controllers, which is also
initialized with JTAG.
27. Did you collect any failure information ?
Ans: Using diagnostic logic of BIST controller, I have collected failure information in verilog test bench.
28. How did you do verification of MBIST?
Ans: I used verilog test bench and run simulations in VCS simulator to verify the operation of BIST
controller.
29. What stage of the design you inserted MBIST?
Ans: I inserted BIST at RTL level
30. Did you integrate MBIST controllers into the design?
Ans: Yes, I inserted BIST controllers into top level netlist and handed over to designer.
31. Have you given any inputs to the designer after BIST integration?
Ans: I gave the information related to the BIST instantiation, the collar instance locations, and the controller
and collar interface signals. I also gave information about the BIST enable signals and the output response
signals.
32. Did you write any test bench to verify MBSIT?
Ans: Yes, I have written test bench to verify the functionality of BIST with diagnostic logic
33. What is the use of diagnostic logic, if memories are non-repairable?
Ans: We can identify the location of the failure in the memory; this information can be used to improve
the manufacturing process.
34. How do you control the BIST configuration?
Ans: The BIST configuration is controlled in the MBISTArchitect dofile. In this file we provide the information
about the algorithms to be used for a particular BIST controller during BIST generation. During BIST insertion,
we control the BIST controller placement and the memories to be controlled by each controller.
Sai: After BIST insertion, all memory ports have mux logic (called the collar), and the BIST controller signals are
connected to the memory collars.
35. Can we control different memories using single controller?
Ans: Different memories can be controlled by single controller, but we need to consider following points
during grouping of memories in one controller.
1. Memory type(RAM or ROM)
2. Clock speed of the memory
3. Physical location of the memory
4. Read/Write cycles of the memories
5. Synchronous/Asynchronous memories
6. Physical structure of the memory
36. How did you decide upon number of BIST controllers?
Ans: We have both single port and dual port memories in our design, which are operating at different
frequencies. So we decided to use two separate controllers, one for single port memories and other for
dual port memories.
37. What are the clock domains of the memories?
Ans: Both controllers are working at different functional frequencies.
38. Difference between single port and dual-port memory?
Ans: Single port memories have only one set of address, data, and control lines. Whereas dual port
memories will have two sets of address, data, and control lines, which goes to two different ports of the
memory.
39. Why do we need separate controllers for single port and dual port memories?
Ans: In dual port RAMs, the read/write operation of one port does not affect the other port. By using
separate BIST controller for dual port RAMs, BIST controller can easily detect faults associated with dual
port memories.
40. Does your BIST controller give the location of the fault? How?
Ans: Yes, the diagnostic data shifted out from the diagnostic logic contains the failed memory address, the
failing data, the memory controller state and the memory number.
41. How do you access MBIST controllers from top level
Ans: I am using JTAG registers to control BIST from top level
42. IF one of the memory has defect, how will you identify that memory?
Ans: We can identify the failed controller by using data in JTAG register for fail data. We can enable the
failing controller using BIST enable regiser through JTAG. At each failure we can shift out the fail data of
that controller using diagnostic logic. This shifted data identifies the failed memory.
43. How would you detect coverage of LBIST using ATPG tool
44. How do you test BIST at speed?
Ans: At-speed BIST operation means the BIST and memories are exercised at functional frequency. BIST
can be tested at-speed by using functional clock as BIST_CLK. This test improves test quality and also
reduces test time significantly
45. A single MBIST controller runs 4 memories in parallel. How do you check which memory failed?
Ans: Using the diagnostic capabilities of the tool, i.e., the simulation log or tester log and the supporting files
generated by the tool for the chosen algorithm, we can check which memory and which bit is failing.
46. How do you share single clock for membist controller and data logger circuit?
47. How do you corrupt memory?
Ans: By modifying the memory models
48. After implementing the algorithms for MBIST, can any other faults still occur or not?
Ans: Each algorithm targets a specific set of faults, so a fault type that is not targeted by the chosen algorithms
can be missed.
49. When memory is failed, is membist controller stopped and shift out the data by data logger circuit?
50. How to decide the number of controllers for a given design (that is factors you consider).
Ans: The total number of controllers depends on the Area Overhead , Routing and other Physical design
parameters.
51. What is the difference between ROM BIST and RAM BIST? What algorithms have you implemented and
why?
Ans: While BIST for a RAM will write, read, and compress the read data into a signature, the MBIST for a ROM
will only read and compress the read data into a signature. The 'signature' for a ROM is precalculated using the
data stored in the ROM.
52. How many memories were there and what was the selection criterion?
Ans: There were 70 memories. The memories were grouped together based on single-port/dual-port type,
frequency, physical placement, and row/column organization.
53. How are you going to test the BIST controllers at the sub-chip level from the top?
Ans: Using the JTAG controller. Sai: The BIST logic should also be part of the regular scan chains.
54. What is the area overhead because of memory bist?
Ans: This depend on the number of memories and engines generally it is around 2%.
55. Explain the BIST structure for whichever project you inserted it in.
Ans: It consists of the engines, memory collar logic and comparators. A JTAG controller is used to program the
MBIST registers for a specific algorithm, and the output after the comparator is stored in result registers which
are shifted out on the TDO port through the TAP.
56. Memories used in your project and the BIST controller selection criteria?
Ans: There were 70 memories. The memories were grouped together based on single-port/dual-port type,
frequency, physical placement, and row/column organization.
57. How is the testing of memories done, serially or in parallel? Explain.
Ans: Memories can be tested both serially and in parallel: the MBIST engines can be run either in parallel or in
series, and the memories under each engine can likewise be tested serially or in parallel, so we can program
the MBIST controller through JTAG accordingly. Testing the memories in serial mode, however, takes much
more run time.
58. How did you determine the number of BIST controllers in the design?
Ans: The total number of controllers depends on the Area Overhead , Routing and other Physical design
parameters.
59. Specification of memories used in the design? From that they asked how did u decide on the
number of controllers?
60. Was the data logger circuit scan inserted or not? Sai: It should be scan stitched.
61. How do you instantiate memories with controllers in your design
62. What is the tstate of the controller and what does it indicate? Sai: The tstate is the sequence of address
and data operations generated by the BIST controller.

63. How many memory elements were scannable during the first run, and what did you do to make them
scannable? Sai: Insert bypass logic, or insert memory control and observe logic for the memory outputs and
inputs respectively.
64. Did you face any issues during verifying the memory controllers?
65. What is LBIST? Explain the block diagram of LBIST?
66. How you integrated the BIST logic in your design?
67. About pipelining in BIST: if we face timing issues, where do we go about adding pipeline stages?
Sai: We can add them either at the input or the output side of the memories and the controller.
68. If timing is not met for data signals between the controller and the memory, will you add pipelines only for
the data lines? Sai: We need to add them for all the signals: address, data, chip select (cs) and write enable (we_en).

13.FV Related:

1. How did you verify that the DFT process didn't change your functionality?
Ans: By doing LEC. There should be no non-equivalent points between the pre-DFT and post-DFT netlists.
2. At which stages do you do FV in DFT? Why is it needed?
Ans: Pre-DFT vs post-DFT netlists: to confirm that we did not disturb the functionality of the design while doing
the scan implementation.
Pre-ECO vs post-ECO netlists: to confirm that the ECO is implemented properly, without any change in
functionality.
3. How do you debug the non-equivalent points?

Ans: Open the GUI, go to the mapping manager, run diagnose on the non-equivalent point, and view the
schematic from the diagnose window. It opens both the golden and revised schematics, from which we can find
the difference in logic between the golden and revised designs.

4. What are a compare point and a key point?

Ans: A compare point is a point at which mapping occurs, and every node in the logic cone of that compare
point is a key point (generally the D, clock, reset, set and Q pins are compare points).

5. How does the mapping process occur?

Ans: The mapping process occurs between two compare points, and the comparison takes place between the
logic cones of those compare points.

6. What constraints are you using in the dofile, and why? How do you decide the constraints?

Ans: While doing LEC we apply constraints so that the design is put into functional mode.

- tap_testmode_tdr (0): mux select lines, test_en of sleep-high CGCs, and the fan-in of ACC ports are ANDed
with tap_testmode_tdr
- tap_atpg_shift (0): scan-enable pins of flops
- tcr_cgc_atpg_ctrl (0): test enable of sleep-low CGCs
- tcr_async_reset_atpg_ctrl (0)
- tcr_async_set_atpg_ctrl (0)
- all 16 CTCM bits of the softcores should be '0'

14.Simulations Related :

1. What is simulation? Why is it needed? What are the different types of simulations?

Ans: Simulation is used to build complex waveforms in a very short time. Such waveforms can be used, for
example, as test vectors for a complex design or as a prototype of some synthesizable logic that will be
implemented in the future.

Types of simulation:
1. Serial simulations (zero-delay, unit-delay, timing)
2. Parallel simulations (zero-delay, unit-delay, timing)

2. What is a testbench? What are the advantages of it?

Ans: A testbench is a Verilog pattern file which is the input to the simulation tool.
3. How do the serial and parallel Verilog testbenches generated by TetraMax work?

Ans: A serial pattern is loaded serially (one bit after the other) and is also unloaded serially.
A parallel pattern loads all the scan cells simultaneously (all bits at a time, forced directly onto the flops) and
unloads in parallel. (We can use n-shift = 5 to cover all the transitions in parallel sims.)

4. How does the CUSION-generated testbench work?


5. What is the difference between serial and parallel simulations? When do you go for serial and when for
parallel?

Ans: Serial simulations are slower than parallel simulations. Serial simulations do not directly give the exact
failing-flop information, whereas parallel simulations do.

6. What is the difference between zero-delay and unit-delay simulations? Why do we do them?

Ans: In zero-delay simulation, the output of a gate follows its inputs with no delay. In unit-delay simulation, the
output of a gate follows its inputs after a unit delay. By doing zero-delay simulation, we can ensure that the
setup (netlist, libraries, patterns) used for simulation is proper. Unit-delay simulation provides some
information on signal evolution in time, especially for detecting glitches.
7. What is the difference between RTL and gate-level simulation? What will these two simulations tell you?
Ans: RTL simulation does not take the propagation delays of the gates into consideration while verifying the
functionality, whereas gate-level simulation considers the delays of the gates during verification. The delays
change according to the library that is used for synthesis. Gate-level simulation uses real timing, while
simulation at the RTL level is intended only for functional checks.

8. What is a PVT corner? What are the min and max corners?


Ans: It is a naming convention for process corners based on on-chip variation effects. These include process,
voltage and temperature (PVT) variation effects on on-chip interconnect as well as via structures.
The max (slow) corner is the setup corner and the min (fast) corner is the hold corner.
In the max corner the circuit becomes slow because the voltage is reduced, so there is a high chance of setup
failure.
In the min corner the circuit becomes fast due to the higher voltage used, so there is a high chance of hold
failure.

9. What is the flow after P&R netlist and SDF?


Ans: Checking DRC’s , generating patterns and doing simulations

10. What are the possible causes of simulation mismatches when you simulate the generated patterns? What
is the right way to debug them (in both no-timing and timing simulations)?
Ans:
- Timing: the main issue is when setup/hold violations are not fixed. When the timing is clean we should not
face any issue during simulation.
- In most cases the issue is due to a library or db mismatch, i.e., a setup or database issue.
To debug this, remove the switches “-sdferror” and “-sdfwarn” and run again. This gives the exact file or cell
type for which the issue is occurring.
- The hierarchical netlist should be read for hierarchical cores.
- Timescale definition issues (ns/ps).
- No-timing: if the simulation goes into an infinite loop, add the switch "-compat" to the compilation command
"vlog" and compile the design, and add the switch "-detectzerodelayloop" to the "vsim" command and run the
simulation; this method directly reports the loop in the log.
- The best way to debug the issue is by dumping the waveforms and tracing back to the failing flop. (This is
easier in LPU mode than in LPC mode.)

11. What are the issues faced in zero delay simulations?what caused sim mismatches in zero delay?
Ans:Same answer as above.
12. Simulation mismatches debug.(what type of failures were they.scan chain failures or others)?
13. Did it happen like flop fails in simulation but its timing is clean?
14. Hypothetically why the failure like above will happen?
15. Why do you want to run timing SIM, if PT shows reports are clean?
16. Even if the models used in Tmax and Sim are same ,the patterns still fail?
17. If we TIE any internal node to ‘1’ or ‘0’ and generate patterns, will the simulations effect?
18. What could be the reason if a flop is capturing "X" during simulation? Forget about setup and hold
violations; what can the other factors be?

Ans: In most cases the issue is due to a library or db mismatch, i.e., a setup or database issue.
To debug this, remove the switches “-sdferror” and “-sdfwarn” and run again. This gives the exact file or cell
type for which the issue is occurring.

19. What were the violations or problems that you faced during pattern simulation
20. How do you debug simulations of scan integrity test ?

Ans: The debug flow is the same as described for simulation mismatches in question 10 above: confirm that
the timing is clean, check for library/db mismatches (removing “-sdferror”/“-sdfwarn” to locate the offending
cell or file), read the hierarchical netlist for hierarchical cores, check the timescale definition, use "-compat"
and "-detectzerodelayloop" for zero-delay loops in no-timing simulation, and dump waveforms to trace back
from the failing flop.

21. What is n-shift ? where do you specify this option? what is the use of it ?
22. What is setup time and Hold time violation ? why they will occur?how do you resolve them ?

Ans: Setup time: the amount of time the synchronous input (D) must be stable before the active edge of the
clock.
Hold time: the amount of time the synchronous input (D) must be stable after the active edge of the clock.
If either is violated, correct operation of the FF is not guaranteed and metastability can result.

Setup time fixing:


1) Reduce the combinational logic delay by minimizing the number of logic levels
2) Split the combinational logic
3) Implement pipelining
4) Use a double synchronizer built from flip-flops
5) Upsize the cells

Hold time fixing:


1) Can be fixed by adding delays on input ports
2) Reduce clock skew (clock speed itself does not affect hold)
3) Add buffers in the data path
4) Downsize cells (this might affect setup time)

23. Does adding a lockup latch help to avoid hold time violations?

Ans: Yes, lockup latches can be used to avoid hold violations.

We insert lockup latches between scan chain segments in different clock domains to avoid hold violations, in
cases where the hold time requirement is very large, basically to avoid data slip.
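A minimal, illustrative-only sketch of a lockup latch between two scan segments (names are assumptions): a
negative-level-sensitive latch clocked by the launching segment's clock holds the scan data for half a cycle,
absorbing the clock skew between the segments.

module lockup_latch (
  input  wire clk,   // clock of the launching scan segment
  input  wire d,     // scan-out of the last flop in that segment
  output reg  q      // feeds scan-in of the first flop of the next segment
);
  // Transparent only while clk is low, so data launched on the rising edge
  // is held stable through the following half cycle.
  always @(clk or d)
    if (!clk)
      q = d;
endmodule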

24. What is setup time and hold time constraints ?What do they signify?which one is critical for
estimating maximum clock frequency of a circuit?

Ans: The setup and hold time constraints are:


1. Setup time constraint: the data must be stable for at least the setup time before the active clock edge.
2. Hold time constraint: the data must not change for at least the hold time after the active clock edge.
They signify the stability of the signal at the capturing flop.

Suppose your flip-flop is positive edge triggered. Time for which data should be stable prior to
positive edge clock is called setup time constraint.
Time for which data should be stable after the positive edge of clock is called as hold time
constraint.
If any of these constraints are violated then flip-flop will enter in Meta stable state, in which we
cannot determine the output of flip-flop.

Setup time constraint is critical for estimating maximum clock frequency of a circuit because hold
time does not depend on frequency.
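A hypothetical worked example (numbers are illustrative only): the setup check requires
Tclk >= Tclk-to-q + Tcomb(max) + Tsetup. With Tclk-to-q = 0.2 ns, Tcomb(max) = 3.0 ns and Tsetup = 0.3 ns, the
minimum clock period is 3.5 ns, i.e. a maximum frequency of about 285 MHz. The hold check,
Tclk-to-q + Tcomb(min) >= Thold, contains no clock-period term, which is why slowing the clock never fixes a
hold violation.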

25. In your design if you have both setup and hold time violations ,then which one do you first prefer to
resolve? Why ?
Ans: I prefer to resolve the hold violations first, because if the design has insufficient setup time we can
decrease the shift frequency to meet timing, whereas a hold violation is independent of frequency.
26. In a system with insufficient setup time ,will slowing down the clock frequency will help?
Ans: It will help.
27. In a system with insufficient Hold time ,will slowing down the clock frequency will help?
Ans: NO. Hold is independent of frequency.
28. What is back-annotation?
Ans: Applying all the delays from the SDF (clk-to-q, wire delay, etc.) onto the netlist is called back-annotation.
29. In simulation ,how do you assure SDF annotated is correct?
30. What is SDF and what information does it have?
Ans: SDF stands for Standard Delay Format. It contains the delay information (clk-to-q, wire delays, etc.) of the
design.
31. What is critical path ,false path , multicycle path ,negative slack ,Jitter Vs clock skew?
Ans:Critical path: it is the path between an input and an output with maximum delay in a circuit.
False path: false paths are paths which will never be sensitized, i.e., paths in the design which can never be
functionally true. For example, paths between any two asynchronous clocks: the designer knows such a path
can never be exercised.
Multicycle path: Multicycle paths are those which require more than one clock cycles to complete due
to high delays of big combo logic.
Negative Slack: A negative slack value indicates the margin by which the timing requirement was not
met. Negative slack implies that a path is too slow, and the path must be sped up (or the reference
signal delayed) if the whole circuit is to work at the desired speed.
Jitter Vs Clock skew: Clock Skew is the difference between the clock arrival times at two different nodes.
Jitter is the variation in the clock period ( that is the clock edge might not be at
the required time). Jitter need not be expressed with respect to two nodes.
32. Did you get any mismatches during EDT pattern simulation? How did you debug and fix them?
Ans: I got a few mismatches during pattern simulation.
During pattern simulation I saw miscompares on a few flops. I added these flops, and a few flops around them, to the waveform. On analyzing the waveforms, I found that the flops showing miscompares had a timing-check violation on them.
In the waveform, I found that after the last shift, when the Scan_Enable signal toggles from ‘1’ to ‘0’, the data on the D-input had a valid logic value (0 or 1) but the scan flop captured it as X. On further analysis I found that the late toggling of Scan_Enable was causing this violation.
To solve the problem, I added 2 dead cycles between the last shift and capture and regenerated the patterns. This time the simulations passed.
33. What do you mean by negative timing? How do you enable or disable this check during timing
simulations?
Ans: Some flops have negative timing checks, i.e., negative hold or negative setup. This is because of the unequal delay of the clock and data signals from the cell boundary to the internal storage node.
We enable negative timing checks using the switch +negtchk. This switch makes the simulator honor the negative limits. If we do not specify this switch, negative timing values are not considered and the tool just issues a warning saying “all the negative delays are rounded to zero”. This in turn can affect our simulations and lead to simulation mismatches.
34. How did you design a clock-gating circuit which is glitch free?
Ans: We use an integrated clock-gating (ICG) cell on the gated clock: the enable is captured by a latch that is transparent only while the clock is low, so the AND of the clock and the latched enable cannot glitch.
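A minimal Verilog sketch of such a latch-based ICG (an illustration of the structure, not a library cell):

module icg_sketch (
  input  wire clk,
  input  wire en,       // functional clock enable
  input  wire test_en,  // forces the clock on in scan/test mode
  output wire gclk      // gated clock
);
  reg en_latched;

  // Enable latch is transparent only while clk is low
  always @(*)
    if (!clk)
      en_latched = en | test_en;

  // Because en_latched is stable while clk is high, gclk has no glitches
  assign gclk = clk & en_latched;
endmodule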
35. What kind of switches you used during simulation (before PD)?
Ans:

NON-TIMING EXAMPLE

bsub -Ip -q dft -S 1024 -R "select[type==LINUX64&&mem >7000]rusage[mem=7000]" \
  vopt \
  +no_notifier \
  +notimingchecks \
  +nowarnTSCALE \
  +nowarnTFMPC \
  -work $bin_loc/verilog/modeltech/$MODELTECH_VERSION/${GATEID} \
  -L $bin_loc/verilog/modeltech/$MODELTECH_VERSION/${GATEID} \
  +acc \
  -vopt_verbose   (gives more details in the session.log)

+no_notifier
Disables the toggling of the notifier register argument of the timing-check system tasks for all instances in the specified design. If there are any timing-check violations, the notifier is not triggered and therefore does not cause X-propagation. We are not using gate-level simulation as a tool to find setup and hold violations; we use gate sims to find issues related to multicycle paths and frequency checks.
+notimingchecks
Disables Verilog and VITAL timing checks for all instances in the specified design; sets the generic TimingChecksOn to FALSE for all VHDL VITAL models with the Vital_level0 or Vital_level1 attribute. Setting this generic to FALSE disables the actual calls to the timing checks along with anything else present in the model's timing-check block. The justification for using +notimingchecks is the same as the one noted for +no_notifier.
+nowarnTSCALE (suppresses warning messages about timescale)
+nowarnTFMPC (suppresses warning messages about too few port connections)
36. There are 10 channels to the EDT module and 100 internal chains in the core, each 200 flops long. How many shifts are required to load the internal scan chains?
Ans: The number of shifts needed to load the internal scan chains is the length of the longest internal scan chain plus the initialization cycles of the EDT hardware.
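For the numbers in this question (100 chains of 200 flops each), a simple illustration:

number of shifts = longest chain length + EDT initialization cycles
                 = 200 + N_init

where N_init is a small, configuration-dependent number of cycles needed to initialize the EDT decompressor (the 10 external channels set the compression ratio but do not change the shift count).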
37. What operations are done before loading the internal scan chains?
Ans: Before loading the internal scan chains, the flops inside the decompressor and the input pipeline registers must be reset. On the compressor (output) side, the data from the mask shift register must be loaded into the mask hold register.
38. Have you written any test bench to simulate patterns?

Ans: I used the Verilog test bench generated by TetraMax for pattern simulations.

39. In which format do you dump Verilog simulation data?

Ans: WLF or FSDB.
40. How parallel and serial simulation works?
Ans: Parallel simulation:
1. Force scan data values on SD pin of each flop
2. Latch this data into scan flops
3. It forces the non-clock primary inputs
4. Measure primary outputs
5. Pulse capture clocks
6. Strobes scan-outs of all the flops in parallel
Serial simulation:
1. Load scan chains serially
2. Force primary inputs at the end of last shift
3. Measure primary outputs
4. Pulse capture clock
5. Unload/load scan chains serially

41. Asked about simulation and Qcing?


42. Have you run any JTAG simulations?
Ans: NO.
43. Problems faced in TDL simulation and how debugging is done?
44. Third party IP Verification, Fault Simulation?

Ans:The third party will provide high quality deliverables that include full verification of the non-
implementation-specific attributes of the CPU. Primarily, this is a rigorous validation of the behavioral
functionality of the processor. Thorough examination of their architectural and behavioral validation
processes including tool flows, history of success, and deliverables management is vital. In the ideal
case, the availability of IP "proven in silicon" helps guarantee the quality of the soft IP deliverables.

Fault simulation consists of simulating a circuit in the presence of faults.


Comparing the fault simulation results with those of the fault-free simulation of
the same circuit simulated with the same applied test, we can determine the
faults detected by that test.
45. What are the issues faced?

15.JTAG Related :

1. What is JTAG ? why is it necessary ?



Ans: JTAG was developed by the Joint Test Action Group. JTAG builds test facilities (test points) into chips and ensures test compatibility between ICs. JTAG, as defined by the IEEE Std. 1149.1 standard, is an integrated method for testing interconnects on printed circuit boards (PCBs) that is implemented at the integrated circuit (IC) level.

2. JTAG - limitations related to testability (related scan design coverage)

Ans:No faults are added. Hence scan coverage is not affected.

3. What is a boundary scan chain ? explain the operation and importance?

Ans: A boundary scan chain consists of two or more boundary scan devices where the test data output
(TDO) of the first boundary scan device in the chain is connected to the test data input (TDI) of the
second boundary scan device to form a chain. The test mode select (TMS) and test clock input (TCK)
signals are common for all the boundary scan devices in a chain.

Each boundary scan device in the chain contains its own internal boundary scan architecture.
During standard operation, the boundary cells are inactive and data propagates through the core logic normally. During test modes, all input signals are captured for analysis and all output signals are preset to test downstream devices. The operation of these scan cells is controlled through the Test Access Port (TAP) controller and the instruction register.

4. Explain the operation of a boundary scan cell?

Normal mode: Mode (the select pin of the output mux) is 0, and Data In (PI) passes to Data Out (PO).
Scan mode: ShiftDR = 1, ClockDR = scan clock; serial data is shifted from Scan In (SI) to Scan Out (SO).
Capture mode: ShiftDR = 0, ClockDR = one clock pulse; Data In (PI) is captured on Q (SO) of the capture/shift cell.
Update mode: with Mode = 1, UpdateDR = one clock pulse; Data Out is updated with the value on Q.
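A simplified Verilog sketch of such a cell (signal names assumed; the update stage of a real boundary scan cell is a latch, modeled here as an edge-triggered stage for brevity):

module bscan_cell_sketch (
  input  wire pi,         // Data In (parallel in, from pin or core)
  input  wire si,         // Scan In (from previous cell's Scan Out)
  input  wire shift_dr,   // selects serial shift vs. parallel capture
  input  wire clock_dr,   // clocks the capture/shift stage
  input  wire update_dr,  // transfers the shifted value to the update stage
  input  wire mode,       // 0 = normal (functional), 1 = test value drives PO
  output wire po,         // Data Out (parallel out)
  output wire so          // Scan Out (to next cell's Scan In)
);
  reg capture_q;  // capture/shift stage
  reg update_q;   // update (output) stage

  // Capture PI (ShiftDR = 0) or shift SI (ShiftDR = 1) on ClockDR
  always @(posedge clock_dr)
    capture_q <= shift_dr ? si : pi;

  // Move the shifted value to the output stage on UpdateDR
  always @(posedge update_dr)
    update_q <= capture_q;

  assign so = capture_q;
  assign po = mode ? update_q : pi;  // normal mode passes PI straight through
endmodule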

5. Explain the importance of each register in JTAG ?

- Bypass register: the Bypass register (BR) is a single-bit pass-through register that connects TDI to TDO with a one-clock delay, giving test equipment easy access to another device in the test chain on the same board.
- Instruction register: at any time, only one data register can be connected between TDI and TDO, e.g., the Instruction register, Bypass, Boundary-Scan, Identification, or even some appropriate register internal to the device. The selected data register is identified by the decoded parallel outputs of the Instruction register.
- Identification register: an optional 32-bit register capable of being loaded with a permanent device identification code.

6. What is a bypass register ? what is the importance of it ?

Ans:A device's boundary scan chain can be skipped using the BYPASS instruction, allowing the data to
pass through the bypass register. This allows efficient testing of a selected block without incurring the
overhead of traversing through other devices. The BYPASS instruction allows serial data to be
transferred through a device from the TDI pin to the TDO pin without affecting the operation of the
device.
7. Instead of the bypass register between TDI and TDO, can I have a direct connection between TDI and TDO when that chip is not being tested?
Ans: If we are not doing any testing using JTAG on that device, TDI can be connected to TDO (routed around the device at board level). Within the 1149.1 protocol, however, the bypass register, with its one-TCK delay, is the defined way to skip a device that sits in an active scan chain.
8. Explain the 16 state FSM of JTAG ?
TMS and TCK drive a 16-state finite-state machine (the TAP controller), which produces the various control signals.
The value on each state-transition arc is the value of TMS. A state transition occurs on the positive edge of TCK, and the controller output values change on the negative edge of TCK.
The TAP controller initializes in the Test-Logic-Reset state (the “asleep” state). While TMS remains 1 (the default value), the state remains unchanged. In the Test-Logic-Reset state, the active (selected) register is determined by the contents of the hold section of the Instruction register; the selected register is the Identification register, if present, otherwise the Bypass register. Pulling TMS low causes a transition to the Run-Test/Idle state (the “awake, do nothing” state). Normally we want to move to the Select-IR-Scan state, ready to load and execute a new instruction; an additional 1,1 sequence on TMS achieves this. From there we can move through the Capture-IR, Shift-IR, and Update-IR states as required. The last operation is Update-IR and, at this point, the instruction loaded into the shift section of the Instruction register is transferred to the hold section of the Instruction register to become the new current instruction.
This causes the Instruction register to be de-selected as the register connected between TDI and TDO, and the data register identified by the new current instruction to be selected as the new target data register between TDI and TDO. For example, if the instruction is BYPASS, the Bypass register becomes the selected data register. From then on, we can manipulate the target data register with the generic Capture-DR, Shift-DR, and Update-DR control signals.
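A hypothetical testbench fragment (to be placed inside a Verilog testbench module, with a suitable timescale) showing how TMS walks the TAP controller from Test-Logic-Reset into Shift-IR, following the state diagram described above:

reg tck = 1'b0;
reg tms = 1'b1;

// Apply one TMS value for one TCK period; the state changes on the rising edge of TCK
task tms_cycle(input val);
  begin
    tms = val;
    #10 tck = 1'b1;
    #10 tck = 1'b0;
  end
endtask

task goto_shift_ir;
  begin
    repeat (5) tms_cycle(1'b1); // five 1s reach Test-Logic-Reset from any state
    tms_cycle(1'b0);            // -> Run-Test/Idle
    tms_cycle(1'b1);            // -> Select-DR-Scan
    tms_cycle(1'b1);            // -> Select-IR-Scan
    tms_cycle(1'b0);            // -> Capture-IR
    tms_cycle(1'b0);            // -> Shift-IR; shift instruction bits with TMS=0,
                                //    set TMS=1 on the last bit to move to Exit1-IR
  end
endtask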

9. How did you do JTAG insertion?

Ans: Some CAD tools support integrating the JTAG block directly into the netlist. The inputs required are the list of ports, custom register information, and the instruction and data register definitions.

10. How Bc1 cell looks

11. How Bc7 cell looks


12. What are the mandatory instructions of JTAG? What are the common public instructions?

Ans: There are four mandatory instructions (BYPASS, SAMPLE, PRELOAD, and EXTEST) and several optional public instructions including INTEST, RUNBIST, CLAMP, IDCODE, USERCODE, and HIGHZ. Public instructions are documented by the chip manufacturer and available for general use; private instructions are not.

13. What is a BSDL file?

Ans: BSDL (Boundary Scan Description Language) provides information about how a boundary-scan IC is implemented, which can be used by ATPG software or system integrators to develop the tests for the chip. Descriptions of the mandatory logic, such as the TAP and the BYPASS register, do not have to be provided; these are already defined in a standard way. The designer only has to describe the design-specific attributes, such as the length of the boundary-scan register, the user-defined boundary-scan instructions, the decoder for those instructions, and the I/O pin assignment.

14. What is the IEEE 1149.6 standard?

Ans: It is an extension of the 1149.1 standard used for testing advanced I/O networks such as high-speed, AC-coupled and differential interconnects.

16.STA Related:

1. What is STA?

Ans: Static Timing Analysis (STA) is a method of computing the expected timing of a digital circuit without requiring simulation, by checking all possible paths for timing violations.

In STA, static delays such as gate delays and net delays are considered for each path, and these delays are compared against their required maximum and minimum values.

2. What is the difference between Dynamic Timing analysis and STA?

Ans: Dynamic timing analysis verifies the functionality of the design by applying input vectors and checking for correct output vectors, whereas static timing analysis checks the static delay requirements of the circuit without any input or output vectors.

The quality of dynamic timing analysis (DTA) increases with the number of input test vectors, but more test vectors also increase simulation time. Dynamic timing analysis can be used for synchronous as well as asynchronous designs. STA cannot be run on asynchronous designs, so DTA is the best way to analyze them; DTA is also well suited to designs with clocks crossing multiple domains.

Sai: STA checks all timing paths, not just the logical conditions that are sensitized by a particular set of test vectors; based on the timing reports, for each violating path we then need to validate whether it is a real logical path or a false path.
3. How many minimum modes I should qualify STA for a chip

Ans: 1. Scan Shift mode


2. Scan Capture mode
3. MBIST mode
4. Functional modes for Each Interface
5. Boundary scan mode
6. scan-compression mode
4. What do you mean by timing arcs?
Ans: Timing arcs describe the timing relationships between pins of a cell or net: delay arcs (combinational input-to-output delay, clock-to-Q of a flop, net delays) and timing-check arcs (setup, hold, recovery, removal, and clock-gating checks).
5. What are the various timing-paths?
Ans:Input to Register, Register to Register, Register to Output and Input to Output.

6. How do you perform checks for asynchronous circuits?


Ans: Recovery and removal checks are the two checks performed for asynchronous control signals. Recovery time is the minimum length of time an asynchronous control signal must be stable before the next active clock edge; the recovery slack calculation is similar to the setup slack calculation, but it applies to asynchronous control signals.

Removal time is the minimum length of time an asynchronous control signal must be stable after the active clock edge; the removal slack calculation is similar to the hold slack calculation, but it applies to asynchronous control signals.

7. Explain command report_timing.

Ans:Report_timing reports design timing information for each path group (or clock group) and offers
several switches to segregate the timing results based on max delay, min delay, recovery, removal etc.
The level of detail that can be viewed in the reports can also be customized. Simple syntax for this
construct is

# To report timing from one clock group to another (max_delay, setup)


report_timing -from [get_clocks clk1] -to [get_clocks clk2] -delay max

Sai: other useful options are -nworst (number of worst paths to be reported per endpoint), -trans, -cap, -slack_lesser_than, and -max_paths (maximum number of paths to be reported among all path groups).

8.What information can be obtained from primetime reports?

Ans: PrimeTime can report unconstrained clocks in the design, combinational timing loops, inputs or outputs that are not constrained, multiple clocks clocking the same flop, or flops that do not have a clock defined on them. This helps the designer identify incorrect or undefined constraints earlier in the cycle.

Sai: it can also produce bottleneck reports and recovery/removal timing checks.


9. How does the tool calculate the maximum frequency?

Ans: The tool lists the paths for each clock domain selected by the user. The maximum running frequency of each clock domain is calculated from the maximum register-to-register delay of that domain: the tool picks the longest register-to-register path, adds the setup time requirement of the destination register, and treats the result as the minimum clock period (i.e., the maximum clock frequency).

The user can also constrain the clock frequency. Based on the user's clock-period requirement, the tool calculates the maximum allowed register-to-register path delay as:

max reg-to-reg path delay = clock period requirement - setup time requirement +/- clock skew

where the skew term is added when the capture clock arrives later than the launch clock (useful skew) and subtracted in the conservative case where it arrives earlier (as Sai notes).
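A small worked example with illustrative numbers (not from any particular design): assume clk-to-q = 0.2 ns, worst register-to-register combinational delay = 3.0 ns, setup = 0.3 ns, and 0.1 ns of skew/uncertainty counted against setup. Then:

minimum clock period = 0.2 + 3.0 + 0.3 + 0.1 = 3.6 ns
maximum frequency    = 1 / 3.6 ns = approx. 278 MHz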

17.QDRC:

1. What is QDRC? Why to use QDRC?

Ans: QDRC stands for Qualcomm Design Database and Rule Checker. QDRC has rule checkers that can verify DFT-related design rules; it checks a variety of clock, memory, and scan rules.
A configuration file is required to run QDRC: all the design netlist(s) and rule-checking variables have to be provided in the configuration file by the user. QDRC is mainly used to run DFT-related design rule checks on a specified design. The tool is easy to set up and run, and results are obtained early so they can be addressed at the gate level or back at the RTL level.
2. What could be the reason if the build model does not complete without errors, even though the .cfg file is proper?

Ans: There is a limitation on the maximum number of netlist files ET can read. If the .cfg file lists a very large number of netlists, combine two or more files into one and then source the configuration file again.

3. What are the checks done by QDRC clock rules checker?

Ans: The QDRC clock rules check the pin-to-pin connectivity of the CGC cell ports: test_en, seq_en, clk_rst, clk_nset.

Clock Cell Rules


1. Checks that the test_en pin of the CGC cells is not tied high or low.
2. Checks that the seq_en or seq_clk_en pin of the clock cell isn't tied low.
3. Checks whether the seq_en or seq_clk_en port of the clock cell is tied high; if not tied high, checks that the test_en pin on the same clock cell is controlled by the top-level pin tap_test_mode_tdr.
4. For CGC cells that don't have a seq_en port, checks whether test_en is controlled by tcr_cgc_atpg_ctrl.
5. Checks that Clock_Cell:clk_rst is gated off to its inactive value when tap_test_mode_tdr is set to 1.
6. Checks that Clock_Cell:clk_nset is gated off to its inactive value when tap_test_mode_tdr is set to 1.
4. What are the checks done by QDRC ram rule checker?

Ans: The QDRC RAM rule checker checks the connectivity of the RAM cell ports: shift_en, scan_en, acc, clk_sel, slp_ret_n, slp_nret_n.

RAM Rules

1. If a RAM cell has an slp_n or slpb pin, checks that it is neither tied high nor tied low.
2. Checks that the scan_n pin on a RAM cell comes from the inverse of the tap_ram_test_ctrl pin from the top level.
3. Checks that the scan_n pin on a RAM cell comes from the inverse of tap_ram_test_ctrl or the inverse of tap_test_mode_tdr from the top level.
4. Checks that the shift_n pin on a RAM cell comes from the inverse of tap_atpg_shift from the top level.
5. Checks that the clk_sel pin on a RAM cell is gated off to 0 with tap_test_mode_tdr.

5. What is domain analysis?

Ans:This analysis is used to validate the frequency enable pin connection for proper at-speed capture. It
checks if every flip-flop in the design belongs to one and exactly one frequency domain.

6. How many domains are there in your design?

There are a total of 123 frequency domains for transition-delay ATPG.

7. How will you merge the Domains? what factors do you consider to merge any two or three
domains?
8. What scan chain rule (lock-up latch rule) checks are supported by QDRC?

Ans: 1. A positive-edge-triggered scan flip-flop is followed by a negative-edge-triggered flip-flop.

2. There should be exactly one lock-up latch between two flops deriving their clocks from two different sources.

3. A redundant lock-up latch has been inserted between two flip-flops that are driven by the same clock source.

4. A lock-up latch has been inserted between two flip-flops of opposite clock polarity.

9. What is meant by Flops in No Domain, flops in Multiple Domain and Mixed Domain?

Ans: Domain analysis applies the test-mode constraints one set at a time and then checks which flops see clock transitions under that particular set of constraints. The flops that receive a transition are put into a domain, D1. This process continues until all sets of test constraints have been applied, and the flops that receive clocks are placed into the corresponding domains.
Flops which do not receive clock pulses after all possible test-mode constraints have been applied are called flops in No Domain.

Flops which observe transitions for more than one set of test constraints are referred to as flops in Multiple Domains.

Flops which observe clock transitions only when two or more sets of constraints are applied together are referred to as flops in Mixed Domains.

10. What is the Disable Timing Arc?

Disable Timing Arc (DTA) uses an orthogonal method to compute the total flops in each domain. When a clock is defined in the Encounter Test setup with an ES clock flag, Encounter Test automatically propagates this clock to every possible node on the clock path unless the clock is blocked by a controlling value.

In DTA the clock will propagate through a gate if the off-path inputs are X; the clock is blocked only when the off-path inputs are constrained to a controlling value.

11. What is the difference between Domain Analysis (DA) and Disable Timing Arc (DTA)?

Domain analysis performs actual logic simulation to propagate the clock and missing constraints will
block the clock. Whereas DTA uses ET’s mechanism to propagate the clock flag and the clock flag
will propagate to the destination flop if the off path inputs of gates on the clock paths are at non-
controlling values or X’s.

18.Spyglass:

1. What is spyglass DFT? How spyglass will help?

Ans: SpyGlass DFT performs DFT checks on RTL, so we can modify the RTL to eliminate DFT violations prior to synthesis. This reduces the number of DFT violations found at gate level, since most are found and fixed in RTL, and therefore reduces the number of iterations in the design cycle.

2. What is spyglass DFT-DSM?

Ans: SpyGlass DFT-DSM performs at-speed violation checks on RTL. It identifies design issues that may prevent the design from achieving high transition coverage.

3. What are the critical async violations in spyglass DFT?

Ans:Async_07 : Reports Async reset/set sources that are not disabled for shift mode.

Async_02_capture : The Async_02_capture rule reports violation for those flip-flops whose set/reset
pins are driven by sequential elements, that is, flip-flop, latch, or blackbox, in the capture mode.
Async_02_shift : The Async_02_shift rule reports violation for the flip-flops whose set/reset pins are
driven by sequential elements, such as, flip-flop, latch, or blackbox, in the shift mode.

Async_06 : Flip-flops where both set and reset lines are simultaneously active

Async_08 : The Async_08 rule reports violation for testmode signals that only control asynchronous
set/reset pins of flip-flops and these set or reset pins are not tied off and are also not testable in the
capture mode.

4. What is no_scan constraint?

Ans:The no_scan constraint is used to declare flip-flops as being non-scannable.

5. How spyglass will helpful to increase coverage?

Ans: SpyGlass checks the controllability and observability of the whole design. By improving the controllability of the design, nodes that were previously uncontrollable become testable, so TetraMax is able to detect faults on those nodes and coverage increases.

6. List the inputs files needed to run Spyglass.


Ans: 1. .sgdc 2. Filter.wr 3. Filelist 4. Makefile

7. Enlist the steps to run spyglass with version 4.6.0


Ans: There are 4 steps to run spyglass (pldrc flow)
1. gnumake filelist
2. gnumake analyze
3. gnumake sg_run
4. gnumake sg_view

8. What do you mean by a blackbox in SpyGlass terminology? Will the SpyGlass error count be impacted by an increased number of blackboxes?
Ans: Like other tools, SpyGlass treats an undefined module as a black box, with severity error. A module for which only the interface (port boundary) definition is available is declared a black box with severity warning.
Yes, the black-box count impacts the error count: for a module treated as a black box, SpyGlass reports errors for all the nodes connected to that module.

9. Can we do the modification in spyglass?


Ans: yes, we can do the modification in spyglass. We can also change the constraints.

10. What is the reason to do At-speed testing?

Ans: The test clocks in traditional stuck-at testing are designed to run on the test equipment at
frequencies lower than the system speed. At-speed testing requires test clocks to be generated at the
system speed, and therefore are often shared with functional clocks from a phase locked loop (PLL)
clock source. This additional test clocking circuitry affects functional clock skew, and thus the timing
closure of the design.
At-speed tests often result in lower than required fault coverage even with full-scan and high (>99%)
stuck-at coverage. Identifying reasons for low at-speed coverage at ATPG stage is too late to make
changes to the design and affects schedules significantly.

11. Enlist the features of DFT-Analysis.

Ans:

- Pinpoints and diagnoses DFT issues at RTL or gate level
- Predicts ATPG test coverage with high correlation (within 1% of the final ATPG result)
- Pinpoints causes that block high at-speed coverage
- Ensures that RTL is scan-compliant
- Built-in controllability and observability engine analyzes testability strategies
- Guides selection of highest-value test points
- Unique AutoFix capability automatically corrects RTL to improve scannability
- Intuitive, integrated debug environment with cross-probing among views

12. Enlist the features and benefits of DFT-DSM analysis.


Ans:
Features
- At-speed test rules help resolve timing closure issues upfront at RTL
- Predicts at-speed test coverage early at RTL
- Pinpoints and diagnoses low-coverage issues at RTL
Benefits
- Enables RTL designers to fix design-for-test and timing closure issues upfront without being experts
- Addresses testability early in RTL without having to spend days later in the design cycle
- Achieves high at-speed test coverage (>90%) in golden RTL and maintains it throughout design implementation

19.Post Silicon Related :

1. What is post-silicon validation? In which part of the ASIC flow does it come? How can we achieve it?
Ans: Post-silicon validation is a validation step to make sure that the fabricated chip works correctly (in both test and functional modes) and meets all specifications (e.g., temperature, voltage, pressure, etc.).
In the ASIC flow, post-silicon validation is a part of DFT. We are able to do post-silicon validation because we insert test logic into the design.
2. What is wafer ? What is packaging?

Ans: A wafer is a thin, round slice of semiconductor material, typically silicon, from which microchips are made. Silicon is processed into large cylindrical ingots, sliced into ultra-thin wafers, and then implanted with transistors before being cut into smaller semiconductor chips. The wafer is cut into pieces called dies. The dies that respond with the right answers to the test patterns are put forward for the next step, called packaging. In packaging, the pins and other integration for the chip are added.
3. What are the different IC package types? Which one was used for your project?
Ans: 1. DIP (dual in-line package)
2. Ball grid array (BGA)
3. Pin grid array (PGA), etc.
For my project, a ball grid array was used.
4. Does your design include a pad ring? If yes, what does this module contain?
Ans: Yes. This module contains step-up/step-down voltage circuitry that interfaces with the I/Os, so that the chip can withstand higher or lower external voltages.
5. What is yield? How do we improve it?
Ans: The yield of a manufacturing process is defined as the percentage of acceptable parts among all parts that are fabricated:
Yield = number of acceptable parts / total number of parts fabricated.
We have to maintain DFT quality (good test coverage, simulating more patterns, targeting the maximum number of fault models, etc.) to improve the measured yield.
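An illustrative calculation (numbers are made up): if 9,200 out of 10,000 fabricated parts are acceptable,

Yield = 9,200 / 10,000 = 92%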
6. What is reject rate?
Ans: Reject rate = number of faulty parts passing the final test / total number of parts passing the final test.
It is measured in PPM (parts per million).
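An illustrative calculation (numbers are made up): if 50 faulty parts escape among 1,000,000 parts that pass the final test,

Reject rate = 50 / 1,000,000 = 50 PPM (i.e., 50 DPPM)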
7. What are possible reasons why the yield is low?
Ans: (1) Design is not robust (DFM rules not sufficient, too small guarding band etc)
(2) Manufacturing introduced defects (human/ machine / environment errors in all steps of chip making)
(3) Bad test patterns may throw away good chips.
(4) Bad testers (or incorrectly setup) may introduce false alarms
…...
As a DFT engineer (assuming you are working on a digital design), you need to consider both over-testing and under-testing. Under-testing may cause test escapes even if the yield appears good; over-testing may throw away good chips and cause low yield. To achieve the right balance, you need to:
(1) Insert scan chains and try to get as close to a full-scanned design as possible. (make the design easier
to run ATPG and diagnosis)
(2) Run ATPG, pay attention to DRC warnings, false / multi-cycle paths to avoid creating bad patterns.
(3) Preferably use a power-analysis tool to confirm that the power consumption of the ATPG patterns is within the power limit; over-testing with excessive power droop can be a cause of low yield.
(4) Validate the patterns with timing aware simulation tool.
(5) Run patterns on tester and collect all failure log files.
(6) Run software diagnosis tool with failure logs.
(7) Use software yield analysis tool to do some statistical learning and data mining to find the systematic
failure spots.
(8) Pick a few critical chips that may reveal the root cause of low yield and give them to PFA engineers.

8. How many pins does your design have? Can you explain the purpose of each?
Ans: 84+
scan_in = 30, scan_out = 30, test clocks = 15, shift_pin = 1, JTAG pins = 5, PLL pins = 2+, power pins = 2, ….
9. What are the different phases in post-silicon debugging?
Ans:P0,P1, P2,P3
10. Discuss some silicon debug issues?
Ans: A common problem is a shift in the strobe point, which can be adjusted after comparing the values on a logic analyzer. Another is device-reset issues caused by low-voltage detectors (LVDs) and regulators, which can be checked against the upper and lower threshold values of the LVDs.

11. Before shipping the chip to the customers, what are all the things need to test/check ?

Ans: As part of DFT we need to test the patterns for all the targeted faults, at the targeted temperature and frequency; the DC characteristics of the DUT in all modes at the specified temperatures and frequencies; all analog modules such as LVDs, ADCs and PLLs; and all corner samples with different process corners (doping concentrations) such as fast-fast, slow-slow, fast-slow, etc.

Vector conversion:
1. How do you convert the stil patterns to tester format?
Ans: STIL patterns can be converted to the tester format using scripts, or with in-house tools that take the STIL patterns and a test sheet defining the different operating conditions. Some specific rules, such as the test name and pattern formats (e.g., header, socket file), have to be followed.

Tester format:
1. In which format will the tester (Verigy) accept the patterns?
Ans: Binary format.

Vector timing :
1. what is vector timing?

Tester memory:
1. what is tester memory?How it matters while testing a chip?



Ans: Automatic test equipment (ATE) has limited memory. The test-data bandwidth between the tester and the chip is relatively low and is generally the bottleneck for how fast a chip can be tested. The chip cannot be tested any faster than the time required to transfer the test data, which is equal to: amount of test data on the tester / (number of tester channels x tester clock rate). For this reason the data stored on the tester (both stimulus and expected response) is generally compressed.
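An illustrative calculation using the formula above (numbers are made up): with 1 Gbit of stored test data, 100 tester channels and a 50 MHz tester clock,

minimum test time = 10^9 bits / (100 channels x 50x10^6 bits/s) = 0.2 s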

Test time:
1. what is test time?How it relates to test cost?what are facts that effect the test time?
Ans: Test time is the amount of time a chip requires to run all of its test patterns. Tester time is expensive, on the order of 3 cents per second, so test time translates directly into test cost. Test time is driven mainly by the number of patterns needed to test the chip; memory patterns and flash programming generally take a lot of the time.
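An illustrative calculation using the ~3 cents/second figure above (the test time is made up): if a device needs 2 seconds of tester time,

test cost per device = approx. 2 s x $0.03/s = $0.06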

Tester – Verigy, Inovys :


1. What is the difference between verigy and Inovys testers?
2. What is the differece between testing Wafer and testing a chip?
3. What are the different stages while testing a chip and wafer?
4. How do you test the stuck-at, TDF, PDF, IDDQ, and bridging patterns on the tester? How many patterns do you check?
5. Will the tester gives the list of failing flops?

Ans: The tester won’t directly give the failing flops; it gives an error log which contains the cycle number, chain name, and the ‘got’ value for each failing position.

6. Explain the tester channels?


7. What is a shmoo plot?
Ans: It is a voltage-vs-frequency plot that shows the failing and passing regions.
8. What are the steps to follow to work on the Inovys tester?
Ans:
1. Insert the chip in the daughter card.
2. Load the test program that has all the pin information (it is specific to a project).
3. Import the STIL pattern file; the tool converts it into .dat format.
4. Open the pattern manager, compile the pattern and load it.
5. Open the program flow and test the pattern; it will show pass or fail.
6. Open the shmoo tool to view the shmoo plot (voltage vs. frequency).
7. Open the logic analyzer to view the waveforms of the required pins.

9. What software are you using for testing on Inovys? What does it do?
Ans: Stylus software. The software interacts with the tester and reports the passing or failing status.
10. Explain the equipment used for testing on Inovys?
Ans: A PC (with Stylus installed), the Inovys tester, and a daughter card.
11. What are the reasons for the patterns to fail on tester even though VT simulations are passed?

Masking:
1. How will you do masking? After masking a flop, how do its characteristics change?
Ans: Using the tester's output error log file, we prepare a diagnosis file with the required information (cycle number, chain name and ‘got’ value). Using this file, TetraMax generates the list of failing flops. The Add_cell_cons / add_capture_mask commands are then used to mask those flops.

2. What will be your approach if there is a failure on silicon?


Ans: 1. Find the reason for the failure.
2. Take the error log from the tester and mask the failing flops.
3. Generate patterns with the masked flops.
4. Test the new patterns on the tester.
3. What is masking? Why do we do masking? How does it affect the design?
Ans: Making a flop not controllable and/or not observable for ATPG is called masking. If a flop is failing on the tester, then after considering the reasons for the failure we mask it. Masking a flop reduces the coverage numbers.
4. What could be the reason if there is a failure happening on silicon on says every 100th capture?
