DEVELOPMENT OF TEST PATTERNS
The most difficult task is to create a set of test vectors that exercises the chip as completely as possible, without consuming excessive amounts of test time. Exhaustive testing is a solution only for extremely simple chips.
Two basic approaches:
1. Testing the function (output as a function of input)
2. Testing the structure (components of a schematic)
Hybrid approach is the best!
Test vector generation can be greatly eased, and the generation process greatly speeded up, if the vectors can be expressed in an algorithmic form. Expansion of the algorithm into bit patterns can then be automated.
Especially easy for memory chips.
Especially difficult for microprocessor chips.
Once the input patterns have been generated, the next task is to compute the expected responses. An algorithmic approach is again the best. Using the output from a behavioral simulator is good for functional testing; using technology-related data is good for structural testing.
Note the difference between:
1. Design engineer
2. Test engineer
Their attitudes toward testing differ. The first will do more of the functional testing; the second will do more of the structural testing.
The test vectors generated by the design engineer are not always adequate for the test engineer. Typically, the latter uses a larger volume of test vectors, and performs a larger variety of tests.
What follows is oriented toward after-the-fab testing (test engineer). Only a subset of what follows is reasonable to use for during-the-design testing (design engineer).
FAULT COVERAGE
Regardless of how test vectors are developed, it is important to have some measure of the quality of the generated test vectors.
Fault coverage is often defined as the percentage of all possible faults that the test vectors will detect. Faults can be defined on the behavioral level, or on the structural level. Once the appropriate level is chosen, a model to define a fault must be selected.
On the behavioral level, the model of faults helps determine which of the output-versus-input test vectors are redundant (this decision is also architecture-dependent).
On the structural level, most workers use the SSA model (Single Stuck At):
Fault-free structure is one in which all logic gates work properly, and all interconnections assume either logic 1 or logic 0, as necessary. Further, it is assumed that all faults (whether arising from flaws on interconnections, or within the gates) manifest themselves as if the interconnection were permanently held at either 1 or 0.
Figure 5.19. Examples of some cases that can be modeled after the SSA (single stuck at) methodology:
a) Input shorted to the voltage supply can be modeled as SA-1.
b) Open input can usually be modeled as SA-1 (TTL logic, for example).
c) Input shorted to the ground can be modeled as SA-0.
d) Output stuck at high logic level can be modeled as SA-1.
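The SSA model can be made concrete with a small fault-simulation sketch. The circuit below (Z = (A AND B) OR C) and its node names are illustrative, not taken from Figure 5.19: a fault is a (node, stuck_value) pair, and a test vector detects it if the faulty and fault-free outputs differ.

```python
# Minimal sketch of the SSA (single stuck-at) fault model on a
# hypothetical 2-gate circuit: Z = (A AND B) OR C.

def circuit(a, b, c, fault=None):
    """Evaluate the circuit; `fault` is (node_name, stuck_value) or None."""
    def node(name, value):
        # Replace the node's value with the stuck value if faulted.
        if fault and fault[0] == name:
            return fault[1]
        return value
    n_and = node("AND_out", node("A", a) & node("B", b))
    return node("Z", n_and | node("C", c))

def detects(vector, fault):
    """A vector detects a fault iff faulty and fault-free outputs differ."""
    a, b, c = vector
    return circuit(a, b, c) != circuit(a, b, c, fault)

# Vector A=1, B=1, C=0 detects "AND output stuck at 0".
print(detects((1, 1, 0), ("AND_out", 0)))   # True
```

Note that a fault is detectable only if some input combination propagates the difference to the output; this is exactly the path-sensitization problem discussed below.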
DRAWBACKS
PATH SENSITIZATION:
How to determine the test vector which tests for a particular SSA fault?
COMBINATIONAL CIRCUITS
EXAMPLE
In order to test that connection C4 is not SA-1, the test vector is ABC = 010. The output is 0 (fault-free) or 1 (fault). The same vector also tests C13 for SA-0, as a side-test.
Figure 5.20.
An example of test vector generation after the SSA (single stuck at) methodology: input test vector ABC = 010 tests for the SA-1 condition at C4. If the output shows logic one, this means that C4 contains an error of the type SA-1.
Figure 5.21.
An example of a circuit that cannot be tested with the SSA methodology: there is no way to generate an input test vector to detect an SA-0 type of error at the node in question.
DRAWBACKS
NOTE
Complete automatic generation of test vectors for combinational circuits is possible.
SEQUENTIAL CIRCUITS
Sequential circuits interrupt the sensitization path. The path must be created up to the input of a flip-flop, and then resumed once all the flip-flops in the system are clocked. Major problems with sequential circuits are:
NOTE
Complete automatic generation of test vectors is not possible.
GENERAL PROBLEMS OF ALL MODELS
How to treat Byzantine faults?
AUTOMATIC TEST VECTOR GENERATION
RANDOM
The approach is based on the fact that a typical fault coverage curve has the following shape:
Figure 5.22.
Generation of random test vectors: typical relation of the error coverage factor (K) to the number of generated test vectors (N). For small N, insignificant increases in N result in significant increases in K. This phenomenon enables the designer to achieve relatively good test results from a relatively small number of randomly generated test vectors.
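The shape of this curve can be reproduced with a small experiment. The circuit and fault list below are illustrative (a 4-input AND-OR block with single stuck-at faults on its inputs and output, not any circuit from the text): random vectors are applied and the fraction of faults detected so far is recorded.

```python
import random

# Sketch of how fault coverage K grows with the number of random
# test vectors N, on a hypothetical 4-input circuit Z = (v0&v1)|(v2&v3).

def out(bits, fault=None):
    vals = list(bits)
    if fault and fault[0] < 4:
        vals[fault[0]] = fault[1]          # stuck-at on input i
    z = (vals[0] & vals[1]) | (vals[2] & vals[3])
    if fault and fault[0] == 4:
        z = fault[1]                       # stuck-at on the output
    return z

FAULTS = [(node, v) for node in range(5) for v in (0, 1)]

def coverage_curve(num_vectors, seed=0):
    random.seed(seed)
    detected, curve = set(), []
    for _ in range(num_vectors):
        vec = tuple(random.randint(0, 1) for _ in range(4))
        for f in FAULTS:
            if out(vec) != out(vec, f):    # faulty output differs -> detected
                detected.add(f)
        curve.append(len(detected) / len(FAULTS))
    return curve

curve = coverage_curve(32)
print(curve[0], curve[7], curve[-1])   # coverage rises quickly, then flattens
```

The first few vectors detect many faults at once; later vectors add little, which is the diminishing-returns behavior the curve in Figure 5.22 shows.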
DRAWBACKS
INTELLIGENT
Either human or some AI software can do the following:
For example, HITEST by Cirrus is based on two HLL languages:
CCL (Cirrus Circuit Language) for hardware description and CWL (Cirrus Waveform Language) for test development control. Heuristic search is based on the architecture (CCL) and the hints from the user (CWL).
BIST: Built-in Self-test
or design for testability
Problem: How to test an arbitrary logic which consists of both sequential and combinational circuitry.
If test vectors are applied only to the inputs, the number of necessary test vectors may be 2^N, for a certain level of testing (e.g., exhaustive test).
If there is a way to inject test vectors into the internals of the scheme, the number of necessary test vectors, for the same level of testing, may be:
2^K1 + 2^K2 + … + 2^Kl, where K1 + K2 + … + Kl = N.
For large N (VLSI), this sum is much smaller than 2^N. Obviously, the injection-oriented approach is promising.
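The arithmetic behind this claim is easy to check. The numbers below are illustrative (a 64-input chip partitioned into four 16-input blocks), not taken from the text:

```python
# Illustrative comparison: exhaustive testing of N inputs vs. injecting
# vectors into l internal blocks of K1..Kl inputs (K1 + ... + Kl = N).
N = 64
Ks = [16, 16, 16, 16]                  # four blocks of 16 inputs each
exhaustive = 2 ** N                    # 2^N vectors at the chip pins
partitioned = sum(2 ** k for k in Ks)  # 2^K1 + 2^K2 + ... + 2^Kl
print(exhaustive)    # 18446744073709551616
print(partitioned)   # 262144
```

Four blocks of 2^16 vectors each come to about 2.6 × 10^5 vectors, versus roughly 1.8 × 10^19 for the exhaustive test, a reduction of fourteen orders of magnitude.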
Essence: Injection of test vectors is enabled by SCAN PATH techniques. They make the test pattern problem much easier, and most BIST structures are based on SCAN PATH techniques.
General structure of SP-oriented schemes:
Figure 5.23.
General structure of the circuits designed for the SP (scan path) testing methodology. The inputs and outputs are marked (X1, …, Xn) and (Z1, …, Zm), respectively. The nodes marked with * are normally open-circuited; they form connections only during test vector entry and exit.
Conversion from D-FFs to latches and other types of FFs is relatively easy.
ESSENCE OF SP-ORIENTED SCHEMES:
1. In test mode,
All system bistable elements are converted into a single shift register called SCAN PATH, and an arbitrary test pattern is shifted into the bistable elements.
2. In normal operation mode,
Contents of bistable elements act as input signals for combinational circuitry, and after one clock period (!) new contents are stored into the bistable elements.
3. Back in test mode,
Contents of bistable elements are shifted out,
And compared against pre-prepared correct response.
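The three steps above can be sketched behaviorally. The flip-flop count and the combinational function below are assumptions for illustration, not taken from any figure:

```python
# Behavioral sketch of one scan-path test cycle on a hypothetical design
# with 3 flip-flops. The combinational block (rotate left, invert bit 0)
# is illustrative.

def comb(state):
    # Hypothetical combinational logic between the flip-flops.
    rotated = state[1:] + state[:1]
    return [1 - rotated[0]] + rotated[1:]

def scan_cycle(test_pattern):
    # 1. Test mode: shift the pattern into the scan path (the FFs).
    ffs = list(test_pattern)
    # 2. Normal mode: one clock -- FFs capture the combinational outputs.
    ffs = comb(ffs)
    # 3. Test mode again: shift the response out for comparison.
    return ffs

response = scan_cycle([0, 1, 0])
expected = [0, 0, 0]   # pre-computed correct response for this pattern
print(response == expected)   # True
```

The point of the scheme is visible in the sketch: the combinational block is tested as if its inputs and outputs were directly accessible, even though they are buried behind flip-flops.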
STANFORD SCAN-PATH DESIGN-FOR-TESTABILITY
Terminology: MDFF - Multiplexed data flip-flop. A mux is placed at the data input, to permit selection of two different data inputs (test and normal operation).
T=1 - Test operation (a shift-register is formed)
T=0 - Normal operation (DFFs re-connected to clock)
a)
b)
Figure 5.24.
The structure of the diagrams derived from the Stanford SP (Stanford Scan Path) testing methodology. The inputs and outputs are marked (X1, …, Xn) and (Z1, …, Zm), respectively. The test vectors are entered through the test input, and the output test vectors leave through the R output. The symbol MDFF refers to the multiplexed data flip-flop; the normal data input is active when the control input T is 0 (normal mode), and the test data input is active when T is 1 (test mode). Symbol CK refers to the clock signal. The signals yi (i = 1, …, s) refer to the flip-flop contents that carry the input and output test vectors.
Test procedure:
Allow sufficient time for the combinational logic to settle, and check the values on the Z outputs.
6. Shift out the flip-flop contents yi via Zm, and test them against the expected correct values. The next test pattern can be shifted in at the same time.
Note: Flip-flops have to be tested, too.
Modification #1: 2PFF Design (Two-port flip-flop)
Two control inputs (C1 + C2)
For two data inputs (D1 + D2).
Figure 5.25.
The structure of the diagrams corresponding to the 2PFF-SP (two-port flip-flop scan path) testing methodology. The 2PFF flip-flops have two control inputs each (C1 and C2), for two different data inputs (D1 and D2).
Modification #2:
Latch-based designs (typical of CPU designs)
Extra latches are used to allow system latches
to be connected into a shift register.
Figure 5.26.
The structure of the diagrams corresponding to the LSSD testing methodology. The latches L1-i (i = 1, …) are system latches, and they are a part of the diagram that is being tested. The latches L2-i (i = 1, …) are added to enable the connection of the system latches into a single shift register. The SDI (scanned data in) and SDO (scanned data out) signals serve the purpose of entering and retrieving the test vectors, respectively. Symbols CK and TCK refer to the system clock (used in the normal mode), and the test clock (used in the test mode), respectively.
Each system latch is replaced by one 2P latch (L1-i) and one 1P latch (L2-i). Everything works very much like a 2PFF system.
SDI: Scanned in test data
SDO: Scanned out test data
TCK: Test clock
CK: Clock
2. UNIVAC’s SCAN-SET
Separate test-data shift registers are used, which avoids the necessity to configure system latches into flip-flops.
A combination of muxing and demuxing is used to set and scan out the system latches, which avoids the use of shift registers entirely.
ROM
Basic methods:
Skew checksum:
1. sum = 0 (* Skew Checksums *)
2. address = 0
3. rotate_left_1(sum)
4. sum = sum + rom(address)
5. address = address + 1
6. if address < rom_length then goto 3
7. end
Figure 5.27. An algorithm for ROM memory testing.
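The algorithm of Figure 5.27 translates directly into code. The word width is an assumption (the text does not fix it); 8 bits is used below:

```python
# Skew checksum of a ROM image, following the algorithm above.
# The rotate width (8 bits) is an assumption for illustration.

def rotate_left_1(value, width=8):
    mask = (1 << width) - 1
    return ((value << 1) | (value >> (width - 1))) & mask

def skew_checksum(rom, width=8):
    mask = (1 << width) - 1
    checksum = 0                                   # step 1: sum = 0
    for word in rom:                               # steps 2, 5, 6: loop over addresses
        checksum = rotate_left_1(checksum, width)  # step 3: rotate_left_1(sum)
        checksum = (checksum + word) & mask        # step 4: sum = sum + rom(address)
    return checksum

rom = [0x12, 0x34, 0x56, 0x78]
print(hex(skew_checksum(rom)))   # 0x84
```

The rotation before each addition is what distinguishes a skew checksum from a plain sum: it makes the result sensitive to the order of the words, so two swapped ROM locations are detected even though a simple sum would not change.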
RAM
Basic types of failures:
Portions of memory become inaccessible.
Data gets written into more than one location.
Reading is history sensitive (e.g. 0 after 1010101 … is read properly;
0 after 1111111 … is not read properly).
Failure to retain data for the specified refresh time
Erroneous data storage only for certain patterns,
due to interactions between physically adjacent memory cells.
Complexity of testing
An algorithm has an O(k) complexity if the time needed for that algorithm is proportional to N^k, where N is the number of memory locations to be tested. Many of the good classic memory test algorithms are N^2, i.e., O(2).
Simple patterns algorithms
Marching 1s 11111 … 5N O(1)
The memory is first filled with 0s; each location is then read and a 1 written in its place; a second pass reads each 1, and 0 is written in its place.
Marching 0s 00000 … 5N O(1)
Same procedure on reverse data.
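A behavioral sketch of the 5N marching test, assuming the classic procedure (fill with 0s; march up reading 0 and writing 1; march again reading 1 and writing 0). The simulated RAM and the stuck-cell fault model are illustrative:

```python
# Marching test on a simulated N-word RAM (1 bit per location for
# simplicity). Total operations: N + 2N + 2N = 5N, i.e., O(1).

def marching_test(ram):
    n = len(ram)
    for a in range(n):          # N writes: background of 0s
        ram[a] = 0
    for a in range(n):          # marching 1s: read 0, write 1 (2N ops)
        if ram[a] != 0:
            return False
        ram[a] = 1
    for a in range(n):          # marching 0s: read 1, write 0 (2N ops)
        if ram[a] != 1:
            return False
        ram[a] = 0
    return True

print(marching_test([0] * 16))            # True for a fault-free RAM

class StuckCell(list):
    # Illustrative fault: cell 5 is stuck at 0, so writes to it are lost.
    def __setitem__(self, a, v):
        super().__setitem__(a, 0 if a == 5 else v)

print(marching_test(StuckCell([0] * 16)))  # False: the stuck cell is caught
```

The marching test catches stuck cells and simple addressing faults, but not pattern-sensitive interactions between cells, which is why the heavier tests below exist.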
Good for:
Weak for:
Checkerboards:
010101 …
101010 … 4N O(1)
Usage:
Drawbacks:
1. Physical layout of rows and columns rarely matches
their logical ordering. One must create a true physical checkerboard,
not a logical checkerboard.
2. Address descrambling circuits have to be built.
Walking patterns (walkpat) 2N^2 + 6N O(2)
A 1 is written into location 1, and all other locations are read to ensure that they still contain all 0s. Verify location 1. The 1 then walks to each subsequent location, and the procedure repeats.
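The walking pattern can be sketched on a simulated RAM (the model is illustrative; a real tester would also run the complementary walking-0s pass):

```python
# WALKPAT sketch: a single 1 walks through a background of 0s. For each
# of the N positions, all N-1 other locations are read, giving the
# ~2N^2 operation count, i.e., O(2) in the text's notation.

def walkpat(ram):
    n = len(ram)
    for a in range(n):
        ram[a] = 0                  # background of 0s
    for a in range(n):
        ram[a] = 1                  # place the walking 1
        for b in range(n):
            if b != a and ram[b] != 0:
                return False        # a background cell was disturbed
        if ram[a] != 1:
            return False            # the walking 1 itself was lost
        ram[a] = 0                  # restore background, walk on
    return True

print(walkpat([0] * 8))   # True for a fault-free RAM
```

Because every cell is checked against a disturbance from every other cell, walkpat catches coupling faults that the O(1) marching test misses, at quadratic cost.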
Good for:
Galloping patterns
Galpat I:
Galloping 1s and 0s
Same as WALKPAT, except that 1 is rechecked after each single 0 is read, to ensure that 1 remains undisturbed. Address lines undergo every possible transition.
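The difference from WALKPAT is a single extra read per step, sketched below on the same illustrative RAM model:

```python
# GALPAT I sketch: like WALKPAT, but the walking 1 is re-checked after
# each single 0 is read, so the address lines "gallop" between the base
# cell and every other cell, exercising every address transition.

def galpat(ram):
    n = len(ram)
    for a in range(n):
        ram[a] = 0                  # background of 0s
    for a in range(n):
        ram[a] = 1                  # place the galloping 1
        for b in range(n):
            if b == a:
                continue
            if ram[b] != 0:         # read a background cell
                return False
            if ram[a] != 1:         # re-check the 1 after each read
                return False
        ram[a] = 0
    return True

print(galpat([0] * 8))   # True for a fault-free RAM
```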
Galpat II:
Galloping Write Recovery Test 8N^2 - 4N O(2)
Procedure starts with an arbitrary memory contents,
and the original location is again checked for 1.
Surround disturb N^(3/2)
None of the above tests is practical for RAMs above 4K bits. For dynamic RAM, one can create a simpler test based on the assumption that dynamic memory cells are most susceptible to interference from their nearest neighbors, which means that the global sensitivity check can be eliminated. This is an example of how knowledge about architecture and technology can be used to decrease testing complexity.