DEVELOPMENT OF TEST PATTERNS

 

The most difficult task is to create a set of test vectors which exercises the chip as completely as possible, without consuming excessive amounts of test time. Exhaustive testing is a solution only for extremely simple chips.

 

Two basic approaches:

 

1. Testing the function (output as a function of input)

2. Testing the structure (components of a schematic)

 

Hybrid approach is the best!

 

Test vector generation can be greatly eased, and the generation process greatly sped up, if the vectors can be expressed in an algorithmic form. Expansion of the algorithm into the bit patterns can be automated.

 

Especially easy for memory chips.

Especially difficult for microprocessor chips.

 

Once the input patterns have been generated, the next task is to compute the expected responses. An algorithmic approach is again the best. Using the output from a behavioral simulator is good for functional testing. Using the technology-related data is good for structural testing.

 

Note the difference between:

1. Design engineer

2. Test engineer

 

Their attitudes to testing are different. The first one will do more of the functional testing. The second one will do more of the structural testing.

 

The test vectors generated by the design engineer are not always adequate for the test engineer. Typically, the second one uses a large volume of test vectors, and does a larger variety of tests.

 

What follows is oriented towards after-the-fab testing (test engineer). Only a subset of what follows is reasonable to use for during-the-design testing (design engineer).

 

FAULT COVERAGE

 

 

Regardless of how test vectors are developed, it is important to have some measure of the quality of the generated test vectors.

 

Fault coverage is often defined as the percentage of all possible faults that the test vectors will detect. Faults can be defined on the behavioral level, or on the structural level. Once the appropriate level is chosen, a model to define a fault must be selected.

 

On the behavioral level, the model of faults helps determine which of the output-versus-input test vectors are redundant (this decision is also architecture-dependent).

 

On the structural level, most workers use the SSA model (Single Stuck At):

Fault-free structure is one in which all logic gates work properly, and all interconnections assume either logic 1 or logic 0, as necessary. Further, it is assumed that all faults (whether arising from flaws on interconnections, or within the gates) manifest themselves as if the interconnection were permanently held at either 1 or 0.

 

Figure 5.19. Examples of some cases that can be modeled after the SSA (single stuck at) methodology:

 

a) Input shorted to the voltage supply can be modeled as SA-1.

b) Open input can usually be modeled as SA-1 (TTL logic, for example).

c) Input shorted to the ground can be modeled as SA-0.

d) Output stuck at high logic level can be modeled as SA-1.

e) Output stuck at low logic level can be modeled as SA-0.

 

DRAWBACKS

 

  1. The SSA model ignores multiple faults. Fortunately, this is not an issue: the goal is not to find what has gone wrong, but to determine that something has gone wrong.
  2. Not all flaws appear as SA faults. The model is adequate for bipolar circuits, but not for CMOS circuits (a faulty CMOS gate may turn into a sequential device). Fortunately, although physical faults are not accurately represented, empirical evidence suggests that vectors derived from the SSA model result in a low level of undetected defects.

 

PATH SENSITIZATION:

 

How to determine the test vector which tests for a particular SSA fault?

 

COMBINATIONAL CIRCUITS

 

  1. Set input bits to a value that forces logic 0 on a connection, to check whether that connection is SA-1.
  2. Set input bits to propagate this logic 0 to the first testable point, e.g. an output.
  3. Repeat all of the above with logic 1.
  4. Repeat all of the above for all connections, except those already covered as a side-test.
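For a small combinational circuit, the sensitization search above can be replaced by brute force: simulate the circuit with and without the fault for every input vector, and keep any vector on which the outputs differ. The sketch below does this for a hypothetical three-gate netlist (the gates and net names D, E, OUT are illustrative, not the circuit of the figures), and also computes the SSA fault coverage of the exhaustive vector set:

```python
from itertools import product

# Brute-force SSA test generation for a small hypothetical netlist.
def simulate(inputs, fault=None):
    """Evaluate the netlist; `fault` = (net, value) forces one SSA fault."""
    nets = dict(inputs)
    if fault and fault[0] in nets:            # fault on a primary input
        nets[fault[0]] = fault[1]
    def put(name, value):                     # apply fault to internal nets
        nets[name] = fault[1] if fault and fault[0] == name else value
    put("D", nets["A"] & nets["B"])           # AND gate
    put("E", nets["B"] | nets["C"])           # OR gate
    put("OUT", nets["D"] ^ nets["E"])         # XOR gate driving the output
    return nets["OUT"]

def find_test(fault):
    """Return the first input vector whose output differs under the fault."""
    for a, b, c in product((0, 1), repeat=3):
        vec = {"A": a, "B": b, "C": c}
        if simulate(vec) != simulate(vec, fault):
            return (a, b, c)
    return None                               # fault is undetectable

# Fault coverage: fraction of all SA-0/SA-1 faults that some vector detects.
all_faults = [(net, v) for net in ("A", "B", "C", "D", "E", "OUT") for v in (0, 1)]
detected = [f for f in all_faults if find_test(f) is not None]
coverage = len(detected) / len(all_faults)
```

For this particular netlist every fault is detectable, so the coverage comes out as 1.0; a circuit with redundancy (as in Figure 5.21) would yield a coverage below 1.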

 

 

EXAMPLE

 

In order to test that connection C4 is not SA-1, the test vector is ABC = 010. At the output, 0 is generated (fault-free) or 1 (fault). The same vector also tests C13 for SA-0, as a side-test.

 

 

Figure 5.20. An example of test vector generation after the SSA (single stuck at) methodology: input test vector ABC = 010 tests for the SA-1 condition at C4. If the output shows logic one, this means that C4 contains an error of the type SA-1.

 

Figure 5.21. An example of a circuit that cannot be tested after the SSA methodology: there is no way to generate an input test vector to check for an SA-0 type of error at the node in question.

 

DRAWBACKS

 

  1. It is easy to create a circuit which has undetectable faults. For such circuits, the maximal fault coverage under the SSA model is smaller than 1.
  2. Undetectable faults are typically the result of the presence of some redundancy, which is often something that is desired, for one reason or another.

 

NOTE

 

Complete automatic generation of test vectors for combinational circuits is possible.

 

 

SEQUENTIAL CIRCUITS

 

Sequential circuits create an interruption in the sensitization path. The path must be created up to the input of a flip-flop, and then resumed once all the flip-flops in the system are clocked. The major problems with sequential circuits are:

 

  1. The output of a sequential circuit depends not only upon its inputs, but also upon its history. 
  2. Sequential circuits are prone to timing glitches that can fire a flip-flop.

 

 

NOTE

 

Complete automatic generation of test vectors is not possible.

 

GENERAL PROBLEMS OF ALL MODELS

 

How to treat Byzantine faults?

 

 

AUTOMATIC TEST VECTOR GENERATION

 

 

RANDOM

 

The approach is based on the fact that a typical fault-coverage curve has the following shape:

 

 

Figure 5.22. Generation of random test vectors: typical relation of the error coverage factor (K) to the number of generated test vectors (N). For small N, insignificant changes in N result in significant changes in K. This phenomenon enables the designer to achieve relatively good test results from a relatively small number of randomly generated test vectors.

  

DRAWBACKS

 

  1. Difficult to determine the actual fault coverage.
  2. For sequential circuits, the first few test vectors can place some of the sequential circuits into a "hung" state.

 

 

INTELLIGENT

 

Either a human or some AI software can do the following:

  1. Break down the system into smaller blocks, with interfaces as clean as possible.
  2. Follow the functional (architectural) idea of each block, as well as the characteristic (technological) features of the elements used.
  3. Use random or exhaustive testing on the lowest levels, as appropriate.

 

For example, HITEST by CIRRUS is based on two high-level languages:

CLL (Cirrus Circuits Language) for hardware description and CWL (Cirrus Waveform Language) for test development control. Heuristic search is based on the architecture (CLL) and the hints from the user (CWL).

 

 

BIST: Built-in Self-test

or design for testability

 

Problem: How to test an arbitrary logic which consists of both sequential and combinational circuitry.

If test vectors are applied only to the inputs, the number of necessary test vectors may be 2^N, for a certain level of testing (e.g., exhaustive test).

If there is a way to inject test vectors into the internals of the scheme, the number of necessary test vectors may be:

 2^K1 + 2^K2 + … + 2^KI,

for the same level of testing, assuming:

 K1 + K2 + … + KI = N,

for N large (VLSI). Obviously, the injection-oriented approach is promising.
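As a rough numerical illustration of the injection argument (the partition into four 8-input blocks is hypothetical):

```python
# Exhaustive test of one monolithic N-input block vs. four independently
# injectable blocks of K_i inputs each (hypothetical partition: 4 x 8 = 32).
N = 32
blocks = [8, 8, 8, 8]                    # K1 + K2 + K3 + K4 = N
whole = 2 ** N                           # vectors applied only at the chip inputs
injected = sum(2 ** k for k in blocks)   # vectors injected block by block
print(whole, injected)                   # 4294967296 vs. 1024
```

A reduction of more than six orders of magnitude, for the same exhaustive coverage of each block.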

Essence: Injection of test vectors is enabled by SCAN PATH techniques. They make the test pattern problem much easier, and most BIST structures are based on SCAN PATH techniques.

 

General structure of SP-oriented schemes:

 

 

Figure 5.23. General structure of the circuits inclined toward the SP (scan path) testing methodology. The inputs and outputs are marked (X1, … , Xn) and (Z1, … , Zm), respectively. The nodes marked with * are normally open-circuited; they form connections only during test vector entry and exit.

 

Conversion from D-FFs to latches and other types of FFs is relatively easy.

 

ESSENCE OF SP-ORIENTED SCHEMES:

 

1. In test mode,

All system bistable elements are converted into a single shift register called SCAN PATH, and an arbitrary test pattern is shifted into the bistable elements.

 

2. In normal operation mode,

Contents of bistable elements act as input signals for combinational circuitry, and after one clock period (!) new contents are stored into the bistable elements.

 

3. Back in test mode,

Contents of bistable elements are shifted out,

and compared against the pre-prepared correct response.

 

 

STANFORD SCAN-PATH DESIGN-FOR-TESTABILITY

 

Terminology: MDFF - Multiplexed data flip-flop. A mux is placed at the data input, to permit selection between two different data inputs (test + normal operation).

 

T=1 - Test operation (a shift-register is formed)

T=0 - Normal operation (DFFs re-connected to clock)

 

a)

 

b)

 

Figure 5.24. The structure of the circuits derived from the Stanford SP (Stanford Scan Path) testing methodology. The inputs and outputs are marked (X1, … , Xn) and (Z1, … , Zm), respectively. The test vectors are entered through the scan input, and the output test vectors are retrieved through the R output. The symbol MDFF refers to the multiplexed data flip-flop; the normal data input is active when the control input T is 0 (normal mode), and the test input is active when T is 1 (test mode). Symbol CK refers to the clock signal. The signals yi (i = 1, … , s) refer to the input and output test vectors, respectively.

 

Test procedure:

 

  1. Set T=1 (scan mode).
  2. Shift the test pattern yi into the flip-flops.
  3. Set the test values on the Xi inputs.
  4. Set T=0 (normal operation).
  5. Allow sufficient time for the combinational logic to settle, and check the values on the Z outputs.
  6. Apply one clock pulse to CK, and set T=1.
  7. Shift out the flip-flop contents yi via Zm, and test them against the expected correct values. The next test pattern can be shifted in at the same time.
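The test procedure above can be sketched behaviorally; the 4-bit scan chain and the combinational function `comb` below are hypothetical stand-ins, not any particular design:

```python
# Behavioral sketch of one pass of the scan-path test procedure.
def comb(x, y):
    """Placeholder combinational block: (X inputs, FF state) -> (Z, next state)."""
    nxt = [x[0] ^ y[0], y[0] & y[1], y[2] | x[1], y[3] ^ y[1]]
    z = [y[0] | y[3], x[0] & y[2]]
    return z, nxt

def scan_test(x, pattern, comb):
    ffs = [0, 0, 0, 0]
    for bit in pattern:                 # T=1: shift the test pattern in
        ffs = [bit] + ffs[:-1]
    z, nxt = comb(x, ffs)               # T=0: logic settles; observe Z outputs
    ffs = nxt                           # one clock pulse captures the new state
    captured = []
    for _ in range(len(ffs)):           # T=1: shift the captured contents out
        captured.append(ffs[-1])
        ffs = [0] + ffs[:-1]
    return z, captured
```

Both `z` and `captured` would be compared against the pre-computed fault-free responses; in a real scan chain the next pattern is shifted in while the previous result is shifted out.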

 

Note: Flip-flops have to be tested, too.

 

  1. Shift a 0 through a string of 1s.
  2. Shift a 1 through a string of 0s.

 

Modification #1: 2PFF Design (Two-port flip-flop)

Two control inputs (C1 + C2)

For two data inputs (D1 + D2).

 

 

Figure 5.25. The structure of the circuits corresponding to the 2PFF-SP (two-port flip-flop scan path) testing methodology. The 2PFF flip-flops have two control inputs each (C1 and C2), for two different data inputs (1D and 2D).

 

Modification #2:

Latch-based designs (typical of CPU designs)

 

  1. IBM’s LSSD (level-sensitive scan design)

Extra latches are used to allow system latches to be connected into a shift register.

 

 

Figure 5.26. The structure of the circuits corresponding to the LSSD testing methodology. The latches L1-i (i = 1, …) are system latches, and they are a part of the circuit that is being tested. The latches L2-i (i = 1, …) are added to enable the connection of the system latches into a single shift register. The SDI (scanned data in) and SDO (scanned data out) signals serve the purpose of entering and retrieving the test vectors, respectively. Symbols CK and TCK refer to the system clock (used in the normal mode) and the test clock (used in the test mode), respectively.

 

Each system latch is replaced by one 2P latch (L1-i) and one 1P latch (L2-i). Everything works very much like a 2PFF system.

 

SDI: Scanned in test data

SDO: Scanned out test data

TCK: Test clock

CK: Clock

 

 

2. UNIVAC’s SCAN-SET

Separate test-data shift registers are used, which avoids the necessity of configuring system latches into flip-flops.

 

  3. AMDAHL’s/FUJITSU’s SCAN-MUX

A combination of muxing and demuxing is used to set and scan out the system latches, which avoids the use of shift registers entirely.

 

ROM

 

Basic methods:

 

  1. Scanning all addresses for the desired patterns.
  2. Computing a checksum (a super parity). There are a number of approaches; each one has a non-zero probability of failing to detect an error.

 

Skew checksum:

 

1. sum = 0 (* Skew Checksums *)

2. address = 0

3. rotate_left_1(sum)

4. sum = sum + rom(address)

5. address = address + 1

6. if address < rom_length then goto 3

7. end

 

Figure 5.27. An algorithm for ROM memory testing.
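A direct translation of the skew-checksum algorithm of Figure 5.27 into Python; the ROM is modeled as a list of words, and the word width is an assumed parameter:

```python
def skew_checksum(rom, width=8):
    """Skew checksum: rotate the running sum left by one bit before adding
    each word, so that transposed words yield different sums (unlike a
    plain checksum, which is order-insensitive)."""
    mask = (1 << width) - 1
    s = 0
    for word in rom:
        s = ((s << 1) | (s >> (width - 1))) & mask  # rotate_left_1(sum)
        s = (s + word) & mask                       # sum = sum + rom(address)
    return s
```

Note that swapping two ROM words changes the skew checksum, which a plain modular sum would not detect.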

 

RAM

 

Basic types of failures:

 

  1. Stuck at:
      A cell is permanently held at logic 0 or logic 1.
  2. Decoder failures:
      Portions of memory become inaccessible.
  3. Multiple writes:
      An item gets written into more than one location.
  4. Slow sense amplifier recovery:
      Reading is history-sensitive (e.g. 0 after 1010101 … is read properly;
      0 after 1111111 … is not read properly).
  5. Sleeping sickness:
      Failure to retain data for the specified refresh time.
  6. Pattern sensitivity:
      Erroneous data storage only for certain patterns,
      due to interactions between physically adjacent memory cells.

 

Complexity of testing

 

An algorithm has O(k) complexity if the time needed for that algorithm is proportional to N^k, where N is the number of memory locations to be tested. Many of the good classic memory test algorithms are N^2, i.e. O(2).

 

Simple pattern algorithms

 

 

Marching 1s 11111 … 5N O(1)

 

  1. Memory is initialized to all 0s.
  2. 0 is read from the first cell, and 1 is written in its place.
  3. This read/write sequence is repeated for the entire memory.
  4. After the last cell is reached, 1 is read from the last cell, and 0 is written in its place.
  5. This continues backward until the first memory location is reached.
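The marching-1s procedure translates directly into code. The sketch below models the RAM as a Python list, plus a toy single-stuck-cell memory model (purely illustrative) to show that the march actually catches the fault:

```python
def marching_ones(mem):
    """Marching 1s over `mem` (a list standing in for the RAM under test).
    Roughly 5N accesses: N init writes, 2N forward, 2N backward."""
    n = len(mem)
    for i in range(n):                # initialize to all 0s
        mem[i] = 0
    for i in range(n):                # forward pass: read 0, write 1
        if mem[i] != 0:
            return False
        mem[i] = 1
    for i in reversed(range(n)):      # backward pass: read 1, write 0
        if mem[i] != 1:
            return False
        mem[i] = 0
    return True

class StuckAt(list):
    """Toy memory model with one cell stuck at a fixed value."""
    def __init__(self, n, cell, value):
        super().__init__([0] * n)
        self.cell, self.value = cell, value
        list.__setitem__(self, cell, value)
    def __setitem__(self, i, v):
        list.__setitem__(self, i, self.value if i == self.cell else v)
```

A good memory passes; a cell stuck at either value fails one of the two passes.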

 

 

 

Marching 0s 00000 … 5N O(1)

 

Same procedure on reverse data.

 

Good for:

 

  1. Multiple writes
  2. Stuck at

 

Weak for:

 

  1. Address decoder errors
  2. Pattern sensitivity errors
  3. Slow sense amplifier recovery errors

 

Checkerboards: 010101 … 4N O(1)

101010 …

 

  1. Memory is filled up as: 010101 …
  2. Memory is read
  3. The same is repeated with the inverse pattern
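A minimal sketch of the checkerboard test, written over logical addresses; as noted under the drawbacks, a real test must first descramble addresses so that the pattern forms a physical checkerboard:

```python
def checkerboard(mem):
    """Checkerboard test: fill 0101..., read back, then repeat inverted.
    Roughly 4N accesses (two fills, two read-backs)."""
    n = len(mem)
    for phase in (0, 1):
        for i in range(n):                 # fill: 0101... then 1010...
            mem[i] = (i + phase) % 2
        for i in range(n):                 # read back and compare
            if mem[i] != (i + phase) % 2:
                return False
    return True
```

The inverted second pass ensures that every cell is checked holding both a 0 and a 1 next to opposite-valued neighbors.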

 

Usage:

 

  1. To detect shorts between adjacent memory elements.
  2. As an introduction to other tests.

 

Drawbacks:

 

1. The physical layout of rows and columns rarely matches their logical ordering. One must create a true physical checkerboard, not a logical checkerboard.

2. Address descrambling circuits have to be built.

 

 

Walking patterns (walkpat) 2N^2 + 6N O(2)

 

 

  1. Initialize memory to all zeros.
  2. Write 1 into a location, and read all other locations to ensure that they still contain 0s. Verify the location itself.
  3. Return the location to 0, and continue until the end of memory.
  4. Repeat all of the above for a walking 0.
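The walking-pattern procedure, again with the RAM modeled as a plain Python list (so the access counts are only nominal):

```python
def walkpat(mem, background=0):
    """Walk the complement of `background` through memory: for each cell,
    write the complementary bit, verify all other cells still hold the
    background, verify the cell itself, then restore it.  O(2) complexity."""
    n = len(mem)
    for i in range(n):
        mem[i] = background              # initialize
    for i in range(n):
        mem[i] = 1 - background          # walk the complementary bit
        for j in range(n):
            expected = (1 - background) if j == i else background
            if mem[j] != expected:
                return False
        mem[i] = background              # restore and continue
    return True

def walkpat_both(mem):
    """Walking 1 over a background of 0s, then walking 0 over 1s."""
    return walkpat(mem, 0) and walkpat(mem, 1)
```

The inner full-memory read is what drives the cost to order N^2, and it is also what exposes sense-amplifier recovery and pattern-sensitivity faults.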

 

Good for:

 

  1. Sense amplifier recovery
  2. Multiple writes.
  3. Pattern sensitivity.

 

Galloping patterns

 

Galpat I:

Galloping 1s and 0s 2N^2 + 8N O(2)

 

Same as WALKPAT, except that the 1 is rechecked after each single 0 is read, to ensure that the 1 remains undisturbed. The address lines undergo every possible transition.

 

Galpat II:

Galloping Write Recovery Test 8N^2 – 4N O(2)

 

The procedure starts with arbitrary memory contents.

 

    1. The first location is written with 1.
    2. The second location is written with 0, and the original location is again checked for 1.
    3. The second location is written with 1, and the original location is again checked for 1.
    4. The sequence is repeated for all possible pairs of memory locations.
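The galloping write-recovery procedure can be sketched as a double loop over all ordered pairs of locations (memory again modeled as a Python list):

```python
def galpat_write_recovery(mem):
    """Galloping write-recovery sketch: for every ordered pair (i, j), write
    1 to i, then hammer j with 0 and 1, re-reading i after each write to
    check that the write to j did not disturb i.  O(2) complexity."""
    n = len(mem)
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            mem[i] = 1
            mem[j] = 0
            if mem[i] != 1:               # did the 0-write to j disturb i?
                return False
            mem[j] = 1
            if mem[i] != 1:               # did the 1-write to j disturb i?
                return False
    return True
```

The back-to-back writes to j followed by an immediate read of i are what stress write-recovery in the sense amplifiers.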

 

 

Surround disturb N^(3/2)

 

None of the above tests is practical for RAMs above 4K bits. For dynamic RAM one can create a simpler test, based on the assumption that dynamic memory cells are most susceptible to interference from their nearest neighbors, which means that the global sensitivity check can be eliminated. This is an example of how knowledge about architecture and technology can be used to decrease the testing complexity.
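Under the nearest-neighbor assumption, a simplified disturb test checks only the immediate neighbors of each base cell. The sketch below assumes, purely for illustration, that the N cells form a square sqrt(N) x sqrt(N) physical array; with a fixed four-cell neighborhood it is even cheaper (O(1) in the notation above) than the full surround-disturb test:

```python
import math

def surround_disturb(mem):
    """Neighborhood-only disturb sketch: set each base cell to 1 and check
    that only its four immediate neighbors (assumed square layout) stayed 0."""
    n = len(mem)
    side = math.isqrt(n)
    assert side * side == n, "sketch assumes a square array"
    idx = lambda r, c: r * side + c
    for i in range(n):
        mem[i] = 0                        # clear the array
    for r in range(side):
        for c in range(side):
            base = idx(r, c)
            mem[base] = 1                 # disturb source
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < side and 0 <= cc < side:
                    if mem[idx(rr, cc)] != 0:
                        return False      # neighbor was disturbed
            if mem[base] != 1:
                return False
            mem[base] = 0                 # restore
    return True
```

The key point is that the global check of walkpat/galpat is replaced by a constant-size neighborhood check, which is exactly the architecture/technology knowledge being exploited.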