a) twoval.res.req The CS was created from stuck-at simulations; restricted vectors were removed and required vectors were marked. This is the "normal" CS.

b) twoval.nores.req The CS was created from stuck-at simulations; restricted vectors were NOT removed and required vectors were marked. Since the restricted vectors are not removed, this CS contains more behavior than the previous CS.

c) twoval.res.noreq The CS was created from stuck-at simulations; restricted vectors were removed and required vectors were NOT marked.

d) twoval.nores.noreq The CS was created from stuck-at simulations; restricted vectors were NOT removed and required vectors were NOT marked.

e) nineval.res.req The CS was created from nine-valued simulations; restricted vectors were removed and required vectors were marked. This is the "normal" CS, except that nine-valued simulations were used.

f) nineval.nores.req The CS was created from nine-valued simulations; restricted vectors were NOT removed and required vectors were marked. This CS has complete containment, since restricted vectors are not removed and static hazards are included in the CS.

g) nineval.res.noreq The CS was created from nine-valued simulations; restricted vectors were removed and required vectors were NOT marked.

h) nineval.nores.noreq The CS was created from nine-valued simulations; restricted vectors were NOT removed and required vectors were NOT marked.

i) stuck-at There is no composite signature per se; the CS is simply the stuck-at signature.
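The eight simulation-based variants above differ along two independent axes: whether restricted vectors are removed ("res" vs. "nores") and whether required vectors are marked ("req" vs. "noreq"). A minimal sketch of that construction, treating vectors as an abstract set; the `build_cs` helper and the sample vectors are hypothetical, not from the actual tooling:

```python
def build_cs(vectors, restricted, required, remove_restricted, mark_required):
    """Hypothetical sketch of building one CS variant.
    vectors: all vectors from fault simulation; restricted/required: subsets."""
    cs = set(vectors)
    if remove_restricted:
        cs -= restricted                      # "res" variants drop restricted vectors
    # "req" variants mark the required vectors that survive in the CS
    marked = cs & required if mark_required else set()
    return cs, marked

# Invented example vectors, purely for illustration
vectors = {"v1", "v2", "v3", "v4"}
restricted = {"v2"}
required = {"v1", "v3"}

# A "res.req"-style CS: restricted removed, required marked
cs, marked = build_cs(vectors, restricted, required, True, True)
```

A "nores.noreq"-style CS would be built with both flags off, keeping all four vectors and marking none.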

I want to answer three questions:

**
1. Which CS gives the greatest diagnostic success rate?
**

It looks like it is "nineval.res.noreq".

I graphed "res.req", "nores.req", "res.noreq", and "nores.noreq" for both nine-valued CS (Nine-valued results) and two-valued CS (Two-valued results). In both cases, either "res.req" or "res.noreq" gave the best results. Furthermore, CSs based on nine-valued simulations were consistently better than those based on stuck-at (two-valued) simulations. I also graphed "res.req" and "res.noreq" for both nine-valued and two-valued CS (results).

The matching algorithm performs better when restricted vectors are removed from the CS.

We are more successful with nine-valued simulations.

**
2. How important are required vectors?
**

This is a tough one. Sometimes marking required vectors resulted in more misleading diagnoses (c1908). Most of the time, the results were mixed (c7552).

**Results:**

**
3. How important is containment?
**

Four different composite signatures were compared: twoval.res.req, twoval.nores.req, nineval.nores.req, and stuck-at. Theoretically, if the amount of containment alone governed the diagnostic success rate, the order from worst to best would be: stuck-at, twoval.res.req, twoval.nores.req, then nineval.nores.req (complete containment). As the results show, our observations do not match this theoretical ordering.
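The theoretical ordering can be stated as a chain of set inclusions. A small illustrative check, using invented vector sets rather than measured data; the static hazard "h1" stands in for the extra behavior captured only by nine-valued simulation:

```python
def contains(cs_big, cs_small):
    """Containment as set inclusion: cs_big covers all of cs_small's behavior."""
    return cs_small <= cs_big

# Invented vector sets, ordered by increasing containment
stuck_at      = {"v1"}
twoval_res    = {"v1", "v2"}              # restricted vectors removed
twoval_nores  = {"v1", "v2", "v3"}        # restricted vectors kept
nineval_nores = {"v1", "v2", "v3", "h1"}  # plus static hazards: complete containment

# The theoretical worst-to-best chain
assert contains(twoval_res, stuck_at)
assert contains(twoval_nores, twoval_res)
assert contains(nineval_nores, twoval_nores)
```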

**Results:**

Of these four, complete containment is generally best (for example, c5315). However, consider c2670. The behavior can also be more complicated: see c7552. The directory with all of the results can be found here.

Apparently, containment is not all that useful, since the best CS is "nineval.res.noreq", which does not have complete containment. We must think on these things.

**Explanation:**
At small delays, the number of discrepancies is
small, so there will be a large number of composite signatures that
contain all the behavior and in which the behaviors are "required"
(i.e., many CSs with a lexicographical score of 100 100 xx).
Every step towards complete containment increases the size of the
composite signature. Since the last matching criterion is the amount of
missing behavior, the larger the CS, the more painful the hit. Since
the "stuck-at" CS has the smallest size, it works best at small delays.
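The "100 100 xx" ordering can be sketched as a lexicographic score. The criteria and their order here are assumptions inferred from the text, not the actual matching algorithm: percent of observed behavior contained in the CS, percent of the CS's required vectors actually observed, and, as the final tiebreak, the amount of CS behavior missing from the observation:

```python
def match_score(cs, cs_required, observed):
    """Lexicographic ranking, best first: % of observed behavior contained
    in the CS, % of the CS's required vectors actually observed, and
    (last criterion) the amount of CS behavior missing from the observation."""
    contained = 100 * len(observed & cs) // len(observed)
    req_hit = 100 * len(cs_required & observed) // len(cs_required) if cs_required else 100
    missing = len(cs - observed)          # larger CS -> more missing behavior
    return (contained, req_hit, -missing)

observed = {"v1", "v2"}                   # few discrepancies: a small delay
candidates = {
    "stuck-at-like": ({"v1", "v2"}, {"v1"}),                     # smallest CS
    "complete-containment": ({"v1", "v2", "v3", "h1"}, {"v1"}),  # largest CS
}
best = max(candidates, key=lambda name: match_score(*candidates[name], observed))
```

Both candidates score 100 100 on the first two criteria, so the size tiebreak selects the stuck-at-like CS, matching the explanation that the smallest CS wins at small delays.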

Why is complete containment not always the best? We'll find out!

imthurn@cse.ucsc.edu