There are two new sets of graphs. The first set varies the default Required Strength, i.e., the Required score given to composite signatures with zero required vectors. Originally this was 100; we also looked at 50 and at 0. It turns out that, no matter the default required score, the best composite signatures remained those created from nine-valued simulations, with restricted vectors removed and NO marking of required vectors. Click here for these exciting graphs.
The other set of graphs plots success versus delay size with 100 candidates for each diagnosis, rather than 10. Obviously, the success rate increases. Curiously (or perhaps not), the best composite signatures appear to be those created from nine-valued simulations, with restricted vectors removed and required vectors marked ("nineval.res.req"), as opposed to "nineval.res.noreq", which worked best when the diagnosis size was 10. I used a Required Strength of 100 for all plots. Clearly, required vectors improve the diagnostic success rate when the diagnosis size is 100; previously, with a diagnosis size of 10, it appeared that required vectors were NOT important. Click here for these exciting graphs.
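The success-versus-diagnosis-size relationship above can be computed from per-diagnosis correct-match (CM) positions. Here is a minimal sketch in Python (the function name and the sample positions are my own inventions, not part of the project's scripts), assuming a diagnosis "succeeds" when the correct fault appears within the first N candidates:

```python
# Hypothetical sketch: success rate at a given diagnosis size, computed
# from the correct-match (CM) position of each diagnosis.

def success_rate(cm_positions, diagnosis_size):
    """Fraction of diagnoses whose correct match appears within the first
    `diagnosis_size` candidates. A CM position of None means the correct
    fault never appeared in the candidate list at all."""
    hits = sum(1 for p in cm_positions if p is not None and p <= diagnosis_size)
    return hits / len(cm_positions)

# Made-up example: CM positions for five diagnoses (1 = top candidate).
positions = [1, 4, 27, 85, None]
print(success_rate(positions, 10))   # 2 of 5 land within the top 10
print(success_rate(positions, 100))  # 4 of 5 land within the top 100
```

This also makes clear why the success rate can only go up as the diagnosis size grows: enlarging N can only admit more correct matches, never fewer.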
A very brief summary of the project and what needs to be done.
I think that we need to wait for Carl's CPT program before we can draw any firm conclusions, but I suspect that some modification of their algorithm is the way to go. I am very curious about the number of behaviors for each fault that they diagnose, since they did not report this.
I suspect that we can learn a lot about how David's algorithm works on delay faults by examining which faults are correctly diagnosed by which composite signature set. I talked about this briefly with Joel and Brian. If, say, "nineval.res.req" correctly diagnoses 10 faults and "nineval.res.noreq" correctly diagnoses 8 faults, are those 8 faults a subset of the 10? Are they completely disjoint? Who knows?? I have a Perl script that lets you examine this and similar questions, but I do not have enough time to look into it myself.
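The subset/disjointness question boils down to set operations on the two lists of correctly diagnosed faults. A quick sketch (in Python rather than Perl; the fault names and set contents here are invented purely for illustration):

```python
# Hypothetical fault sets: which faults each composite-signature set
# diagnosed correctly. These names are made up for illustration.
req   = {"f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "f9", "f10"}  # nineval.res.req
noreq = {"f1", "f2", "f3", "f4", "f5", "f6", "f7", "f11"}              # nineval.res.noreq

common = req & noreq                       # faults both sets got right
print(len(common), "faults diagnosed by both")
print("noreq is a subset of req" if noreq <= req
      else "neither contains the other")
print("only req:  ", sorted(req - noreq))  # what marking required vectors buys you
print("only noreq:", sorted(noreq - req))  # what it costs you
```

The interesting outcome is the last two lines: if "only noreq" is nonempty (as in this made-up example), then marking required vectors isn't strictly better, and it would be worth looking at what those faults have in common.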
Other things to do:
1. Pass-fail analyses (we did these briefly at the start, but haven't looked at them seriously yet).
2. We know which gates in the c432 and c7552 circuits Girard et al. did their diagnoses on, so we should check our results on those particular faults. In fact, I may just do this myself when I have some spare time.
3. One-sided delays (slow-to-rise, slow-to-fall). I have a Perl script that invokes verilog and produces much smaller Verilog output files, so anyone interested can make some easy modifications to the tdl2verilog script and check this out.
4. Average Correct Match Position: we haven't done a damn thing with this.
5. For David: completely rewrite your code so that the diagnosis size is variable: print candidates until the correct match is found (or, at the very least, print the CM position for each diagnosis). This should only take 1-2 hours, so some afternoon you should get at it.
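For item 5, the variable-diagnosis-size loop is simple enough to sketch. This is not David's actual code, just a rough Python illustration of the intended behavior (the candidate format and fault names are assumptions):

```python
# Rough sketch (not David's actual code) of a variable-size diagnosis:
# walk the ranked candidate list, printing candidates until the injected
# fault is found, then report its correct-match (CM) position.

def diagnose(candidates, injected_fault):
    """candidates: ranked list of (fault_name, score) pairs, best first.
    Returns the 1-based CM position, or None if the fault never matched."""
    for pos, (fault, score) in enumerate(candidates, start=1):
        print(f"{pos:4d}  {fault}  score={score}")
        if fault == injected_fault:
            return pos
    return None

# Made-up candidate list for one injected fault.
ranked = [("g17/slow", 0.97), ("g5/slow", 0.91), ("g42/slow", 0.88)]
print("CM position:", diagnose(ranked, "g42/slow"))
```

Recording the CM position for every diagnosis is what makes items 4 and 5 pay off together: with the positions in hand, the success rate at any diagnosis size (and the average correct match position) falls out for free, with no re-running of diagnoses.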
It has been a pleasure working with you. If anyone has any questions or needs some clarification (although I can't imagine why you would need it), please contact my lawyer at the address given below.