Algorithmic Techniques in
Verification by Explicit State Enumeration
Modern digital systems often
employ sophisticated protocols.
Unfortunately, designing correct protocols
is a subtle art.
Even with great care, a designer typically
cannot foresee all possible interactions among the
components of the system; thus, bugs like subtle race
conditions or deadlocks are easily overlooked.
One way a computer can support the designer
is by simulating random
executions of the system.
This simulation approach, however, has a high
probability of missing executions that contain errors, especially
in complex systems. In contrast, an automatic
verifier tries to examine all states
reachable from a given set of start states.
The biggest obstacle to this
exhaustive approach is that there is often a
very large number of reachable states.
This thesis describes three techniques
to increase the size of the reachable state
spaces that can be
handled in automatic verifiers.
The techniques work in verifiers that are
based on explicitly storing each reachable
state in a state table, and aim at minimizing
both runtime and memory requirements.
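The core loop of such an explicit-state verifier can be sketched as a breadth-first search that records every reachable state in the state table. The sketch below is illustrative, not Murphi's actual interface; the function names `successors` and `check_invariant` are assumptions standing in for the protocol's transition rules and its correctness conditions:

```python
from collections import deque

def explore(start_states, successors, check_invariant):
    """Breadth-first explicit state enumeration: every reachable
    state is stored in a state table (here a Python set)."""
    table = set(start_states)      # all states encountered so far
    queue = deque(start_states)    # frontier awaiting expansion
    while queue:
        state = queue.popleft()
        check_invariant(state)     # report an error state immediately
        for nxt in successors(state):
            if nxt not in table:   # each state is expanded exactly once
                table.add(nxt)
                queue.append(nxt)
    return table
```

The state table is what dominates memory consumption: it must hold a descriptor for every reachable state, which motivates the compression, distribution, and disk-storage techniques summarized below.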
(For the types of industrial protocols that were
examined, this explicit state enumeration
outperformed symbolic model checking, in
which the set of
reachable states is represented by a binary
decision diagram.)
In short, the three techniques are:
(1) a probabilistic state compression
scheme, (2) a parallel version of the
algorithm for state space
exploration, and (3) a scheme allowing the use of magnetic
disk for storing the state space with only a small
runtime overhead.
The first technique reduces the memory
requirements of the state table by storing only
(hash) signatures instead of full state descriptors
in this table.
The memory savings come at the
price of a certain probability
that the verifier incorrectly claims that an
erroneous protocol is correct.
This probability, however,
can be made negligibly small.
The technique was introduced under the
name hash compaction by Wolper and Leroy.
Through refinements of the data structures and the
analysis of the original hash compaction scheme,
the improved hash compaction scheme
presented in this thesis typically
reduces the number of bits per state 4-fold
in comparison to the previous scheme, while
guaranteeing the same bound on the probability
of false reports. The total memory savings
of the refined hash compaction scheme are typically
two orders of magnitude.
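A minimal sketch of the idea follows, assuming SHA-256 as the underlying hash function and an illustrative 40-bit signature width; the thesis derives the appropriate width from the desired bound on the probability of a false report:

```python
import hashlib

SIGNATURE_BITS = 40  # illustrative width, not the thesis's derived value

def signature(state_bytes):
    """Compress a full state descriptor to a short hash signature."""
    digest = hashlib.sha256(state_bytes).digest()
    return int.from_bytes(digest[:8], 'big') % (1 << SIGNATURE_BITS)

table = set()  # stores only signatures, never full state descriptors

def seen(state_bytes):
    """Probabilistic membership test: a signature collision makes a
    new state look old, so an error state can (rarely) be missed."""
    sig = signature(state_bytes)
    if sig in table:
        return True
    table.add(sig)
    return False
```

Since only the signature is stored, the memory per state drops from the full descriptor size to a few bytes, at the cost of the collision probability discussed above.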
The second technique described in this thesis
is a parallel explicit state enumeration
algorithm, which runs on
distributed memory multiprocessors
and networks of workstations (NOWs)
and is based
on the message passing paradigm.
In experiments on a 32-node NOW and a 64-node SP2
with three complex protocols,
the algorithm showed close to linear speedups,
which were largely
insensitive to communication latency and
bandwidth. There is some slowdown with increasing
communication overhead, for which a simple yet
relatively accurate approximation formula is given.
The algorithm also allows
the verification of larger protocols
since the state table is distributed over
the memories of the participating nodes.
Modifications of the algorithm are proposed
that are expected to
allow good speedups in the case
of heterogeneous workstations
with a dynamically changing load and
a slow interconnection network.
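The partitioning idea behind the parallel algorithm can be sketched in a single process, with per-node queues standing in for the message-passing layer; the hash-based `owner` function and the byte-string state encoding are illustrative assumptions, not the thesis's implementation:

```python
import hashlib
from collections import deque

def owner(state, num_nodes):
    """A uniform hash of the state picks the node that owns it, so
    the state table is partitioned evenly across the nodes."""
    h = hashlib.sha256(state).digest()
    return int.from_bytes(h[:4], 'big') % num_nodes

def parallel_explore(start_states, successors, num_nodes):
    """Simulated message-passing exploration: each node keeps only
    its partition of the state table, and successors generated on
    one node are sent to the owning node for the duplicate check."""
    tables = [set() for _ in range(num_nodes)]
    queues = [deque() for _ in range(num_nodes)]  # incoming messages
    for s in start_states:
        queues[owner(s, num_nodes)].append(s)
    while any(queues):
        for n in range(num_nodes):
            while queues[n]:
                state = queues[n].popleft()
                if state in tables[n]:       # duplicate: drop it
                    continue
                tables[n].add(state)
                for nxt in successors(state):
                    queues[owner(nxt, num_nodes)].append(nxt)
    return tables
```

Because the duplicate check for a state is always performed by its owning node, no global synchronization of the state table is needed; the nodes only exchange state messages.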
The third technique allows the use of magnetic disk instead of
main memory for storing almost all
of the state table, at the cost of a small
runtime overhead.
The algorithm avoids costly random accesses
to disk and amortizes the
cost of linearly reading the state table from disk
over all states in each breadth-first level
of the reachable state space.
The remaining runtime overhead for
accessing the disk can be greatly
reduced by combining the scheme with hash compaction.
In experiments with three complex cache coherence
protocols, the new algorithm
achieved memory savings
of one to two orders of magnitude with a
typical runtime overhead of only around 15%.
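The level-by-level scheme can be sketched as follows, with a Python list standing in for the sequential disk file; an actual implementation would read and write a real file, and would store compacted signatures rather than full states:

```python
def disk_explore(start_states, successors):
    """One breadth-first level at a time: candidate successors are
    collected in memory, then a single linear scan of the
    disk-resident state table filters out old states, amortizing
    the scan over every state in the level."""
    disk_table = list(start_states)   # stands in for a sequential disk file
    frontier = list(start_states)
    while frontier:
        candidates = set()
        for state in frontier:
            candidates.update(successors(state))
        # one sequential pass over the table; no random disk accesses
        for old in disk_table:
            candidates.discard(old)
        disk_table.extend(candidates)  # append the new level sequentially
        frontier = list(candidates)
    return disk_table
```

The key point is that the expensive operation, reading the whole table, happens once per breadth-first level rather than once per generated state, which is why the observed runtime overhead stays small.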
All three techniques have been implemented within
the Murphi protocol verification system
developed at Stanford and have worked well for
virtually all of the protocols on which
they were tried.
In addition, all three techniques can be combined
with each other and with the three state reduction
techniques in Murphi (symmetry reduction,
reversible rules, and repetition constructors),
to allow verification of more complex protocols.
The three techniques described in this thesis,
however, are not compatible
with checking liveness properties, since
all known efficient algorithms for checking liveness
properties require some sort of
depth-first search of the state graph and the three new techniques
require a breadth-first search.
Nevertheless, efficient algorithms for checking
liveness properties based on breadth-first search
might be found in the future.