Thursday, April 30, 2015

Automated Proof in Practice


While it is dangerous to generalise from a single example, it is instructive to read Foster and Struth's paper as a snapshot of the state of the art in automating ordinary mathematical reasoning. But first, let's look at the particular subject of the paper.

Regular expressions and finite automata are fundamental to the practice of computer science, but they are a fertile topic for research, too. A key question is identity: when do two regular expressions – or two finite automata – behave identically? The answer is “when they describe the same set of strings”, but that is less useful computationally than it first looks, since it requires that we test sets – which are potentially infinite – for equality. So, how else to judge identity? By setting out a collection of laws – or axioms – that encapsulate it. Foster and Struth's paper sets out the various attempts, by a number of authors including Boffa, Conway, Kozen and Salomaa, to provide axioms that are sound (prove only identities that actually hold) and complete (can prove every identity that does hold).
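To make the flavour concrete – the example is mine, not drawn from the paper – here is a familiar identity alongside a familiar non-identity over the alphabet {a, b}:

```latex
% A genuine identity: both sides denote the set of all strings over {a, b}.
(a + b)^{*} \;=\; (a^{*}\, b^{*})^{*}

% Not an identity: the string abab matches the left-hand side but not the right.
(ab)^{*} \;\neq\; a^{*}\, b^{*}
```

A sound and complete axiom system should derive the first equation from the axioms alone, and should never derive the second.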

The “fine structure” of the title refers to a detailed study of the interrelationships of these sets of axioms: showing when one set implies another – a process of deduction – and when one set is strictly stronger than another, which is shown by giving a counterexample that satisfies one set and not the other. Both the deduction and the counterexample-finding are computer assisted, and this is where the main contribution of the paper lies.
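To give a feel for the counterexample side – this sketch is mine, and is far cruder than the model finders actually used – the underlying idea is to search small finite structures for one that satisfies some laws while violating another:

```python
from itertools import product

ELEMS = (0, 1)  # a two-element carrier is enough for this toy demonstration

def all_tables():
    """Enumerate every binary operation on the carrier as a lookup table."""
    pairs = list(product(ELEMS, repeat=2))
    for values in product(ELEMS, repeat=len(pairs)):
        yield dict(zip(pairs, values))

def associative(op):
    return all(op[op[x, y], z] == op[x, op[y, z]]
               for x, y, z in product(ELEMS, repeat=3))

def commutative(op):
    return all(op[x, y] == op[y, x] for x, y in product(ELEMS, repeat=2))

# "Strictly stronger" in miniature: associativity does not imply commutativity,
# witnessed by an operation table found purely by exhaustive search.
witness = next(op for op in all_tables() if associative(op) and not commutative(op))
print(witness)  # finds left projection: op(x, y) = x
```

Counterexample generators attached to proof assistants do essentially this, over far richer signatures and with far cleverer search.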

How does the automation directly contribute to the mathematics? It meant that a number of loose ends could be tidied up: one set of axiomatisations was simplified (through finding hitherto undiscovered relationships between axioms), some missing axioms were identified in the statement of a particular axiom set, and a gap was found in an existing, well-known proof. So, the automation increases assurance in the work, but probably does not constitute a substantial advance in itself.

On the other hand, the work is indicative of real progress in the facilities available to support mechanised proof and counterexample development. Advances have come in a number of ways. Most important is the way in which different approaches have come together: it used to be that proof was either done “manually” with a proof assistant, or in an entirely automated way using some kind of decision procedure. The work under review uses the Isabelle proof assistant, which now provides the Sledgehammer tool to apply a number of external automated theorem provers for first-order logic; in cases where this fails, Isabelle has its own simplification and proving systems. This allows users to concentrate their attention on shaping the higher-level architecture of the theories, rather than having to deduce results proof step by (painful) proof step. Where this reaches its limit is when proofs go beyond the purely algebraic and require rather more mathematical structure (e.g. sets), as provided by a higher-order logic.
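As a toy illustration of that division of labour – in Lean rather than Isabelle, and assuming the Mathlib library and its ring tactic, none of which figure in the paper – the user states the fact at the level of interest and hands the proof search to an automation tactic:

```lean
import Mathlib  -- assumed: the Mathlib library, imported wholesale for simplicity

-- State the algebraic fact at the level of interest; the `ring` tactic
-- finds the individual proof steps automatically.
example (a b : ℤ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by
  ring
```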

While it's not the case that most mathematicians use theorem proving technology in their day-to-day practice – in contrast to the wide take-up of computer algebra systems in various areas of maths and engineering – it has taken real hold in the area of mechanised meta-theory of programming languages, as witnessed by the POPLmark challenge (http://www.seas.upenn.edu/~plclub/poplmark/). This may well be because it is an area where the scale of proofs can be such that the usual social process of proof checking has been shown to be less than ideal, hence the value that accrues from formalisation. In conclusion, it is interesting to speculate on how long it will take for the practising mathematician to reach for a theorem proving system as a part of his or her daily work.

This review of On the Fine-Structure of Regular Algebra by Simon Foster and Georg Struth, published in the Journal of Automated Reasoning, is a first draft as submitted to Computing Reviews.