TPHOLs 2007 - 20th International Conference on Theorem Proving in Higher Order Logics - Kaiserslautern, Germany, 10-13 September 2007

List of Accepted Papers

A formally verified prover for the ALC description logic

María-José Hidalgo, José-Antonio Alonso, Joaquín Borrego-Díaz, Francisco-Jesus Martin-Mateos and José-Luis Ruiz-Reina

The Web Ontology Language (OWL) is a language used for the Semantic Web. OWL is based on Description Logics (DLs), a family of logical formalisms for representing and reasoning about conceptual and terminological knowledge. Among these, the logic ALC is a ground DL used in many practical cases. Moreover, the Semantic Web appears as a new field for the application of formal methods, which could be used to increase its reliability. A starting point could be the formal verification of satisfiability provers for DLs. In this paper, we present the PVS specification of a prover for ALC, as well as the proofs of its termination, soundness and completeness. We also present the formalization of the well-foundedness of the multiset relation induced by a well-founded relation. This result has been used to prove the termination and the completeness of the ALC prover.
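
In standard notation (our summary, not the PVS source), the multiset result mentioned above is the Dershowitz-Manna theorem: for a relation $\prec$ on $A$ and finite multisets $M, N$ over $A$, define
$$M \prec_{\mathrm{mul}} N \;\iff\; \exists X, Y.\ \emptyset \neq X \subseteq N,\; M = (N - X) + Y,\; \forall y \in Y.\ \exists x \in X.\ y \prec x,$$
and then: if $\prec$ is well-founded on $A$, the induced relation $\prec_{\mathrm{mul}}$ is well-founded on the finite multisets over $A$.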

Proof Pearl: Looping around the Orbit

Steven Obua

We reexamine the While combinator of higher-order logic (HOL) and introduce the For combinator. We argue that both combinators should be part of the toolbox of any HOL practitioner, not only because they make efficient computations within HOL possible, but also because they facilitate elegant inductive reasoning about loops. We present two examples that support this argument.
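
As a rough illustration (a plain OCaml sketch, not the HOL definitions), While iterates a step function as long as a condition holds, and For folds a step function over an integer interval:

    let rec while_comb cond step s =
      if cond s then while_comb cond step (step s) else s

    let rec for_comb lo hi step s =
      if lo > hi then s else for_comb (lo + 1) hi step (step lo s)

    (* Example uses: sum 1..10, and double until the value exceeds 100. *)
    let sum = for_comb 1 10 (fun i acc -> acc + i) 0                    (* 55 *)
    let pow = while_comb (fun x -> x <= 100) (fun x -> 2 * x) 1         (* 128 *)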

Finding Lexicographic Orders for Termination Proofs in Isabelle/HOL

Lukas Bulwahn, Alexander Krauss and Tobias Nipkow

We present a simple method to formally prove termination of recursive functions by searching for lexicographic combinations of size measures. Despite its simplicity, the method turns out to be powerful enough to solve a large majority of termination problems encountered in daily theorem proving practice.
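
A standard example of the kind of problem such a method handles (our illustration, not one from the paper) is the Ackermann function, where no single argument decreases in every recursive call, but the pair of arguments does decrease lexicographically:

    let rec ack m n =
      if m = 0 then n + 1
      else if n = 0 then ack (m - 1) 1
      else ack (m - 1) (ack m (n - 1))
    (* In every recursive call, either m decreases, or m stays the same
       and n decreases, so (m, n) is a lexicographic termination measure. *)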

Separation Logic for Small-step Cminor

Andrew Appel and Sandrine Blazy

Cminor is a mid-level imperative programming language; there are proved-correct optimizing compilers from C to Cminor and from Cminor to machine language. We have redesigned Cminor so that it is suitable for Hoare Logic reasoning; we have designed a Separation Logic for Cminor, given a small-step operational semantics, and produced a machine-checked proof of soundness of our Separation Logic. This is the first large-scale machine-checked proof of a Hoare Logic w.r.t. a small-step semantics. We give a small-step semantics (instead of Leroy's big-step), motivated by the need to support future concurrent extensions. In addition to the soundness of the Separation Logic, we prove the relation between our small-step semantics and the big-step semantics, so that sequential programs can be compiled by Leroy's compiler with formal end-to-end correctness guarantees.

A monad-based modeling and verification toolbox with application to security protocols

Christoph Sprenger and David Basin

We present an advanced modeling and verification toolbox for functional programs with state and exceptions. The toolbox integrates an extensible, monad-based component model; a monad-based Hoare logic and weakest-precondition calculus; and proof systems for temporal logic and bisimilarity. It is implemented in Isabelle/HOL using shallow embeddings and incorporates as much modeling and reasoning power as possible from Isabelle/HOL. We have validated the toolbox's usefulness in a substantial security protocol verification project.
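
To fix intuitions, a state-and-exception computation of the kind such a toolbox models can be sketched in OCaml as follows (the names and types are ours, not the toolbox's):

    type ('a, 's, 'e) m = 's -> ('a * 's, 'e) result
    (* a computation reads the state and yields a value plus a new state, or fails *)

    let return x = fun s -> Ok (x, s)
    let bind c f = fun s ->
      match c s with
      | Ok (x, s') -> f x s'
      | Error e    -> Error e
    let throw e  = fun _ -> Error e
    let get      = fun s -> Ok (s, s)
    let put s'   = fun _ -> Ok ((), s')

    (* Hoare-style reading: a precondition on the initial state, a postcondition
       relating the result and the final state, plus a case for the exception. *)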

Verified Decision Procedures on Context-Free Grammars

Yasuhiko Minamide

We verify three decision procedures utilized in a program analyzer for a server-side programming language. One of the procedures decides inclusion between a context-free language and a regular language. The other two decide problems related to the well-formedness and validity of XML documents. From the formalization, we generate executable code for one of the decision procedures and incorporate it into an existing program analyzer.
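
The first procedure can be understood along the textbook route (whether or not it is exactly the one formalized): $L(G) \subseteq L(A)$ for a context-free grammar $G$ and a finite automaton $A$ iff $L(G) \cap \overline{L(A)} = \emptyset$; the complement of a regular language is regular, the intersection of a context-free language with a regular language is context-free, and emptiness of a context-free language is decidable.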

Source-Level Proof Reconstruction for Interactive Theorem Proving

Lawrence Paulson and Kong Woei Susanto

Interactive proof assistants should verify the proofs they receive from automatic theorem provers. Normally this proof reconstruction takes place internally, forming part of the integration between the two tools. We have implemented source-level proof reconstruction: resolution proofs are automatically translated to Isabelle proof scripts. Users can insert this text into their proof development or (if they wish) examine it manually. Each step of a proof is justified by calling Hurd's Metis prover, which we have ported to Isabelle. A recurrent issue in this project is the treatment of Isabelle's axiomatic type classes.

Proof Pearl: The power of higher-order encodings in the logical framework LF

Brigitte Pientka

In this proof pearl, we demonstrate the power of higher-order encodings in the logical framework Twelf by investigating proofs about an algorithmic specification of bounded subtype polymorphism, a problem from the POPLmark challenge. Our encoding and representation of the problem plays to the strengths of the logical framework LF. Higher-order abstract syntax is used to deal with issues of bound variables. More importantly, we exploit the full advantage of parametric and higher-order judgments. As a key benefit we get a tedious narrowing lemma, which must normally be proven separately, for free. Consequently, we obtain an extremely compact and elegant encoding of the admissibility of general transitivity and other meta-theoretic properties.
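
The characteristic move behind such encodings is to represent object-level binding by meta-level functions; for instance (our illustrative signature, not the one from the paper), lambda terms can be declared with constructors
$$\mathsf{app} : \mathsf{tm} \to \mathsf{tm} \to \mathsf{tm}, \qquad \mathsf{lam} : (\mathsf{tm} \to \mathsf{tm}) \to \mathsf{tm},$$
so that substitution, renaming and freshness are inherited from the framework instead of being defined and re-proved by hand.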

Formalising Generalised Substitutions

Jeremy Dawson

We use the theorem prover Isabelle to formalise and machine-check results of the theory of generalised substitutions given by Dunne and used in the B method. We describe the model of computation implicit in this theory and show how this is based on a compound monad, and we contrast this model of computation and monad with those implicit in Dunne's theory of abstract commands. Subject to a qualification concerning frames, we prove, using the Isabelle/HOL theorem prover, that Dunne's results about generalised substitutions follow from the model of computation which we describe.

Proof Pearl: de Bruijn terms really do work

Michael Norrish and Rene Vestergaard

Placing our result in a web of related mechanised results, we give a direct proof that the de Bruijn lambda-calculus (à la Huet, Nipkow and Shankar) is isomorphic to an alpha-quotiented lambda-calculus. In order to establish the link, we introduce an index-carrying abstraction mechanism over de Bruijn terms, and consider it alongside a simplified substitution mechanism. Relating the new notions to those of the alpha-quotiented and the proper de Bruijn formalisms draws on techniques from the theory of nominal sets.
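
For readers who have not met de Bruijn terms, the flavour of the representation (a plain OCaml sketch using the usual shifting and substitution operations, not the paper's formal definitions) is:

    type term = Var of int | App of term * term | Abs of term

    (* add d to every index at or above the cutoff c *)
    let rec shift d c = function
      | Var k        -> if k >= c then Var (k + d) else Var k
      | App (t1, t2) -> App (shift d c t1, shift d c t2)
      | Abs t        -> Abs (shift d (c + 1) t)

    (* replace index j by the term s *)
    let rec subst j s = function
      | Var k        -> if k = j then s else Var k
      | App (t1, t2) -> App (subst j s t1, subst j s t2)
      | Abs t        -> Abs (subst (j + 1) (shift 1 0 s) t)

    (* beta-reduce (Abs t) applied to s *)
    let beta t s = shift (-1) 0 (subst 0 (shift 1 0 s) t)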

Mizar's Soft Type System

Freek Wiedijk

In Mizar, unlike in most other proof assistants, the types are not part of the foundations of the system. Mizar is based on untyped set theory, which means that in Mizar expressions are typed but the values of those expressions are not. In this paper we present the Mizar type system as a collection of type inference rules. We will interpret Mizar types as soft types, by translating Mizar's type judgments into sequents of untyped first order predicate logic. We will then prove that the Mizar type system is correct with respect to this translation in the sense that each derivable type judgment translates to a provable sequent.
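
As a small illustration of such a translation (our example, not one from the paper): a Mizar type judgment $t : \theta$ is read as the first-order formula $\theta(t)$, so a type widening such as "every natural number is an integer" becomes the provable sequent $\mathrm{natural}(x) \vdash \mathrm{integer}(x)$.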

Primality Proving with Elliptic Curves

Laurent Théry and Guillaume Hanrot

Elliptic curves are fascinating mathematical objects. In this paper, we present the way they have been represented inside the Coq system, and how we have proved that the classical composition law on the points is internal and gives them a group structure. We then describe how having elliptic curves inside a prover makes it possible to derive a checker for proving the primality of natural numbers.
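
For concreteness, the classical composition law referred to above is the chord-and-tangent construction; in affine coordinates on $y^2 = x^3 + ax + b$ (our presentation, not the Coq development), the sum of two points $(x_1, y_1)$ and $(x_2, y_2)$ with $x_1 \neq x_2$ is $(x_3, y_3)$ where
$$\lambda = \frac{y_2 - y_1}{x_2 - x_1}, \qquad x_3 = \lambda^2 - x_1 - x_2, \qquad y_3 = \lambda(x_1 - x_3) - y_1 .$$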

Automatically translating type and function definitions from HOL to ACL2

James Reynolds

We describe an automatic, semantically equivalent translation between HOL, the higher-order logic supported by the HOL4 theorem proving environment, and a deep embedding of the first order logic supported by ACL2. An implementation of this translation allows ACL2 to be used as a symbolic execution engine for functions defined in HOL.

Formalising Java's Data-Race-Free Guarantee

David Aspinall and Jaroslav Sevcik

We present a formalisation of the data race free guarantee provided by Java, as captured by the semi-formal Java Memory Model. The data race free (DRF) guarantee says that all programs which are correctly synchronised (free of data races) can only have sequentially consistent behaviours, so they can be understood intuitively by programmers. By formalising this in a theorem prover, we have achieved three main aims. First, we have made the definitions and proofs precise, leading to a better understanding. Our analysis found several hidden inconsistencies and missing details. Second, once we built a formal definition, we could explore variations and investigate their impact in the proof with the aim of simplifying the JMM; we found that not all of the conditions in the JMM definition are necessary for the DRF guarantee. This allows us to suggest a quick fix to the serious bug discovered by Cenciarelli et al (2007) without invalidating the DRF guarantee. Finally, the formal definition provides a basis to test concrete examples, and investigate JMM-aware logics for concurrent programs in future work.

Simple Types, deep and shallow

François Garillot and Benjamin Werner

We present a formal treatment of normalization by evaluation in type theory. Because we use a presentation with named variables, the semantics into which the simply-typed lambda-calculus is interpreted is the simply-typed fragment of the type theory itself. This means we have constructed and proved correct a decompilation function which recovers the syntax of a program, provided it belongs to the simply-typed fragment. The entire development runs and is checked in Coq. Possible applications include the formal treatment of languages with binders.
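
Normalization by evaluation can be sketched in ordinary OCaml as follows (untyped, with de Bruijn indices; the paper's development is typed, uses named variables, and lives inside Coq, so this only conveys the eval/reify round trip):

    type term  = Var of int | Lam of term | App of term * term
    type value = VLam of (value -> value) | VNeutral of neutral
    and neutral = NVar of int | NApp of neutral * value

    let rec eval env = function
      | Var i        -> List.nth env i
      | Lam t        -> VLam (fun v -> eval (v :: env) t)
      | App (t1, t2) ->
          (match eval env t1 with
           | VLam f     -> f (eval env t2)
           | VNeutral n -> VNeutral (NApp (n, eval env t2)))

    (* reify turns a semantic value back into syntax; k counts binders passed *)
    let rec reify k = function
      | VLam f     -> Lam (reify (k + 1) (f (VNeutral (NVar k))))
      | VNeutral n -> reify_neutral k n
    and reify_neutral k = function
      | NVar i      -> Var (k - i - 1)   (* de Bruijn level to index *)
      | NApp (n, v) -> App (reify_neutral k n, reify k v)

    let normalize t = reify 0 (eval [] t)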

Encoding indexed inductive definitions using the intensional identity type

Ulf Norell

It is well-known how to represent indexed inductive definitions, or inductive families, using extensional equality. We show that this encoding also works in intensional type theory and that the computation rules associated with an inductive family are preserved. The proof has been formalised in Agda.
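
A sketch of the kind of encoding at issue, written in Lean-style syntax rather than Agda (the names and details are ours): the first, usual family specialises its index in each constructor; the second replaces that specialisation by explicit identity proofs about the index.

    -- the usual indexed family: constructors fix the index
    inductive Vec (A : Type) : Nat → Type where
      | nil  : Vec A 0
      | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

    -- the encoding: the index is a parameter, constrained by identity proofs
    inductive Vec' (A : Type) (n : Nat) : Type where
      | nil  : n = 0 → Vec' A n
      | cons : (m : Nat) → A → Vec' A m → n = m + 1 → Vec' A n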

(paper withdrawn by author)

Program verification: operational semantics, concurrency and weak memory models

Tom Ridge

This paper concerns the formal semantics of programming languages, and the specification and verification of software. We are interested in the verification of real programs, written in real programming languages, running on machines with real memory models. To this end, we verify a Caml implementation of a concurrent algorithm, Peterson's mutual exclusion algorithm, down to the operational semantics. The implementation makes use of Caml features such as higher-order parameters, state, concurrency and nested general recursion. Our Caml model includes a datatype of expressions, and a small-step reduction relation for programs (a Caml expression together with a store). We also develop a new proof of correctness for a modified version of Peterson's algorithm, designed to run on a machine with a weak memory model.
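
Peterson's protocol itself is short; an OCaml sketch of the two-thread version over shared references (not the paper's model, which works against a formal operational semantics, and correct only under sequential consistency, which is exactly why the weak-memory variant is interesting) looks like:

    let flag = [| ref false; ref false |]
    let turn = ref 0

    let lock me =
      let other = 1 - me in
      flag.(me) := true;
      turn := other;
      (* spin while the other thread wants the lock and has priority *)
      while !(flag.(other)) && !turn = other do () done

    let unlock me =
      flag.(me) := false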

A modular formalisation of finite group theory

Georges Gonthier, Assia Mahboubi, Laurence Rideau, Enrico Tassi and Laurent Théry

In this paper, we present a formalisation of elementary group theory done in Coq. This work is the first milestone of a long-term effort to formalise the Feit-Thompson theorem. As our further developments will heavily rely on this initial base, special care has been taken to articulate it in the most compositional way.

Building Formal Method Tools in the Isabelle/Isar Framework

Makarius Wenzel and Burkhart Wolff

We present the generic system framework of Isabelle/Isar underlying recent versions of Isabelle. Among other things, Isar provides an infrastructure for Isabelle plug-ins, comprising extensible state components and extensible syntax that can be bound to tactical ML programs. Thus the Isabelle/Isar architecture may be understood as an extension and refinement of the traditional LCF approach, with explicit infrastructure for building derivative systems. To demonstrate the technical potential of the framework, we instantiate it with a concrete formal methods tool: the HOL-Z 3.0 environment, which is geared towards the analysis of Z specifications and formal proof of forward-refinements.

Verification of Expectation Properties for Discrete Random Variables in HOL

Osman Hasan and Sofiene Tahar

One of the most important concepts in probability theory is that of the expectation of a random variable, which basically summarizes the distribution of the random variable in a single number. In this paper, we develop the basic techniques for analyzing the expected values of discrete random variables in the HOL theorem prover. We first present a formalization of the expectation function for discrete random variables and based on this definition the expectation properties of three commonly used discrete random variables are verified. Then, we utilize the definition of expectation to verify the linearity of expectation property, a useful characteristic to analyze the expected values of probabilistic systems involving multiple random variables, in HOL. To demonstrate the usefulness of our approach, we verify the expected value of the coupon collector's problem within the HOL theorem prover.
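
Concretely, the properties in question are (in ordinary notation)
$$\mathbb{E}[X] = \sum_i x_i \Pr[X = x_i], \qquad \mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y],$$
and, for the coupon collector's problem with $n$ distinct coupons, the classical expected number of trials $n \sum_{i=1}^{n} \tfrac{1}{i} = n H_n$.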

HOL2P - A System of Classical Higher Order Logic with Second Order Polymorphism

Norbert Voelker

This paper introduces the logical system HOL2P that extends classical higher order logic (HOL) with type operator variables and universal types. HOL2P has explicit term operations for type abstraction and type application. The formation of type application terms t[T] is restricted to small types T that do not contain any universal types. This constraint ensures the existence of a set-theoretic model and thus consistency. The expressiveness of HOL2P allows category-theoretic concepts such as natural transformations and initial algebras to be applied on the level of polymorphic HOL functions. The parameterisation of terms with type operators adds genericity to theorems. Type variable quantification can also be expressed. A prototype of HOL2P has been implemented on top of HOL-Light. Type inference is semi-automatic, and some type annotations are necessary. Reasoning is supported by appropriate tactics. The implementation has been used to check some simple sample derivations.
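
As an illustration of the extra expressiveness (our example): with a type operator variable $F$, the type of a natural transformation from $F$ to $G$ can be written directly as the universal type $\forall \alpha.\ \alpha\,F \to \alpha\,G$, something ordinary HOL polymorphism cannot express; the small-type restriction then forbids instantiating a type application $t[T]$ with a $T$ that itself contains such a universal type.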

Using XCAP to Certify Realistic System Code: Machine Context Management

Zhaozhong Ni, Dachuan Yu and Zhong Shao

Formal, modular, and mechanized verification of realistic system code is desirable but challenging. Machine context management, the basis of multi-tasking and system software, is one representative example---with context operations occurring every millisecond on every PC, their correctness deserves the most careful examination. Despite the common perception that such small and stable code bases are adequately served by informal scrutiny and testing, context management code, although it has been studied and used for decades, still proves to be a common source of bugs and confusion. Yet its verification remains difficult due to the machine-level details, irregular patterns of control flow, and rich application scenarios. This paper reports our experience applying XCAP, a recent theoretical assembly verification framework, to x86 machines and certifying a realistic machine context implementation with it. XCAP supports expressive and modular logical specifications, but had previously been applied only to a simple idealized machine and code. By applying the XCAP theory to the x86 machine, building libraries of common proof tactics and lemmas, composing specifications for the context data structures and routines, and proving that the code behaves accordingly, we achieved the first formal, modular, and mechanized verification of realistic x86 context management code. Everything has been fully mechanized in the Coq proof assistant. Our library code (and proof) can be linked with other system and application code (and proof), and runs on stock hardware. Our approach applies to other variants of context management (more complex contexts, different platforms, etc.), provides a solid basis for further verification of thread implementations and concurrent programs, and illustrates how to achieve formal, modular, and mechanized verification of realistic system code in general.

Extracting Purely Functional Contents from Logical Inductive Types

David Delahaye, Catherine Dubois and Jean-Frédéric Étienne

We propose a method to extract purely functional contents from logical inductive types in the context of the Calculus of Inductive Constructions. This method is based on a mode consistency analysis, which verifies whether a computation is possible w.r.t. the selected inputs/outputs, and on the code generation itself. We prove that this extraction is sound w.r.t. the Calculus of Inductive Constructions. Finally, we present some optimizations, as well as the implementation designed in the Coq proof assistant framework.
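
A toy instance of the idea (ours, not the paper's): the inductive predicate add, defined by the clauses add 0 m m and add (S n) m (S p) whenever add n m p, is mode-consistent when the first two arguments are inputs and the third is the output, and so extracts to the obvious recursive function (here in OCaml):

    let rec add n m =
      match n with
      | 0 -> m
      | _ -> 1 + add (n - 1) m
    (* with the third argument chosen as output, each clause determines it
       uniquely from the inputs, which is what the mode analysis checks *)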

Verifying nonlinear real formulas via sums of squares

John Harrison

Techniques based on sums of squares appear promising as a general approach to the universal theory of reals with addition and multiplication, i.e. verifying Boolean combinations of equations and inequalities. A particularly attractive feature is that suitable `sum of squares' certificates can be found by sophisticated numerical methods such as semidefinite programming, yet the actual verification of the resulting proof is straightforward even in a highly foundational theorem prover. We will describe our experience with an implementation in HOL Light, noting some successes as well as difficulties. We also describe a new approach to the univariate case that can handle some otherwise difficult examples.
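
A tiny example of such a certificate (ours): the universal claim $\forall x\, y.\ 2x^2 - 2xy + y^2 \ge 0$ follows from the identity
$$2x^2 - 2xy + y^2 = x^2 + (x - y)^2,$$
and checking the certificate is mere polynomial arithmetic, even though finding one in general calls for semidefinite programming.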

Proof Pearl: The Termination Analysis of TERMINATOR

Joe Hurd

TERMINATOR is a static analysis tool developed by Microsoft Research for proving termination of Windows device drivers written in C. This proof pearl describes a formalization in higher order logic of the program analysis employed by TERMINATOR, and verifies that if the analysis succeeds then program termination logically follows.
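
For orientation (our summary, not the paper's statement): TERMINATOR builds on the transition-invariant method of Podelski and Rybalchenko, whose core soundness fact is that if the transitive closure of the program's transition relation, restricted to reachable states, is contained in a finite union $T_1 \cup \dots \cup T_n$ of well-founded relations, then the program terminates; the formalization referred to above verifies that success of such an analysis indeed entails termination.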

Proof Pearl: Wellfounded Induction on the Ordinals up to Epsilon-0

Konrad Slind and Matt Kaufmann

We discuss a proof of the wellfounded induction theorem for the ordinals up to Epsilon-0. The proof is performed on the embedding of ACL2 in HOL4, thus supporting logical justification for that embedding and supporting the claim that the ACL2 logic has a model.
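
For orientation, the ordinals in question admit a concrete syntactic description: every ordinal $0 < \alpha < \epsilon_0$ has a unique Cantor normal form
$$\alpha = \omega^{\beta_1} \cdot c_1 + \dots + \omega^{\beta_k} \cdot c_k, \qquad \beta_1 > \dots > \beta_k, \quad c_i \in \mathbb{N}_{>0},$$
with the exponents $\beta_i$ themselves below $\epsilon_0$, which is why such ordinals can be represented as finite nested structures (as ACL2 does) and embedded in HOL4.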

Improving the Usability of HOL through Controlled Automation Tactics

Eunsuk Kang and Mark Aagaard

This paper introduces the concept of controlled automation as a balanced medium between high-level automated reasoning and low-level primitive tactics in HOL. We have created a new tactic that automates the use of many existing low-level tactics for logical operations and three new tactics that simplify the most common uses of term rewriting: definition expansion, simplification, and equational rewriting. To implement the tactics, we extended HOL with a facility to label assumptions and operate uniformly on both goals and assumptions. We select automatically and predictably which low-level tactic to apply by examining the structure of the selected assumption or goal. A simple and uniform set of hints enables users to provide the minimal information needed to guide the tactics. We performed two case studies and achieved a 60% reduction in the number of tactics in proof scripts. Informal feedback from novice users in a verification course was also quite positive.

Last modified: 28 Aug 2013. Contact: tphols2007@informatik.uni-kl.de.
TPHOLs 2007 is supported by DASMOD (Dependable Adaptive Systems and Mathematical Modeling), Fraunhofer IESE (Institute for Experimental Software Engineering) and DFKI (German Research Center for Artificial Intelligence).