Keynotes

Chris Hawblitzel, Microsoft Research
Fast verification of fast cryptography for secure sockets

The Everest project is a joint effort between Microsoft Research, INRIA, and CMU to build a formally verified replacement for core HTTPS components, including the TLS protocol, cryptographic primitives, and certificate processing. The goal is to build an efficient implementation of these components, and the cryptographic primitives are especially critical to performance. Therefore, the project has developed verified hand-written assembly language implementations of common cryptographic primitives such as AES, SHA, and Poly1305.
This talk will present an overview of Everest, its verified assembly language cryptography, and the tools used to verify the code, including Vale, Dafny, F*, and Z3. It will discuss challenges in using such tools to verify low-level cryptographic code, including the need to reason about bit-level operations, large integers, and polynomials. A key challenge is the speed of the verification, and the talk will discuss ongoing efforts to combine tactics with SMT solving to make verification fast without sacrificing automation.
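To give a flavour of the bit-level reasoning the talk describes, the following is a small illustrative query written with the z3-solver Python bindings (not Vale, Dafny, or F*, and not taken from the Everest code base). It checks that a shift-and-or implementation of a 32-bit rotation, the kind of identity that appears in primitives such as SHA, agrees with Z3's built-in rotate for every input.

```python
# Illustrative sketch only: a Z3 bit-vector query in the spirit of the
# bit-level reasoning described in the talk. Requires the z3-solver package.
from z3 import BitVec, LShR, RotateLeft, Not, Solver, unsat

x = BitVec('x', 32)

# Claim: a 3-bit left rotate built from shifts and an OR matches RotateLeft.
claim = ((x << 3) | LShR(x, 29)) == RotateLeft(x, 3)

s = Solver()
s.add(Not(claim))            # ask Z3 for a counterexample
assert s.check() == unsat    # none exists: the identity holds for all 32-bit x
print("rotate identity verified for all 32-bit inputs")
```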


Marta Kwiatkowska, Oxford
Safety Verification of Deep Neural Networks

Deep neural networks have achieved impressive experimental results in image classification, yet they can be surprisingly unstable with respect to adversarial perturbations: minimal changes to the input image that cause the network to misclassify it. Since potential applications include perception modules and end-to-end controllers for self-driving cars, this instability raises safety concerns. This lecture will describe progress on a novel automated verification framework for deep neural networks that ensures the safety of their classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions, that should not affect the classification. The techniques work directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples are found if they exist. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks, and we compare against existing techniques for searching for adversarial examples.
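As a rough illustration of the kind of SMT query such a framework might discharge, the sketch below encodes a tiny hand-made ReLU network in Z3 and asks whether any point within an L-infinity ball around a reference input is classified differently. The network architecture, weights, reference point, and perturbation bound are all invented for the example; this is not the lecture's actual framework or benchmark.

```python
# Hypothetical sketch: a toy 2-2-1 ReLU network (weights invented here),
# with Z3 asked whether some perturbation within an L-infinity bound
# flips the classification. Requires the z3-solver package.
from z3 import Reals, If, Solver, sat

def relu(e):
    return If(e > 0, e, 0)

x1, x2 = Reals('x1 x2')

# Toy network: a hidden layer of two ReLUs, then a linear output.
h1 = relu(1.0 * x1 - 1.0 * x2)
h2 = relu(-1.0 * x1 + 2.0 * x2)
out = 1.0 * h1 - 1.0 * h2          # classify by the sign of `out`

# Reference input (classified positive) and perturbation budget.
p1, p2, eps = 0.8, 0.2, 0.1

s = Solver()
s.add(x1 >= p1 - eps, x1 <= p1 + eps,
      x2 >= p2 - eps, x2 <= p2 + eps)
s.add(out <= 0)                    # ask for a misclassified neighbour

if s.check() == sat:
    print("adversarial example:", s.model())
else:
    print("no adversarial example within the L-infinity ball")
```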


Viktor Vafeiadis, MPI-SWS
Formal reasoning under weak memory consistency

The semantics of concurrent programs is now defined by a weak memory model, determined either by the programming language (e.g., in the case of C/C++11 or Java) or by the hardware architecture (e.g., for assembly and legacy C code). Since most work on concurrent software verification predates weak memory consistency, it is natural to ask how these models affect formal reasoning about concurrent programs.

In this talk, we show that verification is indeed affected: for example, the standard Owicki-Gries method is unsound under weak memory. Further, based on concurrent separation logic, we develop a number of sound program logics for fragments of the C/C++11 memory model. We show that these logics are useful not only for verifying concurrent programs, but also for explaining the weak memory constructs of C/C++.
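The classic illustration of why reasoning based on interleavings breaks down is the store-buffering litmus test. The sketch below (not taken from the talk, and using a deliberately simplified TSO-style buffer model) enumerates the test's outcomes under sequential consistency and then shows the additional outcome that per-thread store buffers permit, an outcome that an Owicki-Gries proof carried out against an interleaving semantics would wrongly exclude.

```python
# Minimal sketch of the store-buffering (SB) litmus test.
# Thread 0: x := 1; r0 := y        Thread 1: y := 1; r1 := x
from itertools import permutations

THREADS = [[('store', 'x'), ('load', 'y')],
           [('store', 'y'), ('load', 'x')]]

def sc_outcomes():
    """All (r0, r1) results under sequential consistency."""
    results = set()
    # An interleaving is a choice of which thread takes each of the 4 steps.
    for order in set(permutations([0, 0, 1, 1])):
        mem, regs, pc = {'x': 0, 'y': 0}, [None, None], [0, 0]
        for t in order:
            op, var = THREADS[t][pc[t]]
            if op == 'store':
                mem[var] = 1
            else:
                regs[t] = mem[var]
            pc[t] += 1
        results.add(tuple(regs))
    return results

print('SC outcomes:', sorted(sc_outcomes()))      # (0, 0) never appears

# Under TSO-like semantics each store first sits in its thread's store buffer.
# If both threads run store-then-load before either buffer drains, each load
# misses the other thread's store and reads 0 from memory:
mem = {'x': 0, 'y': 0}
buf = [{}, {}]
buf[0]['x'] = 1                    # thread 0: x := 1 (still buffered)
r0 = buf[0].get('y', mem['y'])     # thread 0: r0 := y  -> 0
buf[1]['y'] = 1                    # thread 1: y := 1 (still buffered)
r1 = buf[1].get('x', mem['x'])     # thread 1: r1 := x  -> 0
print('Weak-memory outcome:', (r0, r1))           # (0, 0) is allowed
```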