CAV 2017: Call for Papers

CAV 2017: 29th International Conference on Computer-Aided Verification

Important Dates

All deadlines are AOE (Anywhere on Earth).

Papers:

Paper submission: January 24, 2017 (Tuesday)
Author response period: March 20-22, 2017 (Monday – Wednesday)
Author notification: April 12, 2017 (Wednesday)
Final version: May 5, 2017 (Friday)

Conference:

Workshops: July 22-23, 2017
Main conference: July 24-28, 2017

Submission URL

http://cav2017.mpi-sws.org/

Scope

CAV 2017 is the 29th in a series dedicated to the advancement of the theory and practice of computer-aided formal analysis and synthesis methods for hardware and software systems.  CAV considers it vital to continue spurring advances in hardware and software verification while expanding to domains such as cyber-physical, social, and biological systems.  The conference covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation. The proceedings of the conference will be published in the Springer LNCS series. A selection of papers will be invited to a special issue of Formal Methods in System Design and the Journal of the ACM.

Topics of interest include but are not limited to:

  • Algorithms and tools for verifying models and implementations
  • Algorithms and tools for system synthesis
  • Mathematical and logical foundations of verification and synthesis
  • Specifications and correctness criteria for programs and systems
  • Deductive verification using proof assistants
  • Hardware verification techniques
  • Program analysis and software verification
  • Software synthesis
  • Hybrid systems and embedded systems verification
  • Compositional and abstraction-based techniques for verification
  • Probabilistic and statistical approaches to verification
  • Verification methods for parallel and concurrent systems
  • Testing and run-time analysis based on verification technology
  • Decision procedures and solvers for verification and synthesis
  • Applications and case studies in verification and synthesis
  • Verification in industrial practice
  • New application areas for algorithmic verification and synthesis
  • Formal models and methods for security
  • Formal models and methods for biological systems

Paper Submission

NEW this year:

  1. There is no separate registration (abstract) deadline; full papers must be uploaded by the paper submission deadline.
  2. Tool papers require an artifact submission together with the paper submission. Artifact evaluation takes place concurrently with the review process, and the PC has access to the artifact evaluation results during the PC discussions.

Submissions on a wide range of topics are sought, particularly ones that identify new research directions.  CAV 2017 is not limited to topics discussed in previous instances of the conference.  Authors concerned about the appropriateness of a topic may communicate with the conference chairs prior to submission.

As explained below, CAV 2017 will follow a lightweight double-blind review process.  Submissions that are not “blinded” will be rejected without review.  Simultaneous submission to other conferences with proceedings or submission of material that has already been published elsewhere is not allowed.  The review process will include a feedback/rebuttal period where authors will have the option to respond to reviewer comments.  The PC chairs may solicit further reviews after the rebuttal period.

Papers must be submitted in PDF format via the submission URL listed above.

Submissions will be in two categories: Regular Papers and Tool Papers.

Regular Papers

Regular Papers should not exceed 16 pages in LNCS format, not counting references and appendices.  Authors may include a clearly marked appendix at the end of their submission that is exempt from the page limit; however, reviewers are not obliged to read its contents.  These papers should contain original research and sufficient detail to assess the merits and relevance of the contribution.  Papers will be evaluated on the basis of a combination of correctness, technical depth, significance, novelty, clarity, and elegance.  We welcome papers on theory, case studies, and comparisons with existing experimental research, as well as combinations of new theory with experimental evaluation.  A strong theoretical paper is not required to have an experimental component.  On the other hand, strong papers reproducing and comparing existing results experimentally do not require new theoretical insights.

We encourage authors to provide any supplementary material needed to support the claims made in the paper, such as detailed proofs or experimental data.  This material should be uploaded at submission time as a single PDF or a tarball, not via a URL.  It will be made available to reviewers only after they have submitted their first-draft reviews and hence need not be anonymized.  Reviewers are under no obligation to look at the supplementary material but may refer to it if they have questions about the material in the body of the paper.

Tool Papers

Tool Papers should not exceed 6 pages, not counting references.  These papers should describe the system and implementation aspects of a tool with a large (potential) user base; experiments are not required, and a rehash of theory is strongly discouraged.  Papers describing tools that have already been presented (at any conference) will be accepted only if significant and clear enhancements to the tool are reported and implemented.  Note that tool papers require the submission of an artifact for evaluation by the submission deadline.  Artifacts will be evaluated concurrently with the review process, and the program committee will have access to the artifact evaluation while making its decision.  In special cases where an artifact cannot be submitted, the authors should contact the program chairs to arrange an alternative mode of artifact evaluation.

Lightweight Double-Blind Reviewing Process

CAV 2017 will employ a lightweight double-blind reviewing process. This means that committee members will not have access to authors’ names or affiliations as they review a paper; however, authors’ names will be revealed once reviews have been submitted.

To facilitate this, submitted papers must adhere to two rules:

  1. Author names and institutions must be omitted, and
  2. references to authors’ own related work should be in the third person (e.g., not “We build on our previous work…” but rather “We build on the work of …”).

The purpose of this process is to help the PC and external reviewers come to an initial judgement about the paper without bias, not to make it impossible for them to discover the authors if they were to try.  Nothing should be done in the name of anonymity that weakens the submission, makes the job of reviewing the paper more difficult, or interferes with the process of disseminating new ideas. For example, important background references should *not* be omitted or anonymized, even if they are written by the same authors and share common ideas, techniques, or infrastructure.  Authors should feel free to disseminate their ideas or draft versions of their paper as they normally would.  For instance, authors may post drafts of their papers on the web or give talks on their research ideas.

Artifact Submission and Evaluation

Authors of accepted regular papers will be invited (but not required) to submit the relevant artifact for evaluation by the artifact evaluation committee.

Authors of all tool papers are required to submit their artifact to the artifact evaluation committee at paper submission time. Unlike for regular papers, the results of the artifact evaluation for tool papers will be available to the program committee during the online discussions.

To submit an artifact, please prepare a virtual machine (VM) image of your artifact and keep it accessible through an HTTP link throughout the evaluation process. As the basis of the VM image, please choose a commonly used OS version that has been tested with the virtual machine software and that evaluators are likely to be accustomed to. We encourage you to use VirtualBox (https://www.virtualbox.org) and to save the VM image as an Open Virtual Appliance (OVA) file. Please include the prepared link in the appropriate field of the paper submission form.
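For illustration only (the VM name "cav2017-artifact" below is hypothetical), such an OVA file can be exported from VirtualBox on the command line, for example:

  VBoxManage export cav2017-artifact --output cav2017-artifact.ova

The resulting .ova file can then be uploaded to any web server that makes it available through an HTTP link.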

In addition, please supply at submission time a link to a short plain-text file describing the OS and parameters of the image, as well as the host platform on which you prepared and tested your virtual machine image (OS, RAM, number of cores, CPU frequency). Please also describe how to proceed after booting the image, including instructions for locating the full documentation for evaluating the artifact.
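As a purely illustrative sketch (all names and numbers below are hypothetical), such a description file might read:

  Image:        Ubuntu 16.04 LTS (64-bit), user "artifact", password "artifact"
  Image needs:  4 GB RAM, 2 cores
  Host used:    Ubuntu 16.04, 16 GB RAM, 4 cores, 2.6 GHz
  After boot:   log in as "artifact"; the artifact resides in ~/artifact/.
                Full evaluation instructions are in ~/artifact/README.txt.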

If you are not in a position to prepare the artifact as described above, please contact the PC chairs for an alternative arrangement.

It is to the authors’ advantage to prepare an artifact that is easy for the artifact evaluation committee to evaluate and that yields the expected results. We next provide some guidelines:

  • Document in detail how to reproduce most of the experimental results of the paper using the artifact; keep this process simple through easy-to-use scripts, and provide detailed documentation that assumes minimal expertise of users.
  • Ensure the artifact is ready to run: it should work without a network connection and should not require the user to install additional software before running.
  • Use reasonably modest resources (RAM, number of cores), so that the results can be reproduced on a variety of hardware platforms, including laptops, and make sure the evaluation takes a reasonable amount of time to complete.
  • When possible, include the source code within your virtual machine image and point to the most relevant and interesting parts of the source code tree.
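As one illustrative layout (all file names below are hypothetical), an artifact organized along the following lines tends to be easy to evaluate:

  ~/artifact/README.txt          step-by-step evaluation instructions
  ~/artifact/run_all.sh          reproduces the experiments reported in the paper
  ~/artifact/run_small.sh        quick subset of the experiments for a first sanity check
  ~/artifact/src/                source code, with pointers to the key modules
  ~/artifact/expected_results/   reference output for comparison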

Members of the artifact evaluation committee and the program committee are asked to use submitted artifacts for the sole purpose of evaluating the contribution associated with the artifact.