Arrival, Check-in, Coffee
Welcome
Complexity class of JavaScript regex matching, in Rocq
Victor Deng, EPFL
Abstract
JavaScript regexes are widely used in JavaScript programs, but the complexity class of the JavaScript regex matching problem is not yet well understood. We present ongoing work on proving several complexity class results for JavaScript regex matching: JavaScript regex matching is PSPACE-hard and OptP-hard, even without negative lookarounds; JavaScript regex matching without forced quantifiers is PSPACE-complete; JavaScript regex matching without forced quantifiers and without lookarounds is OptP-complete. The proofs are mechanized in the Rocq proof assistant.
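One source of this hardness is that "regexes" in practice go well beyond classical regular expressions. As a small illustration (using Python's `re` engine, whose semantics differ from JavaScript's), a single backreference already lets a pattern recognize a non-regular language:

```python
import re

# Backreferences let a regex recognize the non-regular language { w b w },
# one reason extended "regexes" are harder to match than classical regular
# expressions.  (Python's re engine is used here only for illustration.)
pattern = re.compile(r'(a*)b\1')

print(bool(pattern.fullmatch('aabaa')))   # same run of a's on both sides
print(bool(pattern.fullmatch('aabaaa')))  # mismatched repetition
```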
Parallel SMT Solving via Iterative Tree Partitioning
Tomas Kolarik, University of Lugano
Abstract
We present a novel algorithm for parallel SMT solving based on a partitioning process that iteratively divides the original problem into a tree structure. It allows revisiting nodes in the tree and eliminates a problem of prior partitioning approaches, which often suffer worse runtimes than sequential solving. The resulting algorithm is highly flexible: it combines the ideas of partitioning, portfolio solving, and clause sharing, allows the use of various partitioning functions, and scales gracefully with the available resources. Furthermore, it improves over an award-winning sequential solver even when using a single solver. Notably, the implementation of the algorithm is a consistent winner of the annual competition of parallel SMT solvers.
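The core idea of partitioning can be sketched in miniature: case-split the problem on one variable and give each subproblem to a solver instance. The sketch below is ours, not the paper's algorithm; it uses a brute-force stand-in for an SMT solver and a depth-1 split, whereas the real approach builds and revisits a whole tree of partitions.

```python
from itertools import product

# Toy stand-in for an SMT solver: brute-force SAT over boolean variables.
# A clause is a list of (variable, polarity) literals.
def solve(clauses, fixed, variables):
    free = [v for v in variables if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = dict(fixed, **dict(zip(free, bits)))
        if all(any(assign[v] == pol for v, pol in clause) for clause in clauses):
            return assign
    return None

# Partitioning: split the search space on one variable into two subproblems.
# (The iterative tree-based algorithm re-splits and revisits such nodes.)
def solve_partitioned(clauses, variables, split_var):
    for value in (False, True):
        model = solve(clauses, {split_var: value}, variables)
        if model is not None:
            return model
    return None

# (x or y) and (not x or y)
clauses = [[('x', True), ('y', True)], [('x', False), ('y', True)]]
model = solve_partitioned(clauses, ['x', 'y'], 'x')
print(model['y'])  # any model must set y = True
```

In the real setting the two branches would run in parallel and share learned clauses; here they are solved sequentially for simplicity.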
Precise Reasoning about Container-Internal Pointers with Logical Pinning
Yawen Guan, EPFL
Abstract
Most separation logics hide container-internal pointers for modularity. This makes it difficult to specify container APIs that temporarily expose those pointers to the outside, and to verify programs that use these APIs.
We present logical pinning, a lightweight borrowing model for sequential programs that allows users to selectively track container-internal pointers at the logical level. Our model generalizes the magic-wand operator for representing partial data structures, making it easy to write and prove precise specifications, including pointer-stability properties. Because it only changes the way representation predicates and specifications are written, our approach is compatible with most separation logic variants.
We demonstrate the practicality of logical pinning by verifying small but representative pointer-manipulating programs, and deriving more precise versions of common container specifications. In doing so, we show that our approach subsumes some well-known proof patterns, simplifies some complex proofs, and enables reasoning about program patterns not supported by traditional specifications. All of our results are mechanized in the Rocq proof assistant, using the CFML library.
CHC-Based Reachability Analysis via Cycle Summarization
Konstantin Britikov, University of Lugano
Abstract
Modern reachability analysis techniques are highly effective when applied to software safety verification. However, they still struggle with certain classes of problems, particularly the verification of programs with complex control flow and deep nested loops. In this paper, we introduce Cycle Summarization-based Reachability Analysis (CSRA), a new Constrained Horn Clause (CHC) based approach for reachability analysis of nested-loop software. Our technique relies on the generation and refinement of cycle summaries within the CHC system. CSRA analyzes cycles in a modular manner, constructing summaries and cycle unrollings. Cycle summaries in our approach are used both to prove safety and detect potential safety violations. This enables more efficient exploration of nested loops. The prototype of CSRA is implemented within the Golem CHC solver. An empirical comparison with other reachability analysis techniques demonstrates that our approach is highly competitive in both proving safety and constructing counterexamples.
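The effect of a cycle summary can be seen on a toy loop: instead of unrolling the loop, a relation describing its net effect is used to decide a reachability property. This sketch is our own drastic simplification, not CSRA or Golem's CHC encoding.

```python
# Toy illustration: replace a loop by a summary relation and use the summary,
# rather than unrolling, to check a safety property.

# Concrete loop:  x = 0; while x < n: x += 2
def run_loop(n):
    x = 0
    while x < n:
        x += 2
    return x

# Summary: after k >= 0 iterations x = 2k, so at exit x is the least even
# number >= n.  The summary alone proves the safety property x_exit <= n + 1.
def summarized_exit(n):
    return n if n % 2 == 0 else n + 1

for n in range(20):
    assert run_loop(n) == summarized_exit(n)   # summary is sound here
    assert summarized_exit(n) <= n + 1         # safety holds on the summary
print("summary matches the loop on all tested inputs")
```

In CSRA such summaries are generated and refined automatically within the CHC system, and serve both to prove safety and to expose potential violations.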
Abductive Inference of Separation Logic Specifications with Isorecursive User-Defined Predicates and Magic Wands
Nicolas Klose, ETH Zurich
Abstract
Program verifiers based on separation logic, such as Gillian, Verifast, and Viper, allow one to prove complex properties of heap-manipulating, concurrent programs. These tools automate a large part of the proof search, but require a substantial amount of annotations such as method pre- and postconditions and loop invariants. Inference techniques such as bi-abduction can alleviate this burden, but existing techniques are too restrictive to be used in expressive program verifiers. In particular, existing bi-abduction techniques do not support the magic wand connective, so that inference for iterative traversals of structures beyond list segments is limited. Moreover, they rely on an equirecursive interpretation of predicates, instead of the isorecursive interpretation used by most SMT-based verifiers. In this paper, we present a novel abductive inference that addresses these limitations. It infers loop invariants that combine user-defined predicates with magic wands to keep track of the part of a data structure still to be traversed and the remainder of the data structure, such that ownership of the entire structure is retained after the traversal. Moreover, our abduction technique is the first to infer the auxiliary operations required by verifiers to manipulate predicates and wands. We implemented our inference in Viper; our evaluation shows that our approach can infer over 80% of the specifications required to verify memory safety of a diverse benchmark set.
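The shape of bi-abduction can be conveyed with a deliberately crude model: treat heaps as plain sets of owned locations (ignoring predicates, magic wands, and aliasing, which are exactly what the paper's technique handles). Given the caller's state and a callee precondition, infer the missing anti-frame and the leftover frame. This sketch is illustrative only.

```python
# Heaps as sets of owned locations (a drastic simplification).
# Bi-abduction: given state S and required precondition R, infer the
# anti-frame M and frame F such that S together with M covers R,
# while F = S \ R is untouched by the call.
def biabduce(state, required):
    anti_frame = required - state   # resources the caller must additionally own
    frame = state - required        # resources framed around the call
    return anti_frame, frame

S = {'x', 'y'}          # caller currently owns x and y
R = {'y', 'z'}          # callee needs y and z
M, F = biabduce(S, R)
print(sorted(M))  # ['z']  -> added to the inferred precondition
print(sorted(F))  # ['x']  -> survives the call unchanged
```

The paper's contribution is doing this kind of inference in the presence of isorecursive user-defined predicates and magic wands, including the fold/unfold and wand-manipulation operations that SMT-based verifiers require.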
E-Graphs for Property-Based Testing
David Spielmann, University of St. Gallen
Lunch
Functional Programming for the (hopefully) Hungry Masses!
Farhad Mehta, OST
Interactive Proofs for Self-Stabilizing Algorithms using PADEC
Karine Altisen, Université de Genève
Abstract
PADEC is a framework to build machine-checked proofs of self-stabilizing algorithms using the Coq/Rocq proof assistant. The framework includes the definition of the computational model, tools for time complexity and composition of algorithms, lemmas for common proof patterns and case studies. A constant purpose was to convince the designers that what we formally prove using PADEC is what they expect by using definitions that are (syntactically) as close as possible to their usual definitions. This talk gives an overview of the framework along with recent developments.
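A classic example of the algorithms such proofs target is Dijkstra's K-state token ring: from an arbitrary initial configuration, the system converges to a legitimate one with exactly one privileged process. The Python sketch below is our own illustration of the computational model, not part of PADEC.

```python
# Dijkstra's K-state token ring (n processes, K >= n states).
# Process 0 is privileged when it equals its left neighbor (the last process);
# process i > 0 is privileged when it differs from process i-1.
def privileged(s, i):
    return s[i] == s[i - 1] if i == 0 else s[i] != s[i - 1]

def step(s, i, K):
    t = list(s)
    t[i] = (s[0] + 1) % K if i == 0 else s[i - 1]
    return t

def stabilize(s, K, rounds=100):
    # central daemon: repeatedly fire the lowest-index privileged process
    for _ in range(rounds):
        enabled = [i for i in range(len(s)) if privileged(s, i)]
        if len(enabled) == 1:
            return s, True  # legitimate configuration: exactly one token
        s = step(s, enabled[0], K)
    return s, False

s, legit = stabilize([3, 1, 4, 1, 0], K=5)
print(legit)  # True: converged to a single-token configuration
```

PADEC mechanizes convergence and closure arguments of exactly this kind, together with time-complexity bounds, in Coq/Rocq.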
Programming Without References
Dimi Racordon, HES-SO Valais-Wallis
Abstract
Taming the unintended consequences of shared mutable state has been a decades-long quest in both industry and academia. While programming languages like Rust and more recent efforts like Scala's capture checking have demonstrated the benefits of this effort, balancing the expressiveness and complexity costs of alias tracking mechanisms remains a timely issue.
For a little more than five years, I have been exploring a different direction: programming without references. It turns out that this restriction yields a neat discipline, mutable value semantics, which is expressive enough to write efficient in-place algorithms, offers many of the benefits of pure functional programming, and does not require sophisticated typing features.
In this talk, I will present Hylo, a programming language for systems programming that embraces mutable value semantics. After a quick review of key features, I will show how to design safe and efficient data structures and algorithms without resorting to references.
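The contrast at the heart of mutable value semantics can be shown in a few lines. Python (like most mainstream languages) has reference semantics, which we can only emulate away with explicit copies; Hylo provides independence of values by construction, and without eager copying.

```python
import copy

# Reference semantics: mutating a shared object is visible through every alias.
a = [1, 2, 3]
b = a
b.append(4)
print(a)  # [1, 2, 3, 4] -- action at a distance through the alias

# Mutable value semantics: each binding owns an independent value, so local
# mutation cannot be observed elsewhere.  Emulated here with an explicit copy.
c = [1, 2, 3]
d = copy.deepcopy(c)
d.append(4)
print(c)  # [1, 2, 3] -- unaffected
```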
Letting go of data privately with budgeted declassification
Ali Mohammadpur-fard, Università della Svizzera Italiana
Abstract
Collaborative environments often need to share sensitive data across different trust levels, yet existing tools either ignore security hierarchies (treating data as simply "private" or "public") or treat release as all-or-nothing with no way to control how much is leaked. We introduce a language-level mechanism that unites differential privacy with information-flow control, making declassification a first-class, type-checked feature governed by privacy budgets. This allows the enforcement of a novel "noninterference up to ε" property, ensuring bounded leakage across security boundaries. A prototype shows that programs using this approach are significantly shorter than ad-hoc alternatives while achieving higher accuracy with equivalent budgets.
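A run-time analogue of the idea can be sketched as follows: every declassification spends part of a privacy budget and releases only a noised value. The class name and API below are ours, for illustration; in the paper the budget discipline is enforced statically by the type system, not by run-time checks.

```python
import random

# Illustrative run-time analogue of budgeted declassification (names are ours).
class BudgetedChannel:
    def __init__(self, epsilon_total):
        self.remaining = epsilon_total

    def declassify(self, secret, epsilon, sensitivity=1.0):
        if epsilon > self.remaining:
            raise PermissionError("privacy budget exhausted")
        self.remaining -= epsilon
        # Laplace mechanism: the difference of two Exp(eps/sensitivity) draws
        # is Laplace-distributed with scale sensitivity/eps.
        lam = epsilon / sensitivity
        return secret + random.expovariate(lam) - random.expovariate(lam)

chan = BudgetedChannel(epsilon_total=1.0)
noisy = chan.declassify(42.0, epsilon=0.5)
print(round(chan.remaining, 3))  # 0.5 of the budget left
```

Once the budget is exhausted, further declassification is rejected, which is the dynamic counterpart of the "noninterference up to ε" guarantee.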
Probabilistic Model Checking of Quantum Networks
Lorenzo La Corte, Università della Svizzera Italiana
Abstract
Quantum networks enable capabilities that are impossible using classical information alone. They connect quantum-capable nodes by distributing entangled pairs of qubits, which serve as a fundamental resource for communication. To distribute entanglement, quantum network protocols rely on inherently probabilistic operations acting on short-lived resources. In addition, resource contention may arise when operations execute in parallel, introducing nondeterministic behavior. This makes quantitative analysis essential for the design and optimization of quantum networking protocols. This talk presents ongoing work on the formal specification and analysis of quantum networks using probabilistic model checking techniques. We show how entanglement distribution protocols can be modeled as Markov decision processes, capturing both probabilistic effects and nondeterminism arising from scheduling and resource contention. We then apply quantitative methods to compute expected costs, cost-bounded reachability probabilities, and extremal cost distributions. Together, these results provide a systematic framework for reasoning about performance guarantees and trade-offs in quantum network protocols.
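The flavor of such expected-cost computations can be shown on a toy repeater chain (the protocol and numbers below are illustrative, not taken from the talk): two elementary links are generated sequentially, each attempt succeeding with probability p, then an entanglement swap succeeds with probability q; on swap failure both links are lost and the protocol restarts.

```python
# Expected number of attempts for a toy two-link entanglement swap.
def expected_attempts(p_link, p_swap):
    cost_links = 2.0 / p_link          # geometric distribution: 1/p per link
    return (cost_links + 1.0) / p_swap # restart on swap failure

# Cross-check against the defining linear equation
#   E = c + 1 + (1 - q) * E,  solved by fixed-point iteration.
def expected_attempts_iter(p_link, p_swap, iters=10_000):
    c = 2.0 / p_link
    E = 0.0
    for _ in range(iters):
        E = c + 1.0 + (1.0 - p_swap) * E
    return E

print(expected_attempts(0.5, 0.5))               # 10.0 expected attempts
print(round(expected_attempts_iter(0.5, 0.5), 6))
```

Probabilistic model checkers generalize exactly this kind of calculation: the same expected costs (and cost-bounded reachability probabilities) are computed on Markov decision processes that also capture nondeterministic scheduling.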
Verified Composable CRDTs
Alex Städing, University of St. Gallen
Coffee Break
Towards Composable Proof of Cache Coherence Protocols
Martina Camaioni, EPFL
Abstract
Verifying cache coherence protocols is a notoriously difficult problem. At the intersection between distributed protocols and computer architecture, it has long served as a premier target for formal methods. Current verification approaches hinge on the challenging discovery of large global inductive invariants. This paper introduces an alternative proof strategy that tames the complexity through two main contributions: protocol decomposition and a framework of local invariants.
We prove the correctness of the standard MSI protocol compositionally by independently verifying the simpler MI and SI subprotocols and proving that MSI behaves as their combination. This decomposition revealed a useful insight: the invariants necessary in these proofs can be established from a small set of simple local invariants, each relating a single cache to its parent and to memory, eliminating complex global reasoning across children caches. We demonstrate how our approach reduces the number of required invariants from the dozens to hundreds needed in prior work to a small, structured, and manageable set, simplifying the proof of correctness. The entire development is formalized in Lean 4.
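The kind of invariant at stake can be illustrated on a toy atomic MSI model (our own simplification, unrelated to the paper's Lean development): enumerate every reachable combination of cache states and check the single-writer property that coherence proofs ultimately guarantee.

```python
# Toy atomic MSI model with directory-mediated transitions.
def successors(states):
    n = len(states)
    for i in range(n):
        # read by cache i: any M copy downgrades to S, requester becomes S
        yield tuple('S' if (j == i or states[j] == 'M') else states[j]
                    for j in range(n))
        # write by cache i: requester becomes M, all others are invalidated
        yield tuple('M' if j == i else 'I' for j in range(n))
        # eviction: cache i drops to I
        yield tuple('I' if j == i else states[j] for j in range(n))

def reachable(init):
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

states = reachable(('I', 'I', 'I'))
for s in states:
    writers = s.count('M')
    assert writers <= 1                                 # at most one writer
    assert writers == 0 or s.count('I') == len(s) - 1   # M excludes sharers
print("single-writer invariant holds on all reachable states")
```

Brute-force enumeration works for this toy model but not for realistic protocols; the paper's point is that the same guarantees can instead be derived compositionally from local per-cache invariants.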
Advancing Formal Verification at CERN: Updates on PLCverif and Neural Network Verification Challenges
Xaver Fink, CERN & RWTH Aachen University
Abstract
The CERN BE-ICS group is integrating formal methods into the engineering and validation of safety-relevant control software for particle accelerators. This talk gives an update on our Programmable Logic Controller (PLC) and Neural Network (NN) verification activities, with an emphasis on work-in-progress topics where we actively seek discussion and collaboration.
On the PLC side, we report on the ongoing consolidation of the open-source PLCverif framework. We also share recent experience from analyzing CERN’s UNICOS baseline objects (a widely reused PLC framework component), where PLCverif uncovered implementation inconsistencies and specification mismatches. Potential future directions include (i) strengthening model-checking k-induction proofs via LLM-supported invariant synthesis and (ii) model-based software engineering.
On the NN side, we outline three verification challenges emerging from CERN control applications: (i) sequence-level robustness for time-series classifiers used in beam instrumentation, (ii) verification of reinforcement-learning policies for beam alignment under uncertainty, and (iii) planned verification of NN approximations of MPC (e.g., cooling and energy systems).
Agentic Proof Automation: A Case Study
Yichen Xu, EPFL
Abstract
Proof engineering is notoriously labor-intensive: proofs that are straightforward on paper often require lengthy scripts in theorem provers. Recent advances in large language models (LLMs) create new opportunities for proof automation: modern LLMs not only generate proof scripts, but also support agentic behavior, exploring codebases and iteratively refining their outputs against prover feedback. These advances enable an emerging scheme where LLM-based agents undertake most proof engineering under human guidance. Humans provide mathematical insight (definitions, theorems, proof strategies); agents handle the mechanical work of proof development. We call this scheme agentic proof automation. We present this scheme through a case study: mechanizing the semantic type soundness of a sophisticated formal system, System Capless, in Lean 4, comprising over 14,000 lines of code. Using off-the-shelf LLM agents with a single lightweight proof-checking tool, the agents completed 189 proof engineering tasks with an 87% success rate, with only 16% requiring human intervention. The case study demonstrates that agents are capable proof engineers that substantially boost productivity, though they fall short in creative reasoning and still require human guidance in certain cases. We release an interactive explorer where readers can examine all agent interactions; the mechanization is open-sourced for experiments and extensions.
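The propose-check-refine loop described above can be sketched abstractly. Both the "agent" and the "prover" below are stubs of our own making (the real setup calls an LLM and a Lean proof checker); only the control flow is the point.

```python
# Minimal agentic proof-automation loop: propose, check, refine from feedback.
def prover_check(script):
    # stub for a proof checker: accept only the completed script
    if script.endswith("exact h"):
        return True, ""
    return False, "unsolved goals"

def agent(task, feedback):
    # stub for an LLM call: first attempt leaves a hole, then refines
    # the script using the prover's feedback
    return "intro h\nexact h" if feedback else "intro h\nsorry"

def prove(task, max_rounds=5):
    feedback = ""
    for _ in range(max_rounds):
        script = agent(task, feedback)
        ok, feedback = prover_check(script)
        if ok:
            return script
    return None  # escalate to a human after max_rounds failures

print(prove("P → P") is not None)  # True after one refinement round
```

The escalation branch is where the reported 16% of human interventions enter the real workflow.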
Space Explanations: Formal and General Explanations of NNs
Faezeh Labbaf, University of Lugano
Abstract
Space Explanations are formal explanations for neural networks that give provable guarantees about the behavior of the network in continuous regions of the input feature space. To automatically generate space explanations, we leverage a range of flexible Craig interpolation algorithms and unsatisfiable-core generation. Based on real-life case studies of small, medium, and large size, we demonstrate that the generated explanations are more meaningful than those computed by state-of-the-art tools. I will present the work in progress in our group on computing space explanations.
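What "provable guarantees over a continuous region" means can be illustrated with interval arithmetic, a much cruder tool than the Craig-interpolation approach of the talk: bound a tiny ReLU network over an input box and conclude its decision is constant on the whole region. The network weights below are invented for the example.

```python
# Interval bound propagation through a tiny ReLU network.
def interval_affine(lo, hi, W, b):
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            lo_acc += w * (l if w >= 0 else h)  # pick the bound that minimizes
            hi_acc += w * (h if w >= 0 else l)  # ... and the one that maximizes
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def interval_relu(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Network: hidden = relu(W1 x + b1); score = W2 hidden + b2
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [-0.1]

lo, hi = interval_affine([1.0, 0.0], [2.0, 0.5], W1, b1)  # box [1,2] x [0,0.5]
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo[0] > 0.0)  # True: the score is provably positive on the entire box
```

Space explanations go further: rather than checking a given box, they compute meaningful regions with such guarantees automatically, via interpolants and unsatisfiable cores.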
Utilising the PCG in Prusti
Jonas Fiala, ETH Zurich
Abstract
Recent work on Place Capability Graphs (PCG) for Rust provides a complete model of Rust's type-checking results. In this talk, we will explain how Prusti now leverages this PCG model, which simplifies Prusti's implementation and allows it to support more complicated borrowing patterns.
