ATVA’14 – Day 3

Below are posts about a selection of the talks of the third and last day of ATVA’14.

Parameterised Synthesis – Roderick Bloem

“A cookie cutter approach to synthesis”

The classical synthesis approach: given a formula, synthesise a system (LTL2Verilog). In the classical verification approach, you do the thinking twice: you write a specification and an implementation, and then you verify whether the implementation satisfies the specification. You do not want to do the same thing twice, hence synthesis (see photo). (Note: there may be some criticism about this…)

Roderick Bloem during the keynote at ATVA’14

Roderick explains a case study with this classical approach: synthesising the AMBA arbiter. Some statistics: the specification consists of 12 guarantees and 3 assumptions. The problem: the size of the synthesised system appears to grow exponentially in the number of masters involved in the protocol, whereas the size of the handwritten implementation is more or less constant (since it is parameterised). The underlying issue is that you are doing monolithic synthesis for a fixed number of masters; what you want instead is a cookie-cutter approach: parameterised synthesis, since real specifications are parameterised anyway.

This parameterised approach consists of two steps: (1) synthesis of systems of arbitrary size is reduced to synthesis of a fixed size k (adapting cutoff results), and (2) synthesise a distributed, uniform system of size k. The latter problem is undecidable, so you want to use bounded synthesis. We look at both steps in a bit more detail:

  1. Reduction to fixed size. Example using token rings with Emerson and Namjoshi’s cutoff theorem (from the setting of verification): a ring of size c is correct iff any ring is correct. If this theorem holds for verification, then it also holds for synthesis, so we can reuse this sort of result.
  2. Uniform synthesis. Roderick first looks at parameterised synthesis for token rings with LTL\X specifications, which is also undecidable. To tackle this, one can try bounded synthesis (Finkbeiner & Schewe 2007) instead. For bounded synthesis there is a semi-decision procedure that repeatedly tries to find a small implementation satisfying the formula; if such an implementation exists, the procedure terminates with the smallest one (a rough sketch of this loop follows the list). The approach uses nice automata-theoretic results such as one-letter co-Büchi universality. The actual synthesis is performed using an SMT solver. This bounded synthesis approach is then generalised to uniform synthesis, again giving a semi-decision procedure.
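
The semi-decision procedure from item 2 can be pictured roughly as follows. This is only a sketch; `encode` and `solve` are hypothetical placeholders for the SMT encoding and the SMT solver call, and none of this is taken from PARTY.

```python
# A minimal sketch of the bounded-synthesis loop (not the actual PARTY
# implementation). `encode(spec, bound)` is assumed to build SMT constraints
# stating "there is an implementation with at most `bound` states satisfying
# the specification"; `solve` returns a model if they are satisfiable, else None.

def bounded_synthesis(spec, encode, solve, max_bound=None):
    """Semi-decision procedure: try implementations of increasing size."""
    bound = 1
    while max_bound is None or bound <= max_bound:
        model = solve(encode(spec, bound))
        if model is not None:
            return model          # smallest implementation (w.r.t. the bound) found
        bound += 1                # no implementation of this size; try a larger one
    return None                   # gave up; in general the loop need not terminate
```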

All of the above is explained for the token ring problem. Roderick goes on to show the whole approach in a more general setting; the ideas are similar, but the cutoffs depend not only on the property but also on the size of the system, so the whole approach is a bit more complex. All this is implemented in the tool PARTY (which, for now, only works for very simple examples).

Some additional observations on synthesis that are interesting to mention:

  • Safety alone is not enough for synthesis
  • You have to do liveness as well
  • For this you want to take fairness into account

To conclude this post: of course, the cookie cutter in the talk lies in the uniformity of the processes: you make a template (the cookie cutter) with which to build a ring in which all processes are the same!

Fast Debugging of PRISM Models – Christian Dehnert

For the talk, Christian focuses on debugging probabilistic reachability properties of probabilistic automata, although the approach is a bit more general than that. The work is done in the context of PRISM, hence models are described in its modelling language; the state space is essentially the Cartesian product of the domains of the variables in the model.

Giving high-level counterexamples in terms of commands is NP-hard; an existing approach by Wimmer et al. that computes a minimal set of commands takes a long time on realistic examples. In this talk, a new approach is described: enumerate command sets while cutting away sub-optimal sets (because otherwise the number of sets is too large). Constraints are created that eliminate these unwanted counterexamples, and all of this is encoded into a MaxSAT problem.
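
To make the idea a bit more concrete, here is a rough sketch of the enumeration loop as I understood it. This is not PRISM's or the authors' code; `solve_maxsat` and `reachability_probability` are hypothetical callbacks (a MaxSAT solver returning a smallest command set consistent with the blocking constraints collected so far, and a model checker for the sub-model induced by a command set).

```python
# Sketch of counterexample generation via MaxSAT (hypothetical helpers, see the
# text above). Assumes monotonicity: removing commands can only decrease the
# reachability probability of the induced sub-model.

def smallest_critical_command_set(commands, threshold,
                                  solve_maxsat, reachability_probability):
    blocking = []                                  # sets from which at least one command must be picked
    while True:
        # Soft constraints: include as few commands as possible;
        # hard constraints: pick at least one command from every blocking set.
        candidate = solve_maxsat(commands, blocking)
        if candidate is None:
            return None                            # no command set reaches the threshold
        if reachability_probability(candidate) >= threshold:
            return candidate                       # small critical command set found
        # Candidate (and, by monotonicity, each of its subsets) is insufficient:
        # any critical set must contain a command outside the candidate.
        blocking.append(set(commands) - set(candidate))
```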

 

Verification of Markov Decision Processes using Learning Algorithms – Jan Kretinsky

How can we benefit from results from the machine learning community in probabilistic verification? This is the central question of Jan’s talk. He illustrates the approach using LTL model checking of MDPs. The problem in practice is that computing reachability of maximal end components is too slow. Jan proposes heuristics from machine learning to compute this reachability.

The heuristics are applied to the fixed point computation in value iteration. What are the heuristics? Observe that stabilisation of fixed point computation depends on the order of evaluation. The idea is to more frequently evaluate states that are visited more frequently by good schedulers. They use reinforcement learning for this. The same idea is also applied to the case where you don’t know the exact transitions in a statistical model checking setting.
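
To illustrate the flavour of the heuristic (this is my own toy rendering, not the authors' algorithm): rather than sweeping over all states, one can repeatedly simulate runs under the currently best scheduler and only back up the values of the states that were actually visited, so that frequently visited states get updated more often.

```python
import random

# Toy sketch: simulation-guided value iteration for maximal reachability in an
# MDP given as {state: {action: [(probability, successor), ...]}}. Values start
# at 0 (except for targets) and are only backed up along simulated runs.

def simulation_guided_value_iteration(mdp, init, targets, runs=10000, horizon=100, eps=0.1):
    value = {s: (1.0 if s in targets else 0.0) for s in mdp}

    def backup(s):
        return max(sum(p * value[t] for p, t in succ) for succ in mdp[s].values())

    def greedy_action(s):
        if random.random() < eps:                       # occasional exploration
            return random.choice(list(mdp[s]))
        return max(mdp[s], key=lambda a: sum(p * value[t] for p, t in mdp[s][a]))

    def sample(successors):
        r, acc = random.random(), 0.0
        for p, t in successors:
            acc += p
            if r <= acc:
                return t
        return successors[-1][1]                        # guard against rounding errors

    for _ in range(runs):
        path, s = [], init
        while s not in targets and mdp[s] and len(path) < horizon:
            path.append(s)
            s = sample(mdp[s][greedy_action(s)])
        for visited in reversed(path):                  # back up along the run
            value[visited] = backup(visited)
    return value
```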

Rabinizer 3 – Jan Kretinsky

The translations from LTL to non-deterministic Büchi automata and from non-deterministic Büchi automata to deterministic Rabin automata are both exponential, and traditionally both are needed in the verification of LTL formulae. Determinisation is complicated, results in a “messy” state space, and is hard to implement. Therefore, there are results that try to get rid of this determinisation step. Existing translations (e.g. in Rabinizer and Rabinizer 2) go directly from (fragments of) LTL to DRA without this determinisation step. In this talk, Jan presents a method that translates the whole of LTL to deterministic (transition-based) generalised Rabin automata (DTGRA). These are as easy to use for probabilistic model checking as DRA, yet do not blow up as badly; using them for synthesis is more complicated.

 

A Game-Theoretic Approach to Simulation of Data-Parameterised Systems – Sharai Sheinvald

The final session of ATVA is kicked off by Sharai, one of my fellow students at the Marktoberdorf summer school in 2011.

Data-parameterised systems are finite-state systems with variables that can range over infinite domains, so they describe infinite-state systems using a finite representation. For data-parameterised systems, Sharai extends temporal logic with variables to variable temporal logic (VTL), which is simply a first-order extension of LTL; likewise she extends CTL* to VCTL*, a first-order extension of CTL*.

Traditionally we know that ACTL* characterises simulation. Sharai’s contribution is a simulation definition for such abstract systems, with an algorithm for finding such a simulation, and a game-based characterisation. To take care of the data parameterisation, the simulation relation is extended with a function matching the variables between the different systems. Finding the maximal simulation is exponential, but in this setting language equivalence is exponential too. If systems are related by simulation, then they satisfy the same VACTL* formulas, lifting the results from the non-parameterised case. There is also a simulation game between refuter and prover such that the systems are related iff the prover has a memoryless winning strategy in this game. Likewise, if there is no simulation relation, then the refuter has a memoryless winning strategy. In both cases some information about the variables needs to be remembered, though. A nice additional result is that if there is no simulation relation between two systems, then there is a distinguishing VACTL formula.

 

Symmetry Reduction in Infinite Games with Finite Branching – Steen Vester

This is the last talk of the conference. We all know very well that parity games, once we generate them explicitly, get very large. Steen tries to attack this by defining symmetry reduction on games. The results work for infinite games with finite branching, due to the application of König’s lemma in the proofs. The symmetry approach is the same as the classical symmetry approach for labelled transition systems or Kripke structures, based on symmetry groups and orbits. The symmetry result is proved for general objectives. As examples he instantiates the results for parity games and for alternating-time temporal logic. For concurrent games the approach also works, but symmetries must map actions to actions.
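
To recall what the classical construction looks like, here is a small sketch of the standard orbit quotient for structures whose states are tuples of identical components; this is my own illustration, not Steen's formalisation, and for simplicity it enumerates the full symmetric group instead of working with generators.

```python
from itertools import permutations

# Classical symmetry quotient: collapse each state to the lexicographically
# least member of its orbit under permutations of the component indices.

def representative(state, group):
    """Canonical representative: the least state in the orbit of `state`."""
    return min(tuple(state[i] for i in perm) for perm in group)

def quotient(states, edges, num_components):
    group = list(permutations(range(num_components)))
    reps = {s: representative(s, group) for s in states}
    q_states = set(reps.values())
    # An edge between two orbits exists iff some pair of their members is connected.
    q_edges = {(reps[s], reps[t]) for s, t in edges}
    return q_states, q_edges

# Example: two symmetric processes, each either idle (0) or requesting (1).
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
edges = [((0, 0), (0, 1)), ((0, 0), (1, 0)), ((0, 1), (1, 1)), ((1, 0), (1, 1))]
print(quotient(states, edges, 2))   # (0,1) and (1,0) collapse into a single orbit
```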

An interesting open question about this work is whether it can be generalised to infinite games in which we also allow infinite branching.

 

ATVA’15

ATVA 2015 will be held in Shanghai. Abstracts need to be submitted by April 22, 2015.

ATVA’14 – Day 2

Below is a summary of a selection of the talks at the second day of ATVA’14.

Automatic Correctness Checking of Implementations of Concurrent Objects – Ahmed Bouajjani (Keynote)

Ahmed discusses the problem of verifying concurrent data structures. In general, you want to implement such data structures in a library using as much concurrency as possible. However, you want to provide a simple interface to the clients that offers the ordinary, atomic operations (the client view). Naive solutions with coarse-grained locking are unacceptable; therefore, in practice, execution intervals of different library routines may overlap, yet implementations must still provide this client view.

Typically, verification of libraries providing concurrent data structures is done using observation refinement. Filipovic et al. showed in 2010 that observation refinement is implied by linearizability (due to Herlihy and Wing, 1990); Ahmed notes that this result was accepted as folklore for over a decade. The problem with using this approach is that, even when given a finite number of threads and a regular specification, checking linearizability is in EXPSPACE (Alur et al. 1996); in 2014 Jad Hamza showed that it is also EXPSPACE-hard. Even the simpler case of checking linearizability of a single execution is NP-complete (Gibbons and Korach, 1997), and if you have an unbounded number of threads, even with finite-state threads, linearizability is undecidable (the proof is by reduction from reachability in 2-counter machines).
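
To get a feel for the objects involved, here is a toy brute-force check for the single-execution problem mentioned above (exponential, as one would expect from NP-completeness); it is purely illustrative and not related to the tools discussed in the talk.

```python
from itertools import permutations

# A history is linearizable iff some total order of its operations
# (i) respects real-time precedence and (ii) is legal for the sequential
# specification. Operations are (invocation_time, response_time, method, arg, result).

def linearizable(history, sequential_spec):
    def precedes(op1, op2):
        return op1[1] < op2[0]            # op1 responded before op2 was invoked

    for order in permutations(history):
        # (i) the real-time order must be preserved
        if any(precedes(order[j], order[i])
               for i in range(len(order)) for j in range(i + 1, len(order))):
            continue
        # (ii) executing the operations atomically in this order must
        # reproduce the observed results
        if sequential_spec(order):
            return True
    return False

def register_spec(order):
    """Sequential specification of an atomic read/write register (initially 0)."""
    value = 0
    for _, _, method, arg, result in order:
        if method == "write":
            value = arg
        elif method == "read" and result != value:
            return False
    return True

# Two overlapping operations: write(1) during [0,3], read()->1 during [1,2].
history = [(0, 3, "write", 1, None), (1, 2, "read", None, 1)]
print(linearizable(history, register_spec))   # True: the write can take effect first
```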

Given the high complexity of the observation refinement problem, Ahmed proposes a different approach. He defines a notion of history for libraries; \(H(L)\) denotes the set of histories of a library \(L\). History inclusion is equivalent to observation refinement. Furthermore, \(L_1\) is linearisable w.r.t. \(L_2\) iff \(H(L_1)\) is included in \(H(L_2)\), provided \(L_2\) is atomic. Note that these equivalences immediately imply that checking history inclusion is computationally hard. Therefore the idea is to leave verification behind and focus on efficient detection of observation refinement violations, i.e. on finding bugs. This can be done using an under-approximation schema.

Histories can be weakened, in the sense that \(h_1\) is weaker than \(h_2\) (\(h_1 \leq h_2\)) iff \(h_1\) has fewer constraints than \(h_2\). If \(h_1 \leq h_2\) and \(h_2\) is in \(H(L)\) for a library \(L\), then \(h_1\) is in \(H(L)\) too (on a high level; we ignore some details here). Now one can define approximations \(A_k(h)\) such that \(A_k(h) \leq h\) for all \(k\), and for some \(k\) it is the case that \(h \leq A_k(h)\) (i.e. there is an approximation that returns the original history). Now, suppose that we have a history \(h\) in \(H(L_1)\) (so \(A_k(h)\) is in \(H(L_1)\) too), and \(A_k(h)\) is not in \(H(L_2)\). Then we know that \(h\) is not in \(H(L_2)\), so we have found a counterexample with respect to history inclusion using the abstraction, and thus we have discovered a bug. The abstraction (bounding) concept is based on the notion of interval length, and ultimately checking is done using counting. For details I refer you to Ahmed’s slides.

The Context-Freeness Problem is coNP Complete for Flat Counter Systems – Vincent Penelle

The model: a state machine with positive counters; edges have guards and updates over the counters. The context-freeness proof is done in a number of steps. First, semi-linear sets (finite unions of linear sets) are considered. A semi-linear set is stratifiable iff it is the union of linear sets with stratified sets of periods. A theorem by Ginsburg (1966) relates context-freeness of languages to stratifiable semi-linear sets. For a subclass of semi-linear sets, integral polyhedra, the stratifiability problem is coNP-complete.
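
For reference, here are the standard definitions in my own notation (not taken from the slides): a linear set with base \(c \in \mathbb{N}^n\) and finite period set \(P = \{p_1, \ldots, p_m\} \subseteq \mathbb{N}^n\) is \(L(c, P) = \{\, c + \lambda_1 p_1 + \ldots + \lambda_m p_m \mid \lambda_1, \ldots, \lambda_m \in \mathbb{N} \,\}\), and a semi-linear set is a finite union of linear sets. Roughly, a period set is stratified if every period has at most two non-zero coordinates and no two periods “cross” each other’s coordinates; a semi-linear set is stratifiable if it can be written as a finite union of linear sets whose period sets are all stratified.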

In flat counter systems, constraints can be expressed as integral polyhedra, so the question whether the language of such a system is context-free is equivalent to the question whether the corresponding integral polyhedron is stratifiable.

Flat counter systems are interesting as an object of study for complexity reasons. There is a parallel with VAS systems, where flat VAS are used as a skeleton to solve the complexity for the general case.

Deciding Entailments in Inductive Separation Logic with Tree Automata – Tomas Vojnar

Separation logic is used for reasoning about heaps. Unbounded heaps are described using systems of inductive definitions. A nice example of an inductive definition is that of the doubly-linked list, using the \(DLL(h,p,t,n) = …\) predicate. The inductive definitions allowed by the approach are subject to some restrictions, yet it is shown how these restrictions can be lifted (sometimes at the price of incompleteness, however).

The proposed approach is to reduce the entailment \(\varphi \models \psi\) to checking \(L(A_{\varphi}) \subseteq L(A_{\psi})\), where \(A_{\varphi}\) and \(A_{\psi}\) are tree automata recognising the unfolding trees of the inductive definitions \(\varphi, \psi\). The tree automata are deterministic, and the construction is based on tiles, such that every predicate \(P\) is represented by a single tile, and a single tile maps to a single TA state \(q_P\). Soundness follows directly from the construction. For completeness, canonisation and rotation closure are used. The resulting decision procedure runs in exponential time, and the entailment problem for this fragment is EXPTIME-complete.

 

A Mechanised Proof of Loop Freedom of the (untimed) AODV routing protocol – Peter Hoefner and Rob van Glabbeek

In this talk, Peter and Rob consider the correctness of a routing protocol for wireless mesh networks (WMNs). In practice WMNs are unreliable and have poor performance; one of the causes for this could be the underlying routing protocols. As an example, they study the Ad hoc On-Demand Distance Vector (AODV) protocol. On a high level, the protocol works as follows: if a route is needed, a route request is broadcast; if a node receives such a request and has no information about the destination, it forwards the request; if a node does have information about the destination, a route reply is sent back via unicast; if unicast fails or a link break is detected, a route error is sent using a group-cast message. The description of the protocol relies on a routing table.
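
For intuition, here is a much-simplified sketch of the routing-table update rule that loop freedom hinges on; this is my own condensation of AODV's sequence-number discipline, not the AWN specification from the talk.

```python
# Toy sketch (my simplification, not the AWN model): a route is only replaced
# by one with a fresher (higher) destination sequence number, or an equally
# fresh route with a strictly smaller hop count.

class RoutingTable:
    def __init__(self):
        self.entries = {}   # destination -> (sequence_number, hop_count, next_hop)

    def update(self, destination, seq_num, hop_count, next_hop):
        current = self.entries.get(destination)
        if (current is None
                or seq_num > current[0]
                or (seq_num == current[0] and hop_count < current[1])):
            self.entries[destination] = (seq_num, hop_count, next_hop)
            return True     # route (re)established or improved
        return False        # stale or worse information: keep the existing route
```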

Properties that are relevant for the correctness of such a protocol are: route correctness, loop freedom, route discovery, packet delivery. In the verification they describe here, they consider loop freedom: you do not keep sending around packets indefinitely. The verification is performed using the following steps:

  • Formal specification made in the process algebra AWN (process algebra suited for specifying routing protocols). AWN extends process algebras with data structures, network-specific primitives such as (local) broadcast and (conditional) unicast, and it has a layered structure, allowing processes, nodes, networks of nodes and encapsulation. The specification consists of 6 processes, totalling about 150 lines of specification (the original standard is 40 pages of English prose).
  • Pen-and-paper proof of loop freedom: 20 pages, using 40 invariants (state/transition invariants talking about one or more nodes).
  • Mechanised proof: formalised in Isabelle/HOL. One part is the mechanisation of the process algebra AWN (ITP’14). The mechanisation of the loop-freedom proof consists of 360 lemmas, of which 40 are invariants, with the usual mechanisation overhead.
    • Node properties are often straightforward to prove.
    • Network properties are harder because they describe relations between multiple nodes; stating these properties in a machine-readable way is already hard.

The total effort of doing the mechanised verification is estimated to be 1 person-year, of which about 8 months were devoted to the formalisation of AWN, and about 4 months were for the verification of the routing protocol.

One of the most interesting parts of the talk is the reflection on the mechanisation process (or: did they waste their time?). Here they observe that, when verifying variants of the protocol (which appear naturally due to different interpretations of the natural-language standard), a lot of time can be saved. They illustrate this observation by reporting on the variants they verified: in 5 variations in which the formulated lemmas and invariants were preserved, the formalisation and proofs could be fixed very easily (in a matter of hours). You are not always lucky, though: if the lemmas no longer hold in their original form, you need more expertise to fix the proofs, and the fixing is not as straightforward as in the other cases.

 

Using Flow Specifications of Parameterised Cache Coherence Protocols for Verifying Deadlock Freedom – Sharad Malik

Cache coherence protocols give a shared-memory view to multiple processors that have individual caches and a main memory. A well-known example of a cache coherence protocol guaranteeing exclusive access is the German protocol. In this setting, concurrent requests may lead to contention, and eventually to deadlock (here system-wide deadlock, or s-deadlock, is considered).

Given a protocol P(N), with N an unbounded number of caches, the problem addressed is to verify freedom of s-deadlock. Prior work in this area mainly applies to mutual exclusion (no two caches have exclusive access at the same time); in that case, counterexamples typically involve just two caches because mutual exclusion is a two-index property. For deadlock freedom, all caches are involved, hence counterexamples might involve all caches. As a consequence, approaches that work in the mutual exclusion setting do not generally carry over to checking absence of s-deadlock.

Sharad describes a nice approach to verifying s-deadlock freedom, based on the following idea: establish a set of invariants that are stronger than freedom of s-deadlock, but that do allow verification using existing approaches. Once such invariants are established, freedom of s-deadlock immediately follows. In order to find such invariants, he uses flows (high-level descriptions of the protocol). The invariants are chosen such that some flow is enabled at all times, which immediately implies absence of s-deadlock.

To determine such invariants, a looping process is started, where initially a number of straightforward invariants is given (that do not necessarily hold). These invariants are verified using model checking on a small number of agents. If the invariants hold for this small number of agents, then they proceed to verifying that they also hold for arbitrary N. If the invariants do not hold, the user is asked to refine the invariants, and the process repeats.
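
Schematically, the loop looks roughly like this; it is my own rendering, and the model checking, the parameterised proof, and the user interaction are abstracted into assumed callbacks.

```python
# Sketch of the interactive invariant-strengthening loop described in the talk.
# All three callbacks are assumptions standing in for tools and user interaction.

def establish_invariants(initial_invariants, check_small_instance,
                         check_parameterised, refine_with_user):
    invariants = list(initial_invariants)
    while True:
        counterexample = check_small_instance(invariants)   # model check a few agents
        if counterexample is not None:
            invariants = refine_with_user(invariants, counterexample)
            continue                                         # retry with refined invariants
        if check_parameterised(invariants):                  # lift the result to arbitrary N
            return invariants                                # freedom of s-deadlock follows
        invariants = refine_with_user(invariants, None)      # generalisation failed: refine further
```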

One of the open questions, and in my opinion a very important one, is whether invariants can automatically be refined when they fail during the model checking phase.

The Complexity Landscape of Partial-Observation Stochastic Games – Krishnendu Chatterjee, ATVA’14

Krishnendu heads off in a direction different from Monday’s tutorial talk altogether. Whereas on Monday he considered perfect-information games, today he is considering games with imperfect information. Instead of a mean-payoff objective, he considers game graphs with a parity objective.

The motivation to consider games with imperfect information is that, in practice, the assumption that a controller knows everything is unrealistic since, e.g., systems have internal variables, and sensors are unreliable. In today’s talk, transitions are stochastic.

In the talk, a lot of different complexity results are discussed. Essentially, all combinations are considered where: player 1 has partial/perfect information and player 2 has partial/perfect information, and then for all these instances, three types of winning criteria are considered: sure winning, almost sure winning, and quantitative analysis.

His slides contain a number of nice comprehensive overview slides regarding the complexity results, so I would encourage you to check them out.

To conclude, I want to give you the following quote that came up during the talk: “In computer science, what God can do we can also do by non-determinism”.

Multidimensional Quantitative Games and MDPs – Krishnendu Chatterjee, ATVA’14

This week I’m attending ATVA’14. On Monday we started with a series of tutorial talks, the first one by Krishnendu Chatterjee. Once available I’ll update the post with a link to the slides.

Krishnendu started off by introducing two-player games in general, using three motivating examples:

  1. Concurrency synthesis, e.g. synthesising locks in concurrent programs, where the objective is to have maximal performance without introducing data races.
  2. Incompatible synthesis, i.e. synthesising a program satisfying a set of Boolean objectives that cannot all be satisfied at the same time. The objective is then, e.g., to satisfy as many of them as possible.
  3. Quantitative simulation, which is useful in the case that programs cannot simulate each other exactly, now the problem is to match steps as closely as possible.

In the first part of the talk Krishnendu very clearly introduced quantitative games, in particular mean-payoff games (objectives about long-run averages), as well as lim-inf and lim-sup objectives. Mean pay-off games are two-player games in which the edges carry some (possibly negative) integer weight. The game is played by two players, player 1 and player 2, and the objective for player 1 is to make sure that the long run average of these weights exceeds a certain threshold. He explained the consequences of the choice for memoryless and finite- and infinite-memory strategies in this setting: mean-payoff games are memoryless determined, so having finite or infinite memory does not change the expressive power. Deciding whether player 1 has a winning strategy is in NP and co-NP.
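
As a quick reference (my own notation, not necessarily that of the slides): for an infinite play \(\pi = e_0 e_1 e_2 \ldots\) and weight function \(w\), the mean-payoff objective with threshold \(\nu\) asks that \(\liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} w(e_i) \geq \nu\) (or the \(\limsup\) variant thereof), whereas the lim-inf and lim-sup objectives constrain \(\liminf_{n \to \infty} w(e_n)\) and \(\limsup_{n \to \infty} w(e_n)\) directly.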

The second part of the tutorial focused on generalising the ideas in mean-payoff games to the multidimensional setting. This means that instead of satisfying a single mean-payoff objective, multiple objectives have to be satisfied. There are multiple reward functions on the edges, and player 1 wins if all mean-payoff objectives are satisfied. He studies the impact of this w.r.t. different problems. It turns out that:

  • infinite memory strategies are more powerful than finite memory strategies, which in turn are more powerful than memoryless strategies.
  • determining the winner in the setting with finite-memory strategies is coNP-complete.

In the process, he shows an algorithm that uses a linear programming approach to compute flows in strongly connected components of the game graph (to encode the mean-payoff objective), and proves some properties using the principle of edge induction. Both techniques are somewhat of a recurring theme that you should check out! (I will add some references later.)

In the third part of his talk, he generalises the above ideas to Markov decision processes; here the algorithmic approach does not use SCCs, but instead works on maximal end components.

Open Problems

During his talk, Krishnendu introduces several open problems, one of which is to determine whether deciding the winner in multi-dimensional mean-payoff games with finite memory is polynomial-time computable.