# ATVA’14 – Day 3

Below are posts about a selection of the talks of the third and last day of ATVA’14.

# Parameterised Synthesis – Roderick Bloem

“A cookie cutter approach to synthesis”

The classical synthesis approach: given a formula, synthesise a system (“LTL2Verilog”). In the classical verification approach you do the thinking twice: you write a specification and an implementation, and then verify whether the implementation satisfies the specification. You do not want to do the same thing twice, hence synthesis (see photo). (Note: there may be some criticism about this…)

Roderick Bloem during the keynote at ATVA’14

Roderick explains a case study with this classical approach: synthesising the AMBA arbiter. Some statistics: the specification consists of 12 guarantees and 3 assumptions. Problem: the size of the synthesised system appears to grow exponentially in the number of masters involved in the protocol, whereas the size of the handwritten implementation is more or less constant (since it is parameterised). The culprit is that you are doing monolithic synthesis for each fixed number of masters. What you want instead is a cookie-cutter approach: do parameterised synthesis, since real specs are parameterised anyway.

This parameterised approach consists of two steps: (1) synthesis of systems of arbitrary size is reduced to synthesis of a fixed size k (adapting cutoff results), and (2) a distributed uniform system of size k is synthesised. The latter problem is undecidable, so you want to use bounded synthesis. We look at both steps in a bit more detail:

1. Reduction to fixed size. Example using token rings with Emerson and Namjoshi’s cutoff theorem (from the setting of verification): a ring of size c (the cutoff) is correct iff rings of every size are correct. If such a theorem holds for verification, then it also holds for synthesis, so we can reuse this sort of result.
2. Uniform synthesis. Roderick first looks at parameterised synthesis for token rings with LTL\X specifications, which is also undecidable. To tackle this, one can try bounded synthesis (Finkbeiner & Schewe 2007) instead. For bounded synthesis there is a semi-decision procedure that repeatedly tries to find a small implementation satisfying the formula; if an implementation exists, the procedure terminates with a smallest one. The approach uses nice automata-theoretic results such as one-letter co-Büchi universality. The actual synthesis is performed using an SMT solver. This bounded synthesis approach is then generalised to uniform synthesis, again giving a semi-decision procedure.
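The bound-increasing loop behind bounded synthesis can be sketched as follows. This is a minimal Python sketch of my own; `satisfies_spec` is a hypothetical stand-in for the expensive SMT query over size-k implementations, not the actual encoding.

```python
def bounded_synthesis(satisfies_spec, max_bound=10):
    """Semi-decision sketch: try implementation sizes k = 1, 2, ...
    and return the first (hence a smallest) k for which the SMT-style
    query finds a satisfying implementation. The real procedure has no
    max_bound and simply diverges when no implementation exists."""
    for k in range(1, max_bound + 1):
        model = satisfies_spec(k)  # stand-in for the SMT encoding of size-k systems
        if model is not None:
            return k, model
    return None

# Toy oracle: pretend implementations exist from size 3 onward.
result = bounded_synthesis(lambda k: f"impl_of_size_{k}" if k >= 3 else None)
```

The loop structure (not the oracle) is the point: minimality of the returned implementation comes for free from trying bounds in increasing order.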

All of the above is explained for the token ring problem. Roderick goes on to show the whole approach in a more general setting. The ideas are similar, but the cutoffs depend not only on the property but also on the size of the system, so the whole approach is a bit more complex. All this is implemented in the tool PARTY (which only works for very simple examples so far).

Some additional observations on synthesis that are interesting to mention:

• Safety alone is not enough for synthesis
• you have to do liveness as well
• and for that you want to take fairness into account

To conclude this post: of course, the cookie cutter in the talk is in the uniformity of the process: you make a template (the cookie cutter) to build a ring in which all processes are the same!

# Fast Debugging of PRISM Models – Christian Dehnert

For the talk, Christian focuses on debugging probabilistic reachability properties of probabilistic automata, although the approach is a bit more general than that. The work is done in the context of PRISM, hence models are described in its modelling language; the state space is essentially the Cartesian product of the domains of the variables in the model.

Giving high-level counterexamples in terms of commands is NP-hard; an existing approach by Wimmer that computes a minimal set of commands takes a long time on realistic examples. In this talk, a new approach is proposed: enumerate all command sets while cutting away sub-optimal sets (because otherwise the number of sets is too large). Constraints are created that eliminate these unwanted sets, and all of this is encoded as a MaxSAT problem.
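The pruning idea can be illustrated with a brute-force sketch (my own toy Python stand-in for the MaxSAT encoding; `admits_counterexample` is a hypothetical oracle):

```python
from itertools import combinations

def minimal_command_sets(commands, admits_counterexample):
    """Enumerate command sets by increasing size, keeping only minimal
    ones: any superset of an already-found set is sub-optimal and is
    skipped. The real approach expresses this pruning as constraints
    inside a MaxSAT problem instead of brute-force enumeration."""
    minimal = []
    for size in range(1, len(commands) + 1):
        for cand in combinations(commands, size):
            s = set(cand)
            if any(m <= s for m in minimal):
                continue  # sub-optimal: contains a smaller solution
            if admits_counterexample(s):
                minimal.append(s)
    return minimal

# Toy oracle: a counterexample exists once commands 'a' and 'b' are both enabled.
sets = minimal_command_sets(['a', 'b', 'c'], lambda s: {'a', 'b'} <= s)
```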

# Verification of Markov Decision Processes using Learning Algorithms – Jan Kretinsky

How can we benefit from results from the machine learning community in probabilistic verification? This is the central question of Jan’s talk. He illustrates the approach using LTL model checking of MDPs. The problem in practice is that computing reachability of maximal end components is too slow. Jan proposes heuristics from machine learning to compute this reachability.

The heuristics are applied to the fixed point computation in value iteration. What are the heuristics? Observe that stabilisation of fixed point computation depends on the order of evaluation. The idea is to more frequently evaluate states that are visited more frequently by good schedulers. They use reinforcement learning for this. The same idea is also applied to the case where you don’t know the exact transitions in a statistical model checking setting.
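A minimal sketch of the underlying fixed-point computation, assuming a Gauss-Seidel-style value iteration in which the heuristic only determines the evaluation order; the actual reinforcement-learning machinery from the talk is not shown:

```python
def value_iteration(states, actions, target, order, eps=1e-8):
    """Gauss-Seidel value iteration for maximal reachability in an MDP.
    `actions[s]` is a list of distributions, each a dict {successor: prob}.
    `order` is the heuristic evaluation order: states believed to be
    visited often under good schedulers should come first (my stand-in
    for the learning-based prioritisation)."""
    v = {s: (1.0 if s in target else 0.0) for s in states}
    changed = True
    while changed:
        changed = False
        for s in order:
            if s in target:
                continue
            new = max((sum(p * v[t] for t, p in dist.items())
                       for dist in actions[s]), default=0.0)
            if abs(new - v[s]) > eps:
                v[s], changed = new, True
    return v

# Tiny MDP: from s0, the only action reaches target s1 with prob 0.5
# and stays in s0 with prob 0.5, so max reachability of s1 is 1.
v = value_iteration(
    states=['s0', 's1'],
    actions={'s0': [{'s1': 0.5, 's0': 0.5}], 's1': []},
    target={'s1'}, order=['s0'])
```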

# Rabinizer 3 – Jan Kretinsky

The translation from LTL to non-deterministic Büchi automata and the translation from non-deterministic Büchi automata to deterministic Rabin automata are both exponential, and both are traditionally needed in the verification of LTL formulae. Determinisation is complicated, results in a “messy” state space, and is hard to implement. Therefore, there are results that try to get rid of this determinisation step. Existing translations (e.g. in Rabinizer and Rabinizer 2) go directly from LTL to DRA without this determinisation step, but only for fragments of LTL. In this talk, Jan presents a method that translates the whole of LTL to deterministic (transition-based) generalised Rabin automata (DTGRA). These are as easy to use for probabilistic model checking as DRA, yet do not blow up as badly; using them for synthesis is more complicated.

# A Game-Theoretic Approach to Simulation of Data-Parameterised Systems – Sharai Sheinvald

The final session of ATVA is kicked off by Sharai, one of my fellow students of the Marktoberdorf Summerschool in 2011.

Data-parameterised systems are finite-state systems with variables that can range over infinite domains; they thus describe infinite-state systems using a finite representation. For data-parameterised systems, Sharai extends temporal logic with variables to variable temporal logic (VTL), which is simply a first-order extension of LTL. Likewise, she extends CTL* to VCTL*, a first-order extension of CTL*.

Traditionally we know that ACTL* characterises simulation. Sharai’s contribution is a definition of simulation for these abstract systems, together with an algorithm for finding such a simulation and a game-based characterisation. To take care of the data parameterisation, the simulation relation is extended with a function matching the variables between the different systems. Finding the maximal simulation is exponential, but in this setting language equivalence is exponential too. If systems are related by simulation, then they satisfy the same VACTL* formulas, lifting the results from the non-parameterised case. There is also a simulation game between a refuter and a prover such that the systems are related iff the prover has a memoryless winning strategy in this game. Likewise, if there is no simulation relation, then the refuter has a memoryless winning strategy. In both cases some information about the variables needs to be remembered, though. A nice additional result: if there is no simulation relation between two systems, then there is a distinguishing VACTL formula.
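As a base case, the classical (non-parameterised) greatest simulation can be computed by iterated refinement. This Python sketch is my own and omits the variable-matching function that the data-parameterised extension adds:

```python
def largest_simulation(S1, T1, L1, S2, T2, L2):
    """Greatest simulation of system 1 by system 2, computed by
    refining the full label-respecting relation: a pair (s, t)
    survives iff every step of s can be matched by some step of t.
    T1/T2 map states to successor lists, L1/L2 map states to labels."""
    R = {(s, t) for s in S1 for t in S2 if L1[s] == L2[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            ok = all(any((s2, t2) in R for t2 in T2.get(t, []))
                     for s2 in T1.get(s, []))
            if not ok:
                R.discard((s, t))
                changed = True
    return R

# s0 -> s1 is simulated by t0 -> t1 (the extra branch t0 -> t2 is harmless).
sim = largest_simulation(
    ['s0', 's1'], {'s0': ['s1']}, {'s0': 'p', 's1': 'q'},
    ['t0', 't1', 't2'], {'t0': ['t1', 't2']},
    {'t0': 'p', 't1': 'q', 't2': 'r'})
```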

# Symmetry Reduction in Infinite Games with Finite Branching – Steen Vester

This is the last talk of the conference. We all know very well that parity games, once we generate them explicitly, get very large. Steen tries to attack this by defining symmetry reduction on games. The results work for infinite games with finite branching, due to the application of König’s lemma in the proofs. The symmetry approach is the same as the classical symmetry approach for labelled transition systems or Kripke structures, based on symmetry groups and orbits. The symmetry result is proved for general objectives. As examples, he instantiates the results for parity games and for alternating-time temporal logic. For concurrent games the approach also works, but symmetries must also map actions to actions.
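The orbit construction underlying such symmetry reductions can be sketched as follows (a toy Python version of mine, assuming the symmetry group is given by generator permutations on the states):

```python
def orbits(states, generators):
    """Partition `states` into orbits under the group generated by the
    given permutations (each a dict state -> state). States in one
    orbit are interchangeable, so the reduced game keeps a single
    representative per orbit."""
    orbit_of = {}
    for s in states:
        if s in orbit_of:
            continue
        orbit, frontier = {s}, [s]
        while frontier:  # close the orbit under all generators
            u = frontier.pop()
            for g in generators:
                v = g[u]
                if v not in orbit:
                    orbit.add(v)
                    frontier.append(v)
        for u in orbit:
            orbit_of[u] = frozenset(orbit)
    return set(orbit_of.values())

# Two identical processes: swapping them is a symmetry, so the states
# (p1, p2) and (p2, p1) fall into one orbit; (p1, p1) is its own orbit.
swap = {('p1', 'p2'): ('p2', 'p1'), ('p2', 'p1'): ('p1', 'p2'),
        ('p1', 'p1'): ('p1', 'p1')}
reduced = orbits(list(swap), [swap])
```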

An interesting open question about this work is whether it can be generalised to infinite games with infinite branching.

# ATVA’15

ATVA 2015 will be held in Shanghai. Abstracts need to be submitted by April 22, 2015.

# Multidimensional Quantitative Games and MDPs – Krishnendu Chatterjee, ATVA’14

This week I’m attending ATVA’14. On Monday we started with a series of tutorial talks, the first one by Krishnendu Chatterjee. Once available I’ll update the post with a link to the slides.

Krishnendu started off by introducing two-player games in general, using three motivating examples:

1. Concurrency synthesis, e.g. synthesising locks in concurrent programs, where the objective is to have maximal performance without introducing data races.
2. Incompatible synthesis, i.e. synthesising a program satisfying a set of Boolean objectives that cannot all be satisfied at the same time. The objective is then, e.g., to satisfy as many of them as possible.
3. Quantitative simulation, which is useful when programs cannot simulate each other exactly; the problem then is to match steps as closely as possible.

In the first part of the talk Krishnendu very clearly introduced quantitative games, in particular mean-payoff games (objectives about long-run averages), as well as lim-inf and lim-sup objectives. Mean-payoff games are two-player games in which the edges carry some (possibly negative) integer weight. The game is played by two players, player 1 and player 2, and the objective for player 1 is to make sure that the long-run average of these weights exceeds a certain threshold. He explained the consequences of choosing memoryless, finite-memory, or infinite-memory strategies in this setting: mean-payoff games are memoryless determined, so having finite or infinite memory does not change the expressive power. Deciding whether player 1 has a winning strategy is in both NP and co-NP.
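As a small illustration of my own, not from the talk: under a memoryless strategy a play is ultimately periodic, so its mean payoff (the limit of its prefix averages) is simply the average weight of the cycle it ends up in.

```python
def prefix_averages(weights, n):
    """First n prefix averages of the infinite play that cycles through
    `weights`; the limit of this sequence is the play's mean payoff."""
    total, out = 0, []
    for i in range(n):
        total += weights[i % len(weights)]
        out.append(total / (i + 1))
    return out

# A play looping through edge weights -1, 3 has mean payoff 1, so it
# wins the objective "long-run average exceeds 0" despite the negative edge.
avgs = prefix_averages([-1, 3], 1000)
```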

The second part of the tutorial focused on generalising the ideas in mean-payoff games to the multidimensional setting. This means that instead of satisfying a single mean-payoff objective, multiple objectives have to be satisfied. There are multiple reward functions on the edges, and player 1 wins if all mean-payoff objectives are satisfied. He studies the impact of this w.r.t. different problems. It turns out that:

• infinite memory strategies are more powerful than finite memory strategies, which in turn are more powerful than memoryless strategies.
• determining the winner in the setting with finite-memory strategies is co-NP-complete.
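A tiny illustration of the multidimensional setting (my own example, not from the talk): alternating between two edges can satisfy a vector of thresholds that neither edge meets on its own, which is one way to see why strategies may need memory here.

```python
def multi_mean_payoff(cycle):
    """Componentwise long-run average of a play looping through a cycle
    of weight vectors; player 1 wins a multidimensional mean-payoff
    objective iff every component meets its threshold."""
    k, n = len(cycle[0]), len(cycle)
    return tuple(sum(w[i] for w in cycle) / n for i in range(k))

# Alternating between edges weighted (2, 0) and (0, 2) achieves the
# average (1, 1), although each edge alone fails one of the two
# dimensions -- the alternation itself is a (finite-memory) strategy.
v = multi_mean_payoff([(2, 0), (0, 2)])
```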

In the process, he shows an algorithm that uses a linear programming approach to compute flows in strongly connected components of the game graph (to encode the mean-payoff objective), and proves some properties using the principle of edge induction. Both techniques are somewhat of a recurring theme and worth checking out! (I will add some reference later)

In the third part of his talk, he generalises the above ideas to Markov decision processes; here the algorithmic approach does not use SCCs, but instead works on maximal end components.

# Open Problems

During his talk, Krishnendu introduced several open problems. One of these is whether the winner of a multidimensional mean-payoff game with finite-memory strategies can be decided in polynomial time.