Verification: only one assume is meaningful

Having multiple assume-statements instead of one big conjoined statement seems to break the verification. If you have, e.g.,

void example(int x, int y) {
    _Assume(x > 0);                // #1
    _Assume(y < 0);                // #2
    _Assert(x > 0 && y < 0);       // #3
}

you can use the block rule (Definition 7.4.10) to see that the post-condition of #2, obtained from #3, is the expression inside the _Assert (#3).

The rule for assume that now applies specifies #2's pre-condition to be just true and creates the verification condition y < 0 => (x > 0 && y < 0), which is not generally valid.
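To make this concrete, here is a small sketch of these weak rules applied to the example above (Python chosen arbitrarily, since the thread does not fix an implementation language; the helper names are hypothetical and formulas are plain strings rather than a real AST):

```python
# Weak variant (hypothetical helpers): pc [assume e] Q = true,
# vc [assume e] Q = { e => Q }.  Formulas are plain strings here.
def pc_assume_weak(e, q):
    return "true"

def vc_assume_weak(e, q):
    return [f"({e}) => ({q})"]

q = "x > 0 && y < 0"                 # post-condition from the _Assert (#3)

# The block rule works back to front: assume #2 sees Q first.
vcs = vc_assume_weak("y < 0", q)     # the problematic obligation
q1 = pc_assume_weak("y < 0", q)      # post-condition passed on to #1

vcs += vc_assume_weak("x > 0", q1)   # x > 0 => true: trivially valid

print(vcs[0])  # (y < 0) => (x > 0 && y < 0) -- not generally valid
```

The fact x > 0 from assume #1 is gone by the time the obligation for #2 is generated, which is exactly the problem described above.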

From just looking at the code, one might expect that both assume-statements would be taken into account in the subsequent verification.

Since this doesn’t seem to be the behaviour of our pre-condition definition for assume: is there an alternative way to define the pre-condition function for assume so that multiple assumes directly following each other are meaningful?

I think you’re right.

It seems to me that you could get the behaviour you want by swapping pc and vc in this case, which would give you

x > 0 \implies (y < 0 \implies (x > 0 \land y < 0))

which is what you want.

Interestingly, my implementation actually does this, as that is how it was written in last year’s lecture notes…


Just to be sure: the way it is currently written in the lecture notes is the way we should implement it in the project?

Kinda. The rules @hack presented in the actual lecture / hand-written notes are correct, at least almost always. The prog2.de script rules are missing a \mathit{vc} term (this might be fixed by now).

For Assume, the rules implemented in the reference right now are \mathit{pc}\ \texttt{assume e;}\ Q = e \Rightarrow Q and \mathit{vc}\ \texttt{assume e;}\ Q = \{\}. You can implement either variant; the tests only ever have one _Assume() at the beginning, if at all. You are correct that the original rules break having multiple _Assume statements, which is unfortunate, too. This is because _Assume is defined as a loop with the loop invariant \top. If we instead do not give it an executable semantics (which happens anyway as soon as we move to total correctness), but axiomatically use the rules above, things work as you would expect.

(\top might be called \mathit{true} in the lecture notes)
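As a sketch of the difference (again Python, with hypothetical helper names): with these axiomatic rules, chaining the pre-condition function over both assumes keeps all the information:

```python
# Strong variant: pc [assume e] Q = e => Q, vc [assume e] Q = {}.
def pc_assume(e, q):
    return f"({e}) => ({q})"

def vc_assume(e, q):
    return []

q = "x > 0 && y < 0"            # post-condition from the _Assert
q1 = pc_assume("y < 0", q)      # pre-condition of assume #2
pre = pc_assume("x > 0", q1)    # pre-condition of assume #1

# No extra verification conditions are generated, and the final
# pre-condition nests both assumptions:
print(pre)  # (x > 0) => ((y < 0) => (x > 0 && y < 0)) -- valid
```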


For comparison here is the alternative from the lecture notes:

pc~[assume~e]~Q=true \\ vc~[assume~e]~Q=\{e\implies Q\}

This is sound and, if provable, shows the correctness of the program.
But as you said, it is a bit weak, as it forgets the first part of the program.
Therefore, you would need to restate all information at the point of the assume (but not more).

As Jonathan and Johannes said, you can also use

pc~[assume~e]~Q=e\implies Q \\ vc~[assume~e]~Q=\{\}

The idea comes from the Hoare rule

\dfrac{}{\{P\}\ \texttt{assume}(e)\ \{P\land e\}}

Basically, with assume the programmer tells the compiler that something is always true at that point (unconditionally).
For instance, that a calling convention is met.
The semantic justification for this rule can be obtained by defining the alias

assume(e) := while(~e) _Inv(~e \/ Q) {}

which strengthens the invariant true to ~e \/ Q so as to preserve more information:

  • Either the infinite loop is performed (\lnot e holds) or Q needs to hold and we skip the loop.

One can equivalently write e\implies Q for \lnot e \lor Q.
If we now look at pc and vc from the while loop, we get:

pc = I = (e => Q)
vc =
    I /\ ~e => pc s I   // holds, since the body s is empty
    vc s I              // empty, since the body s is empty
    I /\ ~~e => Q
(I /\ ~~e => Q) = ((~e \/ Q) /\ ~~e => Q) = true
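The final step can also be checked mechanically; here is a quick truth-table sketch in Python (assuming classical Boolean semantics for the connectives):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Exit condition of the loop: I /\ ~~e => Q, with invariant I = ~e \/ Q.
def exit_vc(e, q):
    inv = (not e) or q
    return implies(inv and e, q)

# Valid for all four assignments of e and Q, i.e. a tautology.
assert all(exit_vc(e, q) for e, q in product([False, True], repeat=2))
print("I /\\ ~~e => Q is a tautology")
```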

I would implement the stronger variant: it can prove more programs correct and is easier to implement.

Note:
If we only have one assume at the beginning of the function, both versions behave the same.
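This, too, is easy to check semantically: with a single assume e at the start, the weak variant's obligations {true, e => Q} and the strong variant's single obligation e => Q are equivalent. A small truth-table sketch (Python):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

for e, q in product([False, True], repeat=2):
    weak = True and implies(e, q)    # pc = true, vc = {e => Q}
    strong = implies(e, q)           # pc = e => Q, vc = {}
    assert weak == strong            # same proof obligations overall
print("both variants agree for a single leading assume")
```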


We updated the lecture notes to contain the stronger variant of the rules for assume.
