Custom tests fail despite matching outputs

Hello,

I was just trying to implement a few tests and I noticed something. Basically, I copied and pasted the check_victory test twice and the merge test once.

I did not change the dimensions of the board in either case, but I did change the numbers on the board by selecting them (and only them) and typing the new values in. I adjusted the .ref files accordingly.

I then get this output when I run `./run_tests.py -d student pub`:

Running test_check_win_2...
FAILED
============================================================
Expected output:
check_victory returned: 1

============================================================
Actual output was:
check_victory returned: 1
============================================================

Running test_check_win_3...
FAILED
============================================================
Expected output:
check_victory returned: 0

============================================================
Actual output was:
check_victory returned: 0
============================================================

Running test_merge_2...
FAILED
============================================================
Expected output:
Board Size: 16
Board Content: 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0

============================================================
Actual output was:
Board Size: 16
Board Content: 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
============================================================

So apparently the expected and actual outputs match perfectly, yet the test is counted as failing.

If it matters, I pass the related public and daily tests, and check_victory.asm is basic enough for me to be certain that it doesn't cause the problem. I used no store instruction in check_victory.asm at all, so I would be amazed if I had managed to change the board somehow.

I pushed just now, but I am not sure whether I should push the contents of the tests/student folder as well. If it helps solve the problem, I can push that too.

Thanks for any help!

EDIT: I just applied the patch because I thought it might help. It had no effect on my issue.


These outputs are not the same. Your .ref files contain a trailing newline, which you should remove in order for the tests to work.
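To see why the two outputs above are not equal even though they look identical when printed, you can compare them with `repr()`, which makes trailing whitespace visible. This is a minimal sketch; the strings are taken from the test_check_win_2 output above, assuming the .ref file carries one extra newline:

```python
# The expected output (.ref file) ends in an extra newline;
# the actual program output does not.
expected = "check_victory returned: 1\n\n"
actual = "check_victory returned: 1\n"

# repr() shows the invisible difference that print() hides.
print(repr(expected))
print(repr(actual))
print(expected == actual)  # the byte-for-byte comparison fails
```

The same trick catches the trailing space after the last `0` in the test_merge_2 output.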

Thanks for the hint, I figured it out!

The problem seems to be that Kate automatically adds a newline character to a file upon saving if it does not already end in one. I changed this under Settings → Configure Kate → Open/Save by unchecking “Append newline at end of file on save”.

Additionally, Kate removes trailing spaces at the end of lines; this can be turned off in the same Open/Save settings by choosing “Never” for “Remove trailing spaces”.
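For .ref files that Kate has already mangled, a one-off cleanup script can strip the appended final newline again. This is a hypothetical helper, not part of the course tooling; the path is an example, and it assumes the runner compares the files byte for byte:

```python
from pathlib import Path

def strip_final_newline(path: Path) -> None:
    """Remove a single trailing newline that an editor appended on save."""
    data = path.read_bytes()
    if data.endswith(b"\n"):
        path.write_bytes(data[:-1])

# Example usage -- adjust the path to your own tests/student folder:
# strip_final_newline(Path("tests/student/test_check_win_2.ref"))
```

Note that this deliberately does not touch trailing spaces inside lines, since (as in test_merge_2 above) the program output may legitimately contain them and the .ref file has to match.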

Very weird that this is the default behavior; maybe that is something that can be changed for next year's VM?
