Two tests are failing: are there any clues?

I am failing two tests (testIpRelaxed1, testRegNameConstrained3) even though I am sure that I covered all possible cases; I think it may be a very specific edge case. I have tested IPs with all possible hex-digit combinations, both valid and invalid, as well as with invalid characters, and yet I am failing the first IP test.
Is there any hint about what I could be missing?


As always, the names of the daily tests might give a hint about their nature.
None of your tests is obscure in the sense that it is specific to a non-boundary special case.
We might have mutants with wrong edge case behavior.
If you covered all possible cases you should also cover all edge cases.
A bit about edge cases here:

I am sure that we would find many of our mutation tests mirrored in a student implementation throughout the course of the project.
So think about errors that one might make when shallowly reading the grammar: ignoring special cases, making spelling mistakes, reasoning incorrectly, and so on.
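To make the "shallow reading" idea concrete, here is a hypothetical sketch (in Python for brevity; the project language may differ, and all names here are mine, not from the project). RFC 3986 limits an IPv6 hex group to `h16 = 1*4HEXDIG`, and a hasty reader could easily miss the upper bound:

```python
import re

# Per RFC 3986: h16 = 1*4HEXDIG, i.e. one to four hex digits.
H16_CORRECT = re.compile(r"\A[0-9A-Fa-f]{1,4}\Z")

# A "shallow reading" mutant misses the upper bound and accepts any length.
H16_MUTANT = re.compile(r"\A[0-9A-Fa-f]+\Z")

def differs(s: str) -> bool:
    """True when the mutant and the correct parser disagree on input s."""
    return bool(H16_CORRECT.match(s)) != bool(H16_MUTANT.match(s))
```

A test suite that only tries one-to-four-digit groups never distinguishes these two; the boundary input "12345" (five hex digits) is exactly the test that kills this mutant.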


I’m having similar problems in that I don’t know what I missed and am trying to figure out from the “hints” the test names give what it could be, but I’m a bit thrown off by the word “constrained” in the names.
Can anyone give me a hint to understand the hint?

Thanks in advance!


same issue :raised_hand:

What level of “doesn’t understand what they are doing” are we dealing with here? Can we at least assume that the hypothetical person (you know: the student the tutor imagined making the mistake) who wrote the broken implementation read the text surrounding the grammar? Can we assume that they know that ()*[]-" are part of the regex grammar (as long as they are not surrounded by two ")?

“Constrained” means that something is restricted or smaller than usual. Think about the space that might be restricted here (compared to the intended space).
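As a hedged illustration of a "restricted space" (Python sketch, names mine; the grammar is taken from RFC 3986): the full reg-name alphabet is larger than people often assume, so a constrained mutant could simply accept a smaller alphabet than the specification allows.

```python
import re

# RFC 3986:
#   reg-name   = *( unreserved / pct-encoded / sub-delims )
#   unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
#   sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
REG_NAME = re.compile(
    r"\A(?:[A-Za-z0-9\-._~!$&'()*+,;=]|%[0-9A-Fa-f]{2})*\Z"
)

# A hypothetical "constrained" mutant shrinks the accepted space,
# e.g. to plain hostname characters only:
REG_NAME_CONSTRAINED = re.compile(r"\A[A-Za-z0-9.\-]*\Z")

def killed_by(s: str) -> bool:
    """True when input s distinguishes the mutant from the correct parser."""
    return bool(REG_NAME.match(s)) and not bool(REG_NAME_CONSTRAINED.match(s))
```

Inputs like an underscore or a percent-escape live in the gap between the two accepted languages; a suite that only tests typical hostnames never notices the restriction.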

Think about the different parts involved in the parser.
What might be different in each part?
Where are possible mistakes?
You should not think literally about mistakes of understanding, but about how a parser could be wrong.
For instance, as you said it might parse invalid symbols like (.
Although that case is maybe not that likely, as such bugs are quite uncommon and one cannot test every symbol at every position.
But if you think about that case of a possible bug just write another test as long as you don’t go too far and write thousands of tests. In the worst case, you have written five lines that do not add coverage.

Hmm, I’m not sure, if I understand correctly.
Do you mean that, for example in the test “testRegNameConstrained3”, the broken implementation would accept fewer inputs than the correct implementation?
I’m really trying to wrap my head around this and it’s really annoying, because it’s just the reg-name test that is not working correctly.

It is probably not “just” the reg-name test.
The daily tests are a help to show problems in your implementations.
If one test fails, it means there is at least one problem you overlooked and probably more eval tests will fail.
(The opposite does not hold.)
For instance, judging by the name, this probably means that you did not test reg-name extensively enough and need more tests regarding it.
As the project description says: Test against the specification not against the daily tests.
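One systematic way to test against the specification is to derive at least one test per grammar alternative, plus boundary and negative cases. A hedged sketch (Python; `is_reg_name` is a hypothetical stand-in for the project’s parser, shown here as a regex over the RFC 3986 reg-name grammar):

```python
import re

def is_reg_name(s: str) -> bool:
    # Stand-in for the real parser: reg-name per RFC 3986.
    return bool(re.fullmatch(
        r"(?:[A-Za-z0-9\-._~!$&'()*+,;=]|%[0-9A-Fa-f]{2})*", s))

# One case per grammar alternative, plus boundaries and negatives:
cases = {
    "": True,             # zero repetitions: reg-name may be empty
    "example.com": True,  # unreserved characters
    "%2F": True,          # pct-encoded
    "a!$&'()*+,;=": True, # every sub-delim once
    "%2": False,          # truncated percent-escape
    "%GG": False,         # non-hex digits after '%'
    "ex ample": False,    # space is not allowed
    "[::1]": False,       # IP-literal brackets are not part of reg-name
}
```

Walking the grammar like this tends to surface exactly the cases (empty input, truncated escapes, the full sub-delims set) that ad-hoc testing skips.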

Yes, that’s what I’m doing. I’ve looked through the specification multiple times since then and I’m still stuck, because I can’t find another case that I haven’t covered. I know that there is one or more such cases, but I can’t seem to find them.

Another way of writing tests, now that we are in the second part, is coverage-based.
Go through your parser code and think about:

  • Why is this code exactly as it is? What URLs does it allow, and what does it prevent?
  • How would the parser behavior change if I changed this line?

Also, consider all edge, corner, and outlier cases.
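The “what would change if I changed this line?” exercise can be sketched as a manual mutation check (Python; the function and character set are hypothetical examples, not the project’s actual code):

```python
# Suppose your parser contains a character-class check like this one:
def is_sub_delim(c: str) -> bool:
    return c in "!$&'()*+,;="

# Mental mutation: drop one character from that line, e.g. the "=".
def is_sub_delim_mutant(c: str) -> bool:
    return c in "!$&'()*+,;"

# Inputs where the original and the mutant disagree mark the behavior
# your tests must pin down; if no test exercises them, the mutant survives.
surviving_gap = [c for c in "!$&'()*+,;=" if is_sub_delim(c) != is_sub_delim_mutant(c)]
```

Here the gap is exactly `"="`: if none of your tests pushes a `=` through this code path, this line is effectively untested, and that gap tells you which test to write next.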