Match xfail behavior for pytest and unittest with TAP spec #57


Merged: 5 commits into master from xfail-tests, Nov 7, 2020

Conversation

mblayman
Member

@mblayman mblayman commented Nov 7, 2020

This PR is intended to make all of the different xfail behaviors (including pytest xfail and unittest.expectedFailure) work consistently with the TAP specification.

Here's the truth table that I expect.

| Failure type | Test result | TAP output |
| --- | --- | --- |
| `unittest.expectedFailure` | PASS | `ok 1 test_name # TODO unexpected success` |
| `unittest.expectedFailure` | FAIL | `not ok 1 test_name # TODO expected failure` |
| `pytest.mark.xfail(strict=False, reason='the reason')` | PASS | `ok 1 test_name # TODO unexpected success: the reason` |
| `pytest.mark.xfail(strict=False, reason='the reason')` | FAIL | `not ok 1 test_name # TODO expected failure: the reason` |
| `pytest.mark.xfail(strict=True, reason='the reason')` | PASS | `not ok 1 test_name # unexpected success: [XPASS(strict)] the reason` |
| `pytest.mark.xfail(strict=True, reason='the reason')` | FAIL | `not ok 1 test_name # TODO expected failure: the reason` |

Based on that truth table, there are a number of things in the plugin that need to change.

  1. unittest.expectedFailure currently behaves backwards compared to the table. Fixing this will bring the behavior in line with TAPTestResult from tappy.
  2. Strict unexpected success currently reports as TODO, but it should be treated as a non-TODO failure.
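
As a sketch of the intended mapping, the truth table can be expressed as a single function. `tap_line`, `xfail_reason`, and `strict` are hypothetical names for illustration, not identifiers from the plugin:

```python
def tap_line(number, name, passed, xfail_reason=None, strict=False):
    """Map a test outcome to a TAP line following the truth table above.

    xfail_reason is the reason string from the xfail marker, or None for
    a plain unittest.expectedFailure (which carries no reason).
    """
    suffix = f": {xfail_reason}" if xfail_reason else ""
    if strict and passed:
        # Strict XPASS is a real failure: no TODO directive.
        tail = f" {xfail_reason}" if xfail_reason else ""
        return f"not ok {number} {name} # unexpected success: [XPASS(strict)]{tail}"
    if passed:
        return f"ok {number} {name} # TODO unexpected success{suffix}"
    return f"not ok {number} {name} # TODO expected failure{suffix}"

print(tap_line(1, "test_name", True, "the reason", strict=True))
# not ok 1 test_name # unexpected success: [XPASS(strict)] the reason
```

Everything except the strict XPASS case keeps the TODO directive, so a TAP consumer will not fail the suite over it.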

To accept your contribution, please complete the checklist below.

  • Is your name/identity listed alphabetically in the AUTHORS file?
  • Did you write a test to verify your code change?
  • Is CI passing?

Fixes #55 (eventually)

@mblayman
Member Author

mblayman commented Nov 7, 2020

@lazka, if you don't mind, could you take a look at the truth table in this PR description to see if you agree with my expected TAP output for each of the xfail scenarios?

I'll probably work on making the plugin match up with that table. I'd love to get a second pair of eyes if you have the time.

@lazka

lazka commented Nov 7, 2020

Looks good, except for the "XPASS(strict)" case, which I would expect to be "ok" (?)

@mblayman
Member Author

mblayman commented Nov 7, 2020

My rationale for treating XPASS as failing is that, by using strict, the user is opting into behavior to make sure the suite fails when something unexpectedly passes.

In my thinking, there's nothing to do and it would be inconsistent with their stated expectation if the TAP report treated it as an ok.

@mblayman mblayman marked this pull request as ready for review November 7, 2020 19:05
@lazka

lazka commented Nov 7, 2020

> In my thinking, there's nothing to do and it would be inconsistent with their stated expectation if the TAP report treated it as an ok.

Ah, I missed that there is no TODO. Yes, makes sense, thanks!

@mblayman mblayman merged commit 65c4fe5 into master Nov 7, 2020
@mblayman mblayman deleted the xfail-tests branch November 7, 2020 23:09
@lazka

lazka commented Nov 8, 2020

Thanks!

Successfully merging this pull request may close these issues.

Writes "ok" instead of "not ok" for an expected failure