Allow additional output #91

Closed
jenstroeger opened this issue Nov 1, 2024 · 6 comments · Fixed by #94

jenstroeger commented Nov 1, 2024

This is an interesting plugin, thank you! I’ve recently started adding more TAP-based tests to a Python project, and unifying test results for reporting through TAP is a great prospect.

However, I wanted to discuss the option of outputting additional test information, which might conflict with TAP-based reporting; I am not sure at this point.

For example, running tests from this template repo usually prints a bunch more information:

> make test
pre-commit run pytest --hook-stage push --files tests/
Run unit tests...........................................................Passed
- hook id: pytest
- duration: 1.25s

============================= test session starts ==============================
platform darwin -- Python 3.13.0, pytest-8.3.3, pluggy-1.5.0 -- /Volumes/Dev/python-package-template/.venv/bin/python
cachedir: .pytest_cache
hypothesis profile 'default-with-verbose-verbosity' -> max_examples=500, verbosity=Verbosity.verbose, database=DirectoryBasedExampleDatabase(PosixPath('/Volumes/Dev/python-package-template/.hypothesis/examples'))
rootdir: /Volumes/Dev/python-package-template
configfile: pyproject.toml
plugins: cov-5.0.0, hypothesis-6.111.2, env-1.1.3, custom-exit-code-0.3.0, tap-3.4, doctestplus-1.2.1
collected 3 items                                                              

src/package/something.py::package.something.Something.do_something PASSED [ 33%]
tests/test_something.py::test_something PASSED                           [ 66%]
docs/source/index.rst::index.rst PASSED                                  [100%]

---------- coverage: platform darwin, python 3.13.0-final-0 ----------
Name                       Stmts   Miss Branch BrPart  Cover   Missing
----------------------------------------------------------------------
src/package/__init__.py        1      0      0      0   100%
src/package/something.py       4      0      0      0   100%
----------------------------------------------------------------------
TOTAL                          5      0      0      0   100%

Required test coverage of 100.0% reached. Total coverage: 100.00%
============================ Hypothesis Statistics =============================

tests/test_something.py::test_something:

  - during generate phase (0.00 seconds):
    - Typical runtimes: ~ 0-1 ms, of which < 1ms in data generation
    - 2 passing examples, 0 failing examples, 0 invalid examples

  - Stopped because nothing left to do


============================== 3 passed in 0.08s ===============================

Notice the Hypothesis and Coverage statistics here. In contrast, running with the --tap option gives me this:

> make test
pre-commit run pytest --hook-stage push --files tests/
Run unit tests...........................................................Passed
- hook id: pytest
- duration: 0.72s

TAP version 13
1..3
ok 1 src/package/something.py::[doctest] package.something.Something.do_something
ok 2 tests/test_something.py::test_something
ok 3 docs/source/index.rst::[doctest] index.rst

where additional information is missing.

Looking at the TAP v13 spec, it seems that YAML blocks or comments would come in handy here?
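
For illustration, the TAP v13 spec allows an indented YAML block (delimited by --- and ...) attached to a test line, plus free-form diagnostic lines starting with #. The values below are made up; this is only a sketch of what the extra information could look like in the TAP stream:

ok 2 tests/test_something.py::test_something
  ---
  duration_ms: 0.4
  hypothesis:
    passing_examples: 2
    failing_examples: 0
  ...
# ---------- coverage: platform darwin, python 3.13.0-final-0 ----------
# TOTAL                          5      0      0      0   100%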

mblayman commented Nov 1, 2024

I'm not sure where pytest is storing/writing that informational data. At the most basic level, the reporter could be updated to convert that information to diagnostics to show in the TAP stream.

@jenstroeger

@mblayman woohoo! Looking forward to the update 👍🏼 Thank you!

@mblayman

@codambro did most of the work for sure! I'm getting the packaging cleaned up for this package. I'll try to push out an updated release in a day or two.

jenstroeger commented Feb 1, 2025

@mblayman I think I’ll have to dig a little deeper into how some plugins dump “post-test” statistics; they’re still not captured (see comment). I suspect that has to do with the Reporting hooks in pytest… 🤔

codambro commented Feb 1, 2025

If you're looking at the pytest-cov plugin, it uses the pytest_terminal_summary hook, which is called after all tests have completed. The TAP plugin captures and logs tests while they are running, via the pytest_runtest_logreport hook.

Basically, plugins like pytest-cov are free to add summaries that simply print to the terminal AFTER all test execution is complete. pytest-tap has no knowledge of this because pytest itself does not capture those logs anywhere at all. Additionally, the TAP spec expects diagnostics to be relevant to the preceding test; the summary information is not pertinent to any individual test result, so adding it would not follow the TAP protocol.

I tried to come up with a simple workaround, but was not able to: the order of events within pytest is rather prohibitive, and pytest-cov tracks all the coverage results internally to the plugin, so they can't really be accessed from outside the plugin itself.
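
To illustrate the ordering described above, here is a minimal conftest.py sketch (not part of either plugin) showing both hooks: pytest_runtest_logreport fires for each test while the run is still in progress, whereas pytest_terminal_summary fires once, only after the whole session has finished.

import pytest


def pytest_runtest_logreport(report):
    # Called for each test phase while the run is in progress;
    # this is where pytest-tap records individual results.
    if report.when == "call":
        print(f"# saw result for {report.nodeid}: {report.outcome}")


@pytest.hookimpl(trylast=True)
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Called exactly once, after every test has already been reported;
    # this is where pytest-cov prints its coverage table.
    terminalreporter.write_line("summary hooks run after all tests have finished")

Running pytest with this conftest.py makes the ordering visible: the per-test lines appear while tests execute, and the summary line only shows up at the very end of the session.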

codambro commented Feb 3, 2025

Quick and dirty workaround in conftest.py...

import pytest


@pytest.hookimpl(trylast=True)
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Run after both pytest-tap ("tapplugin") and pytest-cov ("_cov") have done
    # their work; bail out if either plugin is not active.
    tapplugin = config.pluginmanager.getplugin("tapplugin")
    covplugin = config.pluginmanager.getplugin("_cov")
    if not tapplugin or not covplugin:
        return
    # Re-emit pytest-cov's terminal coverage table as TAP diagnostic lines ("# ...").
    report = covplugin.cov_report.getvalue()
    diags = "".join(["# " + line for line in report.splitlines(keepends=True)])
    # Mirror the --cov-fail-under check to decide between "ok" and "not ok".
    failed = False
    if covplugin.options.cov_fail_under is not None and covplugin.options.cov_fail_under > 0:
        failed = covplugin.cov_total < covplugin.options.cov_fail_under
    desc = 'Required test coverage of {required}% {reached}. Total coverage: {actual:.2f}%'.format(
        required=covplugin.options.cov_fail_under,
        actual=covplugin.cov_total,
        reached='not reached' if failed else 'reached',
    )
    # Append one synthetic "COVERAGE" entry to the TAP stream, with the coverage
    # table attached as its diagnostics.
    testcase = "COVERAGE"
    if failed:
        tapplugin._tracker.add_not_ok(testcase, testcase, desc, diagnostics=diags)
    else:
        tapplugin._tracker.add_ok(testcase, testcase, desc, diagnostics=diags)

Result (additional test) appended to the TAP output:

# TAP results for COVERAGE
ok 10 COVERAGE # Required test coverage of 0.0% reached. Total coverage: 94.29%
# ---------- coverage: platform darwin, python 3.9.6-final-0 -----------
# Name                              Stmts   Miss  Cover
# -----------------------------------------------------
# samples/test_sample.py               38      2    95%
# samples/test_sample_unittest.py      32      2    94%
# -----------------------------------------------------
# TOTAL                                70      4    94%
