Missing metrics for failed tests #65

Open
husmen opened this issue Apr 18, 2023 · 9 comments

husmen commented Apr 18, 2023

Describe the bug
Monitoring results for failed tests are missing from .pymon.

To Reproduce

  1. Create some tests:
# test.py

def test_1():
    assert 1 == 1

def test_2():
    assert 1 == 2
  2. Run pytest: pytest test.py
  3. .pymon contains only one entry (for test_1) in the TEST_METRICS table.
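
To confirm what actually landed in .pymon (it is a plain SQLite file; the TEST_METRICS table and its ITEM column follow pytest-monitor's documented default schema, but double-check against your version), a quick inspection script:

# inspect_pymon.py - list which tests got a row in .pymon
import sqlite3

con = sqlite3.connect(".pymon")
rows = con.execute("SELECT ITEM FROM TEST_METRICS").fetchall()
print(rows)  # shows only test_1; the failing test_2 has no row
con.close()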

Expected behavior
Unless I missed something in the documentation, I expect metrics to be reported for failed tests as well.

Desktop (please complete the following information):

  • OS: Linux (Ubuntu 20.04), Windows 10
  • Python version: 3.8, 3.11
  • Pytest version: 6.x, 7.x
  • pytest-monitor version: 1.6.5
@michelfurrer

Same issue here.
In my current setup it would be especially interesting to see metrics for tests that fail.

js-dieu (Collaborator) commented Apr 20, 2023

Hello

Thanks for submitting. Failed tests are filtered out on purpose (the plugin was primarily designed that way to get consistent metrics).
However, the subject was previously raised in #27, and at that time the decision was made to postpone it.

I think it may be time for that improvement. Now the question game:

  • shall we add the test status (failed or succeeded) or not? I personally tend to think the status is of interest for those who want to filter out failed tests.
  • shall we add an option to activate that behavior (by extension, the current behavior would remain the default), or should we make it the default? (A sketch of what such an opt-in could look like follows below.)
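
As a point of reference, here is a minimal sketch of how a pytest plugin can capture pass/fail status and gate recording behind an opt-in flag. The hooks are standard pytest API; the flag name --monitor-include-failed and the hand-off to pytest-monitor's storage layer are assumptions, not upstream code:

# conftest.py - illustrative only; the option name is hypothetical
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--monitor-include-failed",
        action="store_true",
        default=False,
        help="also record monitoring metrics for failed tests",
    )

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        # Stash the outcome on the item so a later hook can decide
        # whether this test's metrics should be persisted.
        item.test_passed = report.passed
        if report.passed or item.config.getoption("--monitor-include-failed"):
            pass  # hand the metrics off to the storage layer here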

@Diego-Hernandez-Moodys

@js-dieu It's useful because I expect to use this to test for memory leaks. To answer your questions:

  • Yes, add the status, to have parity with the pytest output.
  • It should be on by default, since that matches what the user sees in the pytest output; there should still be an option to turn the plugin off, though. One row for each test case, please.


Veritogen commented Dec 5, 2023

I'd also like to see the results of failed tests in the database by default.

@js-dieu If you can point me to where the changes are needed, and if it's not too much implementation effort, my employer allows me to implement this kind of thing during work time.

js-dieu (Collaborator) commented Dec 5, 2023

Hi @Veritogen / @Diego-Hernandez-Moodys

Sorry for the long delay. I have reworked the plugin a bit so it can support this feature without breaking everything.
I expect to deliver it by the end of the week. I need to fix a bias in the memory snapshot first, but I don't expect that to take very long.


Veritogen commented Dec 6, 2023

@js-dieu Awesome, that's great to hear. Let me know if there's anything you need help with.
I also made a proof of concept for a DBHandler that can use Postgres for persisting the test results. Should I raise a PR for that?
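
The thread doesn't show pytest-monitor's DBHandler interface, so the class and method names below are illustrative rather than the actual proof of concept. A minimal sketch of a Postgres-backed handler, assuming psycopg2 and a table shape mirroring the SQLite TEST_METRICS layout:

# postgres_handler.py - hypothetical sketch, not the actual PoC
import psycopg2

class PostgresDBHandler:
    def __init__(self, dsn):
        self._cnx = psycopg2.connect(dsn)
        self._ensure_schema()

    def _ensure_schema(self):
        # Only a few columns shown; the real TEST_METRICS table has more.
        with self._cnx, self._cnx.cursor() as cur:
            cur.execute(
                """CREATE TABLE IF NOT EXISTS TEST_METRICS (
                       SESSION_H TEXT,
                       ITEM TEXT,
                       TOTAL_TIME REAL,
                       MEM_USAGE REAL
                   )"""
            )

    def insert_metric(self, session_h, item, total_time, mem_usage):
        with self._cnx, self._cnx.cursor() as cur:
            cur.execute(
                "INSERT INTO TEST_METRICS (SESSION_H, ITEM, TOTAL_TIME, MEM_USAGE)"
                " VALUES (%s, %s, %s, %s)",
                (session_h, item, total_time, mem_usage),
            )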


lhpt2 commented May 8, 2024

Is there any progress on that? The feature would be quite handy.


lhpt2 commented May 15, 2024

As the discussion has stalled and @js-dieu seems to be busy, I decided to make my own changes locally. I'll summarize my insights below in case this feature gets implemented in the future.

  • I added a new boolean column, "TEST_PASSED", to the metrics table to reflect whether a test passed, and changed the wrapper functions accordingly (see the sketch after this comment).
  • As the memory_profiler module is no longer maintained and the company I work for cannot take over maintainership, I stripped out and simplified the logic needed for memory measurement and put it in a new module, profiler.py, inside this project.
  • I changed the function calls from the plugin to the profiler accordingly and introduced a new attribute, "passed", on the pytest node so the information can be handed through to the database.

Of course, I would be open to making a PR if you want, but I'm not sure whether I did it the way you intended.
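
A rough reconstruction of the schema change described above, assuming the default SQLite .pymon file and the TEST_METRICS table mentioned earlier in the thread; everything beyond the TEST_PASSED column name is an assumption:

# migrate_pymon.py - illustrative sketch of the described change
import sqlite3

con = sqlite3.connect(".pymon")
# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk)
cols = [row[1] for row in con.execute("PRAGMA table_info(TEST_METRICS)")]
if "TEST_PASSED" not in cols:
    # SQLite stores booleans as 0/1 integers
    con.execute("ALTER TABLE TEST_METRICS ADD COLUMN TEST_PASSED INTEGER")
    con.commit()
con.close()

The "passed" attribute on the pytest node would be set from a pytest_runtest_makereport hook, as in the earlier sketch, and written into this column by the wrapper functions.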

lhpt2 pushed a commit to einhundert/pytest-monitor that referenced this issue Jun 18, 2024

lhpt2 commented Jul 9, 2024

For anyone interested in using this feature: as long as it hasn't been merged upstream, there is a soft fork available here: https://github.com/einhundert/pytest-monitor/ (we only provide support for features and changes made by us; please don't request anything unrelated to these).
