RunningTests
Testify has a powerful and flexible test runner. But out of the box, it's pretty simple to use.
rhettg@devvm1:~/src/Testify $ bin/testify test
...............
PASSED. 15 tests / 12 cases: 15 passed (0 unexpected), 0 failed (0 expected). (Total test time 0.00s)

If you add a test runner to your test case files themselves:
if __name__ == "__main__":
    run()
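For reference, a complete self-running test file might look something like this. This is a minimal sketch: the class and method names are made up, and it assumes TestCase, assert_equal, and run are importable from the testify package, as in Testify's own examples.

from testify import TestCase, assert_equal, run

class AdditionTestCase(TestCase):
    def test_addition(self):
        # A trivial passing test.
        assert_equal(1 + 1, 2)

if __name__ == "__main__":
    run()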
You can then just execute the file:

rhettg@devvm1:~/src/Testify $ python test/test_case_test.py
...............
PASSED. 15 tests / 11 cases: 15 passed (0 unexpected), 0 failed (0 expected). (Total test time 0.00s)

However, using the testify runner is more powerful and easier to remember:
rhettg@devvm1:~/src/Testify $ bin/testify test.test_case_test TestMethodsGetRun
..
PASSED. 2 tests / 1 case: 2 passed (0 unexpected), 0 failed (0 expected). (Total test time 0.00s)

Assuming you have collected tests together using Suites, it's easy to mix and match:
rhettg@devvm1:~/src/Testify $ bin/testify test -i basic -x disabled

This will run all the tests in the basic suite, excluding any tests that have been marked disabled (by putting them in the disabled suite).
Exclusion suites are very helpful when you have tests that shouldn't be run in certain environments. For example, we have a set of tests that require external services to be set up; by putting those tests in a requires-services suite, you control whether or not they are run.
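As a sketch of how tests get placed into suites in the first place, assuming Testify's suite decorator (the suite names and test bodies below are only illustrative):

from testify import TestCase, assert_equal, suite

class ExampleServiceTest(TestCase):
    @suite('basic')
    def test_arithmetic(self):
        # Included by: bin/testify test -i basic
        assert_equal(2 + 2, 4)

    @suite('requires-services')
    def test_against_external_service(self):
        # Excluded by: bin/testify test -x requires-services
        pass

    @suite('disabled')
    def test_known_broken(self):
        # Excluded whenever -x disabled is passed.
        pass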
Testify can be easily extended (see ExtendingTestify) to meet whatever reporting needs you have, but out of the box we already have a few options.
Running with the --verbose option gives you a rundown of all the tests being run and their times:
rhettg@devvm1:~/src/Testify $ bin/testify test --verbose
test.test_case_test ClassSetupFixturesGetRun.test_test_var ... ok in 0.00s
test.test_case_test DeprecatedClassSetupFixturesGetRun.test_test_var ... ok in 0.00s
test.test_case_test DeprecatedClassTeardownFixturesGetRun.test_placeholder ... ok in 0.00s
test.test_case_test DeprecatedSetupFixturesGetRun.test_test_var ... ok in 0.00s
.....
PASSED. 15 tests / 12 cases: 15 passed (0 unexpected), 0 failed (0 expected). (Total test time 0.00s)

In addition, using the --summary option, all test failures are collected in an easy-to-use report at the end of the run:
rhettg@devvm1:~/src/Testify $ bin/testify test --verbose --summary
test.test_case_test ClassSetupFixturesGetRun.test_test_var ... ok in 0.00s
test.test_case_test DeprecatedClassSetupFixturesGetRun.test_test_var ... ok in 0.00s
test.test_case_test DeprecatedClassTeardownFixturesGetRun.test_placeholder ... ok in 0.00s
test.test_case_test DeprecatedSetupFixturesGetRun.test_test_var ... ok in 0.00s
.....
=======================================================================
FAILURES
The following tests are expected to pass.
========================================================================
test.test_case_test TestMethodsGetRun.test_method_1
------------------------------------------------------------
Traceback (most recent call last):
File "./test/test_case_test.py", line 5, in test_method_1
assert_equal(1 + 1, 3)
File "/nail/home/rhettg/src/Testify/testify/assertions.py", line 32, in assert_equal
assert lval == rval, "assertion failed: %r == %r" % (lval, rval)
<type 'exceptions.AssertionError'>: assertion failed: 2 == 3
========================================================================
========================================================================
test.test_case_test TestMethodsGetRun.assert_test_methods_were_run
------------------------------------------------------------
Traceback (most recent call last):
File "./test/test_case_test.py", line 13, in assert_test_methods_were_run
assert self.test_1_run
<type 'exceptions.AttributeError'>: 'TestMethodsGetRun' object has no attribute 'test_1_run'
========================================================================
FAILED. 16 tests / 12 cases: 14 passed (0 unexpected), 2 failed (0 expected). (Total test time 0.00s)

In addition to the normal stdout test output, we've also got a component for storing results in JSON format for later machine processing:
rhettg@devvm1:~/src/Testify $ bin/testify test --json-results=results.json --extra-json-info="{\"tag\": \"production\"}"
rhettg@devvm1:~/src/Testify $ head -1 results.json
{"name": "testify.test_case DeprecatedSetupFixturesGetRun.classSetUp", "success": true, "start_time": 1281141702.0, "module": "testify.test_case", "tag": "production", "run_time": 1.7e-05, "type": "fixture", "end_time": 1281141702.0}Using the options --bucket-count and --bucket allows you to split up your test set into separate sets that can be run individually.
Using the --bucket-count and --bucket options, you can split your test set into separate subsets that can be run individually. The buckets are determined deterministically, so a given test case will always end up in the same bucket for a given number of buckets.
For large projects, this is very helpful for spreading tests across multiple machines or cores.
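For example, to spread a run across two machines (a sketch using the flags named above; that buckets are numbered starting at 0 is an assumption):

# machine 1: run the first of two buckets
bin/testify test --bucket-count=2 --bucket=0

# machine 2: run the second bucket
bin/testify test --bucket-count=2 --bucket=1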