How to skip the rest of tests in the class if one has failed?



I like the general "test-step" idea. I'd term it "incremental" testing, and it makes the most sense in functional testing scenarios IMHO.

Here is an implementation that doesn't depend on internal details of pytest (except for the official hook extensions). Copy this into your conftest.py:

    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                parent = item.parent
                parent._previousfailed = item

    def pytest_runtest_setup(item):
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)

If you now have a "test_step.py" like this:

    import pytest

    @pytest.mark.incremental
    class TestUserHandling:
        def test_login(self):
            pass

        def test_modification(self):
            assert 0

        def test_deletion(self):
            pass

then running it looks like this (using -rx to report on xfail reasons):

    (1)hpk@t2:~/p/pytest/doc/en/example/teststep$ py.test -rx
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev17
    plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov, timeout
    collected 3 items

    test_step.py .Fx

    =================================== FAILURES ===================================
    ______________________ TestUserHandling.test_modification ______________________

    self = <test_step.TestUserHandling instance at 0x1e0d9e0>

        def test_modification(self):
    >       assert 0
    E       assert 0

    test_step.py:8: AssertionError
    =========================== short test summary info ============================
    XFAIL test_step.py::TestUserHandling::()::test_deletion
      reason: previous test failed (test_modification)
    ================ 1 failed, 1 passed, 1 xfailed in 0.02 seconds =================

I am using "xfail" here because skips are rather for wrong environments or missing dependencies, wrong interpreter versions.

Edit: Note that neither your example nor my example would directly work with distributed testing. For this, the pytest-xdist plugin needs to grow a way to define groups/classes to be sent wholesale to one testing slave, instead of the current mode, which usually sends the test functions of a class to different slaves.



It's generally bad practice to do what you are doing. Each test should be as independent as possible from the others, whereas here each test completely depends on the results of the others.

Anyway, reading the docs, it seems that a feature like the one you want is not implemented (probably because it wasn't considered useful).

A workaround could be to "fail" your tests by calling a custom method which sets some condition on the class, and to mark each test with the skipif marker:

    import unittest

    import pytest

    class MyTestCase(unittest.TestCase):
        skip_all = False

        @pytest.mark.skipif("MyTestCase.skip_all")
        def test_A(self):
            ...
            if failed:                      # however you detect that this test failed
                MyTestCase.skip_all = True

        @pytest.mark.skipif("MyTestCase.skip_all")
        def test_B(self):
            ...
            if failed:
                MyTestCase.skip_all = True

Or you can perform this check before running each test and, if needed, call pytest.skip().

Edit: Marking as xfail can be done in the same way, but using the corresponding function calls.
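A minimal sketch of that per-test check, assuming a plain pytest-style class and a hypothetical class-level skip_all flag (not the exact code from above):

    import pytest

    class TestAccount:
        # hypothetical flag: set to True by any test that fails
        skip_all = False

        def setup_method(self, method):
            # runs before every test in the class
            if TestAccount.skip_all:
                pytest.skip("an earlier test in this class failed")
                # or call pytest.xfail("...") here to report xfail instead

        def test_create(self):
            try:
                assert True  # the real test body goes here
            except AssertionError:
                TestAccount.skip_all = True
                raise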

Instead of rewriting the boilerplate code for each test, you could probably write a decorator (this would probably require that your methods return a "flag" stating whether they failed or not).
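A rough sketch of such a decorator, here catching the test's exception instead of relying on a returned flag; the name stop_on_failure and the skip_all attribute are hypothetical:

    import functools

    import pytest

    def stop_on_failure(test_func):
        # if the wrapped test fails, remember it on the class so that
        # later tests in the same class can skip themselves
        @functools.wraps(test_func)
        def wrapper(self, *args, **kwargs):
            if getattr(type(self), "skip_all", False):
                pytest.skip("an earlier test in this class failed")
            try:
                return test_func(self, *args, **kwargs)
            except Exception:
                type(self).skip_all = True
                raise
        return wrapper

Each test method would then be decorated with @stop_on_failure instead of repeating the try/except block by hand.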

Anyway, I'd like to point out that, as you state, if one of these tests fails then other failing tests in the same test case should be considered false positives... but you can also do this "by hand": just check the output and spot the false positives, even though this might be boring and error-prone.