Writing a re-usable (parametrized) unittest.TestCase method [duplicate]



Some of the tools available for doing parametrized tests in Python include pytest's @pytest.mark.parametrize, the parameterized package, ddt, and (on Python 3.4+) unittest's own subTest.


If you really want to have multiple unit tests then you need multiple methods. The only way to get that is through some sort of code generation. You can do that through a metaclass, or by tweaking the class after its definition, including (if you are using Python 2.6) through a class decorator.

Here's a solution that looks for the special 'multitest*' methods and their matching '*_values' members and uses those to build the test methods on the fly. It's not elegant, but it does roughly what you want:

```python
import unittest
import inspect

class SomeValue(object):
    def __eq__(self, other):
        return other in [1, 3, 4]

class ExampleTestCase(unittest.TestCase):
    somevalue = SomeValue()

    multitest_values = [1, 2, 3, 4]
    def multitest(self, n):
        self.assertEqual(self.somevalue, n)

    multitest_gt_values = "ABCDEF"
    def multitest_gt(self, c):
        self.assertTrue(c > "B", c)

def add_test_cases(cls):
    values = {}
    functions = {}
    # Find all the 'multitest*' functions and
    # matching list of test values.
    for key, value in inspect.getmembers(cls):
        if key.startswith("multitest"):
            if key.endswith("_values"):
                values[key[:-7]] = value
            else:
                functions[key] = value
    # Put them together to make a list of new test functions.
    # One test function for each value.
    for key in functions:
        if key in values:
            function = functions[key]
            for i, value in enumerate(values[key]):
                def test_function(self, function=function, value=value):
                    function(self, value)
                name = "test%s_%d" % (key[9:], i+1)
                test_function.__name__ = name
                setattr(cls, name, test_function)

add_test_cases(ExampleTestCase)

if __name__ == "__main__":
    unittest.main()
```

This is the output when I run it:

```
% python stackoverflow.py
.F..FF....
======================================================================
FAIL: test_2 (__main__.ExampleTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "stackoverflow.py", line 34, in test_function
    function(self, value)
  File "stackoverflow.py", line 13, in multitest
    self.assertEqual(self.somevalue, n)
AssertionError: <__main__.SomeValue object at 0xd9870> != 2

======================================================================
FAIL: test_gt_1 (__main__.ExampleTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "stackoverflow.py", line 34, in test_function
    function(self, value)
  File "stackoverflow.py", line 17, in multitest_gt
    self.assertTrue(c > "B", c)
AssertionError: A

======================================================================
FAIL: test_gt_2 (__main__.ExampleTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "stackoverflow.py", line 34, in test_function
    function(self, value)
  File "stackoverflow.py", line 17, in multitest_gt
    self.assertTrue(c > "B", c)
AssertionError: B

----------------------------------------------------------------------
Ran 10 tests in 0.001s

FAILED (failures=3)
```

You can immediately see some of the problems that occur with code generation. Where does "test_gt_1" come from? I could change the name to the longer "test_multitest_gt_1", but then which test case is number 1? It would be better here to start counting from _0 instead of _1, and perhaps in your case you know the values themselves can be used as part of a Python function name.
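One way to make the generated names self-describing, rather than numbered, is to embed a sanitized repr of the value in the method name. This helper is my own illustrative sketch, not part of the code above:

```python
import re

def make_test_name(base, value):
    # Turn the value's repr into something usable in a Python identifier.
    slug = re.sub(r"\W+", "_", repr(value)).strip("_")
    return "test_%s_%s" % (base, slug)

# e.g. make_test_name("gt", "A") gives a name like "test_gt_A",
# so a failure report points directly at the offending value.
```

This only works cleanly when the values have short, identifier-friendly reprs; for arbitrary objects you would still need a numeric fallback.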

I do not like this approach. I've worked on code bases which auto-generated test methods (in one case using a metaclass) and found the machinery was harder to understand than it was useful. When a test failed it was hard to figure out the source of the failing case, and it was hard to stick in debugging code to probe the reason for the failure.

(Debugging failures in the example I wrote here isn't as hard as that specific metaclass approach I had to work with.)


I guess what you want is "parameterized tests".

I don't think the unittest module supports this (unfortunately), but if I were adding this feature, it would look something like this:

```python
# Will run the test for all combinations of parameters
@RunTestWith(x=[0, 1, 2, 3], y=[-1, 0, 1])
def testMultiplication(self, x, y):
    self.assertEqual(multiplication.multiply(x, y), x*y)
```

With the existing unittest module, a simple decorator like this won't be able to "replicate" the test multiple times, but I think this is doable using a combination of a decorator and a metaclass (the metaclass should observe all 'test*' methods and replicate, under different auto-generated names, those that have the decorator applied).
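That combination could be sketched roughly as follows. RunTestWith and ParametrizedMeta are my own illustrative names (not an existing unittest API), and this uses Python 3 metaclass syntax:

```python
import itertools
import unittest

def RunTestWith(**param_lists):
    # Decorator: just attach the parameter lists to the test function.
    def decorator(func):
        func._param_lists = param_lists
        return func
    return decorator

class ParametrizedMeta(type):
    # Metaclass: replace each decorated 'test*' method with one copy
    # per combination of parameter values.
    def __new__(mcls, name, bases, namespace):
        for attr, func in list(namespace.items()):
            params = getattr(func, "_param_lists", None)
            if not (attr.startswith("test") and params):
                continue
            del namespace[attr]
            keys = sorted(params)
            combos = itertools.product(*(params[k] for k in keys))
            for i, combo in enumerate(combos):
                kwargs = dict(zip(keys, combo))
                def test(self, func=func, kwargs=kwargs):
                    func(self, **kwargs)
                test.__name__ = "%s_%d" % (attr, i)
                namespace[test.__name__] = test
        return super().__new__(mcls, name, bases, namespace)

class MultiplyTest(unittest.TestCase, metaclass=ParametrizedMeta):
    @RunTestWith(x=[0, 1, 2], y=[-1, 0, 1])
    def testMultiplication(self, x, y):
        # Placeholder assertion standing in for the real multiply().
        self.assertEqual(x * y, y * x)

# Run with: python -m unittest <module>
```

This generates nine methods (testMultiplication_0 through testMultiplication_8), one per (x, y) combination, and inherits the naming problems discussed in the other answer.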