Better Unit Tests with Nose

Unit testing! We all love it, and we all do it, right? Of course we do. But wouldn’t it be much more satisfying if we could write tests more easily, quickly see what code still needs testing, and keep a record of these tests? Using nose to run your Python tests helps with all of these, and more. nose finds tests more easily, gives you a bunch of helpful assertions (and lets you write your own), can generate line and branch coverage reports, and can export these stats for a continuous integration environment (like Jenkins). I’ll be mostly covering how to integrate nose with Django, but it’s easy enough to use on its own in any Python project.
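
As a quick taste of those assertion helpers (just a sketch, and nothing Django-specific yet): nose.tools ships PEP 8 style aliases for the familiar unittest assertions, plus shortcuts like eq_ and ok_.

from nose.tools import assert_equal, assert_raises, eq_, ok_


def test_assertion_helpers():
    # PEP 8 style aliases for the unittest assertions...
    assert_equal(2 + 2, 4)
    with assert_raises(ZeroDivisionError):
        1 / 0
    # ...plus short spellings for the most common checks.
    eq_('nose'.upper(), 'NOSE')
    ok_(isinstance([], list))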

The Code

Here’s a very simple Django view that will return a response based on a username parameter.

from django.http import HttpResponse


def test_view(request, username):
    if username == 'cooluser':
        greeting = 'You are cool!'
    else:
        greeting = 'You are not cool.'

    return HttpResponse(greeting)

And here’s the test class for it:

from django.test import TestCase
from views import test_view


class ViewTests(TestCase):
    def test_cool_user(self):
        resp = test_view(None, 'cooluser')
        self.assertEqual(resp.content, 'You are cool!')

Pre-Nose

Before installing nose, the test output should look pretty familiar:

$ python manage.py test
Creating test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 0.002s

OK
Destroying test database for alias 'default'...

Django Nose

Updating your Django project to use nose is pretty easy: just install the django-nose package, which lets you keep all your nose configuration inside your settings.py file and takes care of installing nose for you. First…

$ pip install django-nose

Then, add django_nose to your INSTALLED_APPS, and set the TEST_RUNNER setting to 'django_nose.NoseTestSuiteRunner'.

INSTALLED_APPS = (
    …
    'django_nose',
    …
)

TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'

Now, when running the test again (still using manage.py test), nosetests takes over.

$ python manage.py test
nosetests --verbosity=1
Creating test database for alias 'default'...
.E
======================================================================
ERROR: myapp.tests.test_view
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ben/.virtualenvs/djangotest/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ben/.virtualenvs/djangotest/lib/python2.7/site-packages/nose/util.py", line 619, in newfunc
    return func(*arg, **kw)
TypeError: test_view() takes exactly 2 arguments (0 given)

----------------------------------------------------------------------
Ran 2 tests in 0.004s

FAILED (errors=1)
Destroying test database for alias 'default'...

Uh, what? I don’t remember telling you to test that method. It’s not even a test method!

nose can be a little too zealous about finding and running anything that looks like a test. What’s happened here is that test_view has been imported into the namespace of a test module, and its name starts with test, so nose tries to execute it as a test. There are ways to instruct nose to ignore such functions (one is sketched below), but the easiest fix in this case is to rename the view and its import from test_view to user_view.

…
def user_view(request, username):
…
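
For reference, renaming isn’t the only way out. nose.tools provides a nottest decorator that marks a callable so the collector skips it, which would let the view keep its original name. A minimal sketch of that approach (not what I ended up doing):

from django.http import HttpResponse
from nose.tools import nottest


@nottest
def test_view(request, username):
    # nottest sets __test__ = False on the function, so nose's collector
    # ignores it even though the name starts with "test".
    if username == 'cooluser':
        greeting = 'You are cool!'
    else:
        greeting = 'You are not cool.'

    return HttpResponse(greeting)

Pulling a testing library into production code feels a bit off to me, though, which is another reason the rename wins here.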

Now, with the rename in place, the tests run correctly again.

$ python manage.py test
nosetests --verbosity=1
Creating test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 0.003s

OK
Destroying test database for alias 'default'...

Setting up code coverage

The best part about using nose for testing is that it will integrate with coverage reporting to tell you exactly what lines of code your tests are hitting. To get it going, first install the coverage package.

$ pip install coverage

Then add a NOSE_ARGS variable to your settings.py file; this is just a tuple of arguments that will be passed to nosetests when it runs.

NOSE_ARGS = (
    '--with-coverage',
)

Now when you run your tests, you’ll see which lines are missing coverage. By default, though, you might see something like this:

$ python manage.py test
nosetests --with-coverage --verbosity=1
Creating test database for alias 'default'...
.
Name                                     Stmts   Miss  Cover   Missing
----------------------------------------------------------------------
django.contrib.admin.models                 57     21    63%   19-20, 42, 45-55, 58, 61, 64, 68, 75-81
django.contrib.auth.management             101     49    51%   49-58, 64-65, 68, 103-104, 111-112, 117-127, 137-150, 164-188
…
myapp/views                                  6      1    83%   8
…

Hmmm, it looks like my tests are missing a lot of Django’s own code. Maybe I should write more tests to cover it? Of course not! The Django geniuses have already unit tested their own code. What you can do instead is tell nose which packages you want coverage stats for, by adding the --cover-package argument to your NOSE_ARGS. There are a couple of ways to do this, each with pros and cons. First, you can cover ., which generates coverage stats for all code in the working directory.

NOSE_ARGS = (
    '--with-coverage',
    '--cover-package=.'
)

Now the coverage stats look like this:

$ python manage.py test
nosetests --with-coverage --cover-package=. --verbosity=1
Creating test database for alias 'default'...
.
Name                    Stmts   Miss  Cover   Missing
-----------------------------------------------------
djangotest/__init__         0      0   100%
djangotest/settings        19     19     0%   12-91
djangotest/urls             4      4     0%   1-6
djangotest/wsgi             4      4     0%   10-14
manage                      6      6     0%   2-10
myapp/__init__              0      0   100%
myapp/admin                 0      0   100%
myapp/models                1      0   100%
myapp/tests                 6      0   100%
myapp/views                 6      1    83%   8
-----------------------------------------------------
TOTAL                      46     34    26%
----------------------------------------------------------------------
Ran 1 test in 0.005s

OK
Destroying test database for alias 'default'...

The other option (well, there are plenty of options, but this is the other one in this example) is to add each of your Django apps to the --cover-package argument.

NOSE_ARGS = (
    '--with-coverage',
    '--cover-package=myapp'
)

And your coverage output will only show the apps you’ve added.

$ python manage.py test
nosetests --with-coverage --cover-package=myapp --verbosity=1
Creating test database for alias 'default'...
.
Name           Stmts   Miss  Cover   Missing
--------------------------------------------
myapp              0      0   100%
myapp.models       1      0   100%
myapp.views        6      1    83%   8
--------------------------------------------
TOTAL              7      1    86%
----------------------------------------------------------------------
Ran 1 test in 0.007s

OK
Destroying test database for alias 'default'...

The advantage of setting the coverage package to . is that you never have to change the setting as you add more Django apps; they’ll automatically be covered. You’ll also see coverage for the project directory containing your URLs, settings, and so on, which means that if you want 100% coverage you’ll need tests for those too.

Alternatively, by listing your apps explicitly, coverage is restricted purely to the code you write; testing (for example) a wsgi file you probably haven’t edited is better left to an integration test. The downside is that any time you create a new Django app you’ll need to add it to the coverage list, but that isn’t a huge amount of work.
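
For what it’s worth, --cover-package accepts a comma separated list (you can also just repeat the flag), so covering several apps stays a one-line change. A sketch, where anotherapp is a hypothetical second app:

NOSE_ARGS = (
    '--with-coverage',
    '--cover-package=myapp,anotherapp',
)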

Using coverage stats to write better tests

Code coverage stats will help you write better tests by letting you know what code you’re not hitting. Looking at the coverage report for my simple example, I can see that line 8 of myapp/views.py is not being hit during testing.

Name           Stmts   Miss  Cover   Missing
--------------------------------------------
myapp.views        6      1    83%   8

By examining the code I can see that it’s the else branch (for when the user is not cool). Writing an extra test for it is pretty straightforward:

class ViewTests(TestCase):
    …
    def test_normal_user(self):
        resp = user_view(None, 'normal')
        self.assertEqual(resp.content, 'You are not cool.')

Then test again:

$ python manage.py test
nosetests --with-coverage --cover-package=myapp --verbosity=1
Creating test database for alias 'default'...
..
Name           Stmts   Miss  Cover   Missing
--------------------------------------------
myapp              0      0   100%
myapp.models       1      0   100%
myapp.views        6      0   100%
--------------------------------------------
TOTAL              7      0   100%
----------------------------------------------------------------------
Ran 2 tests in 0.009s

OK
Destroying test database for alias 'default'...

Woohoo, 100% coverage.

Coverage isn’t perfect

With line coverage you still need to be careful. Let’s say I wanted to refactor my view to use ternary assignment:

def user_view(request, username):
    greeting = 'You are cool!' if username == 'cooluser' else 'You are not cool.'
    return HttpResponse(greeting)

Now, I can @skip my newest test (like so)…

    @skip
    def test_normal_user(self):
        …

…and still get 100% coverage…

$ python manage.py test
nosetests --with-coverage --cover-package=myapp --verbosity=1
…
Name           Stmts   Miss  Cover   Missing
--------------------------------------------
myapp.views        4      0   100%
--------------------------------------------
…
Ran 1 test in 0.007s

…even though it’s obvious that there are two outcomes on that line, and only one of them is being exercised. Keep this in mind when coding; it may be worth avoiding ternary expressions so that each branch sits on its own line and shows up explicitly in the coverage report.
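
As an aside, coverage.py can measure branches as well as lines, and recent nose releases expose this through a --cover-branches flag (I won’t go into branch coverage properly here, and the flag’s availability depends on your nose/coverage versions). With it enabled, the ternary above would be reported as only partially covered:

NOSE_ARGS = (
    '--with-coverage',
    '--cover-branches',
    '--cover-package=myapp',
)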

Debugging tests with PyCharm

In my experience, nose doesn’t play too nicely with PyCharm if you want to use its debugger while running your tests. In an ideal world our tests would be simple enough that they wouldn’t need debugging, but in an ideal world we’d probably also not write bugs in the first place. Maybe we’re refactoring and writing unit tests for legacy code, a process made easier by using a debugger. But I digress…

Whatever the reason, I’ve found that the simplest method to get the debugger working is to disable the NoseTestSuiteRunner when running through PyCharm. Obviously this means that your coverage and custom nose things won’t work during the debug session, so this might not be perfect for all occasions.

nose is disabled by setting a special flag when running with PyCharm. This could be done by parsing a command line option, but I’ve decided instead to check an environment variable in my settings.py file.

import os

if os.getenv('NO_NOSE') != '1':
    TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
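
As a quick sanity check, the same switch works from the shell, so you can confirm the plain Django runner takes over before touching PyCharm:

$ NO_NOSE=1 python manage.py test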

Then I just set the environment variable in the PyCharm configuration.

Configuration for PyCharm to make tests work in debug.

And that’s it, sweet sweet PyCharm breakpoints.

CI Integration

nose can also write the coverage report to a file in different formats. For example, to output the report in XML format, just add --cover-xml to your NOSE_ARGS.

NOSE_ARGS = (
    '--with-coverage',
    '--cover-xml',
    '--cover-package=myapp'
)

Next time you run your tests, a coverage file (coverage.xml) will be generated. When the tests run on a CI server (like Jenkins), these stats can be picked up and a history of coverage kept.
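
If you also want Jenkins to chart the test results themselves (not just coverage), nose’s xunit plugin can write a JUnit-style XML report (nosetests.xml by default) that the standard Jenkins test publisher understands; enabling it is just one more entry, something like:

NOSE_ARGS = (
    '--with-coverage',
    '--cover-xml',
    '--cover-package=myapp',
    '--with-xunit',
)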

But I don’t even Django!

No problem; you can run nosetests manually. Just pip install nose coverage, then pass the NOSE_ARGS options as command line arguments, e.g.:

$ nosetests --with-coverage --cover-package=myapp
.
…
Ran 1 test in 0.551s

OK
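
Outside Django there’s no settings.py to hold the flags, but nose will also read a [nosetests] section from a config file (setup.cfg in the project, or a ~/.noserc), using the long option names without the leading dashes. A sketch of the equivalent setup.cfg:

[nosetests]
with-coverage = 1
cover-package = myapp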

Although there’s still lots more to nose (I haven’t covered custom assertions, branch coverage or plugins), hopefully this introduction to coverage will help you and your team keep testing quality high. Check out the nose documentation for more about this awesome and helpful tool.
