Testing Tips: Avoid sleep in tests

Hi 👋,

In this article I want to show you a testing tip that I recently learned myself by reading Software Engineering at Google: Lessons Learned from Programming Over Time. The technique improved the way I write unit tests.

When I’m writing bigger unit tests, I often have to execute something in the background, for example publishing a message to a message broker, waiting for the message to be published and then consuming it to verify that what I published is correct.

When waiting for the message to be published, or for any other operation that requires waiting in tests, I used to call a sleep function for a second or two. This is fine for a few tests, but the approach does not scale as the suite grows. Imagine having 50 tests that each sleep for one second: running the suite would take at least 50 seconds, most of it wasted time.

The better approach is to use a timeout and polling: instead of sleeping, poll every millisecond or so to check whether what you are waiting for has happened. This keeps the tests reliable and reduces the execution time by a lot!

Let’s demonstrate this with a small example in Go. I’m not going to use any external dependencies to demonstrate the technique, but you can apply it anywhere you call something that blocks or need to wait for something.

What we’re going to test is a simple struct: SetResult sets a field from a background goroutine after a random delay, and GetData blocks for a random amount of time before returning.

import (
	"math/rand"
	"time"
)

type SystemUnderTest struct {
	Result string
}

// SetResult simulates background work: a goroutine sets Result
// after a random delay of up to three seconds.
func (s *SystemUnderTest) SetResult() {
	go func() {
		time.Sleep(time.Duration(rand.Intn(3000)) * time.Millisecond)
		s.Result = "the_result"
	}()
}

// GetData blocks for a random delay of up to three seconds before returning.
func (s *SystemUnderTest) GetData() string {
	time.Sleep(time.Duration(rand.Intn(3000)) * time.Millisecond)
	return "the_data"
}

This is the non-ideal way of testing it:

// A not very ideal way to test SetResult
func Test_SystemUnderTest_SetResult_NotIdeal(t *testing.T) {
	sut := SystemUnderTest{}
	sut.SetResult()

	time.Sleep(4 * time.Second)

	if sut.Result != "the_result" {
		t.Fatalf("Result not equal, want %s got %s", "the_result", sut.Result)
	}
}

SetResult takes between 0 and 3 seconds to complete; since we’re waiting for the result, we sleep for 4 seconds to be safe.

=== RUN   Test_SystemUnderTest_SetResult_NotIdeal
--- PASS: Test_SystemUnderTest_SetResult_NotIdeal (4.00s)
PASS

A better way is to write a simple loop and poll for the result:

// A better way of testing the code
func Test_SystemUnderTest_SetResult(t *testing.T) {
	sut := SystemUnderTest{}
	sut.SetResult()

	// Poll every millisecond until the result is set, failing after ~4 seconds.
	passedMilliseconds := 0
	for {
		if passedMilliseconds > 4000 {
			t.Fatalf("timeout reached")
		}
		passedMilliseconds += 1
		time.Sleep(1 * time.Millisecond)
		if sut.Result != "" {
			break
		}
	}
	if sut.Result != "the_result" {
		t.Fatalf("Result not equal, want %s got %s", "the_result", sut.Result)
	}
}

Writing a loop and polling for the result makes the test more complex, but it executes much faster. In this case the benefits outweigh the downsides.

=== RUN   Test_SystemUnderTest_SetResult
--- PASS: Test_SystemUnderTest_SetResult (2.08s)
PASS
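
If several tests need this kind of loop, you can extract it into a small helper. Below is a minimal sketch; the waitFor name and signature are my own invention, not something from the book. It tracks a real deadline with time.Now instead of counting loop iterations, which is slightly more accurate since sleeps are never exactly one millisecond:

// waitFor polls cond every tick until it returns true,
// failing the test once the timeout deadline passes.
func waitFor(t *testing.T, timeout, tick time.Duration, cond func() bool) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return
		}
		time.Sleep(tick)
	}
	t.Fatal("timeout reached")
}

The polling test then reduces to waitFor(t, 4*time.Second, time.Millisecond, func() bool { return sut.Result != "" }) followed by the final assertion.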

If the language permits, we can also use channels. Let’s test GetData, the method defined earlier that returns its result after a random amount of time.

func Test_SystemUnderTest_GetData(t *testing.T) {
	sut := SystemUnderTest{}

	timeoutTicker := time.NewTicker(5 * time.Second)
	defer timeoutTicker.Stop()
	result := make(chan string)

	// Get result when ready
	go func() {
		result <- sut.GetData()
	}()

	select {
	case <-timeoutTicker.C:
		t.Fatal("timeout reached")
	case actual := <-result:
		if actual != "the_data" {
			t.Fatalf("Data not equal, want: %s, got %s", "the_data", actual)
		}
	}
}

With a ticker and select, we avoided writing the loop ourselves.
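
A side note on the design: a ticker fires repeatedly and has to be stopped, while here we only need the timeout to fire once, so the one-shot time.After is arguably a better fit. A sketch of the same select using it:

	select {
	case <-time.After(5 * time.Second):
		t.Fatal("timeout reached")
	case actual := <-result:
		// assert on actual as before
	}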

In other cases you may need to test HTTP calls against the local machine, or use some other library that blocks. Look for timeout options.

Go’s HTTP client, for example, lets you specify a timeout that applies to every call you make:

	// Requests made with this client are aborted if they take longer than 50ms.
	client := http.Client{
		Timeout: 50 * time.Millisecond,
	}
	response, err := client.Get("http://localhost:9999/metrics")
	...
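
If you need the deadline per request rather than per client, the standard library also lets you attach a context with a timeout to an individual request. A minimal sketch (the URL is just a placeholder):

	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:9999/metrics", nil)
	if err != nil {
		t.Fatal(err)
	}
	// The call is aborted automatically once the context's deadline passes.
	response, err := http.DefaultClient.Do(req)
	...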

In Conclusion

Avoid the use of sleep in tests: poll for the result instead, or check whether the blocking functions you call take parameters or can be configured to stop execution after a timeout period.

Thanks for reading and I hope you’ve enjoyed this article! 🍻

Pytest Fixtures and Yield

Hi 👋

In this short article I want to explain the use of the yield keyword in pytest fixtures.

What is pytest?

Pytest is a feature-rich Python framework for writing tests. It has lots of advanced features and supports plugins. Many projects prefer pytest over Python’s built-in unittest library.

What is a fixture?

A test fixture is a piece of code that sets up ("fixes") some common functionality required by the unit tests. This functionality can be:

  • a connection to the database
  • a testing HTTP server or client
  • creation of a complex object

You can read more about test fixtures on Wikipedia.
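
As a quick illustration, a minimal fixture might hand each test an in-memory SQLite connection (the names here are illustrative, not from pytest’s documentation):

import sqlite3

import pytest


@pytest.fixture()
def db_connection():
    # Each test gets a fresh in-memory database.
    return sqlite3.connect(":memory:")


def test_select_one(db_connection):
    assert db_connection.execute("SELECT 1").fetchone() == (1,)

Pytest matches the test’s db_connection parameter to the fixture of the same name and calls the fixture to produce the argument.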

What does the yield keyword do?

In Python, the yield keyword is used for writing generator functions. In pytest, yield can be used to run finalization (clean-up) code after the fixture has been used. Pytest’s documentation states the following:

“Yield” fixtures yield instead of return. With these fixtures, we can run some code and pass an object back to the requesting fixture/test, just like with the other fixtures.

https://docs.pytest.org/en/6.2.x/fixture.html

An example fixture could be the following:

@pytest.fixture()
def my_object_fixture():
    print("1. fixture code.")
    yield MyObjectThatRequiresCleanUp()
    print("4. fixture code after yield.")

Running a sample test which uses the fixture (the test file is listed at the end of this article) will output:

collected 1 item                                                                                                                                                                       

tests\test_my_object.py 1. fixture code.
2. Initializing MyObjectThatRequiresCleanUp
3. test code.
.4. fixture code after yield.

Running the same test but with the fixture my_object_fixture2 (also listed at the end) will output:

tests\test_my_object.py 1. fixture code.
2. Initializing MyObjectThatRequiresCleanUp
2.1 Entering
3. test code.
.3.1 Exiting
Clean exit
4. fixture code after yield.

I hope these examples successfully illustrate the order in which the test and fixture code runs.

To run the tests, I used pytest --capture=tee-sys . in the project root. The file contents are attached at the end of this article. The --capture parameter captures and prints the tests’ stdout: by default pytest only shows the stdout of failed tests, and since our example test always passes, the parameter is required to see the prints.

Conclusion

Pytest is a Python testing framework with lots of features that scales well to large projects.

A test fixture is a piece of code that fixes the test environment, for example a database connection or an object that requires a specific set of parameters when built. Instead of duplicating that code in every test, moving the object’s creation into a fixture makes the tests easier to write and maintain.

yield is a Python keyword, and when used in conjunction with pytest fixtures it gives you a nice Pythonic way of cleaning up fixtures.

Thanks for reading! 📚


Contents of the Pytest fixtures placed in tests/__init__.py

import pytest

from my_object import MyObjectThatRequiresCleanUp


@pytest.fixture()
def my_object_fixture():
    print("1. fixture code.")
    yield MyObjectThatRequiresCleanUp()
    print("4. fixture code after yield.")


@pytest.fixture()
def my_object_fixture2():
    print("1. fixture code.")
    with MyObjectThatRequiresCleanUp() as obj:
        yield obj
    print("4. fixture code after yield.")

Contents of my_object.py

class MyObjectThatRequiresCleanUp:
    def __init__(self):
        print("2. Initializing MyObjectThatRequiresCleanUp")

    def __enter__(self):
        print("2.1 Entering")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("3.1 Exiting")
        if exc_type is None:
            print("Clean exit")
        else:
            print("Exception occurred: {}".format(exc_type))

Contents of test_my_object.py placed in tests/test_my_object.py

from tests import my_object_fixture


def test_my_object(my_object_fixture):
    print("3. test code.")

Testing Python projects with Tox

Hi 👋

In this article I will show you how to test your Python projects with Tox.

Introduction

Tox is a tool for automating testing in Python; its vision is to standardize the testing process. It can be used to easily test your project with multiple Python interpreters and to run various commands.

Getting Started

To get started, all you need to add to your project is a tox.ini file. To simplify running the tests we will make use of the following Dockerfile, which contains the Python 3.6 and 3.7 interpreters:

FROM ubuntu:20.04

RUN apt update && apt install -y software-properties-common \
               && add-apt-repository ppa:deadsnakes/ppa \
               && apt install -y python3.6 && apt install -y python3.7 \
               && apt install -y python3-pip && pip3 install tox

VOLUME /code

WORKDIR /code
# Exec form, so that arguments given to docker run are forwarded to tox.
ENTRYPOINT ["tox"]

A tox.ini file which tests using Python 3.6 and Python 3.7 looks like this:

# content of: tox.ini , put in same dir as setup.py
[tox]
skip_missing_interpreters = True
envlist = py36,py37

[testenv]
# install pytest in the virtualenv where commands will be executed
deps =
    pytest==6.2.1
    pytest-cov==2.11.1
    responses==0.13.3
commands =
    # NOTE: you can run any command line tool here – not just tests
    pytest

[testenv:bamboo]
commands =
  pytest --junitxml=results.xml \
    --cov=your-module --cov-config=tox.ini --cov-report=xml
    coverage2clover -i coverage.xml -o clover.xml
deps =
    {[testenv]deps}
    coverage2clover

We have two environments, testenv and testenv:bamboo, the latter being used for coverage reporting in Bamboo using Clover. To run Tox with a specific environment, you’d type tox -e bamboo.

To run the tests via the Dockerfile, first build the Docker image using: docker build . -f Dockerfile -t tox

Then, run the container with docker run -v "$(pwd)":"/code" tox -e bamboo to test with the Bamboo environment, or just docker run -v "$(pwd)":"/code" tox for the default environments.

Practical Example

Here’s an example that you can use to follow along. We have the following files:

@denis ➜ tox_article ls
__pycache__  tests.py  tox.ini
@denis ➜ tox_article cat tests.py
import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
@denis ➜ tox_article cat tox.ini
[tox]
skip_missing_interpreters = True
envlist = py36,py37
skipsdist = True

[testenv]
commands =
    python -m unittest

Running Tox in our Docker image will yield the following output:

@denis ➜ tox_article docker run -v "$(pwd)":"/code" tox
py36 create: /code/.tox/py36
py36 run-test-pre: PYTHONHASHSEED='520882151'
py36 run-test: commands[0] | python -m unittest
...
----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK
py37 create: /code/.tox/py37
py37 run-test-pre: PYTHONHASHSEED='520882151'
py37 run-test: commands[0] | python -m unittest
...
----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK
___________________________________ summary ____________________________________
  py36: commands succeeded
  py37: commands succeeded
  congratulations 🙂

The same tests are run twice: first with Python 3.6, then with Python 3.7.

Thanks for reading and happy testing! 🔧