DEPT® Engineering Blog

Setting up Integration Tests for a Django backend & React frontend


If you've worked on a Django project with a React (or similar) frontend, you'll know you can't take advantage of Django's LiveServerTestCase for running integration tests. LiveServerTestCase would let you drive the app with automated test clients such as Selenium, but in a setup with a React frontend, Django isn't responsible for the frontend and doesn't know how to start the frontend process.

This causes the tests to fail due to there being no frontend to interact with.

To fix the failing tests we need the frontend running while the integration tests run. There are a number of ways to solve this, from manually starting the frontend to building a management command to handle the process. We'll create a custom TestRunner instead, so developers can keep using the ./manage.py test command they already know: no separate management command, and no process to start manually before running tests. A custom TestRunner also makes it easy to add extra functionality to streamline the testing process.

The CustomTestRunner will inherit from DiscoverRunner. At a minimum we'll need to define two methods: setup_test_environment and teardown_test_environment. These methods will be responsible for starting and stopping the frontend process.

import os

from django.conf import settings
from django.test.runner import DiscoverRunner


class CustomTestRunner(DiscoverRunner):
    def setup_test_environment(self, **kwargs):
        # Kill any leftover frontend dev server from a previous run
        os.system(
            "kill $(ps aux | grep '[y]arn.js run dev -p %s' | awk '{print $2}')"
            % settings.TEST_FRONTEND_PORT
        )

        # Start the frontend in the background, pointed at the test backend
        os.system(
            f"API_BASE_URL=http://{settings.TEST_HOST}:{settings.TEST_BACKEND_PORT} "
            f"nohup yarn run dev -p {settings.TEST_FRONTEND_PORT} &"
        )
        # Tell the backend (this process) where the frontend will be running
        os.environ["FRONTEND_URL"] = f"http://{settings.TEST_HOST}:{settings.TEST_FRONTEND_PORT}"

        DiscoverRunner.setup_test_environment(self, **kwargs)

    def teardown_test_environment(self, **kwargs):
        # Bring the frontend back down once the tests have finished
        os.system(
            "kill $(ps aux | grep '[y]arn.js run dev -p %s' | awk '{print $2}')"
            % settings.TEST_FRONTEND_PORT
        )

        DiscoverRunner.teardown_test_environment(self, **kwargs)

Setup Test Environment

setup_test_environment is responsible for bringing up the frontend environment. However, before starting the frontend process we need to make sure it isn't already running, so we run a kill command first.

os.system("kill $(ps aux | grep '[y]arn.js run dev -p %s' | awk '{print $2}')" % settings.TEST_FRONTEND_PORT)

Depending on how you're running your frontend you may need to update the grep string. The [] in the grep string ensures we match the actual frontend process and not the grep command itself. This command and the ones that follow also pass ports specific to testing; we'll touch on that in the next step.
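The bracket trick is easy to demonstrate with Python's re module against some hypothetical ps output (the PIDs and process lines below are made up):

```python
import re

# Hypothetical `ps aux` output when grepping for the plain pattern:
# the real dev server plus the grep command itself
plain_output = [
    "dev  1234  node yarn.js run dev -p 3001",
    "dev  5678  grep yarn.js run dev -p 3001",
]
# Without brackets, the pattern matches grep's own command line too
plain = [line for line in plain_output if re.search(r"yarn\.js run dev", line)]

# With brackets, the regex still matches the literal text "yarn.js run dev",
# but grep's own line now reads "[y]arn.js run dev", which doesn't contain it
bracketed_output = [
    "dev  1234  node yarn.js run dev -p 3001",
    "dev  5678  grep [y]arn.js run dev -p 3001",
]
bracketed = [line for line in bracketed_output if re.search(r"[y]arn\.js run dev", line)]

print(len(plain), len(bracketed))  # 2 1
```

So the bracketed pattern leaves only the real yarn process for awk to pick the PID from.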

Now that we've made sure the frontend isn't running, we can bring it up.

os.system(f"API_BASE_URL=http://{settings.TEST_HOST}:{settings.TEST_BACKEND_PORT} nohup yarn run dev -p {settings.TEST_FRONTEND_PORT} &")

To do this we use nohup, which keeps the process running in the background. As with the grep command, you may need to adjust this depending on your frontend. As part of bringing up the process we also set the API_BASE_URL environment variable and pass the frontend port via -p.
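One shell subtlety to watch for here: a prefix assignment like VAR=value cmd puts the variable into cmd's environment, while VAR=value && cmd only sets an unexported shell variable that cmd never sees. A quick check (assuming a POSIX sh with printenv available):

```python
import subprocess

def sh(cmd):
    # Run a snippet under POSIX sh and return its stdout
    return subprocess.run(["sh", "-c", cmd], capture_output=True, text=True).stdout.strip()

# Prefix assignment: the variable lands in the child command's environment
prefix = sh("API_BASE_URL=http://localhost:8001 printenv API_BASE_URL")

# Plain assignment followed by &&: the variable is never exported to the child
chained = sh("API_BASE_URL=http://localhost:8001 && printenv API_BASE_URL")

print(repr(prefix), repr(chained))  # 'http://localhost:8001' ''
```

This is why the yarn command uses the prefix form: the dev server actually inherits API_BASE_URL.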

The API_BASE_URL is set so the frontend knows the host and port of the backend. We also set the port the frontend will run on using TEST_FRONTEND_PORT. This is done so the test environment can come up without conflicting with anyone's development environment. We'll also need to tell the backend where the frontend lives; since the backend is the current process, we set FRONTEND_URL directly in its environment:

os.environ["FRONTEND_URL"] = f"http://{settings.TEST_HOST}:{settings.TEST_FRONTEND_PORT}"

The last piece of setup_test_environment is to call the base class setup_test_environment.

DiscoverRunner.setup_test_environment(self, **kwargs)

Teardown Test Environment

The other method that needs to be defined in our TestRunner is teardown_test_environment, which is responsible for bringing down the frontend. To accomplish this we reuse the kill command from the start of setup.

os.system("kill $(ps aux | grep '[y]arn.js run dev -p %s' | awk '{print $2}')" % settings.TEST_FRONTEND_PORT)

As with setup_test_environment, we'll call the base class's teardown_test_environment:

DiscoverRunner.teardown_test_environment(self, **kwargs)

Now that CustomTestRunner has been defined we can tell Django to use it. In your settings file add TEST_RUNNER = "<path to test runner>.CustomTestRunner". Now when someone runs ./manage.py test, the frontend environment will be brought up and torn down as part of the testing process.
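For reference, the settings the runner relies on might look like this (the module path and port numbers are illustrative; adjust them to your project):

```python
# settings.py
TEST_RUNNER = "myproject.test_runner.CustomTestRunner"  # dotted path to the class above

# Host and ports used only while tests run, so they don't
# clash with anyone's development servers
TEST_HOST = "localhost"
TEST_BACKEND_PORT = 8001
TEST_FRONTEND_PORT = 3001
```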

Including and excluding integration tests

Now that we have a custom TestRunner there is additional functionality we can add to improve the testing process. Integration tests are an important part of a test suite but they can be slow. Because of this, there may be times when we don't want to run them. There are also times when all that needs to be run are the integration tests. We'll look at adding two keyword arguments to our new CustomTestRunner: --only-integration-tests and --exclude-integration-tests to handle these use cases.

To add new keyword arguments we'll need to define the classmethod add_arguments. We'll also need to define the __init__ method to handle the keyword arguments when they're passed.

@classmethod
def add_arguments(cls, parser):
    parser.add_argument(
        "--exclude-integration-tests",
        action="store_true",
        help="Exclude integration tests.",
    )

    parser.add_argument(
        "--only-integration-tests",
        action="store_true",
        help="Runs only integration tests. Supersedes --exclude-integration-tests.",
    )

    DiscoverRunner.add_arguments(parser)

Both keywords use an action of store_true, which sets their parameter to True when the flag is passed.
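You can see the store_true behavior, and argparse's dash-to-underscore conversion (which is why the runner reads only_integration_tests rather than only-integration-tests), with a small standalone sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--exclude-integration-tests", action="store_true")
parser.add_argument("--only-integration-tests", action="store_true")

# Dashes in flag names become underscores in the parsed option names,
# and store_true flags default to False when not passed
opts = vars(parser.parse_args(["--only-integration-tests"]))
print(opts["only_integration_tests"], opts["exclude_integration_tests"])  # True False
```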

def __init__(self, *args, **kwargs):
    self.kwargs = kwargs
    self.frontend_needed = True

    if self.kwargs.get("only_integration_tests"):
        self.add_tag("integration")
        self.remove_exclude_tag("integration")
        self.frontend_needed = True
    elif self.kwargs.get("exclude_integration_tests"):
        self.remove_tag("integration")
        self.add_exclude_tag("integration")
        self.frontend_needed = False
    elif self.has_tag("integration"):
        # Enable the frontend if the tag was passed manually via --tag
        self.frontend_needed = True

    super().__init__(*args, **self.kwargs)

def add_exclude_tag(self, tag):
    self.append_kwarg_list("exclude_tags", tag)

def remove_tag(self, tag):
    if self.kwargs.get("tags") and tag in self.kwargs["tags"]:
        self.kwargs["tags"].remove(tag)

def add_tag(self, tag):
    self.append_kwarg_list("tags", tag)

def remove_exclude_tag(self, tag):
    if self.kwargs.get("exclude_tags") and tag in self.kwargs["exclude_tags"]:
        self.kwargs["exclude_tags"].remove(tag)

def has_tag(self, tag):
    # Checks whether a tag was passed manually (e.g. via --tag)
    return bool(self.kwargs.get("tags")) and tag in self.kwargs["tags"]

def append_kwarg_list(self, key, value):
    # Append to a list-valued kwarg, creating the list if needed
    if not self.kwargs.get(key):
        self.kwargs[key] = []
    if value not in self.kwargs[key]:
        self.kwargs[key].append(value)

In the __init__ method we check whether the new keyword arguments have been passed. As the keywords are mutually exclusive, we give only_integration_tests priority over exclude_integration_tests.

For only_integration_tests we'll add integration to the tags list as well as remove it from the exclude tags. We need to remove integration from the exclude tags as it's possible for integration to be added manually. The last step is to set self.frontend_needed to True. This will allow us to know in setup_test_environment and teardown_test_environment if the frontend should be loaded.
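To make the mutation concrete, here's the only_integration_tests branch run against a plain dict, simulating a run like ./manage.py test --only-integration-tests --exclude-tag integration (no Django required; the starting kwargs are illustrative):

```python
# kwargs roughly as argparse might hand them over for the run described above
kwargs = {"tags": None, "exclude_tags": ["integration"], "only_integration_tests": True}

if kwargs.get("only_integration_tests"):
    # add_tag("integration"): append to the tags list, creating it if needed
    kwargs["tags"] = (kwargs.get("tags") or []) + ["integration"]
    # remove_exclude_tag("integration"): drop the tag from the exclude list
    kwargs["exclude_tags"] = [t for t in (kwargs.get("exclude_tags") or []) if t != "integration"]

print(kwargs["tags"], kwargs["exclude_tags"])  # ['integration'] []
```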

For exclude_integration_tests we'll do the opposite, adding integration to the exclude tags list and removing it from the tags list. We'll then set self.frontend_needed to False as the frontend won't need to be run.

With these new keywords, we can now run ./manage.py test --exclude-integration-tests and ./manage.py test --only-integration-tests.

If, in your use case, you want to exclude integration tests most of the time, you can make exclusion the default and add a keyword argument to include them when needed. Another option is to add a slow tag along with keyword arguments to include and exclude those tests as needed. Ultimately, a custom TestRunner gives you flexibility in managing and running your tests going forward.