
"Syncing" unit-tests for different layers [Resolved]

Consider unit tests for two (or more) consecutive layers in a web-application backend, e.g. views (concerned with parsing form parameters and rendering the response) and actions (application logic). In the view tests, calls to actions are mocked; in the action tests, calls to the underlying layer are mocked. The tests are fast, but less useful than they could be, because the connection between layers needs syncing, and integration tests are slow and rarer (in theory). "Syncing" means that the mock and the mocked functions/methods stay in sync. Otherwise we are testing a mock, which can easily lose touch with reality if not updated, and the power of the unit tests is diminished.

The question is: has anyone already come up with an idea for keeping upper-layer mocks in sync with lower-layer "interfaces", with the goal of making tests more maintainable while still fast? Less stable interfaces are of particular interest, of course.

I have always preferred functional, more integration-like tests, because they tend to catch more regressions and bugs than mocked, isolated unit tests. In some cases the application logic is just a simple transaction script with N actions, so mocking provides no benefit (the unit test follows the implementation too closely). However, such tests are slow, so I am trying to come up with some mechanism, and maybe reinventing the wheel.

Especially in higher-level dynamic languages, like Python.

An example:

function my_script():
    callA()
    callB()
    callC()

The tests will look like this (pseudocode):

monkeypatch(callA, dummyA)
monkeypatch(callB, dummyB)
monkeypatch(callC, dummyC)

my_script()

assert dummyB.called_after(dummyA)
assert dummyC.called_after(dummyB)
  • This will not catch a callD() added to my_script, nor side effects inside those functions, nor even changes in their signatures (let's say callB changes to require a parameter). The test will still pass. In such cases the unit test is useless.
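In Python specifically, `unittest.mock.create_autospec` (or `patch(..., autospec=True)`) addresses part of this drift: the dummy copies the real function's signature, so the signature change described above breaks the unit test instead of passing silently. A minimal sketch, where `callA`/`callB`/`callC`/`my_script` are hypothetical stand-ins for the example above:

```python
from unittest import mock

# Hypothetical lower-layer functions and the script under test.
def callA(): ...
def callB(): ...
def callC(): ...

def my_script():
    callA()
    callB()
    callC()

def run_ordering_test():
    """Replace the real functions with autospec'd dummies, check call order."""
    global callA, callB, callC
    real = (callA, callB, callC)
    order = []
    # create_autospec copies each real signature onto the dummy, so if callB
    # later changes to require a parameter, my_script()'s zero-argument call
    # raises TypeError here instead of passing silently.
    callA = mock.create_autospec(callA, side_effect=lambda: order.append("A"))
    callB = mock.create_autospec(callB, side_effect=lambda: order.append("B"))
    callC = mock.create_autospec(callC, side_effect=lambda: order.append("C"))
    try:
        my_script()
    finally:
        callA, callB, callC = real
    return order

assert run_ordering_test() == ["A", "B", "C"]
```

This still will not catch a missing callD() or hidden side effects, but it removes one whole class of mock/reality divergence for free.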

Question Credit: Roman Susi
Question Reference
Asked October 23, 2018
Posted Under: Programming
1 Answer

I really like to use a combination of fakes and tests that run against both the real and the fake implementation. This is useful when faking a database or other APIs.

This is how testing against a DB Api would look:

[diagram: business-logic tests exercise the business logic against a fake Database Api; Database Api tests run the same suite against both the fake and the real implementation]

In this scenario, some business logic needs a database. What it needs from the database is expressed in the Database Api. The business-logic tests exercise the business logic against a fake in-memory database, possibly just lists of structures. This keeps the business-logic tests fast and isolated, even when there are many of them. The second set is the Database Api tests. This set runs the same tests twice: once against the fake implementation and once against the real production implementation. This ensures that the behavior of the two implementations is the same. Of course the real implementation will persist the data properly, while the in-memory one is local to a single use case.

The big advantage of this design is that the business-logic tests run as fast unit tests, so there can be many of them and they can exercise complex logic without touching the slow, real implementation. Meanwhile, the Database Api tests can just test specific queries without worrying about the business logic.
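A sketch of this pattern in Python (the names `FakeUserStore` and `SqliteUserStore` are illustrative, and SQLite stands in for the production database): one contract-test function is run against both implementations, so the fake cannot silently drift from the real behavior.

```python
import sqlite3

class FakeUserStore:
    """In-memory fake: just a dict. Used by the fast business-logic tests."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)

class SqliteUserStore:
    """'Real' implementation backed by SQLite (stand-in for production)."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, user_id, name):
        self.conn.execute(
            "INSERT OR REPLACE INTO users VALUES (?, ?)", (user_id, name)
        )

    def get(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

def check_store_contract(store):
    """The Database Api tests: identical assertions for fake and real."""
    assert store.get(1) is None
    store.add(1, "alice")
    assert store.get(1) == "alice"

# Run the same contract against both implementations.
check_store_contract(FakeUserStore())
check_store_contract(SqliteUserStore(sqlite3.connect(":memory:")))
```

With pytest, the same effect is usually achieved by parametrizing a fixture over the two store factories, so every contract test automatically runs against both.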

credit: Euphoric
Answered October 23, 2018