When using SQL for more than just CRUD, it can be worth adding unit tests for complex queries. These tests need well-crafted test data so that the test query returns the expected result.
The test data is often created in a set-up procedure before the test and removed in a tear-down procedure after the test. With this approach, test cases can interfere with each other: if they run in parallel, or if the tear-down procedure fails to clean up.
Designing the tests to run on transient data that will automatically vanish after the test is one way to cope with these issues.
Option 1: Open Transactions
The easiest and most flexible solution is to do the set-up and testing in a single open transaction, which is rolled back in tear-down.
This approach has many advantages:
Relatively easy to implement
The essential part is to use the same database session for the set-up, test, and tear-down functions. Furthermore, unintended commits must be prevented, especially commits issued by the tested code itself. Mocking the database connectivity can help to capture these commits. However, client-side code cannot capture implicit commits done by the database, e.g., due to the execution of DDL statements.
Complete procedures can be tested
The only limit to the complexity of the test case is that it must fit into a single transaction.
Parallel testing is possible
The test data is only visible within the open transaction and thus cannot affect any other activity in the database. Constraints can still cause delays, however, e.g., if two test cases insert the same primary key value.
The parallel execution may even help uncover poorly chosen transaction isolation levels.
Performance tests are possible
The tested code itself is unchanged and can be used for performance analysis and stress testing.
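The open-transaction approach can be sketched with Python's sqlite3 module. The cart table, the in-memory database, and the helper name run_query_test are illustrative assumptions, not part of the original text; real tests would connect to the target database.

```python
import sqlite3

def run_query_test(conn):
    """Set-up, test, and tear-down share one session and one open transaction."""
    conn.execute("BEGIN")                            # open the transaction
    conn.execute("INSERT INTO cart VALUES (1, 2)")   # set-up: create test data
    # the tested query runs against the uncommitted test data
    total = conn.execute("SELECT SUM(qty) FROM cart").fetchone()[0]
    conn.rollback()                                  # tear-down: the data vanishes
    return total

# hypothetical schema for illustration only
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit mode: the transaction is managed explicitly
conn.execute("CREATE TABLE cart (product_id INTEGER, qty INTEGER)")
```

After run_query_test returns, the cart table is empty again, so the test is repeatable and cannot leak data into other tests, even if an assertion fails before the explicit rollback would normally run.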
Option 2: Hijacking Table Accesses
This method combines two elements: (1) the table accesses of a query are “hijacked” by with queries of the same name; (2) the with queries use the values clause to generate the test data “out of nothing”.
WITH cart (product_id, qty) AS (
    VALUES (1, 2)
)
SELECT ...
  FROM cart ...
In this example, a with clause was put in front of the query to be tested. The essential detail is that the with clause overloads the table name cart, which is used by the test query. Note that this overloading does not work if the table is schema-qualified or accessed through a view (note: Compatibility).
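The overloading can be demonstrated with SQLite, which also resolves a with query before a table of the same name. The table contents and row values here are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# the real table holds production-like data
conn.execute("CREATE TABLE cart (product_id INTEGER, qty INTEGER)")
conn.execute("INSERT INTO cart VALUES (99, 100)")

# the tested query, completely unchanged
test_query = "SELECT SUM(qty) FROM cart"

# prefixing a with query of the same name overloads the table
hijacked = "WITH cart (product_id, qty) AS (VALUES (1, 2)) " + test_query

real = conn.execute(test_query).fetchone()[0]  # reads the real table
fake = conn.execute(hijacked).fetchone()[0]    # reads the generated test data
```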
If the test query itself uses a with clause, e.g., because it is literate SQL, it can be wrapped to provide the test data in an outer query:
WITH cart (product_id, qty) AS (
    VALUES (1, 2)
)
SELECT *
  FROM (WITH ...
        SELECT ... FROM cart ...
       ) test_query
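A sketch of the wrapped form, again using SQLite; the inner totals query is a made-up stand-in for a test query that has its own with clause.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cart (product_id INTEGER, qty INTEGER)")

# the tested query already has a with clause of its own
test_query = """
    WITH totals (product_id, total) AS (
        SELECT product_id, SUM(qty) FROM cart GROUP BY product_id
    )
    SELECT MAX(total) FROM totals
"""

# wrapping it as a subquery lets an outer with clause still overload cart
wrapped = (
    "WITH cart (product_id, qty) AS (VALUES (1, 2), (1, 3)) "
    "SELECT * FROM (" + test_query + ") test_query"
)
result = conn.execute(wrapped).fetchone()[0]
```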
The test data is created by the values clause. If the target database does not support values outside of insert statements, the conforming or proprietary alternatives for the from clause can be used instead.
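One conforming alternative is a union all of single-row selects, which works on databases that accept values only inside insert. It is sketched here with SQLite, where both forms happen to work and return the same result; the row values are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# test data generated by a bare values clause
with_values = (
    "WITH cart (product_id, qty) AS (VALUES (1, 2), (2, 5)) "
    "SELECT SUM(qty) FROM cart"
)

# the same test data via union all of single-row selects
with_union = (
    "WITH cart (product_id, qty) AS ("
    " SELECT 1, 2 UNION ALL SELECT 2, 5"
    ") SELECT SUM(qty) FROM cart"
)

a = conn.execute(with_values).fetchone()[0]
b = conn.execute(with_union).fetchone()[0]
```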
Speed is the main advantage of this method: a single database call combines the test data with the query. Tests can run in parallel without influencing each other, not even through lock contention.
The striking disadvantage is that it only works for functional testing of single queries (not processes). Conclusions about non-functional aspects like performance are impossible. Even the implementation is very unconventional: the query string is prefixed by the with clause or even wrapped into a subquery.