Unit Testing Large And Complex Data
This week I’ve been helping some developers with unit testing adoption, which raised an interesting topic that I’ve not seen explicitly addressed.
Unit testing is great when you are working with simple data, but what if you have larger or more complex data such as waveforms or images?
I’ve used a couple of techniques over the years:
Simplify
Maybe this is an obvious one, but the first option is to identify whether the method is still applicable to a subset of the real data.
For example, in an application where I’m doing some image processing, the real data will be 256×256 pixels. However, my tests run on 3×3 images.
This is still representative of the larger image: many algorithms involve handling the edge conditions at the boundary plus the normal implementation in the middle. The larger arrays simply contain more of the normal case, while the smaller tests still cover the edge conditions (which is often where things go wrong!).
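Here is a minimal sketch of the idea in Python (the original work is in LabVIEW, so the box-blur function and the 3×3 test here are illustrative stand-ins, not the actual algorithm):

```python
import numpy as np

# Hypothetical function under test: a 3x3 box blur that must
# handle the image edges (here by clamping to the border pixel).
def box_blur(image: np.ndarray) -> np.ndarray:
    padded = np.pad(image, 1, mode="edge")  # clamped edge handling
    out = np.zeros_like(image, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + image.shape[0],
                          1 + dx : 1 + dx + image.shape[1]]
    return out / 9.0

def test_box_blur_on_tiny_image():
    # A single bright pixel in one corner of a 3x3 image.
    image = np.array([[9.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])
    result = box_blur(image)
    # Centre pixel sees the 9.0 once in its 3x3 neighbourhood.
    assert np.isclose(result[1, 1], 1.0)
    # Top-left corner: clamped padding repeats the 9.0 value,
    # so 4 of its 9 samples are 9.0 -> 36 / 9 = 4.0.
    assert np.isclose(result[0, 0], 4.0)
```

Every pixel of a 3×3 image except the centre is an edge or corner case, so the tiny test exercises exactly the code paths most likely to be wrong.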
Generate
In some cases, you need a full-size input set, but you are only measuring certain properties of the result.
An example might be a frequency-processing function where we want to extract the size of the peak at 1 kHz. FFT behaviour changes significantly with the size of the input data, so we really want to test at the actual size we expect in the application. So what I have done in the past is write a basic routine to generate an appropriate signal at full size.
In the example above, I use generators to produce a multitone signal, perform the frequency analysis and manipulation that I am testing, and then compare just the components of interest.
(Note: this is before I got a bit more structured with my testing!)
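As a rough Python sketch of that generate-and-check approach (the original example is a LabVIEW VI; the sample rate, record length, and function names here are all assumptions):

```python
import numpy as np

FS = 51200  # assumed sample rate in Hz
N = 51200   # full-size record, as the application would use

def multitone(freqs_amps, fs=FS, n=N):
    """Deterministically generate a sum of sine waves."""
    t = np.arange(n) / fs
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in freqs_amps)

def peak_amplitude(signal, freq, fs=FS):
    """Hypothetical stand-in for the function under test:
    return the amplitude of the spectral component at `freq`."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    bin_index = int(round(freq * len(signal) / fs))
    return spectrum[bin_index]

def test_peak_at_1khz():
    # Generate a full-size input with known components...
    signal = multitone([(1000, 1.0), (3000, 0.25)])
    # ...but compare only the component of interest.
    assert np.isclose(peak_amplitude(signal, 1000), 1.0, atol=1e-6)
```

Because the signal is generated at the full application size, the FFT bin spacing matches what the real system will see.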
Credit to Piotr Demski at Sparkflow for pointing out an important point I missed when I first published this. If you are generating data, it should be the same every time, i.e. beware of random data sources: your tests may fail without anything having changed.
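If you do genuinely need a noise component, pinning the seed keeps the generated data identical on every run; a minimal numpy example (the seed and parameters are arbitrary):

```python
import numpy as np

# A fixed seed makes the "random" noise reproducible, so a test
# failure always means something actually changed.
rng = np.random.default_rng(seed=42)
noise = rng.normal(0.0, 0.1, size=1024)
```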
Golden Data
The approaches above may not work if it isn’t obvious how to generate the data, or if you can’t generate the important elements easily. They also only help on the input side of the test, but what if you need to compare a large result?
Here I will revert to storing some reference data as a constant, normally captured by running the system, putting a probe on the data, and copying it into my test.
On the input side, this works without any compromise.
For expected outputs there is an obvious catch: if you generate the expected data from the VI under test, the test will of course pass. So if you have no way of producing an expected output independently, we have to compromise.
Instead, we can write the algorithm until it works (validated manually) and capture that output for the test. The test then doesn’t guarantee the result is correct, but it will fail if anything alters the output, which gives you a chance to catch potential regressions.
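A sketch of what such a golden-data test can look like in Python (the file names, tolerance, and module under test are all assumptions; in LabVIEW the captured data would come from a probe):

```python
import numpy as np
from my_dsp import algorithm_under_test  # hypothetical module under test

def test_matches_golden_output():
    # Input and expected output captured once from a manually
    # validated run of the system (file names are illustrative).
    input_data = np.load("tests/golden_input.npy")
    expected = np.load("tests/golden_output.npy")

    result = algorithm_under_test(input_data)

    # Tolerant comparison: bit-exact equality is fragile for
    # floating-point pipelines.
    np.testing.assert_allclose(result, expected, rtol=1e-9)
```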
Another path is to use a known-good algorithm (perhaps from another language) to capture the expected output in a similar way. This is fantastic when you are looking to replace or replicate legacy systems, since you can take data directly from them.
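For example, a one-off capture script might use a trusted library implementation as the reference. Here is a sketch using scipy as the stand-in known-good algorithm (the filter design and file names are purely illustrative):

```python
import numpy as np
from scipy import signal  # trusted reference implementation

# One-off script: capture the expected output for the golden-data
# test above from a known-good algorithm.
input_data = np.load("tests/golden_input.npy")
b, a = signal.butter(4, 0.125)            # assumed filter design
expected = signal.lfilter(b, a, input_data)
np.save("tests/golden_output.npy", expected)
```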
Catch It At Different Levels
In some cases, it may simply not be viable to create a unit test for the algorithm. That’s OK – 100% coverage isn’t a useful goal in most cases anyway. Instead, consider:
- Try to unit test the components inside the algorithm (depending on the structure of your code). This will provide some confidence.
- Make sure the algorithm is verified in system- or integration-level tests. Ideally, find ways to make it fail loudly so problems aren’t likely to be missed (a minimal sketch follows this list).
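One way to "fail loudly", sketched in Python with a hypothetical algorithm wrapped at the system boundary:

```python
import numpy as np
from my_dsp import complex_algorithm  # hypothetical

def process_record(record: np.ndarray) -> np.ndarray:
    result = complex_algorithm(record)
    # Fail loudly: a silent NaN or an empty result at system level
    # would otherwise be easy to miss during integration testing.
    if result.size == 0 or np.isnan(result).any():
        raise ValueError("complex_algorithm produced invalid output")
    return result
```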
I hope that gives you some ideas for your use case. Feel free to post any questions you have.
Fabiola De la Cueva, December 12, 2019:
Excellent article as usual. If you haven’t done so already, please link to this article from the unit testing group at the NI forums.
I do want to add, as I did in the LinkedIn discussion: if for whatever reason you decide you want a unit test with random data, make sure the random data is recorded somehow in the result, so you know what data made the test fail. Add that data to a different unit test and modify your code to make it pass. I stay away from random data; in the years I have advocated for unit tests, there has only been one situation where it made sense to use it.
We cannot know every application out there, so rather than saying never, I would rather say avoid it, and explain what to do if you decide to do it.