How We Unit Test LabVIEW Code

So if you have been following me for a while, you will know that one of my New Year's resolutions was to really get to grips with unit testing LabVIEW code.
It has been a successful year for this, and I thought I would share my results so far.

Result #1 – Always Unit Testing

The immediate question is whether it is worth it, and my experience says it is. There is a time investment, that much is obvious, but I have seen many benefits:
  • Better development process – taking a component in isolation and testing it thoroughly on its own means less time running full applications and clicking buttons in the right order to recreate the scenario that exercises your latest change. That means more time writing code and less time debugging and testing (as well as less code coupling).
  • Higher confidence – when you have a good suite of tests around your code that you can run in an instant, you are more confident you can change it without causing bugs, which looks good to customers and feels great. It also encourages refactoring, as it becomes a low-risk activity.
  • Smoother deployments – compared to last year my commissioning visits have been far smoother, with fewer issues, and those that do come up are easier to narrow down.
This is and will continue to be core to how we develop software at Wiresmith Technology.

Result #2 – JKI VI Tester and LabVIEW OO are the Tools of Choice

As late as May I was trying to make NI's Unit Test Framework (UTF) work for me. I like the concepts, and having the option of coverage testing is great, but the usability sucks, so I transitioned to VI Tester as the framework of choice.
VI Tester generates a lot of code as you go! But it lets you write very specific tests in whatever style you like, and it follows standard testing conventions. My only concern is that it is unclear what the state of development is and whether it will continue. For example, I would love to see “Before” and “BeforeEach” functions as opposed to a single “SetUp” VI. It is also very clunky with multiple project targets; I would love to understand what can be done about that.
Slightly more controversially, I feel that to be really effective you need to be using OO. This is simply because OO and good design practices/patterns let you substitute test doubles where items are difficult to test (e.g. IO) or not relevant to the tests (e.g. another QMH that you don’t want to start, just to see what interface calls are made). I just don’t see an effective way to do this with more traditional methods.
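Since LabVIEW diagrams have no text form, here is a rough Python analogy of that test-double substitution. Every name here (Sensor, Alarm, FakeSensor) is a hypothetical illustration, not part of any LabVIEW framework; the point is that coding against an abstract parent class lets the test swap out the hardware-touching child.

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Abstract parent -- in LabVIEW this would be a parent LVOOP class."""
    @abstractmethod
    def read(self) -> float: ...

class Alarm:
    """Component under test: flags a reading above a threshold."""
    def __init__(self, sensor: Sensor, limit: float):
        self.sensor = sensor
        self.limit = limit

    def check(self) -> bool:
        return self.sensor.read() > self.limit

class FakeSensor(Sensor):
    """Test double: returns a canned value instead of touching hardware."""
    def __init__(self, value: float):
        self.value = value

    def read(self) -> float:
        return self.value

# The test swaps in the double, so no real IO is needed.
assert Alarm(FakeSensor(80.0), limit=75.0).check() is True
assert Alarm(FakeSensor(70.0), limit=75.0).check() is False
```

The production code only ever sees the abstract Sensor, so the real hardware class and the fake are interchangeable.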

Result #3 – Test First not Test Driven

What I refer to here is purist Test Driven Development. This says that the tests drive the design of your code: you write just enough code to pass each test, refactor, and by the end you arrive at an optimal design.
I have not found much success with this. On a couple of projects I tried this rather than doing a proper up-front design, and the resulting code did not feel very clear. Perhaps it is my style, or not enough refactoring, but it did not feel good to me.
What I will say is that I do follow a test-first process.
[Figure: the unit testing LabVIEW process]
The thing to remember with the test code is that we are trying to call it from as high a level as possible, substituting any test doubles as low as possible, so that we test as many of the real parts as possible, all the time trading this off against spending hours creating the test doubles or the test code itself.
Why test first? It has a couple of key benefits:
  • You make sure your test fails first. If the test passes before you have written the code, your test is wrong!
  • It helps you consider the problem domain and which parts interact with the behaviour you are working on.
  • It feels like (not saying it is, just feels like) taking two steps back when you write tests for code you have already written and know works.
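That test-first loop can be sketched in Python (a hypothetical CircularBuffer example, not taken from the post; LabVIEW itself has no text form):

```python
# Step 1 (done first, conceptually): write the test. Run before CircularBuffer
# existed, it fails immediately -- proving the test can actually detect the
# behaviour it claims to check.

class CircularBuffer:
    """Step 2: just enough code, written after watching the test fail."""
    def __init__(self, size):
        self.size = size
        self._items = []

    def push(self, item):
        self._items.append(item)
        if len(self._items) > self.size:
            self._items.pop(0)  # drop the oldest item when full

    def read(self):
        return list(self._items)

def test_oldest_item_is_dropped_when_full():
    buf = CircularBuffer(size=2)
    for x in (1, 2, 3):
        buf.push(x)
    assert buf.read() == [2, 3]

test_oldest_item_is_dropped_when_full()
```

The order matters: seeing the test fail first is what guarantees it is testing something real.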
So I think that is one of the first New Year's resolutions I have ever kept! There is not a huge amount of information out there on unit testing with LabVIEW, so I hope this helps. I have a few specific posts in mind covering some techniques and tools I have developed along the way, which I hope to put up over the next few months.
In the meantime I will once again plug the unit testing group on the community which remains a great resource on this topic.
EDIT: I wrote this post before the announcement of JKI’s new testing framework. I will be looking for a project to evaluate this new approach on and will report back!


  • Fabiola De la Cueva

    November 27, 2015

    Excellent blog post. Thanks for sharing your journey with us.

    I do think you are doing Test Driven Development. According to the book Agile Software Development: Principles, Patterns, and Practices by Robert C. Martin:
    “All code should be written in order to make a failing unit test pass. At the beginning of the iteration, the unit test is designed and of course when ran it fails, because the function does not exist. Then the actual functionality is programmed and the unit test is ran to verify that the implementation worked correctly.”

    In order to create a good test, you need to first define the API and what the expected results are for given inputs. As far as I can tell, you are doing Test Driven Development.


  • James McNally

    November 27, 2015

    Ah yes, to be fair I am being a little pedantic on the terminology.

    The book that makes me draw that distinction is Kent Beck’s (Test Driven Development: By Example), which suggests that the process drives the design of the software quite heavily, but I struggled with that.

    That said the term is used very widely for what I have described.

    In searching for the name of the book I have just found a post from Kent Beck which summarises the benefits really nicely as well.

    • Supreeth Koushik

      November 27, 2018

      Nice post Mac (reading this a few years late! :)).

      I came across an informative article on TDD which speaks about some of the issues you mention:

      “the process drives the design of the software quite heavily but I struggled with that”
      The key to achieving this seems to be avoiding local optimisation (getting stuck, as the article explains). Refactoring code the right way (in order to make it more generic) might help the TDD process drive the design. As the test-driven development progresses, more and more of the unit test cases you can think of should already be serviced by the production code (by virtue of its generic design/implementation).

      Thanks and Regards,

  • Russell Blake

    December 1, 2015

    Hello James,

    Great post. Have you considered using Caraya from JKI? I would be interested in getting your thoughts on it.

    • James McNally

      December 1, 2015

      Hi Russell,

      Great question. I am going to try it out on a project.

      First impressions are that I like its simplicity, and hopefully it will encourage people to investigate unit testing. The fact that it is open source is a great step too (I think open source developer tools will develop the fastest).

      I have some concerns that it may be oversimplified and that there is no transition path to a more powerful tool, just starting again. As I’ve said, I felt burned by UTF and having to move to a new tool to get what I needed, but I need to try it out and see what I miss! Generally, simpler is better if it can do it all.

  • Aristos Queue

    December 4, 2015

    I am a long-time member of the LabVIEW R&D staff, but I also have a preference for the VI Tester system compared to the NI Unit Test Framework. Neither is bad, and I sometimes recommend *both* be used, depending upon a user’s needs.

    There’s a theory of testing that says the more code you have in the test, the more likely it could be the test that is broken, not your code, when something goes wrong. A test is only useful for refactoring if it doesn’t itself need to be edited when you refactor. The Unit Test Framework comes from that philosophy — it is entirely data driven. You create vectors of data to feed into functions and test for expected outputs. This works great for testing things like math libraries, protocol streams, GPIB call/response systems, and other “subroutines with no external side-effect” libraries.

    The VI Tester comes from the camp that says that your test environment is a key part of the tests. It uses a setup and teardown scheme and then runs the tests within that environment.
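    A rough Python equivalent of that setup/teardown scheme, using the standard unittest module (the class and file names are made up for illustration; in VI Tester the same roles are played by the SetUp and TearDown VIs):

```python
import pathlib
import tempfile
import unittest

class TestLogFile(unittest.TestCase):
    """setUp/tearDown build and destroy the environment around every test."""

    def setUp(self):
        # Runs before *each* test: create a scratch directory for the test to use.
        self._dir = tempfile.TemporaryDirectory()
        self.log_path = pathlib.Path(self._dir.name) / "log.txt"

    def tearDown(self):
        # Runs after each test, pass or fail: remove the scratch directory.
        self._dir.cleanup()

    def test_line_is_written(self):
        # Stand-in for exercising the real code under test.
        self.log_path.write_text("started\n")
        self.assertEqual(self.log_path.read_text(), "started\n")
```

    Run with `python -m unittest`; each test method gets a fresh environment, so tests cannot leak state into one another.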

    I find the latter to be far more flexible and far easier for writing deep tests. I’ve been told by testing experts that I’m just not designing my software to be modular enough to make the tests data driven. That’s probably true, but I can’t wait for my software engineering skills to improve to test my code. 🙂

    I don’t use the VI Tester directly… most of my code is C#, C++, that sort of thing, or the LV R&D custom built test framework that we created decades ago and still use (not the sort of thing we’d ever ship… it has idiosyncrasies). But in terms of philosophy, I like the approach used by VI Tester more. And it is a solidly built tool from a good company.

