
Testing Events In VI Tester

The APIs that you have to test are not always simple. As well as passing data, they may involve events (either front panel events or user events).

The other day I needed to test that an event fired as part of a test case. I could see a generic solution, so I created a template for it. I had two requirements:

  1. If the event doesn’t fire – test fails.
  2. If the event fires with the wrong data – test fails.

In my given, when, then sequence, we end up with a test that follows this structure:

  • Given: Whatever the preconditions are – in this case, a UI library has been tied to a control.
  • When: We take some action that should cause an event on that control.
  • Then: Check the event.

To check the event, we create an event structure outside of a loop, as we don’t want to handle multiple events. We need two cases:

  1. A timeout case with a suitable timeout – In this case, we call the Test Case.lvclass:fail.vi to fail the test. This should never run if the when code fired the event.

    Failing Path
  2. A case that handles the event – If you don’t care about the data then you can do nothing here, otherwise, include tests on the data included in the event.

    Passing Path
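Putting those two cases together, LabVIEW’s event structure doesn’t translate directly to text, but the shape of the test does. Here is a rough Python analogue, using threading.Event as a stand-in for the registered event (the unit-under-test setup, make_unit_under_test, is invented for illustration):

    import threading
    import unittest

    class ValueChangedEventTest(unittest.TestCase):
        def test_event_fires_with_correct_data(self):
            # Given: something to catch the event and record its data.
            fired = threading.Event()
            captured = {}

            def on_event(data):            # stand-in for the event case
                captured["data"] = data
                fired.set()

            uut = make_unit_under_test(on_event)   # hypothetical setup

            # When: take the action that should cause the event.
            uut.set_value(42)

            # Then, the timeout case: if nothing fires, fail the test.
            if not fired.wait(timeout=1.0):
                self.fail("Event did not fire within the timeout")

            # Then, the event case: check the data included in the event.
            self.assertEqual(captured["data"], 42)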

 

Additional Complexity

  1. Dynamic Event Registration: If this is a user event then you will need to register for the event. I’ve included this in my template, but you must move the event registration to the given case. If you haven’t registered the event before the action in the when case, it won’t ever fire.
  2. Parallel/Dynamic Event Generation: If your event is generated in some dynamic code, you may need to have it running. My advice: DON’T. Try to pull out the internal API and test synchronously. Asynchronous testing in LabVIEW introduces timing concerns which make your tests much more complicated.

Where To Get It

If you want to use this template, or even if you are just using VI Tester, you can download the new version of the VI Tester Advanced Comparisons (VITAC) tool from https://github.com/WiresmithTech/VITAC/releases/tag/v1.1.0.

 

Where Do I Save Config Files In LabVIEW?

When writing applications that will be used by anyone else, you will need a configuration file. In my experience, this is almost universal: the more I make configurable, the more powerful the software becomes and the fewer small changes I have to make for my customers.

Where do we save config files in LabVIEW? The landscape is more complicated than you would think! In this post, I’m going to summarise what we do on our LabVIEW projects. We are focusing on Windows since RT is simpler (put it in /c/) and I don’t use Mac or Linux with LabVIEW.

Types of Config Data

I’m going to refer to two types of config data:

  • Global Data: No matter who logs into the system they should share the same configuration. In my experience, this covers the vast majority of industrial applications.
  • User Data: Configurations that should change depending on the user. This might be screen layouts for example.

Files or Registry?

Microsoft is actually quite keen that you put this data in the registry – that is what it is for. There is a Software folder under each top-level key where you should create your own Company\App folder structure, and you can store settings as different variable types.

For user data, you can store it under HKEY_CURRENT_USER and, for global data, under HKEY_LOCAL_MACHINE. In many ways it is a pretty nice solution to the problem; however, I’ve avoided it for three reasons:

  1. Files are much easier for users to get, edit or send you. Whilst I don’t often want them directly editing the files, it is great that when there is a problem they can send me a file or even a screenshot of it (when it is readable) so I can understand their setup.
  2. Files make save as… much easier if the user wants to be able to switch between configurations.
  3. Files are universal. Although I don’t have much cross-platform code, I like that I can create multi-purpose configuration libraries that work on Windows or RT. Without this, I would have to have different code for the different platforms.
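For anyone who does take the registry route, the API is simple enough. A minimal sketch using Python’s built-in winreg module (the key and value names are illustrative):

    import winreg

    # A per-user setting lives under HKEY_CURRENT_USER\Software\Company\App;
    # a global setting would use HKEY_LOCAL_MACHINE instead.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                          r"Software\MyCompany\MyApp") as key:
        winreg.SetValueEx(key, "SampleRate", 0, winreg.REG_DWORD, 1000)

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                        r"Software\MyCompany\MyApp") as key:
        sample_rate, _type = winreg.QueryValueEx(key, "SampleRate")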

I am curious, though, about who is using the registry. Please leave a comment below and let me know why you like it and if I have anything wrong.

If Files, Where?

OK, so we have decided on files – where should we put them? Helpfully, Microsoft has an article on this; however, seven years on, there are still issues!

User Data

User data is the easiest case and is where Microsoft’s advice still works. In each user folder, there is a hidden AppData folder. This is designed to hold user-based configuration files, so the user has full read/write access to it. It is just hidden to protect you from “users with initiative”, as Fab puts it in this presentation! Within here you should create a folder structure of Company Name\App Name to follow the standard convention.

To get this path use the Get System Directory.vi with the User Application Data input.
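Outside of LabVIEW, the same location is available through the APPDATA environment variable. A minimal sketch of building the conventional path (the company and app names are placeholders):

    import os

    # %APPDATA% resolves to C:\Users\<user>\AppData\Roaming on Windows.
    user_config_dir = os.path.join(os.environ["APPDATA"],
                                   "MyCompany", "MyApp")
    os.makedirs(user_config_dir, exist_ok=True)
    config_path = os.path.join(user_config_dir, "settings.ini")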

 

Global Data

Global data is where this gets messy. There is an equivalent folder to the user AppData folder for this purpose, but…

In XP, all worked well. The folder was located under All Users\Application Data, all users had write access, and software worked.

Then Windows 7 came and two changes occurred:

  1. The location was changed to C:\ProgramData (A hidden folder)
  2. Folders had restricted access. The creator/owner has write access but no-one else.

One use case for this is to install fixed configurations at installation time and this works well since everyone has read access. However, if you need to write these after installation you normally do not have access.

The solution if you want to use this location? You need to set the permissions as part of a post-install step to give all users write access to the relevant folders.

One day, I may sit down and get this set up automatically as a post-install step. For now, I have too many concerns that failures in that step would cause extra support work. My solution? Use the Public Documents folder.

I follow the same structure, but in Public Documents instead of Public Application Data. So far I’m happy with this decision and I haven’t had any headaches due to it.
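In code the choice looks much the same as before; a sketch, assuming the standard Public Documents location:

    import os

    # C:\Users\Public\Documents is writable by all users by default,
    # unlike C:\ProgramData sub-folders created after installation.
    global_config_dir = os.path.join(r"C:\Users\Public\Documents",
                                     "MyCompany", "MyApp")
    os.makedirs(global_config_dir, exist_ok=True)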

I would love to hear your thoughts. What do you do? Am I wrong?

Given, When, Then In LabVIEW Tests

A few months ago in the Austrian Alps, I was skiing and attempting my second ever slalom run of the trip. Those of you that I have seen recently will know it didn’t end well!

I gathered too much speed, caught some ice and tore my ACL.

Since then I have had to do some exercises each morning to prepare my knee for surgery. I tend to try and watch some interesting talks while I do this and make use of my “bonus” time. They are often TED talks, but I had watched many of the latest ones so, super geeky, I switched to GOTO Conference talks, which are software talks – mainly based around web technologies.

I find it interesting to watch some of the talks that skirt the edge of the technical and understand how they can be applied to LabVIEW. This was certainly true of Level Up Your Unit Tests.

Descriptive Tests

The talk is somewhat the story of a developer’s transition to a new testing tool, and there is one piece that really appealed to me.

There is a concept I have come across before for a structure for acceptance tests called Given, When, Then. The idea is it clearly describes every aspect of a test situation:

  • Given: The pre-conditions.
  • When: The trigger or action.
  • Then: What the software/system should do in response.

For example:

Given we have a high temperature alarm, when the user clears the alarm, then an alarm should no longer show as active.

In the video, Trisha Gee describes a test framework that was new to her which actually lays tests out in this structure. This greatly helps with clarity and highlights problems with the code if any section gets too large. Ideally:

  • Given is small. If it starts to get quite big, the test sounds more like an integration test and less like a unit test. It should also contain no tests – the setup is not the subject of the unit test.
  • When is tiny. This should ONLY be the code you are actually testing.
  • Then is tiny. This contains your actual tests and assertions. Since a unit test should test one thing, there should only be one assertion here, or multiple tightly related assertions.
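The same structure reads well in any language. As a sketch, the alarm example above might look like this in Python (the AlarmManager API is invented for illustration):

    def test_cleared_alarm_no_longer_shows_active():
        # Given: a high temperature alarm is active (setup only, no tests).
        alarms = AlarmManager()                  # hypothetical class
        alarms.raise_alarm("high_temperature")

        # When: only the code under test - the user clears the alarm.
        alarms.clear("high_temperature")

        # Then: one tightly focused assertion.
        assert not alarms.is_active("high_temperature")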

Why Looks Matter

What struck me was that my unit tests – while quite effective – are a mess compared to my normal code. I tend to rattle through them and not give them the full attention that they need. This can hurt me when I have to return to them to understand why they fail.

So I have experimented by taking the descriptive structure of the framework that Trisha describes in the video and implementing it in LabVIEW. The idea is that we want clearly separated sections with defined boundaries, so I found flat sequence structures work well.

Let me give you a (kind of) before and after.

Before:

old-test-style

This is the old style. It works well. However, just looking at the code, there are test cases spread throughout (5 in total) and it isn’t clear from the code alone what is being tested.

After:

given-when-then-test

(Yes, this is a different test – I haven’t rewritten all of my tests to this format.)

Here it is much clearer what is just setup code, what is the code under test and then what the conditions are that we really care about.

It also makes it really obvious if I have tests that are really just checking that the setup has worked, which is what some of the tests in the before case are doing. (Sometimes this can be really useful; I think the answer, though, is that this code should have been tested somewhere else – but I need to think this through more.)

Now I really am running out of things to say on unit testing! I have a few more OO posts in the pipeline as well as a couple of tips & tricks posts that I hope to do this year. It has been very busy the past couple of months but I will be having some time off over the summer while they reconstruct my knee! So expect a few more posts then.

Bringing the Command Line Interface to LabVIEW

Those of you that know me or have been following the blog will know that for a while now I have been practicing test driven development in LabVIEW.

This is great: most of my LabVIEW projects now have 50–100 tests attached to them that check various parts of the system are working – but of course, only when I remember to run them!

We are all fallible. When it comes to 6 pm and I want to go home, I add my last flourish, commit to source control and go home, forgetting to test.

 

Well, with my JavaScript code I get a voice from the cloud that tells me off if I make a mistake. The voice is Jenkins – a build server which is used for continuous integration. Every time I check in my code, Jenkins tests and builds it to make sure nothing is broken.

(Well the voice is an email, but you get the idea)

I get no such prompt with LabVIEW.

 

There have been a number of projects over the years to do this. JKI have a system that we managed to set up and get working a couple of years ago, but it depended on some intermediate files and took a bit of fiddling.

To try and make it easier, a couple of years ago I learnt some Java to build a plugin for Jenkins which could talk to some corresponding LabVIEW code over TCP. But it was over-complicated, I was over-ambitious, and I never finished it.

When I then set this up for my JavaScript application, even though there is no built-in support for it, it was so easy! Why?

 

The all-powerful command line interface!

 

Although most people have long given up on it (if you are a LabVIEW developer, you’re probably more of a GUI kind of person), it is still a common and straightforward way to get two programs to talk together (it forms the backbone of the Unix philosophy).

If we can use the command line, we can talk to Jenkins and any other technology which comes up for the next decade at least.

The problem is that LabVIEW’s command line interface is quite basic. You can receive calling arguments but can’t receive or send text (stdin and stdout) and can’t return an exit code. That last part is critical since the exit code is how one program determines whether another was successful, and therefore how we signal to Jenkins whether our tests have passed or failed.

labview-command-line-supported
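To illustrate why the exit code matters, this is roughly how a build server (or any caller) decides pass/fail – a generic Python sketch, not the actual LabVIEW CLI syntax:

    import subprocess
    import sys

    # The command line here is purely illustrative.
    result = subprocess.run(["run-labview-tests.exe", "MyTests.lvproj"])

    # By convention, exit code 0 means success; anything else is failure.
    if result.returncode != 0:
        sys.exit("Tests failed - breaking the build")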

JKI solved this problem through the use of batch files and text files but I wanted to try a different method.

I created LabVIEW CLI. This has two components:

labview-cli

  • A C# console application. This can run on the command line and basically proxies the interface through TCP to…
  • A corresponding LabVIEW library that gives us the ability to access the command line.

labview-cli-how-it-looks

The philosophy is that the C# application is very small but can be called directly by Jenkins or any other language and will then launch LabVIEW or our LabVIEW-built application.

I’m hoping that by keeping it quite simple, it can become a building block for CI tools and others for LabVIEW.

So if you’re interested, you can check out the builds on GitHub and give it a go. I am not currently using this in anger, so please don’t bet your job on it working! But early tests look good!

You will need to install the C# application with the installer, and then there is a VI Package for the LabVIEW library. Look at the readme on GitHub and happy building!

If you have any issues, you can create bugs on the GitHub page or comment on the NI Community page.

Labels, Labels, Everywhere

 

I’ve had a few long journeys over the last couple of weeks and managed to catch up on some reading.

In particular I have been jumping into some sections of one of my favourite programming books – Code Complete.

As with most books, the practical techniques directly apply to text-based languages. Certain things like naming conventions for variables don’t readily apply to LabVIEW since everything is graphical.

However, I think we need to think about text as well. It can be fast to read and unambiguous in meaning (when done well).

LabVIEW supports text documentation; mostly you interact with it through the context help window.

However, as has been pointed out in the last couple of CLA Summits, we should look to reduce the amount we have to break out of our programming flow.

I’ve talked about comments before, but this triggered some other thoughts.

SubVI Names

Increasingly, certainly for internal APIs, I find myself mostly relying on text on the icon, other than where there is a well-established precedent (like init or close).

I like this because it makes my code faster to scan, I don’t have to do the mental translation from icon to meaning if it isn’t automatic.

text-for-icon
Example where I have used text for icons

Think about the last time you drew a flow chart. You used text because it is quick and easy to get the meaning across.

(I know this has internationalisation implications, but this has not been a concern for me yet).

I am also aware of an advantage that we have over C++ and its fellows when it comes to writing text. We can use descriptive names for our subVIs since we can include spaces.

C++ has to use names like InitDaqCard() or ReadTemp() because developers have to type them all the time. We can use full names for VIs, like Initialise DAQ Card.vi or Read Ambient Temperature.vi, and should take advantage of this.

Variables

Pah! Who needs them?

Well, we may not name variables (very often), but controls and indicators can and should follow good conventions, with unambiguous names and inclusion of units. As with function names, we are not limited by text-based languages’ naming restrictions in the same way.

Reading through, the part that really struck a chord with me is the section about intermediate variables.

Code Complete advocates meaningful intermediate names (so not i, j for loop counters for example). This hugely impacts the readability of code.

It actually goes further and suggests accepting the performance hit and adding otherwise unneeded variables in complex routines where they improve readability by giving meaning to intermediate values.
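In a text language, that advice looks something like this – a small Python illustration with invented names:

    def trip_alarm():
        print("Alarm!")

    reading_fahrenheit = 212.0
    limit_celsius = 80.0

    # Hard to scan: the intermediate value has no name or meaning.
    if (reading_fahrenheit - 32) * 5 / 9 > limit_celsius:
        trip_alarm()

    # Clearer: a technically unneeded variable gives the value meaning.
    temperature_celsius = (reading_fahrenheit - 32) * 5 / 9
    if temperature_celsius > limit_celsius:
        trip_alarm()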

Again, does this apply to LabVIEW? We don’t have variables, do we?

Actually we do, they are called wires.

When was the last time you drew a block diagram on a whiteboard and left the arrows unlabeled?

It would be hard to read but I find I rarely label wires in LabVIEW (unless they are obviously unclear) and I suspect I’m not the only one.

Some comments but wires aren’t clear or are ambiguous

Context on these connections greatly improves readability. Again, we do get some of this in LabVIEW through context help, as some names automatically propagate from functions.

But if we labelled wires more, it could save us using context help, or even opening subVIs, to understand what is going on. Do that a few times a day and the productivity savings add up.

Larger but clearer with more wires labelled

Oh, and unlike creating an intermediate variable for readability, this comes with no performance implications in LabVIEW!

I will be trying to use them more often over the next few weeks and see how much I can improve the readability of my code.

 

Does this make sense? Or am I completely wrong? Let me know in the comments.

Spies In LabVIEW

In my last post I covered how I am using unit testing in LabVIEW. In this post I want to share a useful tool on that journey.

Test Doubles

One of the first rules I operate under while testing is to use as much of the real application as possible when testing a behaviour. Each test then becomes more valuable, as it covers the integration of all the units as well as their contents.

Sometimes however this simply isn’t possible. You may need to talk to hardware that isn’t available, or a database or file that is hard to verify as part of the test (It’s normally possible somehow, but perhaps not worth it). This is where test doubles come in.

A test double is simply a substitute for a software component that you want to exclude from the test. In my last post I mentioned that OO makes testing easier – this is why. If these components are written to an abstracted interface, we can replace them with a double using dynamic dispatch.

Sometimes JohnnyDepp.lvclass can’t be used (source)

There are different types of test doubles that are well covered in an article by Martin Fowler. Without getting bogged down in terminology, these can return canned answers, simulate some behaviour or simply act as black holes for data!

Spies

One type is called a spy. A spy object is a way for us to peek inside an object. Essentially it is a double that stores information about how it was called so that you can query it later and make sure it was called in the way that you expected.

For example, if we send a logging library a value of 1 to log, we want to see that the file-writing function was called with a value of 1 for the relevant channel.
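In a text language, a hand-rolled spy is just an object that records how it was called. A minimal Python sketch of that logging example (the LoggingLibrary under test is hypothetical):

    class FileWriterSpy:
        """Stands in for the real file writer and records every call."""

        def __init__(self):
            self.calls = []

        def write(self, channel, value):
            # Record the call instead of touching the file system.
            self.calls.append((channel, value))

    # In the test: inject the spy, exercise the code, query the calls.
    spy = FileWriterSpy()
    logger = LoggingLibrary(writer=spy)    # hypothetical code under test
    logger.log("Channel 1", 1)
    assert spy.calls == [("Channel 1", 1)]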

Brittle Tests Health Warning – Overuse of this can create very brittle tests. The advantage of taking a high-level approach to testing is that your tests aren’t coupled to implementation details, which change more often than behaviours. If you use spies too much, so that your tests see a lot of implementation details, you risk making tests that frequently break and/or require more maintenance.

Spies In LabVIEW

So how do we do this in LabVIEW? Because LabVIEW is a compiled language we must specifically implement new components that contain the spy abilities.

Essentially this component must be able to store the details of each call in a way that it can be read back later. You can do this however you like! But I have created a library to try and take some of the hard work out of it.

LabSpy API

By creating these spies inside an override of your class, you can create a spy class that you can use in your testing.

 

Typically when we do these there are a few steps:

  1. Create setup and delete spies methods which can be called in the setup and teardown of your test. Make sure the references they create are accessible (either output as an indicator or made accessible through another accessor).
  2. Create the override methods and add the register calls. If you want to track input parameters, create a type def for them and wire it into the parameters input of the register call function.
  3. Write the test and check the calls using the LabSpy API. The image below shows a simplified diagram showing what this could look like.

Simplified Test View

Now you can check that your software is making the calls with the correct parameters.

Where To Get It

This project is released under an open source license and you can find it on my github account at https://github.com/JamesMc86/LabSpy. There is a VI Package so that you can install it into the palettes which you can download from https://github.com/JamesMc86/LabSpy/releases.

Feel free to download it, play with it, suggest fixes or add features that you would find useful.

Floating Point Precision

The problem with numbers is they always look right.

If your DAQ card says that the temperature is 23.1 degrees, who are you to argue! All the way from the sensor to the screen, the quality of the information typically degrades as it is converted and recalculated.

One such source of degradation is rounding errors due to floating point precision.

Whilst floating point numbers look continuous, this is not true – they have rounding errors too. I’m not going to dig into how they work too much here; there are plenty of resources on that (Wikipedia has more than I can bear to read right now!). However, I want to talk about how the format trades off precision vs. range.

LabVIEW helps us by using the double-precision format by default, which gives a precision of approximately 16 decimal figures, vs. the standard float in many languages, which only gives around 7 decimal figures.

But as with everything, there is a cost. Doubles weigh in at 64 bits vs. the single’s 32 bits, which matters when you’re storing a lot of data. I had such a case recently where I wanted to store timestamps in as small a space as possible with sub-millisecond precision, so the question arose: can it fit in a single?

Machine Epsilon

The first thing you will find when you go searching for precision on floating point numbers is the mystical Machine Epsilon.

This represents the smallest change that a floating point number can represent; there is a LabVIEW constant for it.

machine epsilon

This describes the best possible precision; however, it can be worse. Floating point numbers are represented as a combination of a significand and an exponent (like scientific notation at school, i.e. 5.2 × 10^5), which allows the format to trade off range vs. precision (hence the floating point). This means that as the size of the number increases, the precision reduces.

For my example, this was particularly important, as timestamps in a floating point format are extremely large values (seconds since 1904), which means they lose precision. This makes the following piece of code break the laws of maths:

timestamp with machine epsilon
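The same law-breaking is easy to reproduce in any double-precision maths; a quick Python check, using an illustrative seconds-since-1904 value:

    t = 3.6e9                    # roughly a present-day timestamp in seconds
    machine_eps = 2.0 ** -52     # machine epsilon for a double

    # At this magnitude the gap between representable doubles is ~4.8e-7 s,
    # so adding machine epsilon is rounded away entirely.
    print(t + machine_eps == t)  # True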

So I went in search of a definition of how precise these numbers are, which was surprisingly difficult! I think there are two reasons why this doesn’t appear to be defined in many places:

  1. Maybe it’s just obvious to everyone else?
  2. A factor must be that the following formula makes assumptions about optimum representation; some numbers can be represented in multiple ways, which means that there is no single answer.

Eventually I came across a Stack Overflow question which covered this.

In essence the rules are:

  1. For a given exponent, the error is all the same (i.e. if we are multiplying by 2^2, the smallest change for all numbers would be 4).
  2. The exponent is set by the size of the number (i.e. if the number is 6, the exponent should be 3 as that gives us the best precision).
  3. Knowing the size of the number, we can work out the exponent; knowing the size of the floating point type and the exponent, we can work out the smallest change.

The maths in the post is based on a function available in MATLAB that gives us the epsilon (eps) value for a given number. Translated into LabVIEW, it looks like this:

calculate epsilon
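As a text-based equivalent, here is the same calculation sketched in Python using math.frexp (52 mantissa bits for a double, 23 for a single):

    import math

    def eps(x, mantissa_bits=52):
        """Smallest representable change around x (52 = double, 23 = single)."""
        # frexp gives x = m * 2**e with 0.5 <= |m| < 1, so the spacing of
        # values near x is 2**(e - 1 - mantissa_bits).
        _, e = math.frexp(x)
        return 2.0 ** (e - 1 - mantissa_bits)

    seconds_since_1904 = 3.6e9           # a present-day LabVIEW timestamp
    print(eps(seconds_since_1904))       # double: ~4.8e-07 seconds
    print(eps(seconds_since_1904, 23))   # single: 256 seconds(!)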

With this I could see the answer to my problem: the resolution of time as a single is abysmal!

time precision

QMH’s Hidden Secret

Queued Message Handlers (QMH) are an extremely common design pattern in LabVIEW and sit at the heart of many of the different frameworks available for use.

At CSLUG (my local user group) we had something of a framework smackdown with Chris Roebuck and James Powell discussing a couple of frameworks and looking at some of the weaknesses of common patterns.

James’ argument highlighted one of the most common flaws with this pattern, which is clearly present in the shipping example in LabVIEW: when using a QMH, you cannot guarantee that execution will happen in the order that you expect, on the data you expect.

The concept seems to work for many, though, with a QMH-style structure at the heart of most of the actor-oriented programming around and driving some of the largest LabVIEW applications. So what is the difference between success and failure?

A Thought Experiment

During James’ talk I had a bit of a personal epiphany about the QMH which involves a slightly different thought process.

This thought process starts by thinking about the QMH as a virtual machine or execution engine, not part of your application. So if this is the case, what are the parts?

QMH Virtual Machine

  1. The Instruction Set: The different cases of the case structure define the instruction set. This is all of the possible functions that the QMH can execute.
  2. The Program: This is the queue; it defines what the program executes and the order in which the instructions are executed.
  3. The Function Parameters: The data that is enqueued with the instruction.
  4. Global Memory: Any local or global variables used AND any shift registers on the loop (we will come back to this).
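To make the model concrete, here is a skeleton of those four parts as a bare queue-driven loop in Python (no particular framework implied):

    import queue

    work_queue = queue.Queue()   # 2. The program: instructions in order

    state = {"count": 0}         # 4. Global memory: the shift register data

    def handle(instruction, data):
        # 1. The instruction set: the cases of the case structure.
        if instruction == "increment":
            state["count"] += data          # 3. The function parameters
        elif instruction == "report":
            print("count =", state["count"])
        elif instruction == "exit":
            return False
        return True

    # Anyone holding the queue reference can extend 'the program' at any time.
    work_queue.put(("increment", 1))
    work_queue.put(("report", None))
    work_queue.put(("exit", None))

    while handle(*work_queue.get()):
        pass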

It’s All About Scope

Scope is important; we all know that when it comes to things like global variables. Scope, however, is all about context and control, and there are two scoping concerns at the centre of many issues with the QMH pattern.

The Program: In the typical pattern any code with the queue reference can at any time decide to enqueue instructions.

Global Memory and, in particular, the shift registers on the loop also hold some global state. The shift registers are a big part of the dirty little secret. Common sense says anything on a wire is locally scoped – it cannot be modified outside of the wire – however, this is about context. To the QMH this is true: the shift register data is locally scoped. To a function/instruction inside the QMH, however, this is not true. In the context of a function this data is global, as other functions can modify it, i.e. you cannot guarantee the state is the same as you left it.

So how do you use the QMH safely? You should reduce the scope of at least one of these to ensure safety.

Reducing the Scope of the Queue

This is something that is beginning to emerge in a major way.

I first saw this pattern a couple of years ago in a framework called TLB’ that Norm Kirchner proposed. I have since seen at least two alternatives that follow a similar pattern (which I’m not sure are published, but you know who you are – thanks!).

The gist of the pattern is that we separate two structural elements in the QMH:

  1. An event handler that can take external events and determine what work needs to be done in reaction to each event.
  2. A work queue, which is something like a more traditional QMH; however, only the event handler can add work items.

This could look something like this in LabVIEW:

This is vastly simplified to show the core structural elements

(If you look at tlb’ it has the same elements but reversed on the screen).
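As a rough text sketch of the same separation, with Python threads standing in for the two loops (event names and work items are invented):

    import queue
    import threading

    def worker(work_queue):
        # The work queue loop: a traditional QMH, but only the event
        # handler below ever holds the enqueue end of this queue.
        while True:
            item = work_queue.get()
            if item is None:
                break
            print("doing:", item)

    def event_handler(ui_events):
        # The queue reference never leaves this function's scope.
        work_queue = queue.Queue()
        threading.Thread(target=worker, args=(work_queue,)).start()
        while True:
            event = ui_events.get()            # external events arrive here
            if event == "start":
                work_queue.put("initialise")   # handler decides the work
                work_queue.put("acquire")
            elif event == "quit":
                work_queue.put(None)
                break

    ui_events = queue.Queue()
    ui_events.put("start")
    ui_events.put("quit")
    event_handler(ui_events)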

This has some distinct advantages:

  1. As long as we don’t share the original queue reference, only the event structure or the QMH itself can queue work items. This gives better control over race conditions in terms of order of execution.
  2. This overcomes another distinct drawback of the shipping QMH example: data can easily be shared between the event handler and the QMH on a wire, using the same shift register structure as before, removing the need for the various hacky ways this is normally achieved (again, credit to James Powell for this observation).

The disadvantages?

  1. Our event handling response time is now limited to the time taken to complete the work backlog – we have made our program serial again. I suspect, for the simplicity, this is a cost that most applications can handle.
  2. This doesn’t really deal naturally with time-based systems like DAQ – but does the QMH ever, really?

I really like this structure. Parallel programming is hard! This removes many of the complexities it introduces for event-response type applications in LabVIEW. I expect we may see more and more of these come out over the next couple of years.

Reducing the Scope of Instruction Data

The above is a nice solution to the issue of controlling execution order in a QMH, and I believe a distinct improvement that I’ve been hoping to write about for a while. However, I feel that it treats a symptom of a deeper root cause.

A robust implementation shouldn’t care about execution order. The fact that it does points to a more fundamental flaw of many QMH examples/implementations.

We should be used to this as a fundamental problem of parallel programming (the QMH execution engine model really is a concurrent programming model). If you have a function or, in this case, a QMH instruction, how do you ensure it is safe to run in parallel without race conditions?

You never use data that can be modified outside of that function.

Global variables, local variables (in some instances) and Get/Set FGVs could all be modified at any time by another item, making them susceptible to race conditions.

This is all still true of a QMH function, but now we add to our race condition risks the cluster on the shift register, which could be modified by any instruction called between our instruction being queued and actually executed.

I see two major solutions to avoid this:

  1. Pass all relevant data with the instruction (i.e. in the data part of the cluster); this ensures the integrity of the execution data.
  2. Don’t use it as a replacement for subVIs. This is common and you can see it in the shipping example below.

NI Shipping QMH

I think this is a common source of problems. Sure, a subVI encapsulates functionality and so does a case of a QMH. However, the QMH case is effectively an asynchronous call, which introduces so much more complexity.

This example, with Initialize Data and Initialize Panel, is typical. This functionality could easily be encapsulated into a subVI, allowing greater control over the data and over when the functions are executed. Instead, we queue them for later and can’t know what else might have been queued before them, or between them, creating a clear risk of race conditions.

Credits

This was a bit of a meaty post which was heavily inspired by others. I’ve tried to highlight their ideas throughout the post, but just to say thanks:

  • The CLA Summit – A couple of presentations and lots of discussion inspired the start of this thought process. It was great; if you’re a CLA and want to improve, I cannot recommend it highly enough.
  • Central South User Group (CSLUG) – A local user group which triggered my epiphany with great presentations and discussions – see above about improving!
  • Dr James Powell – Whose talk triggered said epiphany and highlighted some interesting flaws in the standard template.
  • Norm Kirchner – Who I’m going to credit as the first person I saw put forward the isolated work queue model; if someone showed it to him, all credit to them!

 

External Video – TDD, Where Did It All Go Wrong?

Things have been a bit quiet as we have been going through a number of changes at Wiresmith Technology that have been taking up my time. In the last month we have moved into our first offices, so much time has been spent on making the dream development cave!

We have also taken on a JavaScript contractor to help with some work, which has taken up time, but he is also keeping me busy with plenty of great resources that he has been using to push his own development skills on things like test driven development. So much of my spare time has been taken up with my head in books and YouTube videos.

So I don’t have anything new to say today as I’m still absorbing all of this information and I hope to spit it out over the next few months in the form of various experiments, thoughts and translation to the LabVIEW world.

In the meantime, one of the great talks I have watched recently explains why Unit Testing != Testing Units, and I’m trying to understand how best to apply this. It’s worth a watch and, don’t worry, I have noted that this somewhat contradicts my last post! This is why I’m not adding anything until I have had a chance to process it properly.

Ian Cooper: TDD, where did it all go wrong from NDC Conferences on Vimeo.

Fixing a Simple Bug with Test Driven Development

The CLA Summit last week gave me more opportunity to bang on about software testing, and it was great to discuss it with various people and see the themes coming out of it.

My common theme was that I really like the interactive nature of the Unit Test Framework; I think it plays to LabVIEW’s strengths and allows for a nice workflow (for basic tests – using test vectors is far more long-winded than it needs to be!).

Another positive I took was from Steve Watts’ talk on debugging and immediacy. He talked about the advantages of ‘runnable code’ – that is, having logic contained in subVIs that can run independently, which aids the debugging process.

So as I worked this week, I came across a bug where the process of fixing it highlighted this well. I took a screencast of the process to show some of the benefits that I have found; I think it highlights one of the most commonly cited benefits of testing: better code structure. (Go easy, I’m not as natural on camera!)

