
Spies In LabVIEW

In my last post I covered how I am using unit testing in LabVIEW. In this post I want to share a useful tool on that journey.

Test Doubles

One of the first rules I operate under is to use as much of the real application as possible when testing a behaviour. Each test then becomes more valuable because it covers the integration of the units as well as their contents.

Sometimes, however, this simply isn’t possible. You may need to talk to hardware that isn’t available, or to a database or file that is hard to verify as part of the test (it’s normally possible somehow, but perhaps not worth it). This is where test doubles come in.

A test double is simply a substitute for a software component that you want to exclude from the test. In my last post I mentioned that OO makes testing easier, and this is why: if these components are written against an abstracted interface, we can replace them with a double using dynamic dispatch.

Sometimes JohnnyDepp.lvclass can’t be used (source)

There are different types of test doubles, which are well covered in an article by Martin Fowler. Without getting bogged down in terminology, these can return canned answers, simulate some behaviour or simply act as black holes for data!

Spies

One type is called a spy. A spy is a double that lets us peek inside a call: it stores information about how it was called so that you can query it later and make sure it was called in the way that you expected.

For example, if we send a logging library a value of 1 to log against a given channel, we want to see that the file-writing function was actually called with that channel name and a value of 1.
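
LabSpy itself is a LabVIEW library, but the idea is easier to show in text. Here is a minimal sketch in Python (the logger classes and the log_temperature function are hypothetical, purely to illustrate the concept):

```python
class FileLogger:
    """The real component: writes a value to a log file for a channel."""
    def write(self, channel, value):
        with open(channel + ".log", "a") as f:
            f.write(str(value) + "\n")

class SpyLogger:
    """A spy double: same interface, but it only records how it was called."""
    def __init__(self):
        self.calls = []                      # one entry per call

    def write(self, channel, value):
        self.calls.append((channel, value))  # no file I/O at all

def log_temperature(logger, value):
    """Hypothetical code under test: logs a reading to the 'Temperature' channel."""
    logger.write("Temperature", value)

# The test injects the spy, exercises the code, then queries the spy.
spy = SpyLogger()
log_temperature(spy, 1)
assert spy.calls == [("Temperature", 1)]
```

In LabVIEW the same substitution happens through dynamic dispatch: the spy is a child class that overrides the file-writing method.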

Brittle Tests Health Warning – overuse of spies can create very brittle tests. The advantage of taking a high-level approach to testing is that your tests aren’t coupled to implementation details, which change more often than behaviours. If your spies see a lot of implementation detail, you risk tests that frequently break and/or require more maintenance.

Spies In LabVIEW

So how do we do this in LabVIEW? Because LabVIEW is a compiled language we can’t patch objects at run time, so we must specifically implement new components that contain the spy abilities.

Essentially this component must be able to store the details of each call in a way that it can be read back later. You can do this however you like! But I have created a library to try and take some of the hard work out of it.

LabSpy API

By building these abilities into an override of your class, you create a spy class that you can use in your testing.

 

Typically when we do these there are a few steps:

  1. Create setup and delete spies methods which can be called in the setup and teardown of your test. Make sure the references they create are accessible (either output them as an indicator or expose them through another accessor).
  2. Create the override methods and add the register calls. If you want to track input parameters, create a type def for them and wire it into the parameters input of the register call function.
  3. Write the test and check the calls using the LabSpy API. The image below shows a simplified diagram of what this could look like.

Simplified Test View

Now you can check that your software is making the calls with the correct parameters.

Where To Get It

This project is released under an open source license and you can find it on my GitHub account at https://github.com/JamesMc86/LabSpy. There is also a VI Package, which you can download from https://github.com/JamesMc86/LabSpy/releases, so that you can install it into the palettes.

Feel free to download it, play with it, suggest fixes or add features that you would find useful.

How We Unit Test LabVIEW Code

So if you have been following me for a while you will know that one of my New Year’s resolutions was to really get to grips with unit testing LabVIEW code.
It has been a successful year for this and I thought I would share my results so far.

Result #1 – Always Unit Testing

The immediate question is whether it is worth it, and my experience says it is. There is an obvious time investment, but I have seen many benefits:
  • Better development process – taking a component in isolation and testing it thoroughly on its own means less time running the full application and clicking buttons in the right order to create the scenario you need to check that the latest change works. That means more time writing code and less time debugging and testing (as well as less code coupling).
  • Higher confidence – when you have a good suite of tests around your code that you can run in an instant, you are more confident that changes won’t introduce bugs, which looks good to customers and feels great. It also encourages refactoring, as it becomes a low-risk activity.
  • Smoother deployments – compared to last year my commissioning visits have been far smoother, with fewer issues, and those that do come up are easier to narrow down.
This is and will continue to be core to how we develop software at Wiresmith Technology.

Result #2 – JKI VI Tester and LabVIEW OO are the Tools of Choice

As late as May I was trying to make NI’s Unit Test Framework (UTF) work for me. I like the concepts, and having the option of coverage testing is great, but the usability sucks, so I transitioned to VI Tester as the framework of choice.
VI Tester generates a lot of code as you go! But it lets you write very specific tests in whatever style you like and follows standard testing conventions. My only concern is that it is unclear what the state of development is and whether it will continue. For example, I would love to see “Before” and “BeforeEach” functions as opposed to a single “SetUp” VI. It is also very clunky with multiple project targets, which I would love to see addressed.
Slightly more controversially, I feel that to be really effective you need to be using OO. This is simply because using OO and good design practices/patterns allows you to substitute test doubles where items are difficult to test (e.g. IO) or not relevant to the test (e.g. another QMH that you don’t want to start, just see what interface calls are made). I just don’t see an effective way to do this with more traditional methods.

Result #3 – Test First not Test Driven

What I refer to here is purist Test Driven Development. This says that the tests drive the design of your code: you write just enough code to pass each test, refactor, and by the end you have an optimal design.
I have not found much success with this. I have tried it on a couple of projects, rather than doing a proper up-front design, and the code did not come out feeling very clear. Perhaps it is my style, or not enough refactoring, but it did not feel good to me.
What I will say is I do follow a test first process.
Unit Testing LabVIEW Process
The thing to remember with the test code is that we are trying to call the code under test from as high a level as possible, substituting any test doubles as low as possible, so that we exercise as many of the real parts as we can, all the while trading this off against spending hours creating the doubles or the test code itself.
Why test first? It has a couple of key benefits:
  • You make sure your test fails first; if the test passes before you have written the code, your test is wrong!
  • It helps you consider the problem domain and which parts interact with the behaviour you are working on.
  • It feels like (not saying it is, just feels like) taking two steps back when you are writing tests for code you have already written and know works.
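
To make the first point concrete, here is a tiny sketch of the test-first loop in Python with pytest rather than VI Tester (scale_reading and its parameters are made up for illustration): the test is written and run before the implementation exists, so the first run fails, and then you write just enough code to make it pass.

```python
# test_scaling.py -- written *before* scale_reading exists, so the first run fails
from scaling import scale_reading

def test_scale_reading_applies_gain_and_offset():
    # 2 volts with a gain of 10 and an offset of -3 should give 17 engineering units
    assert scale_reading(volts=2.0, gain=10.0, offset=-3.0) == 17.0
```

```python
# scaling.py -- just enough code to make the failing test pass
def scale_reading(volts, gain, offset):
    return volts * gain + offset
```
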
So I think that is one of the first New Year’s resolutions I have ever kept! There is not a huge amount of information out there on unit testing with LabVIEW, so I hope this helps. I have a few specific posts in mind to cover some techniques and tools that I have developed along the way, which I hope to put up over the next few months.
In the meantime I will once again plug the unit testing group on the community which remains a great resource on this topic.
EDIT: I wrote this post before the announcement of JKI’s new testing framework. I will be looking for a project to evaluate this new approach on and will report back!

Floating Point Precision

The problem with numbers is they always look right.

If your DAQ card says that the temperature is 23.1 degrees, who are you to argue! All the way from the sensor to the screen, the quality of the information typically degrades as it is converted and recalculated.

One such source of degradation is rounding errors due to floating point precision.

Whilst floating point numbers look continuous, this is not true: they have rounding errors too. I’m not going to dig into how they work too much here, as there are plenty of resources on that (Wikipedia has more than I can bear to read right now!), but I do want to talk about how the format trades off precision against range.

LabVIEW helps us by using the double precision format by default, which gives approximately 16 significant decimal figures, versus the standard single precision float in many languages, which only gives around 7.

But as with everything, there is a cost. Double values weigh in at 64 bits versus the single’s 32 bits, which comes at a price when you’re storing a lot of data. I had such a case recently where I wanted to store timestamps in as small a space as possible with sub-millisecond precision, so the question arose: can it fit in a single?

Machine Epsilon

The first thing you will find when you go searching for precision on floating point numbers is the mystical Machine Epsilon.

This represents the smallest relative change that a floating point number can represent (the gap between 1 and the next representable value), and there is a LabVIEW constant for it.

machine epsilon

This describes the best possible precision, but it can be worse. Floating point numbers represent values as a combination of a significand and an exponent (like scientific notation at school, e.g. 5.2 x 10^5), which allows the format to trade off range against precision (hence the floating point). This means that as the size of the number increases, the precision reduces.

For my example this was particularly important, as timestamps in floating point format are extremely large values (seconds since 1904), which means they lose precision. This makes the piece of code below break the laws of maths:

timestamp with machine epsilon

So I went in search of a definition of how precise these numbers are, which was surprisingly difficult! I think there are two reasons why this doesn’t appear to be defined in many places:

  1. Maybe it’s just obvious to everyone else?
  2. A factor must be that any such formula makes assumptions about optimum representation; some numbers can be represented in multiple ways, which means there is no single answer.

Eventually I came across a Stack Overflow question which covered this.

In essence the rules are:

  1. For a given exponent, the error is all the same (i.e. if we are multiplying by 2^2, the smallest change for all numbers would be 4).
  2. The exponent is set by the size of the number (i.e. if the number is 6, the exponent should be 3 as that gives us the best precision).
  3. Knowing the size of the number we can work out the exponent; knowing the exponent and the precision of the floating point format we can work out the smallest change.

The maths in the post is based on a function available in MATLAB that gives the epsilon (eps) value for a given number. Translated into LabVIEW, it looks like this:

calculate epsilon
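
For anyone outside LabVIEW, here is a rough Python equivalent of the same maths (just a sketch of the idea, not the VI above; modern Python also has math.ulp and NumPy has numpy.spacing if you want a ready-made version for doubles):

```python
import math

def eps(x, significand_bits=53):
    """Smallest representable change near x (53 bits for a double, 24 for a single)."""
    _, exponent = math.frexp(x)              # x = m * 2**exponent, with 0.5 <= m < 1
    return math.ldexp(1.0, exponent - significand_bits)

# A LabVIEW timestamp is roughly seconds since 1904, i.e. a very large number.
seconds_since_1904 = 3.5e9
print(eps(seconds_since_1904, 53))   # double: ~4.8e-7 s, comfortably sub-millisecond
print(eps(seconds_since_1904, 24))   # single: ~256 s, hopeless for timestamps
```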

With this I could see the answer to my problem: the resolution of time stored as a single is abysmal!

time precision

QMH’s Hidden Secret

Queued Message Handlers (QMH) are an extremely common design pattern in LabVIEW and sit at the heart of many of the different frameworks available for use.

At CSLUG (my local user group) we had something of a framework smackdown with Chris Roebuck and James Powell discussing a couple of frameworks and looking at some of the weaknesses of common patterns.

James’ argument highlighted one of the most common flaws with this pattern, which is clearly present in the shipping example in LabVIEW: when using a QMH you cannot guarantee that execution will happen in the order that you expect, on the data you expect.

The concept seems to work for many, though, with a QMH-style structure at the heart of most actor oriented programming and driving some of the largest LabVIEW applications around. So what is the difference between success and failure?

A Thought Experiment

During James’ talk I had a bit of a personal epiphany about the QMH which involves a slightly different thought process.

This thought process starts by thinking about the QMH as a virtual machine or execution engine, not part of your application. So if this is the case, what are the parts?

QMH Virtual Machine

  1. The Instruction Set: The different cases of the case structure define the instruction set. This is all of the possible functions that the QMH can execute.
  2. The Program: This is the queue. It defines what the program executes and the order in which the instructions are executed.
  3. The Function Parameters: The data that is enqueued with the instruction.
  4. Global Memory: Any local or global variables used, AND any shift registers on the loop (we will come back to this).

It’s All About Scope

Scope is important; we all know that when it comes to things like global variables. Scope, however, is all about context and control, and there are two scoping concerns at the centre of many issues with the QMH pattern.

The Program: In the typical pattern any code with the queue reference can at any time decide to enqueue instructions.

Global Memory and, in particular, the shift registers on the loop also hold some global state. The shift registers are a big part of the dirty little secret. Common sense says anything on a wire is locally scoped and cannot be modified outside of the wire, but this is about context. To the QMH this is true: the shift register data is locally scoped. To a function/instruction inside the QMH, however, it is not. In the context of a function this data is global, as other functions can modify it, i.e. you cannot guarantee the state is the same as you left it.

So how do you use the QMH safely? You should reduce the scope of at least one of these to ensure safety.

Reducing the Scope of the Queue

This is something that is beginning to emerge in a major way.

I first saw this pattern a couple of years ago in a framework called TLB’ that Norm Kirchner proposed. I have since seen at least two alternatives that follow a similar pattern (which I’m not sure are published, but you know who you are, thanks!).

The gist of the pattern is that we separate the QMH into two structural elements:

  1. An event handler that can take external events and determine what work needs to be done in reaction to each event.
  2. A work queue, which works something like a more traditional QMH, except that only the event handler can add work items.

This could look something like this in LabVIEW:

This is vastly simplified to show the core structural elements

(If you look at tlb’ it has the same elements but reversed on the screen).
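
As a rough text-based analogue (Python standing in for LabVIEW here, with made-up message names), the important structural point is that only the event handler ever holds the work queue:

```python
import queue
import threading

work_queue = queue.Queue()   # kept private: only the event handler enqueues work

def event_handler(event):
    """Takes an external event and decides what work needs to happen."""
    if event == "START_LOG":
        work_queue.put(("OPEN_FILE", {"path": "run1.tdms"}))
        work_queue.put(("LOG_ENABLE", {}))
    elif event == "SHUTDOWN":
        work_queue.put(("STOP", {}))

def work_loop():
    """The QMH-like consumer: executes instructions in the order they were queued."""
    state = {"file": None, "logging": False}   # the 'shift register' data
    while True:
        instruction, data = work_queue.get()
        if instruction == "OPEN_FILE":
            state["file"] = data["path"]
        elif instruction == "LOG_ENABLE":
            state["logging"] = True
        elif instruction == "STOP":
            return

worker = threading.Thread(target=work_loop)
worker.start()
event_handler("START_LOG")     # external code only ever talks to the handler
event_handler("SHUTDOWN")
worker.join()
```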

This has some distinct advantages:

  1. As long as we don’t share the original queue reference, only the event structure or the QMH itself can queue work items. This gives better control over race conditions in terms of execution order.
  2. This overcomes another distinct drawback of the shipping QMH example: data can easily be shared between the event handler and the QMH on a wire, using the same shift register structure as before, removing the need for the various hacky ways this is normally enabled (again, credit to James Powell for this observation).

The disadvantages?

  1. Our event handling response time is now limited by the time taken to complete the work backlog; we have made our program serial again. I suspect that, for the simplicity gained, this is a cost that most applications can handle.
  2. This doesn’t deal naturally with time-based systems like DAQ, but does the standard QMH really, either?

I really like this structure. Parallel programming is hard, and this removes many of the complexities it introduces for event-response type applications in LabVIEW. I expect we may see more and more of these come out over the next couple of years.

Reducing the Scope of Instruction Data

The above is a nice solution to the issue of controlling execution order for a QMH, and I believe a distinct improvement that I’ve been hoping to write about for a while. However, I feel that it solves a symptom of a deeper root cause.

A robust implementation shouldn’t care about execution order. The fact that it does points to a more fundamental flaw of many QMH examples/implementations.

We should be used to this as a fundamental problem of parallel programming (the QMH execution engine model really is a concurrent programming model). If you want a function, or in this case a QMH instruction, to be safe to run in parallel, how do you ensure there are no race conditions?

You never use data that can be modified outside of that function.

Global variables, local variables (in some instances) and Get/Set FGVs could all be modified at any time by another item, making them susceptible to race conditions.

This is all still true of a QMH function, but now we add to our race condition risks the cluster on the shift register, which could be modified by any instruction called between our instruction being queued and it actually executing.

I see two major solutions to avoid this:

  1. Pass all relevant data with the instruction (i.e. in the data part of the cluster); this ensures the integrity of the execution data (a small sketch of this idea follows the shipping example below).
  2. Don’t use it as a replacement for subVIs. This is common and you can see it in the shipping example below.

NI Shipping QMH

I think this is a common source of problems. Sure, a subVI encapsulates functionality, and so does a case of a QMH. However, the QMH case is effectively an asynchronous call, which introduces much more complexity.

This example, with Initialize Data and Initialize Panel, is a typical case. This functionality could easily be encapsulated into a subVI, allowing greater control over the data and over when the functions are executed. Instead we queue them for later and can’t know what else might have been queued before them, or between them, creating a clear risk of race conditions.
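
To make the data-with-the-instruction point concrete, here is a tiny sketch (Python again, with hypothetical message names). In the fragile style the LOG instruction depends on state set by an earlier instruction; in the robust style everything it needs travels in its own payload, so nothing queued in between can corrupt it:

```python
from queue import Queue

q = Queue()

# Fragile style: the instruction relies on state set by an earlier instruction.
q.put(("SET_CHANNEL", {"name": "Temp1"}))
q.put(("LOG", {"value": 23.1}))     # assumes the channel is still Temp1 when LOG runs
# ...anything queued in between (e.g. another SET_CHANNEL) silently breaks LOG.

# Robust style: each instruction carries all the data it needs.
q.put(("LOG", {"channel": "Temp1", "value": 23.1}))
```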

Credits

This was a bit of a meaty post which was heavily inspired by others. I’ve tried to highlight their ideas throughout the post, but just to say thanks:

  • The CLA Summit – A couple of presentations and lots of discussion inspired the start of this thought process. It was great; if you’re a CLA and want to improve, I cannot recommend it highly enough.
  • Central South User Group (CSLUG) – A local user group which triggered my epiphany with great presentations and discussions (see above about improving!).
  • Dr James Powell – Whose talk triggered said epiphany and highlighted some interesting flaws in the standard template.
  • Norm Kirchner – Who I’m going to credit as the first person I saw put forward the isolated work queue model; if someone showed it to him first, all credit to them!

 

External Video – TDD, Where Did It All Go Wrong?

Things have been a bit quiet as we have been going through a number of changes at Wiresmith Technology that have been taking up my time. In the last month we have moved into our first offices, so much time has been spent on making the dream development cave!

We have also taken on a JavaScript contractor to help with some work, which has taken up time, but he is also keeping me busy with plenty of the great resources he has been using to push his own development skills in areas like test driven development. So much of my spare time has been spent with my head in books and YouTube videos.

So I don’t have anything new to say today as I’m still absorbing all of this information and I hope to spit it out over the next few months in the form of various experiments, thoughts and translation to the LabVIEW world.

In the meantime, one of the great talks I have watched recently explains why Unit Testing != Testing Units, and I’m trying to understand how best to apply this. It’s worth a watch, and don’t worry, I have noted that this somewhat contradicts my last post! This is why I’m not adding anything until I have had a chance to process it properly.

Ian Cooper: TDD, where did it all go wrong from NDC Conferences on Vimeo.

Fixing a Simple Bug with Test Driven Development

The CLA Summit last week gave me the opportunity to bang on about software testing even further, and it was great to discuss it with various people and see the themes coming out of it.

My common theme was that I really like the interactive nature of the Unit Test Framework; I think it plays to LabVIEW’s strengths and allows for a nice workflow (for basic tests, that is; using test vectors is far more long-winded than it needs to be!).

Another positive I took was from Steve Watts’ talk on debugging and immediacy. He talked about the advantages of ‘runnable code’, that is having logic contained in subVIs that can run independently which aids the debugging process.

As I worked this week I came across a bug, and the process of fixing it highlighted this well. I took a screencast of the process to show some of the benefits I have found, including one of the most commonly cited benefits of testing: better code structure. (Go easy, I’m not as natural on camera!)

Chrome Won’t Load LabVIEW Real Time Web Configuration Pages

I spent a week at the CLA summit last week (more to follow) and got a nasty shock on my return.

On attempting to log in to configure a CompactRIO system I was faced with a request to install Silverlight:

Install Silverlight

 

As I have used this every day for a long time, I somehow don’t believe this.

It turns out that Google has removed support for a cryptic API called NPAPI that many plugins, including Silverlight, require to run.

This finally explains why I have been seeing more security warnings about running Silverlight pages; it appears this has been coming for some time, in fact over a year! Wish someone had told me.

Fear not, there is a workaround, but it will only work until September.

  1. Navigate to chrome://flags/#enable-npapi in the browser
  2. Click Enable under Enable NPAPI
  3. A box will appear at the bottom of the screen with a Relaunch Now button. Press this to relaunch Chrome, ensuring you back up any work in the browser first
  4. And back to work fixing compactRIOs.

And in September…

I guess if there is no change from the NI side it’s a move to Firefox or IE!

LabVIEW 2014 SP1 – Notice Anything New?

It’s Spring! Which means it’s time for clocks to change, eclipses (well, that may have been a one off) and a service pack release from National Instruments.

Although this is normally touted as a bug fix release, if you dig into the readme you will find they have snuck in a nice new feature.

The new feature is the Profile Buffer Allocations Window. This gives you a window into the run time performance of the LabVIEW memory manager that is otherwise hard to understand.

Previously we only had a couple of windows into the LabVIEW memory manager in the Tools -> Profile menu.

Show Buffer Allocations was the best way to understand where on the diagram memory could be allocated, but it doesn’t tell us much about what actually happens at run time.

Performance and Memory shows us the run-time memory usage on a VI level, but gives us no way to trace it down to the actual code execution.

profile performance and memory

But now we can see more of this at run time.

Step By Step

Launch the tool from a VI through Tools > Profile > Profile Buffer Allocations. Below you can see an example run of the Continuous Measurement and Logging sample project.

profile buffer allocations

  1. Profiling Control — To confuse things slightly, the workflow begins at the bottom! Set the minimum threshold you want to capture (default is 20 kB) and press Start. Press Stop once you’re satisfied you’ve captured the data you’re interested in.
  2. Buffer Filters — The bar at the top controls the display of the buffers in the table, allowing you to filter by Application Instance, restrict the number of buffers in the table and adjust the units.
  3. Buffer Table — The buffer table displays the buffers that were allocated during the run as well as their vital stats. You can double-click a buffer to be taken to its location on the diagram.
  4. Time vs Memory Graph — One of the coolest features! Selecting a buffer in the table will display a time graph of the size of that buffer during the run. I can imagine this would be great for understanding what is causing dynamic buffer allocations by seeing the size and frequency of changes.

I think anything that gives us more of a view into some of the more closed elements of LabVIEW has got to be beneficial, so go and try it and learn something new about your code.

4 Lessons Learnt Unit Testing LabVIEW FPGA Code

After declaring my intent earlier in the year on moving towards an increasingly test driven methodology, one of the first projects I aimed to use it on has been based on FPGA, which makes this less than trivial.

I was determined to find a way to make the Unit Test Framework work for this. I think it has a number of usability flaws and bugs, but if they are fixed it could be a great product, and I want to make the most of it.

So can it be used for unit testing LabVIEW FPGA code? Yes but there are a few things to be aware of.

1. You Can’t Directly Create A Unit Test

Falling at the first hurdle: creating unit tests on VIs under an FPGA target isn’t an available option on the right-click menu, which is normally the easiest way to create a test.

Instead you must manually create a unit test under a supported target type, but as LabVIEW FPGA VIs are simply VIs, you can then point this at the FPGA code as long as it doesn’t contain any FPGA-only structures.

This has its frustrations though. You must manually link the VI (obviously) and rename the test. This bit is a pain, as I cannot seem to rename new tests without closing and re-opening the project. Re-arranging tests on disk can also be a frustrating task!

untitled-tests
Argh! Project Hell!

2. Test Vectors Are Your Friend

Unlike processor based code, FPGA logic generally works on data point by point and builds an internal history to perform processing. This means you may have to run the VI multiple times to test it well.

The test vectors function of the unit test framework allows you to do this, specifying inputs for several iterations of the VI and being able to test the VI over time.

Mini-tip: Using Excel or a text editor to do large edits can save you losing your hair! (Right-Click and Open In External Editor)

3. Code Resets

Related to point 2: because FPGA code commonly uses feedback nodes, feed-forward nodes or registers to store data, at some point that data has to be reset.

The lazy route is to simply wire inputs to the feedback nodes; however, this defaults to a reset-on-first-call mode. This won’t work with test vectors, as each iteration is counted as a new call.

At a minimum you should change the feedback node behaviour to initialize on compile or load, but better practice on FPGA is to have a reset input to the VI to allow explicit control of this. Then you must simply set this on the first element of the vector to get the expected behaviour.
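
The same idea in a text language (a sketch only, not LabVIEW code): the logic keeps history between calls, the test vector drives it one sample per iteration, and the reset input is asserted on the first element of the vector.

```python
class MovingAverage2:
    """Point-by-point two-sample average, mimicking a feedback node holding history."""
    def __init__(self):
        self.previous = 0.0

    def step(self, x, reset=False):
        if reset:
            self.previous = 0.0      # explicit reset input, set on the first sample
        result = (x + self.previous) / 2.0
        self.previous = x
        return result

# A 'test vector': one row per iteration, with the first row asserting reset.
vector = [(4.0, True), (6.0, False), (2.0, False)]
filt = MovingAverage2()
outputs = [filt.step(x, reset) for x, reset in vector]
assert outputs == [2.0, 5.0, 4.0]
```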

4. FPGA Specific Code or Integration Testing

There will be code that the UTF can’t test directly if it can’t run under a Windows target. In this case the best way to include it in automated testing is with a user-defined test. This allows you to create a custom VI for the test, which can include an FPGA desktop execution node to exercise the VI. Potentially this will have to be accompanied by a custom FPGA VI as well, to be able to pipe the correct data into FIFOs or similar.

 

Hopefully this will help you get over the first few hurdles. I’m sure more will follow and I will try to update them here as I find them.

Applying Joel’s 12 Steps To Better Software to LabVIEW


Continuing on my theme from the last post, I wanted to talk about what goes into writing great software, other than the software itself. How can you improve your coding process?

I have seen in a couple of places references to Joel’s 12 Steps to Better Software and had to look a little closer.

This consists of 12 questions and you should be aiming to answer yes to 10 or more of these.

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

Stop and have a think about these with your team, what do you score?

Now what if I told you this article was 15 years old! I find this interesting for two reasons:

  • Some of these points seem quite out of date with new developments around agile methods of software development building on many of these points. There is a good article which reviews these points here.
  • I still know lots of developers that haven’t adopted many of these practices.

I must confess there are differences for most developers in the LabVIEW way of working. While many techniques are aimed towards 4-8 developers working on software for hundreds/thousands of users, many LabVIEW applications are developed by individuals or small teams for internal projects where you are probably already speaking to all of the end users.

Can we still infer great tips for those teams? Let’s look at what we can do with LabVIEW to add some more yes’s.

1. Do you use source control?

If you aren’t using this now, stop what you’re doing and go learn about it!

There are so many resources and technologies out there. SVN is extremely popular in the LabVIEW community, although I would probably recommend trying Git or Mercurial, as there are more services available for them now.

If you’re interested in these, try the Atlassian SourceTree client, which solved my previous concerns about a good Windows client for Git.

Credit: geek and poke

2. Can you make a build in one step?
3. Do you make daily builds?

“Builds” is a funny word in the LabVIEW world, as this varies from many languages because LabVIEW is constantly compiling.

The key point behind this is that you should be continuously making sure the application can run. In LabVIEW this can boil down to being able to run the application consistently, especially as if you can’t run it, you can’t test it.

I make a point of running the full application after every change worth a commit into source code control or a bug in the bug database. This saves context switching of having to go back to old features because you didn’t test them at the time.

4. Do you have a bug database?

This should be step 2 after getting your source code control set up.

If you haven’t used these before, I find them invaluable as a means of tracking the state of the application.

Not just bugs but also feature requests can go in the database and get assigned things like priorities, developers, due dates, etc., and it becomes the place you go when you need to know what to do next.

They also typically integrate with source code control. When I make a commit I mention a bug ID in the commit message and the system links the two.

So I now have a system that tracks what needs doing, who needs to do it, when they have done it and you can even find the exact commit so you can identify what code was changed to complete it.

Again, in this world of cloud computing this doesn’t even require any outlay or time to set up. Most source code hosting services have this built in.

Bitbucket is an easy one to start with, as you can have free private projects. I use a hosted Redmine server from Planio (which costs 9 Euros/month); others use GitHub (free for public projects).

Go set up a trial account and take a look around. As with many things it will take a bit of getting used to but I find it a far better way to work.

9. Do you use the best tools money can buy?

I’m skipping a few that aren’t specific to LabVIEW here so we fall on the best tools.

Firstly you chose LabVIEW! I think most people reading this will hopefully agree this is a good choice, but it doesn’t stop there.

There are probably two key things to consider with LabVIEW from a hardware point of view: a) it isn’t exactly lightweight and b) it uses a lot of windows!

You don’t want to be trying to code LabVIEW on a 13″ laptop with a track pad. Get a mouse, get at least one large monitor, preferably two and a decent, modern machine.

My setup – filling the desk with monitors!

 

The extra costs will soon be recovered in productivity gains of not having to wait for the software to load a new library or trying to switch between a couple of VIs and the probe window when debugging.

Consider this from a software perspective as well. It takes effort, but try to get familiar with some of the toolkits on the Tools Network or LAVA forums, as they could save you a lot of time over writing features yourself.

There are many free tools but even if there is a cost associated you have to factor in your time to design, develop and maintain that extra code (which is probably not the area you are expert in).

10. Do you have testers?

This is one that is falling out of vogue at the minute, as Test Driven Development (TDD) becomes popular (see my previous post), which means the developers tend to write the tests themselves. That said, it is still useful to have people available for higher-level testing as well, but as mentioned previously we tend to have smaller teams in LabVIEW.

I think the key point right now is to have a plan for testing. Likely this should be a mix of automated testing and having some test procedure/schedule for higher level testing.

One of my first projects at Wiresmith Technology was where this failed me. I didn’t re-test the software thoroughly enough after changes (normally I only tested the specific section I had changed), which meant more problems went to the customer than I would like. They all got fixed, but each problem means time wasted communicating the problem, as well as denting the confidence of the customer.

Since then I now keep a procedure for testing the various areas of the software so I can do this before I send a release to the customer which has improved my hit rate as well as saving time in the long run.

As with bringing in anything like this, there is always some pain adjusting, but pick one, push through and you will come out writing better software.

