Testing Events In VI Tester

The APIs that you have to test are not always simple. As well as passing data they may involve events (with the front panel or with user events).

The other day I needed to test that an event fired as part of a test case. I could see a generic solution, so I created a template for it. I had two requirements:

  1. If the event doesn’t fire – test fails.
  2. If the event fires with the wrong data – test fails.

Following my given-when-then sequence, we end up with a test that follows this structure:

  • Given: Whatever setup is needed – in this case, a UI library has been tied to a control.
  • When: We take some action that should cause an event on that control.
  • Then: Check the event.

To check the event we create an event structure outside of a loop, since we only want to handle a single event. We need two cases (a rough textual sketch of the structure follows the list):

  1. A timeout case with a suitable timeout – In this case, we call the Test Case.lvclass:fail.vi to fail the test. This should never run if the when code fired the event.

    Failing Path
  2. A case that handles the event – If you don’t care about the data then you can do nothing here, otherwise, include tests on the data included in the event.

    Passing Path
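
Since LabVIEW test code is graphical, here is a rough textual analogue of the same structure in Python's unittest, just to show the shape. The ValueStore class is a stand-in for whatever code fires your event, and all of the names are placeholders:

```python
import threading
import unittest

class ValueStore:
    """Stand-in for the unit under test: calls back when the value changes."""
    def __init__(self, on_change):
        self._on_change = on_change

    def set_value(self, value):
        self._on_change(value)

class TestValueChangedEvent(unittest.TestCase):
    def test_set_value_fires_change_event(self):
        # Given: we are registered for the event before the action occurs
        fired = threading.Event()
        received = []
        store = ValueStore(on_change=lambda v: (received.append(v), fired.set()))

        # When: the action that should cause the event
        store.set_value(42)

        # Then: a timeout means the event never fired (fail); wrong data also fails
        self.assertTrue(fired.wait(timeout=1.0), "event did not fire within the timeout")
        self.assertEqual(received, [42])

if __name__ == "__main__":
    unittest.main()
```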

 

Additional Complexity

  1. Dynamic Event Registration: If this is a user event then you will need to register for the event. I’ve included this in my template, but you must move the event registration to the given case. If you haven’t registered the event before the action in the when case, it won’t ever fire.
  2. Parallel/Dynamic Event Generation: If your event is generated in some dynamic code you may need to have this running. My advice: DON'T. Try to pull out the internal API and test synchronously. Asynchronous testing in LabVIEW introduces timing concerns which make your tests much more complicated.

Where To Get It

If you want to use this template, or even if you are just using VI Tester, you can download the new version of the VI Tester Advanced Comparisons (VITAC) tool from https://github.com/WiresmithTech/VITAC/releases/tag/v1.1.0.

 

Where Do I Save Config Files In LabVIEW?

When writing applications that will be used by anyone else you will need a configuration file. In my experience, this is almost universal: the more I make configurable, the more powerful the software becomes and the fewer small changes I have to make for my customers.

Where do we save config files in LabVIEW? The landscape is more complicated than you would think! In this post, I’m going to summarise what we do on our LabVIEW projects. We are focusing on Windows since RT is simpler (put it in /c/) and I don’t use Mac or Linux with LabVIEW.

Types of Config Data

I’m going to refer to two types of config data:

  • Global Data: No matter who logs into the system they should share the same configuration. In my experience, this covers the vast majority of industrial applications.
  • User Data: Configurations that should change depending on the user. This might be screen layouts for example.

Files or Registry?

Microsoft is actually quite keen that you put this data in the registry – that is what it is for. There is a Software key under each top-level hive where you should create your own Company\App key structure, and you can store settings as different value types.

For user data, you can store it under HKEY_CURRENT_USER and for global data, you can store it under HKEY_LOCAL_MACHINE. In many ways it is a pretty nice solution to the problem; however, I've avoided it for three reasons:

  1. Files are much easier for users to get, edit or send you. Whilst I don't want them directly editing the files much, it is great that when there is a problem they can send me a file or even a screenshot of the file (when it is readable) so I can understand their setup.
  2. Files make save as… much easier if the user wants to be able to switch between configurations.
  3. Files are universal. Although I don’t have much cross-platform code I like that I can create multi purpose configuration libraries that work on Windows or RT. Without this, I would have to have different code for the different platforms.
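
For reference, this is roughly what the registry route would look like. The sketch uses Python's winreg module on Windows; the key path and value name are placeholders:

```python
import winreg

# Per-user settings live under HKEY_CURRENT_USER\Software\<Company>\<App>
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\MyCompany\MyApp")
winreg.SetValueEx(key, "LogLevel", 0, winreg.REG_SZ, "debug")

value, value_type = winreg.QueryValueEx(key, "LogLevel")
print(value)          # "debug"
winreg.CloseKey(key)
```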

I am curious, though, about who is using the registry approach. Please leave a comment below and let me know why you like it and if I have anything wrong.

If Files, Where?

OK, so we have decided on files – where should we put them? Helpfully, Microsoft has an article on this; however, seven years on there are still issues!

User Data

User data is the easiest and where Microsoft’s advice still works. In each user folder, there is a hidden AppData folder. This is designed to hold user-based configuration files and so the user has full read/write access to this. It is just hidden to protect you from “users with initiative” as Fab puts it in this presentation! Within here you should create a folder structure with Company Name\App Name to follow the standard convention.

To get this path use the Get System Directory.vi with the User Application Data input.
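
Outside LabVIEW the equivalent lookup is just an environment variable. A minimal sketch in Python, with the company and application names as placeholders:

```python
import os
from pathlib import Path

# Roaming per-user config: C:\Users\<user>\AppData\Roaming\<Company>\<App>
user_config_dir = Path(os.environ["APPDATA"]) / "MyCompany" / "MyApp"
user_config_dir.mkdir(parents=True, exist_ok=True)
print(user_config_dir / "settings.ini")
```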

 

Global Data

Global data is where this gets messy. There is an equivalent folder to the user AppData folder for this purpose, but…

In Windows XP all worked well. It was located under All Users\Application Data, all users had write access, and software worked.

Then Windows 7 came and two changes occurred:

  1. The location was changed to C:\ProgramData (A hidden folder)
  2. Folders had restricted access. The creator/owner has write access but no one else does.

One use case for this is to install fixed configurations at installation time and this works well since everyone has read access. However, if you need to write these after installation you normally do not have access.

The solution if you want to use this location? You need to set the permissions as part of a post-install step to allow all users to have write access to the relevant folders.
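
If you do go this route, the post-install step might look something like the sketch below (Python calling icacls). The folder path is a placeholder, and note the "Users" group name differs on non-English versions of Windows:

```python
import subprocess

# Grant the local Users group modify rights on the app's ProgramData folder,
# so a normal user can write configuration files there after installation.
app_data_dir = r"C:\ProgramData\MyCompany\MyApp"
subprocess.run(
    ["icacls", app_data_dir, "/grant", "Users:(OI)(CI)M"],
    check=True,
)
```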

One day, I may sit down and get this set up automatically as a post-install step. For now, I have too many concerns that failures in that step would cause extra support work. My solution? Use the public documents folder.

I follow the same structure but in Public Documents instead of Public Application Data. So far I’m happy with this decision and I haven’t had any headaches due to this.
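
For completeness, a minimal sketch of building that path (again with placeholder company and application names):

```python
import os
from pathlib import Path

# Shared config every user can read and write:
# C:\Users\Public\Documents\<Company>\<App>
public_docs = Path(os.environ.get("PUBLIC", r"C:\Users\Public")) / "Documents"
global_config_dir = public_docs / "MyCompany" / "MyApp"
global_config_dir.mkdir(parents=True, exist_ok=True)
```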

I would love to hear your thoughts. What do you do? Am I wrong?

Implications of WannaCry on NI Based Systems

What do problems like WannaCry mean for us?

The more I learn about cyber security, the more I realise how much it feels like we are on the back foot.

Fundamentally the issue is that the tactics and techniques used by hackers seem to move forward much faster than technology at large, with many of the things we depend on having been designed before security was such a significant consideration.

WannaCry certainly brought these concerns to the forefront again, with legacy systems making the front page. The media scoffed at hospitals still using Windows XP, but in our industry, we know that it is not a simple job to keep complex and custom systems up to date. So what might this mean for the LabVIEW community?

Working with IT More

Antivirus and automatic updates can cause havoc with operational systems but, as WannaCry showed, having insecure devices on the network can provide a weak link for exploitation. So while IT can be a pain to work with on these systems, we must understand their wider concerns.

We probably need to develop some best practices for system updates – is there a way we can schedule updates to minimise impact? Or can we guarantee the system stays off the network, so it doesn’t risk spreading malicious software? Alternatively, can critical elements be run on LabVIEW RT which will likely require less frequent updates than desktop systems?

Stuxnet showed that you must also consider offline threats: USB sticks will continue to threaten offline systems, and if users transfer data to and from systems with them, they must be educated about the risks of using un-vetted USB sticks.

Minimum System Access

I always think one of the best, and most basic, security practices is that of minimal access. If you don't need the web server, disable it. Firewalls should only allow access to required systems, and we now have the option to install them on Linux RT targets.

Critical to this are things like VI Server remote access. This allows for arbitrary code execution, which is a hacker's dream! Make sure you turn it off if you don't need it. If you do need it, make sure you protect it well.

If you have a multi-device system such as a test rack, then including a router which provides an internal network with wider access while restricting the external network would be a sensible approach.

 

Minimum access also means only the required permissions for any given user. You should ideally never be running as an administrator as standard. I know it’s easier! But it also makes things much easier for malicious code. When you hit a permissions error, then make sure you give the standard user the permissions it requires. Using Linux trains you well in this and is one of the benefits of learning it. (I know Steve has found it worthwhile)

An example of where these principles are important is the new Petya variant. The malware spreads through various means. This includes the SMB flaw that WannaCry used, but it will also sniff the machine for administrator credentials. If it finds them, it will use these to remotely access other systems that the account has access to, spreading further.

I also have it on my list to look more into the write filters on the Windows Embedded systems which mean that anything written to the disk is only temporary and every reboot brings it back to the original state. The system can still get infected, but it makes a recovery much easier.

Thinking About Recovery

One thing I have learnt over the past couple of years is a backup is only as good as the recovery. If a customer had a machine infected and was losing money while it was down, how fast could you recover it?

I take images of all RT systems, but I am considering whether Windows-based systems should also have an image taken and a recovery disk created on delivery. Then if a machine does get infected (and doesn't store critical data that has to be recovered first), it can be up and running again in hours instead of days.

I know there are a lot more questions than answers there! But I think it is an interesting discussion to have and something I aim to improve on over time.

By Value vs By Reference In LabVIEW

After my previous post about Learning LabVIEW OOP there were some comments on by reference vs. by value which often come up when talking about OOP. I think there are two reasons that these are tightly linked to conversations about OOP.

  1. In “classical” OOP languages everything is by reference but in LabVIEW OOP is by value. This causes a clash when people have learned OOP from these languages.
  2. We do more by reference work in non-OOP LabVIEW than we sometimes like to admit.

I have been thinking about the techniques and analogs to these lately anyway so this is a bit of a meaty article covering the options that I see for implementing these and some thoughts on how they fit into teaching OOP.

By-Reference vs. By-Value

Let's first define my interpretation of these terms. I like to think of them at this level as how data behaves rather than definitions of implementation.

  • By-Value: If you take a wire/data in LabVIEW and change it then it changes only for that piece of code and the code that is dataflow dependent on it. There is also no way another piece of code can change the data on the wire. Branching the wire risks creating a copy.
  • By-Reference: The data is stored in one memory location. When you make changes, they might affect other components that don't have a dataflow dependency, and if you read it twice in a row another piece of code could have changed it in between.
    (This may differ from a classic computer science definition but my main concern is the behaviour I see – perhaps a different term is required, but stay with me! A short illustration follows below.)
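
To illustrate the behavioural difference outside LabVIEW, here is a small Python snippet. Python names are references by default, so the copy has to be explicit – LabVIEW wires give you the copy-like behaviour by default:

```python
a = [1, 2, 3]
b = a              # b refers to the same list ("by reference" behaviour)
b.append(4)
print(a)           # [1, 2, 3, 4] - a changed even though we never touched it

c = list(a)        # an explicit copy behaves like a branched LabVIEW wire ("by value")
c.append(5)
print(a)           # [1, 2, 3, 4] - unaffected by changes to c
```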

These strongly overlap with data communication. Another way to think of this is that by-value is communication on a data wire, whereas by-reference is a tag-based communication method (in some situations).

So Why Use By Reference In LabVIEW?

My preference is to always lead with by-value where it works. I think this is key to what makes LabVIEW a powerful language. The data on the wire is yours to do what you want with and you don't have to worry about side effects when you are programming. I suspect it is one of the keys to making LabVIEW much easier for people without a software background to pick up.

There are perfectly valid reasons to use by-reference though, either in spite of, or because of, these side effects. The following items are cases where I look to by-ref:

  1. Shared Application Resources: This is not a great term but what I mean by this are resources such as an error handler or a system configuration where you want every piece of code singing from the same hymn sheet.
  2. Hardware Resources: This is one of the most common – DAQmx and VISA already run on by-reference APIs. If I have to add to the API (attach additional data) I will use a by-ref scheme so it continues to work as you expect.
  3. Huge Data: If a data structure is a big proportion of your application memory you need to make sure you don’t copy it often. You see this in the IMAQ library where images are handled by reference (and the confusion it can cause!). This is the in spite of case – you don’t get any programming benefit but you have to use it due to the constraint of the system.

How To Do By Reference In LabVIEW

There are multiple techniques to get the behaviour I mentioned above. I have put down the key ones below; my favourite will be obvious!

Variables

The simplest and most dangerous technique due to race conditions. This isn’t a problem in single process languages but in LabVIEW it can get you into lots of trouble!

There is one case that I may use them which is for WORM (write once, read many) globals which can be useful for configuration data but I never use them with OOP.

Variables for sharing data

DVRs

Data Value References (DVRs) allow you to create your own reference wire to any data type. You access the data through the In Place Element structure, which protects the data – no other code can access it until you are finished with it. This is very important for preventing race conditions.

DVRs for by-reference
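
DVRs and the In Place Element structure are graphical, but a rough textual analogue of the idea is a single shared value guarded by a lock, where you can only touch the data inside a protected block. A sketch in Python (the class and method names are my own, not a LabVIEW API):

```python
import threading
from contextlib import contextmanager

class DataValueRef:
    """Rough analogue of a DVR: one copy of the data, protected by a lock."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    @contextmanager
    def in_place(self):
        # Analogue of the In Place Element structure: no other code can
        # access the data until the caller is finished with it.
        with self._lock:
            yield self._value

counter_ref = DataValueRef({"count": 0})

def increment(ref):
    with ref.in_place() as data:
        data["count"] += 1   # modify the single shared copy

increment(counter_ref)
increment(counter_ref)
with counter_ref.in_place() as data:
    print(data)              # {'count': 2}
```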

This is my preferred method for by-ref objects. I would create the class normally but change the standard methods to use DVRs of the class rather than the class itself. What I have found is:

  1. Good – 100% scalable. Want 1/5/20/1000? Not a problem.
  2. Good – The call is synchronous. Once the subVI completes the function it performs is complete.
  3. Good – I have heard criticisms that this can lead to having more wires on the diagram. I think this is a good thing! Wires make it obvious what is coupled to what. Variables and AEs don’t make that obvious.
  4. Good – The property nodes for objects support this with no extra code.
  5. Bad – The boilerplate is tedious! Creating the references and the in place element structures with their weird error handling
  6. Bad – References can be invalid at run time which is avoided with the FGV/AEs.

FGVs/AEs

I’m not going to get into a debate on what term to use for these. What I mean is a non-reentrant VI which uses an uninitialised shift register to store data. Normally developers will have an enum input which defines what function is performed and/or the core FGV/AE is wrapped in another API to allow for an easier connector pane.

These are not traditionally referred to as a by-ref programming method but I would argue they are. The “reference” is which VI you are calling. That is what defines which data is modified or accessed and any changes can be seen anywhere else in the program.

FGV/AE

When I started my company I had already gone down the DVR route, having experimented with both, so I haven't used these as extensively in big applications. The reasons I decided not to start with them were:

  1. No wires so coupling is somewhat hidden (you would have to view the VI hierarchy).
  2. Bloated connector panes (though wrapping it in another API does help with this).
  3. No scalability – you either have to write an addressing scheme into the FGV or maintain multiple versions of the VI.

That said, this is the preferred method of many. It is well understood by different developers, is simpler to create than DVRs, and shares the benefit of being synchronous.

It is rare that I use these, but one exception is actually when this is the exact behaviour I want! If I want a singleton object (for my error handler, for example) I create an FGV that stores a DVR/queue reference and initialises it on the first call. That way I get the same reference everywhere in my application.

Singleton Pattern (based on Aristos Queue Singleton design)
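
A rough textual sketch of that singleton idea – initialise on first call, return the same reference everywhere afterwards (the names are illustrative only):

```python
import threading

class ErrorHandler:
    def __init__(self):
        self.errors = []

    def report(self, message):
        self.errors.append(message)

_lock = threading.Lock()
_instance = None

def get_error_handler():
    """First call creates the handler; every later call returns the same one,
    like an FGV that initialises and stores a reference on its first call."""
    global _instance
    with _lock:
        if _instance is None:
            _instance = ErrorHandler()
        return _instance

# Every caller, anywhere in the application, gets the same instance.
assert get_error_handler() is get_error_handler()
```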

Queued Message Handlers/Actors

This is a bit more of an unusual item to add here, but it can be – and I believe is – used for the same cases as above.

Increasingly these are being used in systems as modules. You have an actor for each instrument you want to talk to, for example, and you enqueue commands for it to complete.

QMH System (simplified)

This mode of operation is very similar to a traditional by-ref model. The “reference” is the queue reference or actor reference which ties you back to the “data” stored in the shift registers of the QMH. The QMH loop protects access to the shared resource by processing messages one by one, protecting you from race conditions.

It isn't exactly like the other options: the key difference is that it can operate independently as well as responding to requests, which can make these hugely more powerful.

There is a major added complexity with this though, which is that they are asynchronous. This makes two-way communication difficult – for example, what happens if there is an error in the QMH? You also can't understand the timing from the block diagram; I find I have to create sequence diagrams in order to understand the program flow.

This is the reason one of the core tenets of an actor-based system is that a message is a request and you shouldn't care when or how it gets done. This rule means you must design your system to avoid these complexities, but I think it is a hard rule to follow consistently!

For these reasons, I avoid using these unless I need something to run independently or there are multiple classes interacting within it. I tend to use these for "processes" – for example, a DAQ system where the data just gets published onto an event when it is ready.

So When Do You Teach It?

So coming back to the previous article, when do you teach someone about this?

I would argue that this doesn’t need to be intertwined with OOP. In many cases, if they aren’t new to LabVIEW they will already be using one of these techniques. Sticking with that is the simplest route.

If they are new I believe that the decision between these is probably an architecture decision as each has pros and cons in different scenarios. It is hard to teach them all at once so I would look at your typical architectures. Does one of these tend to form the backbone or default option? (In my case it is the DVRs). In which case I would start with that. You can teach the other methods as they are required. If you suddenly need all four at once you could even hide one method behind another to get a new developer on board quickly.

Let Me Have It!

This is a bit of a work in progress in my thought process, which the question above prompted. I'm pretty sure my terminology isn't great but I feel the idea is solid, so please comment below or find me on twitter or the NI forums. The discussion really helps me understand this better.

How to Learn or Teach LabVIEW OOP

Last week was the European CLA summit and it was fantastic. If you are an architect in Europe and you didn't go, make sure to get your boss to budget it in for next year – I find it hugely valuable every year.

An interesting conversation I had a few times is about how to learn or train others on using LabVIEW Classes & OOP.

If you follow the NI training then you learn how to build a class on Thursday morning and by Friday afternoon you are introduced to design patterns.

Similarly when I speak to people they seem keen to quickly get people on to learning design patterns – certainly in the earlier days of adoption this topic always came up very early.

I think this is too fast. It adds additional complexity to learning OOP and personally I got very confused about where to begin.

To some extent I started again with my understanding, following something like the process below (in fact I still have to finish step 6), and I feel like it has made me a stronger developer. That's why I share it here – it worked for me!

Step 1 – The Basics

Learn how to make a class and the practical elements like how the private scope works. Use them instead of whatever you used before for modules. e.g. action engines or libraries. Don’t worry about inheritance or design patterns at this stage, that will come.

Step 2 – Practice!

Work with the encapsulation you now have and refine your design skills to make objects that are highly cohesive and easy to read. Does each class do one job? Great you have learned the single responsibility principle, the first of the SOLID principles of OO design. Personally, I feel this is the most important one.

If your classes are large then make them smaller until they do just one job. Also pay attention to coupling. Try to design code that doesn’t couple too many classes together – this can be difficult at first but small, specific classes help.

Step 3 – Learn inheritance

Use dynamic dispatch methods to implement basic abstract classes when you need functionality that can be changed, e.g. a simulated hardware class or support for two types of data logs. I'd look at the channeling pattern at this point too. It's a very simple pattern that uses inheritance and I have found it helpful in a number of situations. But no peeking at the others!

Step 4 – Practice!

This stage is harder than the last. You need to make sure:

  • Each child class should exactly reflect the abstract methods. If your calling code ever cares which sub-class it is calling, by using strange parameters or converting the type, then you are violating LSP – the Liskov substitution principle – the L of SOLID.
  • Each child class should have something relevant to do for each of the abstract methods. If it has methods that make no sense, this is a violation of the interface segregation principle.

Step 5 – Finish SOLID

Read about the open-closed principle and the dependency inversion principle and try it in a couple of sections of code.

Open-closed basically means that you expose interfaces (abstract classes in LabVIEW) so that you can change the behaviour by creating a new child class (open for extension) without having to modify the original code (closed to modification). This goes well with the dependency inversion principle, which says that higher-level classes should depend only on interfaces (again, abstract classes). The lower-level code implements these interfaces, so the high-level code can call the lower-level code without a direct dependency. This can help in places where coupling is difficult to design out.
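
A minimal sketch of both principles in a textual language (Python here, with invented class names), since the idea carries over directly to abstract classes and dynamic dispatch in LabVIEW:

```python
from abc import ABC, abstractmethod

class DataLog(ABC):
    """The abstraction ("interface") that the high-level code depends on."""
    @abstractmethod
    def write(self, record: dict) -> None: ...

class CsvLog(DataLog):
    def write(self, record: dict) -> None:
        print(",".join(str(v) for v in record.values()))

class TdmsLog(DataLog):
    # New behaviour added by extension - the caller below never changes.
    def write(self, record: dict) -> None:
        print(f"TDMS <- {record}")

def run_measurement(log: DataLog) -> None:
    # High-level code depends only on the abstraction (dependency inversion);
    # it is closed to modification but open to extension via new DataLog children.
    log.write({"channel": "ai0", "value": 1.23})

run_measurement(CsvLog())
run_measurement(TdmsLog())
```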

I leave these principles to the end because I think they are the easiest ones with which to write difficult-to-read code. I'm still trying to find a balance with these – following them wholeheartedly creates lots of indirection which can affect readability. I also think we don't get as much benefit in LabVIEW with these since we don't tend to package code within projects in the same way as other languages. (This may be a good topic for another post!)

Step 6 – Learn some design patterns

This was obviously part of the point of this article. When I came back to design patterns after understanding design better and the SOLID principles it allowed me to look at the patterns in a different way. I could relate them to the principles and I understood what problems they solved.

For example, the command pattern (where you effectively have a QMH which takes message classes) is a perfect example of a solution that meets the open-closed principle for an entire process. You can extend the message handler by adding support for new message types – creating new message classes instead of modifying the original code. This is how the Actor Framework works, and it has allowed the developers to create a framework with a reliable implementation of control of the actors while you can still add messages to define the functionality.
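
A stripped-down textual sketch of the command pattern described above (Python, with invented message names) – note that extending the handler means adding a class, not editing the handling loop:

```python
from abc import ABC, abstractmethod
from queue import Queue

class Message(ABC):
    @abstractmethod
    def execute(self, state: dict) -> None: ...

class StartAcquisition(Message):
    def execute(self, state: dict) -> None:
        state["running"] = True

class StopAcquisition(Message):
    def execute(self, state: dict) -> None:
        state["running"] = False

def message_handler(queue: Queue) -> None:
    state = {"running": False}
    while True:
        msg = queue.get()
        if msg is None:            # shutdown sentinel
            break
        msg.execute(state)         # dispatch without knowing the concrete type
    print(state)

q = Queue()
q.put(StartAcquisition())
q.put(StopAcquisition())
q.put(None)
message_handler(q)
```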

Once you understand why these design patterns exist you can then apply some critical thinking about why and when to use them. I personally dislike the command pattern in LabVIEW because I don’t think the additional overhead of a large number of message classes is worth the benefit of being able to add messages to a QMH without changing the original code.

I think this will help you to use them more effectively and make you less likely to end up with a spaghetti of design patterns thrown together because that is what everyone was talking about.

Urmm… so what do I do?

So I know this doesn’t have the information you need to actually do this so much as set out a program. Actually, all the steps still follow the NI course on OOP so you could simply self-pace this for general learning material.

From memory, this doesn't really cover SOLID very much, but if you Google it you will find a HUGE number of resources, or you can get the book "Agile Software Development" by Robert Martin, which I believe is the text that first coined the term SOLID (the principles already existed but this brought them together).

Given, When, Then In LabVIEW Tests

A few months ago in the Austrian alps I was skiing and attempting my second ever slalom run on my trip. Those of you that I have seen recently will know it didn't end well!

I gathered too much speed, caught some ice and tore my ACL.

Since then I have to do some exercises each morning to prepare it for surgery. I tend to try and watch some interesting talks while I do this and make use of my "bonus" time. They are often TED talks but I had watched many of the latest ones so, super geeky, I switched to GOTO Conference talks, which are software talks – mainly based around web technologies.

I find it interesting to watch some of the talks that skirt the edge of the technical and understand how they can be applied to LabVIEW. This was certainly true of Level Up Your Unit Tests.

Descriptive Tests

The talk is somewhat the story of a developer's transition to a new testing tool and there is one piece that really appealed to me.

There is a concept I have come across before for a structure for acceptance tests called Given, When, Then. The idea is it clearly describes every aspect of a test situation:

  • Given: The pre-conditions.
  • When: The trigger or action.
  • Then: What the software/system should do in response.

For example:

Given we have a high temperature alarm, when the user clears the alarm then an alarm should no longer show as active.
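
That same example written as a test shows how directly the sentence maps onto the structure. This is a rough Python sketch rather than LabVIEW, and the AlarmList class is just a stand-in to make it runnable:

```python
import unittest

class AlarmList:
    """Stand-in for the system under test."""
    def __init__(self):
        self._active = set()

    def raise_alarm(self, name):
        self._active.add(name)

    def clear(self, name):
        self._active.discard(name)

    def any_active(self):
        return bool(self._active)

class TestAlarmClearing(unittest.TestCase):
    def test_cleared_alarm_no_longer_shows_as_active(self):
        # Given: we have a high temperature alarm
        alarms = AlarmList()
        alarms.raise_alarm("high temperature")

        # When: the user clears the alarm
        alarms.clear("high temperature")

        # Then: an alarm should no longer show as active
        self.assertFalse(alarms.any_active())

if __name__ == "__main__":
    unittest.main()
```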

In the video Trisha Gee describes a test framework that was new to her that actually describes the tests in this structure which greatly helps with clarity and highlights problems with the code if any section gets too large. Ideally:

  • Given is small. If it starts to get quite big this starts to sound more like an integration test and less of a unit test. It should also contain no tests – this is not the subject of the unit test.
  • When is tiny. This should ONLY be the code you are actually testing.
  • Then is tiny. This contains your actual tests and assertions. Given that a unit test should test one thing, there should only be one assertion here, or multiple tightly related assertions.

Why Looks Matter

What struck me was that my unit tests – while quite effective – are a mess compared to my normal code. I tend to rattle through them and not give them the full attention that they need. This can hurt me when I have to return to them to understand why they fail.

So I have experimented by taking the descriptive structure of the framework that Trisha describes in the video and implementing it in LabVIEW. The idea is we want clearly separate sections for these with defined boundaries so I found flat sequence structures work well.

Let me give you a (kind of) before and after.

Before:

The old test style

This is the old style. It works well. However, just looking at the code, there are test cases spread throughout (5 in total) and it isn't clear from the code alone what is being tested.

After:

The new given-when-then test structure

(Yes this is a different test, I haven’t rewritten all of my tests to this format)

Here it is much clearer what is just setup code, what is the code under test and then what the conditions are that we really care about.

It also makes it really obvious if I have tests that are really just checking that the setup has worked, which is what some of the tests in the before case are doing. (Sometimes this can be really useful, but I think the answer is that this code should have been tested somewhere else – I need to think this through more.)

Now I really am running out of things to say on unit testing! I have a few more OO posts in the pipeline as well as a couple of tips & tricks posts that I hope to do this year. It has been very busy the past couple of months but I will be having some time off over the summer while they reconstruct my knee! So expect a few more posts then.

Bringing the Command Line Interface to LabVIEW

Those of you that know me or have been following the blog will know that for a while now I have been practicing test driven development in LabVIEW.

This is great, most of my LabVIEW projects now have 50-100 tests attached to them that check various parts of the system are working, but of course, this is only when I remember to run them!

We are all fallible: when it comes to 6pm and I want to go home, I add my last flourish, commit to source control and go home, forgetting to test.

 

Well with my JavaScript code I get a voice from the cloud that tells me off if I make a mistake. The voice is Jenkins – A build server which is used for continuous integration. Every time I check in my code, Jenkins tests and builds my code to make sure nothing is broken.

(Well the voice is an email, but you get the idea)

I get no such prompt with LabVIEW.

 

There have been a number of projects over the years to do this. JKI have a system that we managed to set up and get working some years ago, but it depended on some intermediate files and took a bit of fiddling.

In order to try and make it easier, a couple of years ago I learnt some Java to try and build a plugin for Jenkins which could talk to some corresponding LabVIEW code over TCP. But it was over-complicated and I was over-ambitious and never finished it.

When I then set it up for my JavaScript application, even though there is no built-in support for it, it was so easy! Why?

 

The all-powerful command line interface!

 

Although most people have long given up on it (if you are a LabVIEW developer, you're probably more of a GUI kind of person), it is still a common and straightforward way to get two programs to talk together (it forms the backbone of the Unix philosophy).

If we can use the command line, we can talk to Jenkins and any other technology which comes up for the next decade at least.

The problem is LabVIEW's command line interface is quite basic. You can receive calling arguments but can't receive or send text (stdin and stdout) and can't return an exit code. That last bit is critical since it is how one program determines if another was successful, and therefore how we signal to Jenkins whether we have succeeded or failed our tests.
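
For anyone unfamiliar with exit codes, this is all a CI server does with them – run the program and check the number it returns, where zero means success. A minimal Python sketch (run_tests.py is a hypothetical test runner):

```python
import subprocess
import sys

# Run the test process and inspect its exit code.
result = subprocess.run([sys.executable, "run_tests.py"])

if result.returncode != 0:
    print("Tests failed - failing the build")
    sys.exit(result.returncode)   # propagate the failure to the caller (e.g. Jenkins)

print("Tests passed")
```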

JKI solved this problem through the use of batch files and text files but I wanted to try a different method.

I created LabVIEW CLI. This has two components:

  • A C# console application. This can run on the command line and basically proxies the interface through TCP to…
  • A corresponding LabVIEW library that gives us the ability to access the command line.

The philosophy is that the C# application is very small but can be called directly by Jenkins or any other language and will then launch LabVIEW or our LabVIEW built application.

I’m hoping that by keeping it quite simple, it can become a building block for CI tools and others for LabVIEW.

So if you're interested you can check out the builds on GitHub and give it a go. I am not currently using this in anger so please don't bet your job on it working! But early tests look good!

You will need to install the C# application with the installer and then there is a vip package for the LabVIEW library. Look at the readme on github and happy building!

Any issues you can create bugs on the github page or comment on the NI community page.

Labels, Labels, Everywhere

 

I’ve had a few long journeys over the last couple of weeks and managed to catch up on some reading.

In particular I have been jumping into some sections of one of my favourite programming books – Code Complete.

As with most books, the practical techniques directly apply to text based languages. Certain things like naming conventions for variables don’t readily apply to LabVIEW since everything is graphical.

However I think we need to think about text as well. It can be fast to read and unambiguous in meaning (when done well).

LabVIEW supports text documentation, mostly you interact with it through the context help window.

However, as has been pointed out in the last couple of CLA Summits, we should look to reduce the amount we have to break out of our programming flow.

I’ve talked about comments before, but this triggered some other thoughts.

SubVI Names

Increasingly, certainly for internal APIs, I find myself mostly relying on text on the icon other than where there is a well-established precedent (like init or close).

I like this because it makes my code faster to scan, I don’t have to do the mental translation from icon to meaning if it isn’t automatic.

Example where I have used text for icons

Think about the last time you drew a flow chart? You use text because it is quick and easy to get the meaning across.

(I know this has internationalisation implications, but this has not been a concern for me yet).

I am also aware of an advantage that we do have over C++ and its fellows when it comes to writing text. We can use descriptive names for our subVIs since we can include spaces.

C++ has to use names like InitDaqCard() or ReadTemp() because developers have to type them all the time. We can use full names for VIs like Initialise DAQ Card.vi or Read Ambient Temperature.vi and should take advantage of this.

Variables

Pah! Who needs them?

Well we may not name variables (very often) but controls and indicators can and should follow good conventions, with unambiguous names and inclusion of units. Like with the function names we are also not limited by text based languages naming restrictions in the same way.

Reading through, the part that really struck a chord with me is the section about intermediate variables.

Code Complete advocates meaningful intermediate names (so not i, j for loop counters for example). This hugely impacts the readability of code.

It actually goes further and suggests forgoing the performance hit and even adding unneeded variables for complex routines where they improve the readability by giving meaning to intermediate values.

Again does this apply to LabVIEW? We don’t have variables?

Actually we do, they are called wires.

When was the last time you drew a block diagram on a whiteboard and left the arrows unlabeled?

It would be hard to read but I find I rarely label wires in LabVIEW (unless they are obviously unclear) and I suspect I’m not the only one.

Some comments but wires aren't clear or are ambiguous

Context to these connections greatly improves readability. Again, we get some of this in LabVIEW through context help, as some names automatically propagate from functions.

But if we labeled wires more it could save you using context help, or even opening subVIs to understand what is going on. Do that a few times a day and the productivity adds up.

Larger but clearer with more wires labelled

Oh, and unlike creating an intermediate variable for readability, this comes with no performance implications in LabVIEW!

I will be trying to use them more often over the next few weeks and see how much I can improve the readability of my code.

 

Does this make sense? Or am I completely wrong? Let me know in the comments.

European CLA Summit 2016

Wow.

I am sat in a hotel lobby after the European CLA summit blown away by the amount of talent coming through in the LabVIEW community.

The CLA summit is an event designed for LabVIEW Architects to come together and share concepts and ideas to continue learning after the end of the NI courses.

It was a huge event this year, with I believe around 140 attendees with a great proportion of new faces.

Some of my personal favourites:

  • James Powell presented by showing changes he would make to the standard QMH template to make it less likely to hit bugs by thinking “What would a subVI do?”. An earlier form of this discussion changed the way I view this pattern and I think that everyone should see it!
  • The G Code Manager – A simple plugin tool that can greatly speed up managing some of the properties of VIs and classes. This is now online here and I am definitely going to be trying it out.
  • LabVIEW Channels – Jeff Kodosky presented these and the more I learn about these the more intrigued I get. Especially when Stephen Loftus-Mercer illustrates a potential new design pattern using them.
  • LabVIEW Containers from Chris Cilino – Similarly I have seen bits of these a few times but something resonated a lot more this time. I saw it more as where it could save me time and will be looking to try these out.

But I have to say there was not one bad presentation. They all gave me ideas from looking at continuous integration in LabVIEW again to thinking about our role as a community and what we need to do to bring it forward.

Naturally, I couldn’t help but get up myself. I presented my unit testing methodology which I have previously written about and a short presentation about a command line tool I’m working on (expect to see something here on that soon!)

Finally, I find some of these things are best summarised by twitter!

 

 

How Much Should You Comment Your Code?

How much should you comment your code? As with many things, I'm not convinced there is a "right" answer, but it is something you need to consider for your situation.

The extremes

Initially, when we start coding, we are often told to write lots of comments to explain what the code does. Bad programmers don't comment, we are told. In fact, recently when a non-programmer was trying to work out if my code was good, the first question asked was whether it was commented (the second was whether I had used OO).

Recently, in generation Z (or whatever we are in now), comments are fought against, with some advocating that you shouldn't comment since the comments often fall out of date with the code and become misleading. The idea is that if you write good code (well-named variables, functions, clean structure) then it can tell you more than comments anyway.

So Who’s Right?

As usual it isn’t black and white.

The idea of not having comments because they become out of date is rubbish. You are a professional; it is your job to make sure your code is readable and correct, so if your comments are regularly out of date you have failed – do better next time.

The reality, though, is that I think this standpoint comes from a reaction to overly commented code. The comments should not just describe the code in gritty detail – that is what the code is for! So let's not throw the baby out with the bathwater.

So what should you do?

As with anything step back and try to understand why you are commenting. What benefit are you looking for? Why is it hard without it? This should lead you to an answer.

For me comments should enhance the code and say something that the code doesn’t on its own. Off the top of my head these would be things like:

  • The overall intention of a block of code that doesn’t make sense as a subVI but works to a single goal. (This one is a little dubious, often a subVI might be the better choice)
    Comment Code Block
  • Design decisions for why you have structured the code the way you have.
    design option
  • Translating code terms into real-world terms. For example, using subdiagram labels that say 'this step is complete' rather than the standard "true" or "false" of the case statement.
    subdiagram labels
  • Units and derivations of constants and numbers – why are we dividing time by 3600? (Hint: seconds in an hour; see the snippet after this list.)
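
As a quick textual illustration of that last point (the names and values are only examples):

```python
SECONDS_PER_HOUR = 3600   # derivation: 60 s/min * 60 min/hour

elapsed_seconds = 7200
# The comment and the constant's name explain the magic number, so the next
# reader doesn't have to reverse-engineer where 3600 came from.
elapsed_hours = elapsed_seconds / SECONDS_PER_HOUR
print(elapsed_hours)      # 2.0
```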

An interesting thing with these is that because they don’t just describe the code, most of them shouldn’t go stale over time since the outcomes of the code will often be the same. If all you do is say what the code is doing, every code change then means a comment change, and you can see how these can get missed.

Have I missed an obvious case? Let me know in the comments.

