Branches are For Robots Too

For a long time I largely ignored branches in git when working on my own.

Quite frankly, as a solo developer I saw no benefit. I understood gitflow and its brethren, but it looked like more work for me with nothing in return (again, that's as a solo developer; I have recommended it to many teams). I just created feature branches when there was a clear benefit.

Recently though, I've been looking to up my automation game. For a while now, I have had all commits to my project repositories run through tests and builds. But I've always had a manual release process with a number of small steps.

I can automate the release process, but I don't want it to attempt to release every build. I also wanted a place for checks to take place before the final step.

When looking to automate this, I realised the same concepts that apply to a team of people apply to me and my trusty build server. Hence, my adoption of branching.

My Current Solution

What I do now is (mostly) follow a gitflow pattern. I’ve based it on gitflow because it matches my style of software release. To understand why I have added this caveat, check the notes of reflection on the original article at A successful Git branching model » nvie.com and consider how simple you can make it (like GitFlow considered harmful | End of Line Blog).

Once I have a release candidate on develop, I create a merge request to my released branch. (I've got no use for gitflow's release branches right now.)

This merge request runs my usual build checks, and then I have a checklist covering anything manual:

  • Have I updated the user documentation?
  • Have I updated the version numbers in the VIs (soon to be automated)?
  • Have I run a complete system test?
  • Have I tested this with the end user?

Once I approve the merge request, the CI scripts detect a merge to the released branch and build and publish the installers. I've also got it making the release tag in GitLab for me.
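The branch detection can be expressed directly in the CI configuration. As a rough sketch only (this assumes GitLab CI and a branch literally named `released`; the job names and the `build.sh`/`publish.sh` scripts are hypothetical placeholders, not my actual pipeline):

```yaml
# .gitlab-ci.yml (sketch)
build_and_test:
  script:
    - ./build.sh      # hypothetical build + test entry point
  # no 'only' clause: runs for every branch

publish_installers:
  script:
    - ./publish.sh    # hypothetical installer build and publish
  only:
    - released        # only runs on commits landing on the released branch
```

Because the merge commit only lands on `released` after the merge request is approved, the publish job effectively becomes the automated final step of the release.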

So remember, robots are team members too!


HTML On CompactRIO

For a while now, I’ve worked on using HTML pages as an interface to a remote cRIO system. In the National Grid power quality system I worked on, each unit has a web page for monitoring and configuration.

One thing that is much easier with HTML is making dynamic pages. I previously worked on a project with the embedded display on the CompactRIO which had to be dynamic depending on the configuration. It was a bit of a headache: I had to script the positioning of different panels on the UI since there are no subpanels. It worked well in the end, but there was a lot of added complexity.

So when I had a new project with the need for a pretty simple UI, but possibly some dynamic content (different channel counts), I wondered: could I write it in HTML and display it with a browser on the embedded UI?

I hoped this would:

  • Look better.
  • Be easier to make dynamic.
  • Integrate better with accessibility tools like the on-screen keyboard.

The Concept

The basic design consisted of 4 components:

  1. The actual LabVIEW VI. This was a very state-based UI so we could just write the latest state to global variables without much trouble. There were a few commands that could be sent through a queue.
  2. The LabVIEW HTTP Interface. I built a web interface so the state information and commands could be sent and received as HTTP or REST calls.
  3. The web page (thin client). I wrote this using JavaScript in the Vue framework, but it could just as easily use the NXG web module (or whatever that will be called by the time you read this). This is served by the LabVIEW web server and talks to the HTTP interface.
  4. A browser. I need to launch a browser on the system to display the page.
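The exchange between components 2 and 3 is easiest to see with a small example. This is not the project code: it's a minimal sketch in Python standing in for the JavaScript client, and the `/state` endpoint and its JSON shape are assumptions. A stdlib HTTP server plays the part of the LabVIEW web service so the sketch is runnable on its own.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in for the state the VI writes to global variables.
STATE = {"channels": 8, "running": True}

class StateHandler(BaseHTTPRequestHandler):
    """Plays the role of the LabVIEW HTTP interface for this sketch."""
    def do_GET(self):
        if self.path == "/state":
            body = json.dumps(STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch_state(base_url):
    """What the thin client does on each poll: GET the state as JSON."""
    with urlopen(base_url + "/state") as resp:
        return json.load(resp)

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), StateHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}"
    print(fetch_state(url))  # → {'channels': 8, 'running': True}
    server.shutdown()
```

The real client polled in the same request/response style; commands went the other way as HTTP calls into a queue on the LabVIEW side.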

Getting It Working

The LabVIEW and Javascript parts were straightforward enough. I had done similar things before and I got it working over a remote connection without too much issue. The question was, how to display this on the embedded UI?

There are a number of browsers available on Linux, but what would work on cRIO?

I went first to Firefox. It has a kiosk mode to make it appear as a full screen app and limit user interaction. And I managed to find instructions on what was needed to run it on cRIO.

The “installation” process was to install a couple of dependent libraries from the NI package feeds, then download and copy the pre-built x86 Linux binaries from the Firefox pages. As I write this, I see a complete guide exists which I missed before.

The thin client was included in the LabVIEW web services project as static files, and that was included in the real-time executable. When the executable starts, it uses System Exec to launch Firefox in kiosk mode pointing at the right page.

Aside – On Screen Keyboard

One thing I really thought this would make easier was the on-screen keyboard. I hoped that by using HTML the integration with this would be automatic.

Not quite: it just didn't work. I installed the Florence keyboard but couldn't get the auto-activation to work. There is something to do with accessibility libraries that are included with the Xfce desktop. Because of time constraints, I gave up on getting this working. The usage I needed was limited, so I was able to integrate simple-keyboard.js instead.

First Look

Everything looked very promising. The page and display launched as expected.

I can't share it as it is a customer project, but it looked pretty good too. I had the rare experience on a LabVIEW project of a customer commenting on how good it looked!

Time to put it through final testing.

It All Falls Down

As with any embedded project, a key test is a long-running one. When I came back in after it had been running overnight, it was gone. Nothing was running.

I repeated the test in case I had done something wrong: same problem.

Not only was Firefox crashing, but it took the RT app with it. I tried some techniques to isolate them a bit more but fundamentally it wasn’t stable enough.

I looked at the performance and, unsurprisingly, Firefox was too much for a cRIO-9030. It was using significantly more RAM than physically existed; in itself that isn't a dealbreaker, but something was causing it to crash. I managed to find logs that pointed to memory issues. I checked the JavaScript for memory leaks, but no luck.

I looked for other, lighter browsers but struggled to get any working easily. I think there probably are some that you could build for it, but at this point I didn't have the time.

Back To LabVIEW

At this point I re-wrote the UI back in LabVIEW. I had planned time into the project in case I had to, so this wasn't a major concern. Here is what that meant for the system:

  • CPU dropped a lot. Firefox was using around 15-25% CPU, LabVIEW was generally under 10%. This isn’t a big surprise, especially with no graphics card for Firefox to leverage.
  • I didn't just go back to a traditional LabVIEW UI. I took some time to replicate as much of the look and feel from the HTML pages as I could, and it looked pretty good to be fair; just a few dynamic elements were hard to replicate, like highlighting selected array items.
  • Luckily, I already had a basic on-screen numpad implementation from a previous project for the same customer; otherwise this would have been much harder.
  • I believe it took about 4 hours to rewrite in LabVIEW vs 20 hours in JavaScript. Take this with a massive pinch of salt! I was basically learning the Vue.js framework and writing a numpad component which I already had for LabVIEW. I was also copying the styling and structure into LabVIEW rather than designing from scratch. Still, it's rare you get to build the same thing twice and compare!
  • The system was stable.
  • When we increase the channel count though, I will have to do additional work on the LabVIEW UI that I wouldn't have had to do on the HTML client; it automatically adapted to different channel counts.

Conclusions

So, a failed experiment, but an interesting one. I hope this might help or inspire someone else. The most interesting points for me were:

  • Remembering how heavyweight browsers are, but also how heavyweight UIs are in general. If this VI were doing anything more on the RT side, I would be concerned about the performance impact of even the embedded UI. As it happens, all the hard work is done on the FPGA.
  • There is a case for limiting usage of the embedded UI due to this and looking at panel PCs instead – though this really depends on how complex the communications might be. The isolation must be good for stability though.
  • The LabVIEW UI is remarkably powerful and fast to work with. It just falls down when you want:
    • something more dynamic, e.g. 8 or 16 channels.
    • consistent styling throughout.
    • form-style input without a lot of faff.
  • HTML did look better, partly through better technology, but also through easier experimentation. Maybe using HTML to mock up panels before implementation in LabVIEW would lead to better UIs?
  • A nice, lightweight way of running a single HTML view could still be an interesting replacement for a LabVIEW UI. There may be some convergence with the work I’m doing in Rust at some point.

Testing HTTPS (SSL) Connections on NI LinuxRT

This is a VERY specific post – but it took me a while to avoid red herrings so I thought it was worth sharing. Sorry LabVIEW folks – this is all about LinuxRT!

I had a system that was having issues connecting to an HTTPS server (for WebDAV). It had previously been running without issue and then just stopped.

As I couldn't disturb the software directly, I logged in over SSH and tried the following:

  • nslookup <server> – This queries DNS to make sure we have a valid address for the server. This passed.
  • ping <server> – This failed, but it also failed from another system, so the server probably has ping responses blocked.

My next step would be to try curl, which allows HTTP(S) requests from the command line, but that isn't installed. Instead I found that I could do a security check with openssl. This should confirm that the certificates work and the port is reachable.

The command was:

openssl s_client -quiet -connect <server>:443

This failed! Hmm – maybe because this is an older LinuxRT distribution there is a certificate problem. So I went hunting for the certificate store.

It turns out there isn't a central store automatically recognised by OpenSSL. So for the proper test we actually need to point to NI's certificate bundle at /etc/natinst/nissl/ca-bundle.crt (on LinuxRT 2017, but I would be surprised if this moves).

I expect this is true of any command-line tool that uses SSL – so if you do install cURL it will probably need to be pointed at this bundle.
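The same applies if you script the check rather than run it from the shell. Here is a sketch using Python's ssl module (the bundle path is the NI one above; `<server>` stays a placeholder, and this is an illustration rather than something I ran on the target):

```python
import socket
import ssl

# NI's certificate bundle on LinuxRT 2017; OpenSSL-based tools will not
# find it unless you point at it explicitly.
CA_BUNDLE = "/etc/natinst/nissl/ca-bundle.crt"

def check_https(server, port=443):
    """Open a TLS connection and verify the certificate chain against
    the NI bundle. Returns the negotiated TLS version on success and
    raises ssl.SSLError / OSError on failure."""
    context = ssl.create_default_context(cafile=CA_BUNDLE)
    with socket.create_connection((server, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=server) as tls:
            return tls.version()
```

This is the same test as the openssl command: DNS, TCP reachability on port 443 and certificate validation all have to succeed for it to return.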

So now the command is:

openssl s_client -quiet -CAfile /etc/natinst/nissl/ca-bundle.crt -connect <server>:443

That worked, and I could move on in my troubleshooting. (At this stage it is working again without any intervention – not sure whether to be happy or not!)

What To Do About GDevCon

Hello Everyone,

As you can imagine, there is a lot of uncertainty in the GDevCon team right now about how we handle GDevCon and COVID-19. We have been trying to figure out the right path to take. One thing I think we could have done more of is communicating the options and the approach we're taking, so you can plan and try to understand what the future holds.

So this post is an attempt just to reset that a little bit, help you understand where we’re coming from, and also solicit any feedback, because we’re making assumptions about what is most useful to you as well. 

A couple of caveats upfront:

  • This post is my interpretation of the discussions in the GDevCon team (with their blessing). My conjecture may not match the views of everyone on the team.
  • No-one has a crystal ball so please don’t make hard plans based on this post!

Responsible Timing

So the first thing to say is we didn’t want to rush any decision. Obviously in March the severity of the situation became very obvious. With the event in September, six months later, we didn’t want to try and predict the future. We want GDevCon to go ahead if it can. We think it’s a very valuable event and quite frankly if it is able to go ahead, it’ll be a nice relief after these lockdown times.

So we made a decision early on to hold off a final decision until June. I don’t think that people are making many travel plans anyway so I expect this delay will have a minimal impact on attendees but is a big benefit to us to see what is happening in the world.

The Options

So what are the options we are deciding between?

  1. GDevCon #3 going ahead.
  2. Postponement (probably to early 2021)
  3. Online Event

Going Ahead

To go ahead though I think we will need to be confident that three conditions are in place by September:

  1. International travel needs to be easy. GDevCon is a global event with attendees, sponsors and speakers coming from all over the globe.
  2. CERN needs to be happy that they can host us in a way that everyone is safe.
  3. Companies need to be happy to send their staff in the confidence that everybody will be as safe as possible. There may be a situation where travel is possible but companies are still not allowing it. This is a harder one to define but worth discussing.

As I write these down, to be perfectly honest, I find it hard to see that we will hit one of these points, let alone all three. As countries begin to ease restrictions over the next month though this will probably become clearer.

Postponement

Postponement is probably the most likely option. We would look to postpone the event, perhaps to early 2021. 

We have had a great response from sponsors for GDevCon #3, and the presentations submissions have been fantastic. This takes effort from us and from presenters and sponsors so we would like to avoid throwing that effort away and take the same speakers, sponsors and attendees with us to a later date.

No-one would be obliged though, and full refunds would be available in this case.

The hard part of this decision is when will it be safe?

Online Event

We have only had some basic discussions about an online event, but we are in broad agreement.

To be perfectly honest, we're not keen. Since we started GDevCon, we have seen the team-building and community-building aspects as being as important as the content. That's why it was a two-day event from the start: we wanted that evening event for people to mix, talk and continue conversations. So we feel we would prefer to avoid an online event and focus on getting an in-person event going.


The Future

The good news is we have been very aware of risk through this whole process and GDevCon 2020 is no exception. We are in a position where we can cancel the event, refund everyone and we will still be around and as strong for 2021.

Feedback

I hope this helps explain where we are and the options we are considering. 

This event is about you though so tell us what you think. Does making a decision earlier help you significantly? Has your company/spouse already ruled out travel in September anyway?

Understanding your position will help us to understand the options in front of us. So either comment below or get in touch with the team via email or social media.

e: admin@gdevcon.com
t: https://twitter.com/GDevConference
l: https://www.linkedin.com/company/gdevcon/

What to Do When Development is Quiet

Having worked for myself (and by myself mostly) for a few years now I feel lucky that I have had some training for the “new-normal”.

Right now things look OK for me – My current projects can largely be worked on remotely. But what if you can’t?

So here is my general plan for a quiet development period:

1. Work on processes and automation.

Look back at your development processes. Now is a great time to cast a critical eye over them.

In particular look for opportunities for automation such as CI or code scripting. The work you put in on that now will pay off for years to come.

2. Do experimental refactoring

Not expecting to be able to deliver code for a few months? Now might be a great time to work on those refactoring jobs that always get pushed back because “deadlines”. (Use source control in case it doesn’t work!)

3. Learn, learn, learn

It’s a great time to learn a new language, technology or skill so you can come out of this stronger.

Money may be tight but there are some great learning platforms out there.

I joined the ACM which gets you access to O’Reilly’s e-learning platform including their books and video courses.

I also use Pluralsight when picking up new technologies (I've been learning SaltStack at the minute).

For software engineering, I’ve always liked the look of Construx online learning as well.

EDIT: LabVIEW Learning

If you are already a competent LabVIEW developer, I would use this as a chance to look outside LabVIEW a bit more. You will probably pick up the LabVIEW learning in normal circumstances anyway, and I find looking further afield hugely valuable for increasing your world view. But I should mention some LabVIEW resources here as well:

  • Since I posted this, I’ve seen that NI has announced that all their online learning is available for free until the end of April: https://learn.ni.com/training
  • There was an excellent book released recently 🙂 https://www.hive.co.uk/Product/Richard-Jennings/LabVIEW-Graphical-Programming-Fifth-Edition/24023115
  • Nicola Bavarone (thanks!) offered the great suggestion of looking at the badges on ni.com (https://learn.ni.com/badges). I’ve found these good for identifying gaps in my knowledge.

4. Give Back to the Community

Everyone is going to be in a bit of a funk. Wouldn't a great new open-source tool make everyone feel more positive? Or perhaps try some teaching yourself: record some videos and put them on YouTube to share your expertise.

What do you do that I’ve missed?

Unit Testing Large And Complex Data

This week I've been helping some developers with unit testing adoption, which raised an interesting topic that I've not seen explicitly addressed.

Unit testing is great when you are working with simple data, but what if you have larger or more complex data, such as waveforms or images?

I’ve used a couple of techniques over the years:

Simplify

Maybe this is an obvious one, but the first option is to identify whether the method can still be applied to a subset of the real data.

For example, on an application where I'm doing some image processing, the real data will be 256×256 pixels.

However, my tests run over 3×3.

This is still applicable to the larger image, as many algorithms involve handling edge conditions at the borders and the normal implementation in the middle. The larger arrays will have more of the normal implementation, but the smaller tests still cover the edge conditions as well (which is often where things go wrong!).
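To illustrate the idea (this is not the real image code; it's a hypothetical 3×3 box blur with clamped edges, sketched in Python), notice that a 3×3 input already exercises every corner and edge case the 256×256 image will hit:

```python
def box_blur(img):
    """3x3 mean filter with edges clamped to the nearest pixel.

    Hypothetical stand-in for the real processing step; the edge
    handling is what the small test is really exercising.
    """
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr = min(max(r + dr, 0), rows - 1)  # clamp at the border
                    cc = min(max(c + dc, 0), cols - 1)
                    total += img[rr][cc]
            out[r][c] = total / 9.0
    return out

# Tiny test data: every pixel of a 3x3 image is an edge or corner case.
uniform = [[5.0] * 3 for _ in range(3)]
assert box_blur(uniform) == uniform  # a flat image must stay flat, even at edges
```

Exactly the same assertion is valid at 256×256; the small input just runs in microseconds and makes failures easy to inspect.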

Generate

In some cases, you need a full input set but you are just measuring some properties.

An example might be a frequency processing function where we want to extract the size of the peak at 1kHz.

The FFT parameters change a lot based on the size of the data you put in, so really we want to use the actual size we expect in the application. So instead, what I have done in the past is to write a basic routine to generate an appropriate signal.

In the example above I use generators to produce a multitone signal, perform the frequency analysis and manipulation which I am testing and then compare just the components of interest.

(Note: this is before I got a bit more structured with my testing!)

Credit to Piotr Demski at Sparkflow for pointing out an important point I missed when I first published this: if you are generating data, it should be the same every time, i.e. beware random data sources. Otherwise, your tests may fail without anything changing.
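A sketch of the pattern in Python (the tone frequencies, amplitudes, sample rate and block size here are invented; it uses a single-bin DFT rather than a full FFT so it stays pure stdlib):

```python
import cmath
import math

FS = 10_000  # sample rate in Hz (assumed for the sketch)
N = 10_000   # full application-sized block, as the real FFT would see

def multitone(freqs_amps):
    """Deterministically generate a multitone signal.

    No random phase or noise, so the test input is bit-identical on
    every run (Piotr's point above)."""
    return [
        sum(a * math.sin(2 * math.pi * f * n / FS) for f, a in freqs_amps)
        for n in range(N)
    ]

def tone_amplitude(signal, freq):
    """Amplitude of one frequency component via a single-bin DFT."""
    acc = sum(x * cmath.exp(-2j * math.pi * freq * n / FS)
              for n, x in enumerate(signal))
    return 2 * abs(acc) / len(signal)

# Generate mains plus the 1 kHz peak of interest, then compare only
# the component we care about rather than the whole spectrum.
signal = multitone([(50, 1.0), (1000, 0.5)])
print(round(tone_amplitude(signal, 1000), 3))  # → 0.5
```

The comparison is against just the components of interest, so the test stays readable even though the input is a full-size block.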

Golden Data

The approaches above may not work if it isn't obvious how to generate the data, or you can't generate the important elements easily. They also only help where the problem is the input data; what if you need to compare a large result?

Here I will revert to storing a constant of some reference data. Normally by running the system, using a probe on the data and copying it to my test.

Quick Demo of Golden Data Process

On the input side, this can work without any compromise.

For expected outputs there is an obvious catch: if you generate the expected data from the VI under test, the test will of course pass. However, if you have no way of generating an expected output, then we have to compromise.

Instead, we can write the algorithm until it works (validated manually) and capture that data for the test. It means the test doesn't guarantee the algorithm is right; however, it will fail if you do anything that alters the output, which gives you a chance to catch potential regressions.

Another path is to use a known good algorithm (perhaps from another language) to capture the expected output in a similar way. This is fantastic when you are looking to replace or replicate legacy systems since you can take data directly from them.
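Whichever way the golden data is captured, the mechanics of the test are the same in any language. A sketch in Python (the constant and the function here are invented stand-ins; in practice the golden values are copied from a probe on the validated system or from the legacy implementation):

```python
# Golden-data test: the expected output was captured once from the
# validated system and pasted in as a constant. These numbers are
# invented for the sketch.
GOLDEN_OUTPUT = [0.0, 1.25, 2.5, 3.75, 5.0]

def algorithm_under_test(n):
    """Stand-in for the VI under test."""
    return [1.25 * i for i in range(n)]

def matches_golden(result, golden, tol=1e-9):
    # Compare with a tolerance: bit-exact float equality is fragile,
    # especially if the capture and the test run on different targets.
    return len(result) == len(golden) and all(
        abs(a - b) <= tol for a, b in zip(result, golden)
    )

assert matches_golden(algorithm_under_test(5), GOLDEN_OUTPUT)
```

As the post says, this cannot prove the output is right; it just guarantees that any change to the output fails loudly.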

Catch It At Different Levels

In some cases, it may simply not be viable to create a unit test for the algorithm. That’s OK – 100% coverage isn’t a useful goal in most cases anyway. Instead, consider:

  • Try and unit test the components inside the algorithm (depending on the structure of your code). This will provide some confidence.
  • Make sure this algorithm is verified in system or integration level tests. Ideally, find ways to make it fail loudly so it isn’t likely to be missed.

I hope that gives you some ideas for your use case. Feel free to post any questions you have.

Reflecting on GDevCon #2

GDevCon #2 is wrapped. I'm really proud of the event we put on this year, and it's got me really excited about where we can take it next year. We've got plenty to think about and I hope that we can top it again.


I’ve taken some really interesting thoughts away and I thought it would be interesting to share them (and help me by having to describe them!)


Practical Talks


Our target from the start has been that this event has to be valuable to all members of a team. The CLA summits are often a place to pontificate about architecture and design, but that is of little interest to someone getting going with LabVIEW, or working on an established framework with engineering challenges and deadlines to meet.


This year I feel we got this balance much better. I believe that showed in the variety of levels, the engaging presentations and the useful takeaways.


Pragmatism


On the whole, it felt like there was a more pragmatic approach than other conferences (influenced by DSH workshop’s pragmatic software development workshop?).

This came from a combination of confident presenters who were happy to present what they do and accept that it may not be perfect, but it works for them. That was then largely reflected in comments and questions that respected this (while still looking to inform when there were other options as well).


This really excites me as I think it makes it a more friendly and open environment and hopefully might encourage more to speak so we get different perspectives on topics.


So think about this in future presentations. It works for you! Try to think about the pros, the cons and your context, which all factor into it being a good engineering decision.

“Chill Out” Room

I still love having the streaming room. I get frustrated sitting in one place for too long so it was great to have somewhere to stretch out but not miss the content (as I often have at other conferences). It does cost money! But it seems to be valued by those who use it.


Ready for Next Year!

I hope we can start planning next year soon, so make sure you're signed up to the mailing list at gdevcon.com to hear about it.

We will be asking for feedback soon as well so do let us know what works and what doesn’t. My final takeaway is that I can be very wrong about what people will like! I’m happy to admit that I had concerns about a few things this year and how they would be received that were completely wrong. So help me to be right more by telling us what you need!

The Graphical Tax

I’ve been thinking a lot lately about tooling. I’ve been getting my CI system running smoothly (ish, more on that later) as well as exploring another new language and working on open source projects on GitHub.

I’ve realised that there are specific penalties that we have to face as LabVIEW developers that are largely down to the graphical nature of LabVIEW.

In my head, I think of this as the graphical tax and thought it would be interesting to put these out there, see if there are solutions I’m missing or at least let’s have a discussion.

Software Engineering Tools

This one is an easy place to start.

Most software engineering tools assume a text-based system. Git works incredibly well on text. Github can merge requests in the browser on text-based files. CI tools use a text-based interface (the command line) to automate tasks like building, testing and linting your code.

As LabVIEW developers, we miss out on a lot here:

  • I've tried to plug the gap to CI tools with G CLI (https://github.com/JamesMc86/G-CLI) but it is far from seamless as LabVIEW still has many cases where it pops dialogues.
  • Git etc. do work, but they don't understand the files, so it's manual merges and diffs all round.
  • Any tooling for the language has to start from scratch. We can’t leverage what already exists.

This one is frustrating as software engineering marches on – we have to be careful not to get left behind. NXG promises to help a little with a human-readable VI format but I’m not yet convinced it will solve many problems (though some I’m sure).

Consistent Style

This is one that I think is really fundamental and just something we have to accept.

Compared to LabVIEW, text-based style seems simple: spaces vs. tabs, newlines or not, camel case or snake case. There are certainly huge debates, but it is essentially a 1D problem. It's also easier to manage with linters and code analysis tools.

With LabVIEW, we take that problem and add new dimensions. Style is literally 2D. There is so much room for variation, and so many mitigating circumstances for a given diagram, that agreeing a common set of rules is hard.

With text, you can add your own visual style on top with your editor while keeping the code “pure”. With LabVIEW, if you want different colours for your background it saves into the code.

A great example was a pull request I got on GitHub. After downloading the code to my machine, many of the comments were too small. When I requested they be made larger to fit the text, the contributor sent back a screenshot showing that they fit on their screen.

This is an area that interests me in understanding more. I’m not sure if it is hopeless or we just need to be stricter. It certainly feels like a barrier to more collaboration and so something that should be considered.

Web Tools for Review

I think this is going to get harder and harder over time.

Increasingly the web is the centre of our working world. Text translates to that world well.

Recently I found a bug in an open-source JavaScript project that I wanted to fix. But I wanted to understand it more first, so I could go to GitHub and explore the code in the browser rather than littering my computer with hundreds of repos I want to study.

Review is also getting bigger. The place this really hurts for me is in pull requests on GitHub. When I receive a pull request for the C# part of G CLI it can be a 5-minute process. I can review the diffs in GitHub and understand what is going on.

With LabVIEW, you are forced to pull the project down to a desktop with the IDE to open and view it. So when, as right now as I write this, I'm not on my main dev machine, I may not be able to provide any comment on what is happening.

This could be improved by having sites with plugins for LabVIEW that can generate code images and diffs. I know some work has been done on this in Jenkins, so it can be improved. But it will never be as straightforward as commenting on text code (line-by-line comments are handy).

Why Pay?

So why pay the graphical tax? Or:

If you're just going to moan about it, James, then use text-based languages

Well, there are still some hugely compelling reasons:

  • I believe it is easier to understand program structure graphically which greatly speeds up development and debugging. At least I’m a very graphical thinker.
  • There are no other tools that allow you to target desktop, real-time and FPGA systems with very little difference in syntax and concepts.
  • Even on a single target – there are not many other languages that combine a fairly high level of abstraction and the level of performance that LabVIEW provides.

So I still see a net gain from using LabVIEW. I have no plans to jump ship any time soon. But maybe, by sharing my concerns, I might trigger some thoughts on how we can tackle these issues.

Open Source LabVIEW – How To Contribute

Intro

Open source software projects are making huge contributions around the world. They allow communities to pool their resources and achieve progress that couldn’t be reached by individual teams in silos.

While there are open source projects in LabVIEW, it feels like a resource we aren't great at using as a community. Every project I've seen seems to handle things quite differently, and it can be hard to track down where to work. Since I started the first draft of this post, it seems there have been discussions in the US on the same topic.

In this post (and another on setting up a project), I want to collate what appear to be best practices in other communities, to see if we can't improve the open source capabilities of our own. There are hundreds of guides out there, so I will be liberally linking to others and trying to highlight the differences for LabVIEW.

Open Source Project Structure

So what is an open source project?

In short, it is a project that someone has decided to make available for free to everyone to use and improve. I’m not advocating making all of your work free, but often, there are bits of sawdust – the bits of code or tools that you have lying around to make you more effective, that you can share and help drive the community forward.

Firstly let’s look at the people involved in a project:

  • Maintainers – Normally the original developer, they are responsible for the master copy of the project. It may also include trusted contributors who have taken on more responsibility. As well as working on the project, this is the group that will be involved in setting a direction for the project as well as assessing and accepting contributions.
  • Contributors – These are people that have provided code changes or documentation to the project.
  • Users – More obvious, people who are using the project themselves and may contribute bug reports and other issues.

Find Projects

There are some tools for finding open source projects to use or work on, but don’t be surprised if LabVIEW is missing from the list.

If you are looking for a project for a specific function, then Google may be the starting point. However, when I tried this with Modbus, the results weren't great. Adding “open source” to the search got us closer: we now see download pages, but no source code.

So, to do better, we need to search some common places for open source projects to be hosted. We can stick to Google for this. By far the most common host for new projects is github.com, so we add site:github.com (bitbucket.org and gitlab.com can be worth trying too). Now we see two open source libraries.

Of course, those sites have their search functions which can be useful too.

If you go to Github.com and enter “language:LabVIEW” into the search box, it will show you all of the LabVIEW projects on there. You can then add additional search terms to narrow it down.

A big concern with open source is how do you know if this project is reliable? There are a few indicators that you can consider:

  • Most services have some form of user rating – while the small size of the LabVIEW community doesn’t help these – you may be able to apply some judgement to them. For example, projects can be “starred” on GitHub.
  • Look at how active the project is. GitHub will show you the latest commit, and you can also see releases, commits and contributors at the top. A good commit history or multiple contributors is a good sign the project is active, which usually means higher quality output (since maintainers are fixing bugs and improving the code base).
  • Finally, it is open source! Download the code and examples, take a look to see what you think, and try it out quickly before committing to it.

Use and Report Issues

The simplest thing you can do to support a project is to use it! From the GitHub page you should be able to find directions for installing the project and possibly for reporting issues.

I try, and encourage others, to use the built-in issue tracker on GitHub or Bitbucket. This means that:

  • The issues and discussions are public so others can contribute.
  • When you have an issue, you can search this first and see if there is an existing solution. (yay for reduced support burden!)
  • The issues list becomes a list of items that new contributors can work on.

When you find a problem or something you want to change, the first thing to do is create an issue. This allows you to open a dialogue with the project maintainers about the problem. They may already have a fix, or they may suggest you contribute the change yourself.

Fix Issues and Contribute

Firstly, contributions don’t have to be code. On the LabVIEW CLI project, I was delighted to have help producing a getting started guide. Maintainers sometimes leave documentation and examples to the end, but they are hugely valuable. Of course, you can contribute to the code base as well.

A good project should have a contributing guide in the repository. This will let you know how the maintainers want to receive contributions. This is also a great place for details on things like LabVIEW version, source code structure or build instructions.

If there are no contributing guidelines, there are some excellent general guides available, so I’m not going to repeat them here:

  • A good general description: https://www.hanselman.com/blog/GetInvolvedInOpenSourceTodayHowToContributeAPatchToAGitHubHostedOpenSourceProjectLikeCode52.aspx
  • A LabVIEW specific guide to Github (covers starting from scratch): https://github.com/NISystemsEngineering/GitHub-Hands-on/blob/master/docs/Git%20Basic%20Concepts.md

Further Reading

I hope this article gets you interested in contributing to open source in LabVIEW. If you want more comprehensive guides, I suggest the following:

  • First Timers Only – A guide for those who haven’t contributed to open source before.
  • Open Source Guides – A comprehensive set of guides for open source. The link here is to their contributing section.

But most of all – start using a project today and let the developers know what you think!

Getting VI Analyzer Into Our Workflow

We have long known that VI Analyzer is a good idea – much like unit testing, people on the other side of adoption swear by it.

We’ve found a few hurdles for mainstream adoption into our process, and I suspect yours too.

1. Understanding Why

The first step in adopting VI Analyzer (and keeping that adoption going) is understanding why you are doing it!

What I mean is that you need to make the tool fit your process – not the other way around. If you use VI Analyzer just because NI says so, then you’re going to see less benefit and it will feel like a lot more effort.

For me – I believe that consistent style and code inspection reduce bugs and improve the readability of the system. I have a style guideline, but I don’t always follow it, and as I typically work on my own, code reviews aren’t an option.

Your “why” will probably be very similar but the subtle differences will make some of the following items slightly different from mine.

2. Complicated Setup

As I said above, VI Analyzer is all about consistency for us. We want every project to follow the same style guidelines. Unfortunately, VI Analyzer does not make it very natural to create a standard test configuration and share it between projects, since a single configuration file holds both the test setup and the list of VIs to test.

These are the steps that I went through to build a standard configuration:

  1. Start with your style guidelines. I made mine into a table and identified what already had a VI Analyzer test, what had a custom test in the community, what could have a custom test and what was not testable.

    Style Guide With Tests
  2. I downloaded the custom tests that are available online and created a couple of key tests myself. I didn’t do them all, and I will expand my coverage over time.
  3. I used a project to create a VI Analyzer configuration file. I worked through the tests and configured them to my style guideline. Then I removed all of the VIs to be tested. I saved this configuration file as my standard.
  4. I created a VI package which installs the custom tests and the configuration file to a shared location in My Documents. Full credit to Fabiola De La Cueva and Delacor for this idea; they have been helping their customers with this for some time. (You can see their video of this on YouTube.)

By completing these steps up front, I can introduce a consistent VI Analyzer configuration to a new project quickly and easily.

3. Defining A Trigger

As with any good habit, you need a trigger to tell you when to do it, or else you often won’t follow through. There are a few common triggers that I have seen people use:

  • Post-Commit in a CI Server – I’m not a big fan of this one because you now need another trigger to review the results and implement the changes.
  • Pre-Code Review – This is a great one if you are on a team. You should test with VI Analyzer before a code review. You don’t want to waste time picking up things a machine can identify faster. As I am a solo developer, this one is limited for me.
  • Feature Branch Merge – If you are using a branching workflow in Git or Mercurial, then you can use a feature branch merge as a trigger. This is a good trigger as it should limit the scope of what needs testing. However, if the change list gets long, then there could be a lot to review.
  • Pre-Commit – I use a JavaScript IDE that has a tick box in the commit tool to check style before the commit. This workflow is what I want in LabVIEW. Testing at each commit keeps the tests quick (since only a few files have changed), and it means that you fix issues while you are still in the mindset of that code. It also means that once you commit the code, it is “done”. The main problem with this in VI Analyzer is the execution time of the tests.
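A pre-commit trigger can be wired up with an ordinary git hook. The sketch below only defines the pieces; it assumes the LabVIEW CLI is on your PATH, and “analyzer.viancfg” and the report path are placeholders for your own shared configuration:

```python
"""Git pre-commit hook sketch: run VI Analyzer before each commit.

Assumptions (not from the article): the LabVIEW CLI is installed and
on PATH, and "analyzer.viancfg" is your shared configuration file.
"""
import subprocess


def filter_vi_files(paths):
    """Keep only the file types VI Analyzer can test."""
    return [p for p in paths if p.lower().endswith((".vi", ".vim", ".ctl"))]


def staged_vi_files():
    """Ask git for staged (added/copied/modified) LabVIEW files."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True).stdout
    return filter_vi_files(out.splitlines())


def run_hook():
    """Return 0 to allow the commit, non-zero to block it."""
    if not staged_vi_files():
        return 0  # no LabVIEW changes staged - skip the analysis
    result = subprocess.run(
        ["LabVIEWCLI", "-OperationName", "RunVIAnalyzer",
         "-ConfigPath", "analyzer.viancfg",
         "-ReportPath", "vian_report.txt"])
    return result.returncode


# To install: save as .git/hooks/pre-commit, make it executable, and
# have it end with:  import sys; sys.exit(run_hook())
```

Note this still analyses everything in the configuration file, which is exactly the speed problem discussed next.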

4. Speed

I wanted my trigger to be a commit. I feel like the code should go through VI Analyzer to be “finished” and only “finished” code should be committed.

This is a problem though. Analyzing a whole project can take many minutes, and I might commit 10+ times a day.

One way to solve this would be to test only changed code, but VI Analyzer lacks any native support for this workflow.

I have developed a tool to tie VI Analyzer to git changes. It isn’t great, so we haven’t shared it yet (it still lacks many essential features), but it has got us using VI Analyzer regularly.

This tool takes a configuration file and runs it against only the code that Git shows as changed. Testing only the changes cuts the test time, making it practical to check at each commit and fix issues before committing.
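The core of that idea can be sketched in a few lines: ask git which LabVIEW files differ from a base branch, and substitute those for the full “VIs to test” list. This is not the actual tool, just an illustration; “develop” is a placeholder for whichever branch you measure changes against:

```python
import subprocess


def unique_paths(diff_output):
    """Turn `git diff --name-only` output into a sorted, de-duplicated list."""
    return sorted({line for line in diff_output.splitlines() if line})


def changed_labview_files(base="develop"):
    """List LabVIEW files changed relative to a base branch.

    A real tool would feed these paths into the standard VI Analyzer
    configuration in place of the whole-project file list.
    """
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--",
         "*.vi", "*.vim", "*.ctl"],
        capture_output=True, text=True, check=True).stdout
    return unique_paths(out)
```

Restricting the diff to a base branch keeps the list short for small commits, which is what makes per-commit analysis fast enough to be tolerable.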

Ugly – But Basically Works

By overcoming these hurdles, VI Analyzer has become a standard part of our workflow. I hope you can use this to incorporate it as well.

