
How I Learned Rust

One question that has come up a few times is “how did you learn Rust?”

I am sharing my path with Rust and the other languages I have experimented with.

I want to emphasise that it is great for any developer to take some of these first steps into other languages.

Once I started on this path, I found I developed much faster, both in my primary language and in the ones I was learning.

By learning other languages, I found I could separate the language-specific ideas from the underlying software design principles, making me a better designer and bringing new ideas back to the table.

My Background

First, I wanted to highlight some things that impacted my choices and the elements I spent more time on, as this will be different for you and affect how you approach this.

  1. When I came to Rust, I had already worked in several other languages – in JavaScript at least, I had a significant code base written – so I had gained familiarity with a text-based approach.
  2. I did a bit of C++ and computer architecture at university, so while my systems understanding was poor, I had some fundamentals.
  3. I’m rubbish at working on toy projects on the side.
  4. I am quite unstructured – skimming through the basics quickly and jumping back and forth as needed.
  5. I work for myself, so I don’t have to ask permission.

Getting Started – Do I Want To Learn This?

I started exploring Rust around four years ago, so my memory of this stage is vague.

I wanted to pick up another systems language to expand what I could deliver to customers. But memory issues in C++ made me nervous.

I kept hearing about Rust on Hacker News and decided to look further. My first question was, “do I think this is worth investigating more?”.

I don’t remember exactly, but I suspect it looked something like this:

* Finding some basic tutorials and guides to understand the syntax and mental models.

* Following some of these to get something basic working with every shortcut possible.

* Finding libraries related to my work and looking at them on GitHub. Can I make sense of them?

Confirming My Interest – Minor Side Projects

My next step was to make sure I understood the fundamentals. I tried to reach the point of knowing what I didn’t know, then looked for places in my business which would exercise those gaps.

They always have to be a little real to me. Otherwise, I struggle to work on them.

A couple of early examples:

  • I wanted to understand the industrial libraries, and we had issues with a Modbus system – so I created a small application which printed the values from the Modbus device to the command line.
  • I needed to write a LabVIEW driver for a TCP device I didn’t have, so I wrote a simulator in Rust. This project let me experiment with TCP.
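As a flavour of the simulator idea, here is a minimal sketch of that kind of throwaway tool in Rust. To be clear, this is not the original code: the port number and the line-based protocol are made up for illustration, and a tiny home-grown pseudo-random generator stands in for a proper `rand` dependency.

```rust
use std::io::Write;
use std::net::TcpListener;
use std::thread::sleep;
use std::time::Duration;

/// Advance a simple linear congruential generator and scale the top 16 bits
/// into a plausible-looking reading (avoids pulling in the `rand` crate).
fn next_reading(state: &mut u32) -> f64 {
    *state = state.wrapping_mul(1_664_525).wrapping_add(1_013_904_223);
    (*state >> 16) as f64 / 100.0
}

fn main() -> std::io::Result<()> {
    // Bind where the driver under test expects the real device (port is made up).
    let listener = TcpListener::bind("127.0.0.1:9000")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut state: u32 = 0x1234_5678;
        // Stream one ASCII reading per line, ten times a second, until the
        // client disconnects – a stand-in for the device's real protocol.
        while writeln!(stream, "{:.2}", next_reading(&mut state)).is_ok() {
            sleep(Duration::from_millis(100));
        }
    }
    Ok(())
}
```

The driver being developed can then be pointed at the simulator's address as if it were the real device, and the whole thing can be thrown away afterwards.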

In each case, I would know which bit of Rust I wanted to learn (let’s say lifetimes), and so if I hit a problem which needed another part that I didn’t know already, I would either work around it or park the project until I learned it.

This focus was critical:

  • I chose a language feature I wanted to learn.
  • I sidestepped other features I didn’t know.
  • I worked on problems I was already familiar with, so I didn’t also need to learn how to solve the main problem.

This combination meant each experiment had a singular learning goal; otherwise, I tended to get overwhelmed.

Also, note from the examples that they could all be thrown away when I was done – that takes the pressure off!

Committing – Could I do this in Rust?

For the next year or so after deciding I liked Rust, the question was, could I do this in Rust?

Answering this looked like taking projects I had already done, or was doing, in LabVIEW and _designing_ them in Rust. Generally, I didn’t build them all, but I would use the playground or a small project to work out:

  • How would I structure this bit in Rust?
  • What would the types look like in Rust?
  • Is there a library to do x in Rust?

Each of these might have just been a 20-line example solving the major issue in Rust. But it helped me build confidence, especially with lifetimes, which tripped me up the most in these experiments.
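To make this stage concrete, here is the kind of 20-line sketch I mean – all of the names here are hypothetical, but it exercises the pieces that mattered to me: modelling alternatives with an enum, and a function whose explicit lifetime shows where borrowed data comes from.

```rust
// A hypothetical acquisition configuration, as a "what would the types
// look like in Rust?" exercise.
#[derive(Debug, Clone, PartialEq)]
enum Signal {
    Voltage { range_v: f64 },
    Thermocouple { kind: char },
}

#[derive(Debug)]
struct Channel {
    name: String,
    signal: Signal,
}

/// Borrow the channel list rather than copying it – the explicit lifetime
/// ties the returned names to the slice they came from, which is exactly
/// the part that tripped me up early on.
fn channel_names<'a>(channels: &'a [Channel]) -> Vec<&'a str> {
    channels.iter().map(|c| c.name.as_str()).collect()
}

fn main() {
    let channels = vec![
        Channel { name: "ai0".into(), signal: Signal::Voltage { range_v: 10.0 } },
        Channel { name: "tc0".into(), signal: Signal::Thermocouple { kind: 'K' } },
    ];
    println!("{:?}", channel_names(&channels));
}
```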

This stage also involved immersing myself in the Rust community – lurking on forums, reading blog posts – anything I could do to build confidence or find the sharp edge which would stop it from working.

First Steps – Rust Components

At this point, I was convinced I could solve most problems in Rust with enough effort! Now, I needed to gain experience in building larger pieces of code.

Simple questions like: how do I structure this in a project? What about CI? Can I be productive?

So, I started looking for changes in projects which could be done in Rust. Not building a whole project but just components of a project where Rust had a clear benefit.

In the end, the key projects that formed this were:

  • A device server in Rust – We needed to use some C++ APIs anyway, so I built a component in Rust which could interface to the device and communicate with LabVIEW through TCP.
  • A web backend in Rust – This was an easier choice as LabVIEW is not an option anyway, so this was displacing Node.js, which was my preferred language for this.
  • Migrating G CLI to Rust – moving the launcher to Rust made it easier to make it cross-platform.

I had to be realistic with my experience and accept I had to take on the additional cost. So, I charged what I would have charged with LabVIEW but allocated double the time to complete it.

I also had to be prepared to throw it away if it didn’t work. However, I did enough prototyping up front to be confident this wouldn’t be the case.

Again, there was a clear benefit to the project in having a separate component anyway. If it was a more minor addition where LabVIEW was just as good, I stuck with LabVIEW.

Today – Rust Projects & NI Libraries

That brings me to now – I have confidence that I can do _almost*_ anything I do in LabVIEW in Rust, which means Rust is now my default.

Rust will be the choice for greenfield projects unless there is a compelling reason not to (and these exist – for example, a customer commitment to LabVIEW because of their other developers, or me not yet having the relevant APIs ready in Rust).

I continue in LabVIEW for existing projects, but a couple have some compelling benefits in Rust. For these, I’ll be migrating new components to Rust with a long-term goal to replace LabVIEW in one case due to significant product benefits.

I’m also working on preparing integrations with key technologies in Rust. I’m working on an FPGA interface library (since I will still use LabVIEW FPGA) and a TDMS library. You can already find my crates for the NI System Configuration API and LabVIEW interoperability on crates.io (the Rust package registry).

* .NET integration and UIs are still a weak spot.


Why Rust? (The More Technical Edition)

I’ve written a high-level view of why I got interested in Rust on the Wiresmith Technology site.

After many conversations at GDevCon last week, I felt it would be helpful to put down the technical reasons that Rust appealed to me, and how I (and therefore you) can decide whether a language is suitable for your test &amp; measurement applications.

What’s In A Language

As I spoke to various people, I became clearer about the areas I evaluate for any given programming language.

  1. Mental/Programming “Model” – This is a hard one to explain, but the simple version is some languages just fit the way you think and some don’t. The model includes technical aspects such as your comfort with manual memory management or imperative vs. functional styles.
  2. Runtime/Deployment Capabilities – Can I run this where I want? Different languages are better suited to environments like the web, docker, microcontrollers, and desktops. There are also performance considerations – is this language’s run time environment capable of high-performance or real-time use cases if you need them?
  3. Tooling – Are the tools around the language high quality and meet your needs? These might be tools like package managers, documentation generators, build automation and orchestration.

These are the three most significant areas I use when examining languages.

Why Look Elsewhere (or where LabVIEW is weak)?

I first started using LabVIEW around 15 years ago, so it isn’t a bad tool – otherwise, I would have jumped ship long ago!

However, my needs have changed, and I’ve seen new capabilities become available in other languages I envied.

My trajectory has moved me into more projects that I call product development or prototyping. In this area, I found some pain points with LabVIEW:

  • I was moving away from its core in test & measurement, meaning more gaps and less investment in the areas I’m using.
  • The emphasis on software engineering tools has grown, as development happens over longer periods and the software is put into the hands of more users who aren’t familiar with the technology.
  • Only deploying to the desktop or NI’s hardware became restrictive. I’ve had customers where the right solution is an embedded PC or high-performance server, and LabVIEW is difficult to deploy to these.
  • Increasingly, there is a need to integrate with web technologies, where LabVIEW is poor – for example, running a web server to collect data.

These are my reasons, and they will not apply to your exact circumstances. This is what I want to emphasise here.

My message is not that everyone should be switching to Rust. In my case, it is a great option. If you are working on an automated test system, for example, you probably don’t have the same pain points, and LabVIEW may work very well for you.

Where Did I Look?

So, let’s talk through some languages I’ve investigated and highlight some of their unique capabilities and limitations that might be interesting.


JavaScript

My first trip into text-based languages was JavaScript. I had built a web backend in LabVIEW, but it turned out the web server had serious scalability issues at the time – LabVIEW’s web services are challenging to use in anything beyond simple use cases.

JavaScript is an interpreted language and generally runs in a very event-driven manner. Being interpreted means it isn’t the fastest (although its [JIT compiler](https://medium.com/nerd-for-tech/inside-the-v8-engine-b81aff3eecdb) means it is faster than you might think). Still, it is well suited to web environments, where you find it as _the_ language in the browser, and Node.js means you can build pretty complex web servers with it.

Put simply, though, it is not designed to interact with the hardware system, so it is unlikely you will ever see this on a desktop or embedded system (though there are projects to make it a little easier).

The tooling is reasonably good, but it is a little disjointed and moves quickly. Picking up an 8-year-old project recently required a decent chunk of time just to catch up with everything that had changed!


C#

In the LabVIEW NXG days, NI told us that it would be built in C# and, at least to start with, you would need to use C# to extend the environment.

I decided I wanted to get my head around it, so I wrote the first version of G CLI in it.

C# is a compiled language that runs on a virtual-machine-like runtime called the .NET CLR (Common Language Runtime). It is designed to be extremely powerful for large applications, having been created by Microsoft as a response to Java.

One of the critical characteristics of C# is that it is a garbage-collected language.

Garbage collection means that instead of the user manually controlling memory allocation, a periodic process in the runtime reviews the current objects in memory and frees whichever aren’t used any more.

The big pro is that you don’t have to worry about forgetting to free memory or other memory-related issues. A significant con is that when the garbage collector runs, it can pause or slow your application. This isn’t critical in many applications, but high-performance or real-time applications are negatively affected.

The large runtime also limits its use in embedded applications, but with Linux and container support it will now run in most other places – it is most commonly found in desktop and web applications.

I also didn’t like the exception-style error handling, which has bothered me in many languages after coming from LabVIEW.

For me, the performance may limit its usage, so once NXG disappeared, so did my use of C#. However, it could be hugely powerful in many applications where you currently use LabVIEW, if it ticks the other boxes.


C and C++

Because of where I ended up, it is worth a small shout-out to C and C++.

These are systems languages – designed to run “close to the hardware”. Many operating systems are built with them, and they interoperate with the OS very well.

They are widely supported across different hardware, and pretty much every other language in the world can call libraries created with these since their minimal runtime is generally built into every environment you might run code in (excluding bare metal).

I picked them up because I needed maximum performance for part of a LabVIEW application, so we wrote the processing logic in C and called it from LabVIEW.

A lot of crucial elements overlap with Rust, so why did I skip over these?

  • C is not a particularly expressive language, and I felt it would get unproductive without built-in templating/generics/polymorphism. C++ is better for this.
  • The test library seemed to be broken every time I returned to this project. Package management is limited (but improving), and I could see a lot of fighting with tooling ahead.
  • It is very easy to make memory mistakes that cause application crashes. I was happy building small functions in them, but building a complete application without memory issues seemed daunting.
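To illustrate that last point: the class of bug that makes C scary is one the Rust compiler simply refuses to build. A small, contrived sketch of what I mean:

```rust
// In C, handing out a pointer into a buffer and then freeing the buffer
// compiles happily and crashes (or silently corrupts memory) at runtime.
// The Rust equivalent is rejected at compile time instead.

fn first_even(values: &[i32]) -> Option<&i32> {
    values.iter().find(|v| **v % 2 == 0)
}

fn main() {
    let values = vec![1, 2, 3, 4];
    let found = first_even(&values);
    // `values` cannot be freed or mutated while `found` still borrows from it.
    // Uncommenting the next line is a compile error, not a runtime crash:
    // drop(values);
    println!("{found:?}");
}
```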


Python

Another language I’ve been using more is Python.

Famously, Python has a huge user base and a wide array of libraries written for it, especially in data analysis.

The tooling is also good, with excellent testing capabilities and a reasonable package manager (with a few quirks!)

However, I found two significant downsides:

  • Python is an interpreted language, which means it has a significant runtime and relatively poor performance. It is not as bad as some believe – most heavy lifting is handed off to compiled libraries – but it is a limitation.
  • I could not get on with the dynamic typing. I completed one web backend in Python and was cursing the typing constantly!

There is a lot of talk about Python in the LabVIEW ecosystem, but I suspect it is overhyped. For me, typing was a big issue that made me very reluctant to build anything significant in it.

Its power is in automation, connecting different libraries quickly and easily. So, in a test environment, it could be great for building individual test steps with a pretty defined scope.

I still use Python for simple automation tasks or interactive data analysis to experiment with different techniques. I know there will be a library somewhere in Python!


Rust

Rust has proven to be my way forward for several reasons. I have been working with it for 3-4 years at this point and have found:

  • The language and type system to be expressive and powerful (but some find it too complex and explicit).
  • The memory safety guarantees resolved my number-one concern with C++.
  • The tooling ecosystem seems to be second to none. Package management, testing, benchmarking, and documentation are all well-covered and ingrained in the language.
  • It is flexible with deployment. It is a systems language, so I can deploy it to a microcontroller or a large server. I’ve also been experimenting with OpenCL, which still needs small portions in C but allows the use of GPUs, DSPs and other hardware accelerators.
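On the first point, error handling is a good example of that expressiveness: errors are ordinary values flowing through return types, much closer to LabVIEW's error wires than to exceptions. A generic sketch (not code from my projects):

```rust
use std::num::ParseIntError;

/// Parse one line of text into a reading. The possibility of failure is part
/// of the signature, so the caller cannot forget about it.
fn parse_reading(line: &str) -> Result<i32, ParseIntError> {
    line.trim().parse()
}

fn main() {
    // The compiler forces the caller to acknowledge both outcomes explicitly –
    // nothing is thrown behind your back.
    match parse_reading("42\n") {
        Ok(v) => println!("value = {v}"),
        Err(e) => println!("bad reading: {e}"),
    }
}
```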

These allow me to offer a better service to my customers by leveraging a range of modern hardware without bumping up against limits. Meanwhile, I still feel as productive as I do in LabVIEW.

There are downsides, of course – Rust does require you to think about memory management and ownership, which can be a learning curve and makes applications more complex. But this is necessary for some of the deployment flexibility.

Summary – What Is Right For You?

So what should you do? Well, in many cases, sticking with LabVIEW is a good option if you aren’t feeling the same pain that I am. This article is my story, not a roadmap for everyone.

If you are feeling pain, though, hopefully the summary above will help signpost you to a language that may fit what you need, so you can start doing some research.

Does it fit my mental model? Do I like the community? Have others managed to solve similar problems in it?

This article should help you to start asking these questions.

Unmounting USB on NI LinuxRT

This is going to be super specific!

Recently I had a project where we needed to move data from a LinuxRT controller to a USB stick for offline data transfer.

As reported on the NI forums at https://forums.ni.com/t5/LabVIEW/cRIO-Close-USB-Flash-Drive/td-p/3327336/page/2 the problem is that removing the USB stick without unmounting it causes a “scan and fix” dialog when the stick is next attached to a Windows machine.

Using the details in that post and some more research, I found you can do it using the udisks method when you have the permissions set correctly.

Note: This has been tested on LV2020 images of RT Linux. In other versions your mileage may vary.


Setting Up Permissions

The first step is to get the permissions to run the udisks tool. User code runs as the “lvuser” user, so that user must have access to the methods to unmount the disk.

I’ve tried to grant only the minimum permissions needed, to reduce any security risk from including this (see the principle of least privilege).

The udisks tool uses the polkit (PolicyKit) framework for granular permissions. To grant new permissions, we create a new file in /etc/polkit-1/rules.d which polkit will read and apply. I called mine 00-lvuser-unmount.rules. The contents I used are:

polkit.addRule(function(action, subject) {
  var YES = polkit.Result.YES;
  var permission = {
    // required for udisks1:
    "org.freedesktop.udisks.drive-eject": YES,
    "org.freedesktop.udisks.drive-detach": YES,
    // required for udisks2:
    "org.freedesktop.udisks2.eject-media": YES,
    "org.freedesktop.udisks2.power-off-drive": YES,
    "org.freedesktop.udisks2.filesystem-unmount-others": YES,
    "org.freedesktop.udisks2.eject-media-other-seat": YES,
    "org.freedesktop.udisks2.power-off-drive-other-seat": YES
  };
  if (subject.user == "lvuser") {
    return permission[action.id];
  }
});

I suspected this could be reduced further – seeing it again, I wondered whether the “other” entries were really needed, so I tested without them. (Note: it did not work without these.)

Scripting the Unmount

We need to achieve a few things to unmount the drive:

  1. Identify the drive that is mounted. I’m assuming here that we are interested in the drive that is auto-mounted to /u on the cRIO.
  2. Unmount the partition.
  3. Power down the device.

The script below can be provided to a System Exec VI to achieve this. It uses findmnt to find the partition mounted at /u, and then lsblk to find the parent drive from the partition name.

Then we use udisksctl to control the drive.

TARGET_PARTITION=$(findmnt -n -o source --target /u) &&
TARGET_DEVICE=/dev/$(lsblk -no pkname "$TARGET_PARTITION") &&
udisksctl unmount -b "$TARGET_PARTITION" &&
udisksctl power-off -b "$TARGET_DEVICE"

That is it! With these two components you can unmount the drives from your LabVIEW code (or anything that can call a shell script).

Learning New Programming Languages

Over the last few years I’ve expanded the programming languages I’ve worked with. There have been business and technical requirements driving this but it has been remarkably beneficial to my skills as a developer as well.

By learning other languages I’ve found two huge benefits that stretch across all my development:

  • It has taught me to think about problems in different ways. Different languages often have different approaches and mentalities to problems, which can cross-pollinate into better code across the board.
  • It has shown me what is common and important in software design. For example, coupling and cohesion are important everywhere, and different languages may each teach you something different about why.

So how have I started on these languages?

Picking the Problem and the Language

For me, this has been a gradual process, so a big-bang swap-over – taking weeks out to learn a language through formal study – hasn’t been an option.

Instead I’ve either been problem-led or language-led. That is, sometimes I’ve had a problem I’ve needed to solve which existing languages weren’t great at, or I’ve decided I want to learn a language so I’ve looked for problems it can solve.

The problem and the language are symbiotic. Certain languages are good at certain classes of problems, so matching the two when you are first learning makes the language far easier to pick up. You will also get more support and get more done in a short amount of time.

The Best Problems

There are a few properties that I’ve found make a problem a good way to get started:

  1. They should be real problems – maybe this is a personal preference but I struggle without having a real goal or comparison in mind.
  2. They should be small – at least to start with, spending more than a day at a time on a project like this is challenging, and you don’t want to end up taking 6 months to see if you can make progress.
  3. You should understand the problem well – You really want to make the work about the language. I’ve made this mistake a lot, for example trying to learn Rust on embedded systems when I didn’t know much about either. It becomes hard to know if you are struggling because of the language or the problem.
  4. They should be off the critical path – I don’t think you have to wait to be an expert before using a language in production, but to start with you don’t want a project depending on your work. They should be tangential to the main project, so that if you don’t get finished it doesn’t impact any projects.

Let me give you some of my stories which will make some of these examples clearer.

Committing to Rust

I’m very excited about a language called Rust. Compared to LabVIEW, it has a much broader reach. Using Rust allows me to work on high performance embedded systems across multiple platforms with modern tooling.

In this case I decided first on it as a language and wanted to evaluate whether I should commit to it as a core part of my work. After working through the basic exercises in the Rust book, I started looking at how I could evaluate it properly.

The first thing I used it for was a device emulator. For testing a project, we wanted to fake a device, which normally I would do with a simulated class. However, since I wanted to try Rust, I tried (and succeeded!) to write a basic simulator in Rust, streaming random data over the network.

If this had failed, I would have fallen back to a simulated class but it worked well while never putting the project at risk. It also isn’t something that ended up in production.

I’ve also built some prototypes in Rust.

Another interesting example, and my final test before committing to Rust as a primary language, was rewriting some processing code from another project. The fascinating thing was that it showed me a totally different way to approach the problem – an approach that will make it back into the final project in C for a big performance boost!

Learning Python

Python has been another interesting exercise but one where the problem leads for me.

I originally picked up Python supporting some code from a customer, which let me see where its strengths lie.

The nice thing with Python is the speed of development so I’ve found it great for prototypes or quick scripting.

So, to learn it better, I started by using it to prototype some data analysis. I had a customer that needed some data exploration, and I knew this was a strength of Python, so I fired up a notebook and worked my way through the problem (with a lot of Google-fu to understand Pandas).

In this case, the notebook produced a report and was not used again afterwards, so there was no risk of having to support the code in the long term.

I then moved on to using it to script some system testing. Again, this wasn’t on the critical path of the project but supplemented the main application.

Getting to Production

So, we aren’t collecting languages for the fun of it – how do we get to using them in production? I’ve followed a few patterns that boil down to taking on gradually more risk.

  • Starting with testing and support utilities allows you to get longer term experience of running the language without putting the main project at risk.
  • Look for smaller components that fit the language really well. For example you might just call the processing routines in Python from LabVIEW.
  • What’s the difference between prototype and production-ready code? Often, error handling. Make sure you have a good grasp of how that works in the new language first.
  • Test, test, test! This should go for all production code anyway, but make sure you use good testing procedures around the new code and the integration to catch problems early.
  • Be prepared to throw it away – don’t bet the project on it the first time. I had this recently when I developed a UI in a new web framework but couldn’t get it quite right, so I had to throw it away and rewrite it in LabVIEW.

I hope this helps give you some ideas on how to take on that next language and quickly get to writing useful code.

I’ve really enjoyed this journey and it gets easier and easier over time. I can now choose languages to fit the problem I have instead of the other way around and have learnt so much in the process.

Branches are For Robots Too

For a long time I largely ignored branches in git when working on my own.

Quite frankly, as a solo developer I saw no benefit. I understood gitflow and its brethren, but for me they looked like more work with no payoff (though I have recommended them to many teams). I just created feature branches when there was a clear benefit.

Recently, though, I’ve been looking to up my automation game. For a while now, I have had all commits to my project repos run through tests and builds, but I’ve always had a manual release process with a number of small steps.

I can automate the release process, but I don’t want it to attempt a release on every build. I also wanted a place for checks to happen before the final step.

When looking to automate this I realised the same concepts that apply to a team of people, apply to me and my trusty build server. Hence, my adoption of branching.

My Current Solution

What I do now is (mostly) follow a gitflow pattern. I’ve based it on gitflow because it matches my style of software release. To understand why I have added this caveat, check the notes of reflection on the original article at A successful Git branching model » nvie.com and consider how simple you can make it (like GitFlow considered harmful | End of Line Blog).

Once I have a release candidate on develop, I create a merge request to my released branch. (I’ve got no use for gitflow’s release branches right now.)

This merge request runs my usual build checks, and then I have a checklist covering anything manual:

  • Have I updated the user documentation?
  • Have I updated the version numbers in the VIs (soon to be automated)?
  • Have I run a complete system test?
  • Have I tested this with the end user?

Once I approve the merge request, the CI scripts detect a merge to the released branch and build and publish the installers. I’ve also got it creating the release tag in GitLab for me.

So remember, robots are team members too!


HTML On CompactRIO

For a while now, I’ve worked on using HTML pages as an interface to a remote cRIO system. In the National Grid power quality system I worked on, each unit has a web page for monitoring and configuration.

One thing that is much easier with HTML is making dynamic pages. I previously worked on a project using the embedded display on the CompactRIO, which had to be dynamic depending on the configuration. It was a bit of a headache: I had to script positioning different panels on the UI, since there are no subpanels. It worked well in the end, but there was a lot of added complexity.

So when a new project came along needing a fairly simple UI, but possibly some dynamic content (different channel counts), I wondered: could I write it in HTML and display it with a browser on the embedded UI?

I hoped this would:

  • Look better.
  • Be easier to make dynamic.
  • Integrate better with accessibility tools like the on-screen keyboard.

The Concept

The basic design consisted of 4 components:

  1. The actual LabVIEW VI. This was a very state-based UI so we could just write the latest state to global variables without much trouble. There were a few commands that could be sent through a queue.
  2. The LabVIEW HTTP Interface. I built a web interface so the state information and commands could be sent and received as HTTP or REST calls.
  3. The web page (thin client). I wrote this using Javascript in the Vue framework, but it could just as easily use the NXG web module (or whatever that will be called by the time you read this). This is served by the LabVIEW web server and talks to the HTTP interface.
  4. A browser. I needed to launch a browser on the system to display the page.

Getting It Working

The LabVIEW and Javascript parts were straightforward enough. I had done similar things before and I got it working over a remote connection without too much issue. The question was, how to display this on the embedded UI?

There are a number of browsers available on Linux, but what would work on cRIO?

I went first to Firefox. It has a kiosk mode to make it appear as a full screen app and limit user interaction. And I managed to find instructions on what was needed to run it on cRIO.

The “installation” process was to install a couple of dependent libraries using the NI package feeds, then download and copy the pre-built x86 Linux binaries from the Firefox pages. As I write this, I see a complete guide exists which I missed before.

The thin client was included in the LabVIEW web services project as static files, and that was included in the real-time executable. When the executable starts, it uses System Exec to launch Firefox in kiosk mode pointing at the right page.

Aside – On Screen Keyboard

One thing I really thought this would make easier was the on-screen keyboard. I hoped that by using HTML, integration with this would be automatic.

Not quite – it just didn’t work. I installed the florence keyboard but couldn’t get the auto-activation to work; it seems to depend on accessibility libraries that come with the Xfce desktop. Because of time constraints, I gave up on getting this working. The usage I needed was limited, so I was able to integrate simple-keyboard.js instead.

First Look

Everything looked very promising. The page and display launched as expected.

I can’t share it as it is a customer project, but it looked pretty good too. I had the rare experience on a LabVIEW project of a customer commenting on how good it looked!

Time to put it through final testing.

It All Falls Down

As with any embedded project, a key test is a longer running test. When I came back in after it was running overnight, it was gone. Nothing was running.

I repeated the test in case I had done something wrong – same problem.

Not only was Firefox crashing, but it took the RT app down with it. I tried some techniques to isolate them a bit more, but fundamentally it wasn’t stable enough.

I looked at the performance and, unsurprisingly, Firefox was too much for a cRIO-9030. It was using significantly more RAM than physically existed – in itself that isn’t a dealbreaker, but something was causing it to crash. I managed to find logs that pointed to memory issues, and I checked the JavaScript for memory leaks, but had no luck.

I looked for other, lighter browsers but struggled to get any working easily. There probably are some that you could build for it, but at this point I didn’t have the time.

Back To LabVIEW

At this point I re-wrote the UI back in LabVIEW. I had planned time into the project in case I had to, so this wasn’t a major concern. Here is what that meant for the system:

  • CPU dropped a lot. Firefox was using around 15-25% CPU, LabVIEW was generally under 10%. This isn’t a big surprise, especially with no graphics card for Firefox to leverage.
  • I didn’t want to go back to a traditional LabVIEW UI. I took some time to replicate as much of the look and feel of the HTML pages as I could, and to be fair it looked pretty good – just a few dynamic elements were hard to replicate, like highlighting selected array items, for example.
  • Luckily, I already had a basic on screen numpad implementation from a previous project for the same customer, otherwise that would have made it much harder.
  • I believe it took about 4 hours to rewrite in LabVIEW vs 20 hours in Javascript. Take this with a massive pinch of salt! I was basically learning the vue.js framework and writing a numpad component from scratch, which I already had ready-made for LabVIEW. I was also copying existing styling and structure into LabVIEW rather than designing from nothing. Still, it’s rare you get to build the same thing twice and compare!
  • The system was stable.
  • When we increase the channel count, though, I will have to do additional work on the LabVIEW UI that I wouldn’t have had to do on the HTML client – it automatically adapted to different channel counts.


So a failed experiment but an interesting one, I hope this might help or inspire someone else. The most interesting bits for me were:

  • Remembering how heavyweight browsers are – but also UIs in general. If this application was doing anything more on the RT side, I would be concerned about the embedded UI and its impact on performance too. As it happens, all the hard work is done on the FPGA.
  • There is a case for limiting usage of the embedded UI because of this and looking at panel PCs instead – though this really depends on how complex the communications might be. The isolation should be good for stability, though.
  • The LabVIEW UI is remarkably powerful and fast to work with. It just falls down when you want:
    • something more dynamic, e.g. adapting to 8 or 16 channels.
    • consistent styling throughout.
    • a form-style input without a lot of faff.
  • HTML did look better, partly through better technology, but also through easier experimentation. Maybe using HTML to mock up panels before implementation in LabVIEW would lead to better UIs?
  • A nice, lightweight way of running a single HTML view could still be an interesting replacement for a LabVIEW UI. There may be some convergence with the work I’m doing in Rust at some point.

Testing HTTPS (SSL) Connections on NI LinuxRT

This is a VERY specific post – but it took me a while to avoid red herrings so I thought it was worth sharing. Sorry LabVIEW folks – this is all about LinuxRT!

I had a system that was having issues connecting to an HTTPS server (for WebDAV). It had previously been running without issue and then just stopped.

As I couldn’t disturb the software directly, I logged in over SSH and tried the following:

  • nslookup <server> – This queries DNS to make sure we have a valid address for the server. This passed.
  • ping <server> – This failed, but it also failed from another system, so the server probably has ping responses blocked.

My next step would normally be curl, which allows HTTP(S) requests from the command line, but that isn’t installed. Instead, I found that I could do a security check with openssl. This should confirm that the certificates work and the port is reachable.

The command was:

openssl s_client -quiet -connect <server>:443

This failed! Hmm – maybe because this is an older LinuxRT distribution there is a certificate problem. So I went hunting for the certificate store.

It turns out there isn’t a central store automatically recognised by OpenSSL. So for the proper test we actually need to point at NI’s certificate bundle at /etc/natinst/nissl/ca-bundle.crt (on LinuxRT 2017, though I would be surprised if this moves).

I expect this is true of any command line tool that uses SSL – so if you do install cURL it will probably need a link to this.

So now the command is:

openssl s_client -quiet -CAfile /etc/natinst/nissl/ca-bundle.crt -connect <server>:443

That worked, so I could move on in my troubleshooting. (At this stage it is working again without any intervention – not sure whether to be happy or not!)
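For reference, if Python happens to be available on the target, the same check can be scripted with the standard ssl module. This is just a sketch – the bundle path is the LinuxRT 2017 one from above, and the hostname is left as a placeholder:

```python
import socket
import ssl

# NI's certificate bundle location on LinuxRT 2017.
CA_BUNDLE = "/etc/natinst/nissl/ca-bundle.crt"

def check_tls(host, port=443, cafile=CA_BUNDLE):
    # Returns the negotiated TLS version if the handshake succeeds and the
    # server certificate validates against the given CA bundle; raises
    # ssl.SSLError on certificate problems.
    ctx = ssl.create_default_context(cafile=cafile)
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# e.g. check_tls("<server>")
```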

What To Do About GDevCon

Hello Everyone,

As you can imagine, there is a lot of uncertainty in the GDevCon team right now about how we handle GDevCon and COVID-19. We have been trying to figure out the right path to take. One thing I think we could have done better is communicating the options and the approach we’re taking, so you can plan and try to understand what the future holds.

So this post is an attempt just to reset that a little bit, help you understand where we’re coming from, and also solicit any feedback, because we’re making assumptions about what is most useful to you as well. 

A couple of caveats upfront:

  • This post is my interpretation of the discussions in the GDevCon team (with their blessing). Any conjecture may not reflect the views of everyone on the team.
  • No-one has a crystal ball so please don’t make hard plans based on this post!

Responsible Timing

So the first thing to say is we didn’t want to rush any decision. Back in March the severity of the situation became very clear, but with the event in September, six months later, we didn’t want to try and predict the future. We want GDevCon to go ahead if it can. We think it’s a very valuable event and, quite frankly, if it is able to go ahead it’ll be a nice relief after these lockdown times.

So we made a decision early on to hold off a final decision until June. I don’t think people are making many travel plans anyway, so I expect this delay will have minimal impact on attendees, but it is a big benefit to us to see what happens in the world.

The Options

So what are we considering as the options to decide between:

  1. GDevCon #3 going ahead.
  2. Postponement (probably to early 2021)
  3. Online Event

Going Ahead

To go ahead, though, I think we will need to be confident that three conditions are in place by September:

  1. International travel needs to be easy. GDevCon is a global event with attendees, sponsors and speakers coming from all over the globe.
  2. CERN needs to be happy that they can host us in a way that everyone is safe.
  3. Companies need to be happy to send their staff in the confidence that everybody will be as safe as possible. There may be a situation where travel is possible but companies are still not allowing it. This is a harder one to define but worth discussing.

As I write these down, to be perfectly honest, I find it hard to see us hitting even one of these conditions, let alone all three. As countries begin to ease restrictions over the next month, though, this should become clearer.


Postponement

Postponement is probably the most likely option. We would look to postpone the event, perhaps to early 2021.

We have had a great response from sponsors for GDevCon #3, and the presentations submissions have been fantastic. This takes effort from us and from presenters and sponsors so we would like to avoid throwing that effort away and take the same speakers, sponsors and attendees with us to a later date.

No-one would be obliged though, and full refunds would be available in this case.

The hard part of this decision is when will it be safe?

Online Event

We have only had some basic discussions about an online event, but we are in broad agreement.

To be perfectly honest, we’re not keen. Since we started GDevCon we have seen the team-building and community-building aspects as being as important as the content. That’s why it was a two-day event from the start – we wanted that evening event for people to mix and talk and continue conversations. So we feel that we would prefer to avoid an online event and focus on getting an in-person event going.


The Future

The good news is we have been very aware of risk through this whole process and GDevCon 2020 is no exception. We are in a position where we can cancel the event, refund everyone and we will still be around and as strong for 2021.


I hope this helps explain where we are and the options we are considering. 

This event is about you though so tell us what you think. Does making a decision earlier help you significantly? Has your company/spouse already ruled out travel in September anyway?

Understanding your position will help us to understand the options in front of us. So either comment below or get in touch with the team via email or social media.

e: admin@gdevcon.com
t: https://twitter.com/GDevConference
l: https://www.linkedin.com/company/gdevcon/

What to Do When Development is Quiet

Having worked for myself (and by myself mostly) for a few years now I feel lucky that I have had some training for the “new-normal”.

Right now things look OK for me – My current projects can largely be worked on remotely. But what if you can’t?

So here is my general plan for a quiet development period:

1. Work on processes and automation.

Look back at your development processes. Now is a great time to cast a critical eye over them.

In particular look for opportunities for automation such as CI or code scripting. The work you put in on that now will pay off for years to come.

2. Do experimental refactoring

Not expecting to be able to deliver code for a few months? Now might be a great time to work on those refactoring jobs that always get pushed back because “deadlines”. (Use source control in case it doesn’t work!)

3. Learn, learn, learn

It’s a great time to learn a new language, technology or skill so you can come out of this stronger.

Money may be tight but there are some great learning platforms out there.

I joined the ACM which gets you access to O’Reilly’s e-learning platform including their books and video courses.

I also use Pluralsight when picking up new technologies (I’ve been learning SaltStack at the moment).

For software engineering, I’ve always liked the look of Construx online learning as well.

EDIT: LabVIEW Learning

If you are already a competent LabVIEW developer, I would use this as a chance to look outside LabVIEW a bit more. You will probably keep learning LabVIEW in normal circumstances anyway, and I find it hugely valuable to broaden your world view. But I should mention some LabVIEW resources here as well:

  • Since I posted this, I’ve seen that NI has announced that all their online learning is available for free until the end of April: https://learn.ni.com/training
  • There was an excellent book released recently 🙂 https://www.hive.co.uk/Product/Richard-Jennings/LabVIEW-Graphical-Programming-Fifth-Edition/24023115
  • Nicola Bavarone (thanks!) offered the great suggestion of looking at the badges on ni.com (https://learn.ni.com/badges). I’ve found these good for identifying gaps in my knowledge.

4. Give Back to the Community

Everyone is going to be in a bit of a funk. Wouldn’t a great new open-source tool make everyone feel more positive? Or perhaps try some teaching yourself. Record some videos and put them on YouTube to share your expertise.

What do you do that I’ve missed?

Unit Testing Large And Complex Data

This week I’ve been helping some developers with unit testing adoption, which raised an interesting topic that I’ve not seen explicitly addressed.

Unit testing is great when you are working with simple data but what if you have larger or more complex data such as waveforms or images?

I’ve used a couple of techniques over the years:


Smaller Data Sets

Maybe this is an obvious one – but the first option is to identify whether the method is still applicable on a subset of the real data.

For example, in an application where I’m doing some image processing, the real data will be 256×256 pixels.

However, my tests run over a 3×3 array.

This is still applicable to the larger image, since many algorithms consist of edge-condition handling plus the normal path in the middle. The larger arrays exercise more of the normal path, but the smaller tests still cover the edge conditions as well (which is often where things go wrong!).
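To illustrate the idea in Python (rather than LabVIEW), here is a made-up 3×3 mean filter standing in for the real image processing. The tiny input already exercises every corner and edge path that the 256×256 image would:

```python
import numpy as np

def local_mean(img):
    # Naive 3x3 mean filter with edge clamping - a hypothetical stand-in
    # for the real image-processing step (the original was in LabVIEW).
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

# A 3x3 input contains all four corners, all four edges and one fully
# interior pixel - the same cases a 256x256 image would hit.
tiny = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0]])
result = local_mean(tiny)
# e.g. corner (0,0) averages 1, 2, 4, 5 -> 3.0; the centre averages all nine -> 5.0
```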


Generated Data

In some cases, you need a full-size input set but you are only measuring some properties of the result.

An example might be a frequency processing function where we want to extract the size of the peak at 1kHz.

The FFT parameters change a lot based on the size of the input data, so really we want to use the actual size we expect in the application. Instead, what I have done in the past is write a basic routine to generate an appropriate signal.

In the example above I use generators to produce a multitone signal, perform the frequency analysis and manipulation which I am testing and then compare just the components of interest.

(Note: this is before I got a bit more structured with my testing!)

Credit to Piotr Demski at Sparkflow for pointing out an important point I missed when I first published this: if you are generating data, it should be the same every time – i.e. beware of random data sources, or your tests may fail without anything changing.
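Here is a sketch of that pattern in Python – the sample rate, tone set, and 1 kHz target are all invented for illustration. The generator is deterministic (fixed phases, no noise), so the test data is identical on every run:

```python
import numpy as np

FS = 10_000   # sample rate in Hz (illustrative)
N = 10_000    # exactly 1 s of data, so FFT bins land on whole Hz values

def multitone(freqs, amps):
    # Deterministic generator: fixed phases, no randomness, so every test
    # run produces identical data.
    t = np.arange(N) / FS
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

def peak_at(signal, freq):
    # Single-sided amplitude spectrum, then pick out one bin of interest.
    spectrum = np.abs(np.fft.rfft(signal)) / (N / 2)
    bin_hz = FS / N
    return spectrum[int(round(freq / bin_hz))]

# Full-size signal with a 1 kHz tone of amplitude 1.0 among other tones;
# the test compares only the component of interest, not the whole spectrum.
sig = multitone([500, 1000, 1500], [0.5, 1.0, 0.25])
peak = peak_at(sig, 1000)  # ~1.0
```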

Golden Data

The approaches above may not work if it isn’t obvious how to generate the data, or if you can’t generate the important elements easily. They also only help with the input side – but what if you need to compare a large result?

Here I revert to storing some reference data as a constant – normally captured by running the system, probing the data, and copying it into my test.

Quick Demo of Golden Data Process

On the input side, this can work without any compromise.

For expected outputs there is an obvious catch – if you generate the expected data from the VI under test, the test will of course pass. However, if you have no independent way of generating an expected output, then we have to compromise.

Instead, we can write the algorithm until it works (validated manually) and capture that data for the test. It means the test doesn’t guarantee it is right, however, it will fail if you do anything that alters the output, which gives you a chance to catch potential regressions.

Another path is to use a known good algorithm (perhaps from another language) to capture the expected output in a similar way. This is fantastic when you are looking to replace or replicate legacy systems since you can take data directly from them.
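The golden-data pattern might look like this in Python – the algorithm and the captured arrays are invented placeholders; in practice the golden values come from a probe on a validated run (or from the known good implementation):

```python
import numpy as np

# "Golden" values captured once from a validated run. These are invented
# placeholders - in practice they are copied from a probe on real data.
GOLDEN_INPUT = np.array([1.0, 2.0, 3.0, 4.0])
GOLDEN_OUTPUT = np.array([1.0, 1.5, 2.0, 2.5])

def algorithm_under_test(x):
    # Hypothetical stand-in for the real processing step: a running mean.
    return np.cumsum(x) / np.arange(1, len(x) + 1)

# The test doesn't prove the algorithm is right - it flags any change in
# behaviour, which is what catches regressions.
assert np.allclose(algorithm_under_test(GOLDEN_INPUT), GOLDEN_OUTPUT)
```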

Catch It At Different Levels

In some cases, it may simply not be viable to create a unit test for the algorithm. That’s OK – 100% coverage isn’t a useful goal in most cases anyway. Instead, consider:

  • Try and unit test the components inside the algorithm (depending on the structure of your code). This will provide some confidence.
  • Make sure this algorithm is verified in system or integration level tests. Ideally, find ways to make it fail loudly so it isn’t likely to be missed.

I hope that gives you some ideas for your use case. Feel free to post any questions you have.
