
Why Do Your Loops Run?

One of the key architectural rules I learnt in LabVIEW is what I always call, in my head, the “single source of synchronisation” rule. I need to make two opening statements about it:

  • Someone/Something taught me this – huge apologies that I can’t remember the source, but thank you, whoever you are!
  • Its name makes something simple sound very complicated; it isn’t!

It made an appearance in my post on architectural language so I wanted to expand on it. The quote there was:

A loop within an application component. It has a single reason to run.

Why Loops Run

By “run” I mean whatever triggers an iteration of the loop. I’ve identified three sources:

  • Time – Intended to run every x ms.
  • Event – Either a UI or User Event, the loop contains an event structure.
  • Data – Essentially a queued message handler, but “data” covers anything that makes a loop wait for new data to be available. The data interface could be queues, notifiers, streams, DAQmx reads etc. Some external process is forcing us to wait.

The rule says that if one loop uses two of these methods (or multiples of one type) then problems may occur. Examples might be:

  • Using the timeout on an event structure – did you know it won’t fire if user events fire, even if you aren’t handling them? (Thanks Chris R for the demo, and thanks Fab for the correction: this was a bug that has since been fixed.)
  • Using the timeout of a queue to perform a repetitive action.

As you can see, time is usually the conflict – the others are fairly obvious to avoid putting in the same loop.

Why This Rule Exists

Put simply, in most cases where two of these exist there is a conflict which is hard to resolve consistently, mostly because the event and data triggers depend on external components which aren’t predictable.

Take the timeout case of the queued message handler. Suppose you want to run a check roughly every 1 second, so you set a 1-second dequeue timeout. If an external message arrives every 2 seconds, the check runs 1 second after each message – so in practice about once every 2 seconds. Maybe you get a second check in before the next message, maybe not. Perhaps that is acceptable?

But then an external component starts generating messages every 0.1 seconds. Now the timeout never fires and the check never happens, through no fault of the code in this loop!
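Since LabVIEW is graphical, here is a rough Python analogue of the starvation problem (the handler, variable names and timings are all made up for illustration): a dequeue timeout doubles as the “periodic” check, and a fast producer stops it ever firing.

```python
import queue
import threading
import time

def message_handler(q, stop, checks, messages):
    """Dequeue with a timeout; the timeout case doubles as the 'periodic' check."""
    while not stop.is_set():
        try:
            messages.append(q.get(timeout=0.5))  # data-driven reason to run
        except queue.Empty:
            checks.append(time.monotonic())      # time-driven reason: only on timeout

q = queue.Queue()
stop = threading.Event()
checks, messages = [], []
worker = threading.Thread(target=message_handler,
                          args=(q, stop, checks, messages))
worker.start()
for _ in range(20):            # an external component posting every 0.05 s
    q.put("data")
    time.sleep(0.05)
stop.set()
q.put("wake")                  # unblock the final dequeue so the loop can exit
worker.join()
print(len(checks))  # 0 -- the check is starved while messages keep arriving
```

The loop itself is blameless: the external producer alone decides whether the timeout branch ever runs.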

But I Need To…

Perhaps there is a case where you want to break this rule. I have two solutions:

Proxy Loops

Use a second loop as a “proxy”. In the case above, have a second loop which runs every 1 second and enqueues a message requesting the check.

Another example is when you have an API which generates messages via an event but you need them in a queue. If you have used the Actor Framework, this is how you extend Actor Core for GUIs.
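The proxy-loop idea can be sketched in Python (a loose analogue of the LabVIEW pattern; the message shapes and intervals are invented): one loop’s only reason to run is time, and it converts that into data for the main loop, whose only reason to run is the queue.

```python
import queue
import threading
import time

def timer_proxy(q, interval, stop):
    """Proxy loop: its single reason to run is time; it turns time into a message."""
    while not stop.wait(interval):
        q.put(("check", None))

def handler(q, log):
    """Main loop: its single reason to run is data arriving on the queue."""
    while True:
        kind, payload = q.get()
        if kind == "quit":
            break
        log.append(kind)       # timer ticks and real messages arrive uniformly

q = queue.Queue()
stop = threading.Event()
log = []
threading.Thread(target=timer_proxy, args=(q, 0.1, stop), daemon=True).start()
worker = threading.Thread(target=handler, args=(q, log))
worker.start()
q.put(("data", 42))            # external messages no longer starve the check
time.sleep(0.35)
stop.set()
q.put(("quit", None))
worker.join()
print("check" in log, "data" in log)  # True True
```

Each loop now has exactly one synchronisation source, and the periodic check degrades gracefully no matter how chatty the external producer is.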

Break The Rule… And Be Very Careful

If you don’t see a way to design around this rule, then you need to know you are breaking it and be very careful – most likely by adding some management code to the loop.

The example I could think of was needing data from two loops. In this case you would write code to read from each queue; if one read timed out, you might store the data that did arrive and try again, reading only the queue that failed.
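A minimal Python sketch of that management code (the function name and timeout are made up): keep whatever arrived and retry only the queue that timed out.

```python
import queue

def gather_pair(q_a, q_b, timeout=0.1):
    """Collect one item from each queue, retrying only the queue that timed out."""
    pending = {"a": None, "b": None}
    sources = {"a": q_a, "b": q_b}
    while None in pending.values():
        for name, q in sources.items():
            if pending[name] is None:
                try:
                    pending[name] = q.get(timeout=timeout)
                except queue.Empty:
                    pass  # keep the data we already have; retry this queue next pass
    return pending["a"], pending["b"]

q1, q2 = queue.Queue(), queue.Queue()
q1.put(1)
q2.put(2)
result = gather_pair(q1, q2)
print(result)  # (1, 2)
```

Note how quickly the bookkeeping grows for just two sources – which is exactly why the rule exists.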

I don’t think there are many cases, but rules are about knowing when you break them and why!

What’s Your Architectural Language?

The word of the year here at Wiresmith Technology is process. In areas where I have standardised processes life has got easier, less stressful and more reliable. Now I’m looking at the software processes to see where we can get the same benefits.

Something that I have wanted to address for a while is architecture. Working on my own has given me the benefit of being able to be quite ad-hoc and try different designs on different projects.

Well, I often think coming back to your own work after 6 months isn’t so different from working with someone else, and I’ve certainly felt the drag of having to review how I built the architecture on each project. So I want to at least have some standard templates.

What I found when I came to it was I first had to define what is in a program!

Language Is Important

There are so many conflicting terms and every framework has its own terminology.

I really wanted to start by knowing what I want in a generic sense. Doing this without looking at specific frameworks gives me the freedom to find a framework that fits the way I work best (as well as the freedom to change or not use frameworks depending on the project).

I’ve seen this approach work in my business. I’ve been trying to find tools to help me be more productive, but until I decide on the process those tools are supporting, I waste hours trying to choose because I have no way to determine what is best!

So before even trying to work with templates or frameworks, I reviewed my previous projects to try and pull out and name the different elements of my architectures so I can map other tools to this.

What I Picked

So here is what I listed as my architectural definitions. Before you read them, understand I am sharing these for your curiosity – you may have your own set of definitions in your team already. This isn’t about right or wrong, this is about consistency between team members and projects.

  • Application Component – An asynchronous VI with its own lifetime and its own control of when to run. This is the top level of the architecture design.
  • Data – A piece of engineering data, e.g. acquired data. Not to be confused with messages: we split this concept as messages are more framework-oriented.
  • Message – A framework command for a process to do something. Not to be confused with data: although messages are data in the strictest sense, they are not directly related to the data of the engineering domain.
  • Message Handler – A process that receives heterogeneous messages and data. Not to be confused with a data-driven process, which has homogeneous data to handle.
  • Module – A set of related code; in our system, a class. It is generally unit testable. Not to be confused with a Module (DQMH) or an Actor (AF) – those are processes or message handlers.
  • Process – A loop within an application component. It has a single reason to run.

I’m pretty happy with this – the one element of confusion is where the actor-style module fits (whether that is an Actor Framework actor or a QMH-based framework like DQMH). In reality it sits somewhere between a module and a process, but I need to experiment more with how to think about those.

The one I think is particularly important is modules. Too often the important logic gets muddled and mixed with framework code.

Next Steps

For me the next step now is to create templates or frameworks to handle these items in a consistent way – more on that in a future post.

My challenge to you is to think about this for yourself. Maybe you already have a framework so you don’t need definitions like this, but where do you find you or your team are inconsistent over time, and would a common language help?

Why Software Should Be Like Onions

I have now been writing full applications for well over 18 months and I can honestly say that my development has changed (and improved!) significantly in this time.

When I look at my software now I see two major areas of improvement. It is far more readable and decoupled than in the past, because I have made my software like onions: it has layers.

What Layers?

I will start by saying these are not rules and they are not fixed, but they are common:

  • “Process” or “Application” level – This is essentially your application framework. This handles connecting the dots, moving data between asynchronous processes and is really the scaffolding of the application.
  • “Problem” level – Also often termed business logic. This is the part that is very application specific and really defines the function and process of the application. This level should be described entirely in the terms of the problem you are trying to solve.
  • The IO layer – This is the layer that connects your problem to the outside world. The important distinction here is that the API/interface is defined by the problem level.
  • Reuse code and drivers

Essentially each layer is composed of the elements below and the IO layer is composed of libraries, interfaces and reuse code.
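As a rough textual analogue (LabVIEW is graphical, so this is only a Python sketch with invented names), the composition of the layers might look like this:

```python
# IO layer: wraps driver calls behind an interface the problem layer defines
def read_sensor_volts():
    """Hypothetical driver wrapper; a real version would call instrument drivers."""
    return 2.0

# Problem layer: business logic expressed in domain terms, composed of IO calls
def measure_pressure_bar():
    return read_sensor_volts() * 10.0   # hypothetical sensor scaling

# Process layer: scaffolding that moves results between asynchronous processes
def process_loop(publish):
    publish(measure_pressure_bar())

readings = []
process_loop(readings.append)
print(readings)  # [20.0]
```

Each function only “sees” the layer directly below it, which is the point of the whole structure.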

Why It Works – Readability

If I gave you a blank VI, could you rewrite your application as it stands? Probably not, even though you’ve done it once. We never write a whole application in one go; we focus on one area at a time. The layers hide the complexity below them from view, allowing focus on the task at hand.

The process layer should only show how the problem level components talk to each other.

The problem layer should only show how data is taken from an input, processed and sent to the relevant outputs.

The IO layer should show the specific combinations of driver calls to achieve the desired input or output.

This works because we can only hold so much of the program in our heads at once. With driver calls and decision logic on one diagram, our mental model of what is happening has to be much bigger, making us slower and more prone to mistakes.

Why It Works – Dependency Management

Similarly, when different parts of the program are coupled together it is hard for us to comprehend the knock-on effects of changes.

By having these layers contained we have well defined interfaces and can track changes. Change the driver? We only need to look at the IO layer. Changing how processes communicate? You only need to worry about the process layer.

We can also identify uses of dependency inversion more easily. This means that rather than the problem domain being coupled to a driver directly, we define an interface (my implementation uses classes and dynamic dispatch) which is owned and designed from the perspective of the problem level. This is far less likely to change than the driver calls themselves and protects the problem logic from the IO logic.
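My implementation uses LabVIEW classes and dynamic dispatch; the same idea can be sketched in Python with an abstract base class (all names here are hypothetical): the interface belongs to the problem layer, and the IO layer implements it.

```python
from abc import ABC, abstractmethod

# Owned by the problem layer and named in its terms, not the driver's:
class TemperatureSource(ABC):
    @abstractmethod
    def read_celsius(self) -> float: ...

# Problem-layer logic depends only on the abstraction:
def overtemperature(source: TemperatureSource, limit: float) -> bool:
    return source.read_celsius() > limit

# IO layer implements the interface by wrapping driver calls (simulated here):
class SimulatedDaq(TemperatureSource):
    def read_celsius(self) -> float:
        return 21.5  # a real implementation would call the instrument driver

over = overtemperature(SimulatedDaq(), limit=20.0)
print(over)  # True
```

Swapping hardware now means writing a new `TemperatureSource` subclass; the problem logic never changes.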

The final benefit is more practical. I don’t let dependencies cross multiple boundaries – the classic case is type defs that end up spreading throughout an application. Reducing these dependencies allows faster loading and editing and fewer “random” recompiles in LabVIEW. Sometimes this can feel hard – it seems inefficient to convert between very similar type defs – but generally computer cycles are cheap and the benefits in the software are worth it.


For me, increased awareness of these layers has been the single biggest gain in the way I design my applications, and I hope it might help you as well. I suspect these ideas need a bit of refining and distilling, and I would love your feedback or thoughts.

MVC In LabVIEW – Making More Modular Applications Easier

If you are reading around the internet on blogs like this you are also probably searching for the Mecca of clean, readable, maintainable code which is quick and easy.

OK, we all know that doesn’t exist but I have been working on a new MVC library that has the potential to help.

Model View Controller, or MVC, architectures appear to be somewhat of a staple of modern software design. The idea is that you divide your software into three parts that interact:
MVC Diagram (Model-View-Controller)
This helps to make your system more modular and easier to change, for example you should be able to completely change the GUI (View) without touching anything functional. There is a fairly well defined method of implementing MVC in LabVIEW using queued message handlers and user events.

I have been starting to work with Angular.js, an MVC (well, MVWhatever) framework for JavaScript. In this, the view is provided by HTML and CSS pages, and controllers and models are written in JavaScript. To bring it together you simply reference items in the view that exist in the scope, and Angular.js does all of the binding to keep these in sync. I wanted something this simple to allow rapid and easy development of MVC applications in LabVIEW.

Luckily, whilst I was having these thoughts the CLD Summit happened here in the UK, so I proposed working through this idea as part of the code-challenge section of the day and managed to find a group of (hopefully) willing programmers to explore it with.

The Model

So let’s start with the model.

JavaScript has the significant advantage of being a weakly typed language. To emulate this, and to avoid the headache of having loads of VIs to manage different datatypes, we defined a model which uses variants at its core to store the data.

This will have a performance impact, but you can’t have it all! To make things more interesting, I have often found it useful to be able to refer into the model by name as well (this might be something we need later). Therefore the data points are stored in a variant dictionary so that we can recall them by name.

labview variant dictionary
Adding Data Items to a Variant Dictionary

Note I have also wrapped the data items and models in objects, which are in turn held in DVRs: objects because that’s how I like to organise my code and it helps give these items a unique look; DVRs because, fundamentally, the model and data points should be common wherever you access them in the program.

Then to access items we update the item in the dictionary. Because we are using DVRs we only have to read the DVR back out, and then we can update through the data class (which exposes a data write through a property node).

Updating a Data Point
(Get Variant Attribute is in the Get Item VI)

So the model is not too difficult and, to be fair, there are several libraries with similar methods, as essentially this is a variable engine (the CVT library, for example).
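For readers less familiar with variant dictionaries, here is a loose Python analogue of the model (not the library’s actual API – LabVIEW code is graphical, and these names are invented): a name-addressable store of arbitrarily typed values, with a lock playing the sharing role a DVR plays in LabVIEW.

```python
import threading

class Model:
    """Name-addressable store, analogous to the variant-dictionary model.
    The lock gives one shared, consistent copy, much as a DVR does."""
    def __init__(self):
        self._lock = threading.Lock()
        self._items = {}

    def set(self, name, value):
        with self._lock:
            self._items[name] = value   # values of any type, like variants

    def get(self, name):
        with self._lock:
            return self._items[name]    # recall by name, like Get Variant Attribute

model = Model()
model.set("setpoint", 55.0)
model.set("units", "degC")
print(model.get("setpoint"))  # 55.0
```

Everything goes through the one store, so every part of the program sees the same data point, whichever loop reads it.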

The View

The bit that was bugging me was the data binding. There are ways it can be done, but I really didn’t want the developer to have to write any code for this – it should be as simple as naming controls in a given way, without having to add another whole process to your code.

There are two basic approaches possible:

  1. Polling: Angular.js actually uses a polling mechanism to check for value changes, and I have used a similar solution to bind shared variables to OPC UA tags. This involves spawning a separate process, with the extra complexity of controlling it.
  2. Events: Events are highly efficient but, again, we want to avoid a parallel process. This is where event callbacks seemed to solve the problem.

Event callbacks allow you to register for an event but rather than using an event structure to define the code, we can just define a callback VI which is called every time an event fires. This happens in the background without needing a parallel process.

Despite what the help file says, these currently work without issue for front panel events as well as ActiveX or .NET events.

This allows us to bind the data items to controls (by registering for the value change event) or bind to indicators (register for a user event which the data point fires every time it is updated).

Register Event Callback to Bind to Control
VI Which is Called on Value Change

The final step is to be able to bind to the front panel automatically – this is the simple bit. There is a VI that finds all of the labels matching a pattern, {dataname}, and automates the binding process.
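The binding idea itself is the observer pattern, which can be sketched in Python (again only an analogue of the LabVIEW mechanism; the class and names are invented): the data point fires its registered callbacks on every write, with no dedicated handler loop.

```python
class DataPoint:
    """A model item that runs registered callbacks on every write,
    standing in for the user event a data point fires when updated."""
    def __init__(self, value=None):
        self._value = value
        self._callbacks = []

    def bind(self, callback):
        self._callbacks.append(callback)  # like registering an event callback VI

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for cb in self._callbacks:
            cb(new)  # runs in the background of the write, no parallel process

indicator = []                 # stands in for a front panel indicator
point = DataPoint(0)
point.bind(indicator.append)   # "bind" the indicator to the data point
point.value = 7
print(indicator)  # [7]
```

Writes to the model propagate to everything bound to them, which is exactly the Angular.js-style behaviour the library is after.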

Where can I get it?

I have packaged this code on Bitbucket. Please download the VI package and give it a try.

What Next?

Firstly, let me know what you think, it makes it much more fruitful to know people are trying it and (hopefully) enjoying it.

I have a few ideas for improvements including:

  • Working out a good error handling system on the callbacks.
  • Allowing the callback VIs to be replaced with custom versions.
  • Saving and loading the model to/from file.

You can see the plans, or add bugs or features to the issues section on bitbucket.
