Why Software Should Be Like Onions

I have now been writing full applications for well over 18 months and I can honestly say that my development has changed (and improved!) significantly in that time.

When I look at my software now I see two major areas of improvement: it is far more readable and far less coupled than in the past. This is because I have made my software like onions: it has layers.

What Layers?

I will start by saying these are not rules, and they are not fixed, but they are common.

  • “Process” or “Application” level – This is essentially your application framework. It handles connecting the dots, moving data between asynchronous processes, and is really the scaffolding of the application.
  • “Problem” level – Also often termed business logic. This is the part that is very application specific and really defines the function and process of the application. This level should be described entirely in the terms of the problem you are trying to solve.
  • The IO layer – This is the layer that connects your problem to the outside world. The important distinction here is that the API/interface is defined by the problem level.
  • Reuse code and drivers – The generic libraries and hardware drivers that everything above is built on.

Essentially each layer is composed of the elements below it; the IO layer, for example, is composed of libraries, interfaces and reuse code.

Why It Works – Readability

If I gave you a blank VI, could you rewrite your application as it stands? Probably not, even though you have done it once before. We never write a whole application at once though; we focus on one area at a time. The layers hide the complexity below them from view, allowing us to focus on the task at hand.

The process layer should only show how the problem level components talk to each other.

The problem layer should only show how data is taken from an input, processed and sent to the relevant outputs.

The IO layer should show the specific combinations of driver calls to achieve the desired input or output.
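
LabVIEW is graphical, so I cannot paste a block diagram here, but as a rough text analogy the separation looks something like the following Python sketch (every name is hypothetical, an illustration of the idea rather than real code):

    def daq_read(channel: str) -> float:
        """Stand-in for a real driver call (the reuse code/driver level)."""
        return 0.231  # pretend raw reading

    class TemperatureInput:
        """IO layer: combines driver calls into a problem-level reading."""
        def read_celsius(self) -> float:
            return daq_read("Dev1/ai0") * 100.0  # scale raw value to degrees C

    class OvenMonitor:
        """Problem layer: logic expressed purely in problem-domain terms."""
        def __init__(self, sensor: TemperatureInput):
            self._sensor = sensor

        def oven_is_safe(self) -> bool:
            return self._sensor.read_celsius() < 250.0

    # Process layer: connects the dots and moves data between components.
    if __name__ == "__main__":
        monitor = OvenMonitor(TemperatureInput())
        print("Oven safe:", monitor.oven_is_safe())

The point is that each piece of code sits in exactly one layer and only ever calls downwards.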

This works because we can only hold so much of the program in our heads at once. When driver calls and decision logic share one diagram, our mental model of what is happening has to be much larger, making us slower and more prone to mistakes.

Why It Works – Dependency Management

Similarly, when different parts of the program are coupled together it is hard for us to comprehend the knock-on effects of changes.

By having these layers contained we have well-defined interfaces and can track changes. Change the driver? We only need to look at the IO layer. Changing how processes communicate? We only need to worry about the process layer.

We can also identify uses of dependency inversion more easily. This means that rather than the problem domain being coupled to a driver directly, we define an interface (my implementation uses classes and dynamic dispatch) which is owned by, and designed from the perspective of, the problem level. This is far less likely to change than the driver calls themselves and protects the problem logic from the IO logic.
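
Sketched in text (hypothetical Python again, with an abstract class standing in for a LabVIEW class with dynamic dispatch):

    from abc import ABC, abstractmethod

    class MeasurementSource(ABC):
        """The interface: owned by the problem level and written in its terms."""
        @abstractmethod
        def read(self) -> float: ...

    class NiDaqSource(MeasurementSource):
        """IO-layer implementation; only this class knows about the driver."""
        def read(self) -> float:
            return 1.0  # stand-in for the real driver calls

    def log_measurement(source: MeasurementSource) -> None:
        """Problem-level code depends only on the abstract interface."""
        print(source.read())

    log_measurement(NiDaqSource())  # the concrete driver is injected from outside

Swapping the driver for a simulated source then touches nothing in the problem level.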

The final benefit is more practical. I don't let dependencies cross multiple boundaries; classic cases are type defs that end up spreading throughout an application. Containing them reduces the dependencies, which allows for faster loading and editing and fewer “random” recompiles in LabVIEW. Sometimes this can be hard, and it feels inefficient to convert between very similar type defs, but generally the computer cycles are cheap and the benefits to the software are worth it.


For me, increased awareness of these layers has been the single biggest gain in the way I design my applications, and I hope it might help you as well. I suspect these ideas need a bit of refining and distilling and I would love your feedback or thoughts.

Spies In LabVIEW

In my last post I covered how I am using unit testing in LabVIEW. In this post I want to share a useful tool on that journey.

Test Doubles

One of the first rules I operate under while testing is to use as much of the real application as possible when testing a behaviour. Each test then becomes more valuable, as it covers the integration of all the units as well as their contents.

Sometimes, however, this simply isn't possible. You may need to talk to hardware that isn't available, or to a database or file that is hard to verify as part of the test (it's normally possible somehow, but perhaps not worth the effort). This is where test doubles come in.

A test double is simply a substitute for a software component that you want to exclude from the test. In my last post I mentioned that OO makes testing easier; this is why. If these components are written to an abstracted interface we can replace them with a double using dynamic dispatch.

Sometimes JohnnyDepp.lvclass can’t be used (source)

There are different types of test doubles, which are well covered in an article by Martin Fowler. Without getting bogged down in terminology, these can return canned answers, simulate some behaviour or simply act as black holes for data!

Spies

One type is called a spy. A spy object gives us a way to peek inside a component: it is a double that stores information about how it was called, so that you can query it later and make sure it was called in the way that you expected.

For example, if we send a logging library a value of 1 to log, we want to see that the file-writing function was called with a value of 1 against the correct channel name.
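
In text form the idea looks something like this (a Python sketch of the pattern, not the LabSpy API; all names are made up):

    class FileWriter:
        """The real IO component we want to exclude from the test."""
        def write(self, channel: str, value: float) -> None:
            raise RuntimeError("touches the file system")

    class FileWriterSpy(FileWriter):
        """A double that records how it was called instead of writing."""
        def __init__(self):
            self.calls = []

        def write(self, channel: str, value: float) -> None:
            self.calls.append((channel, value))

    def log_value(writer: FileWriter, value: float) -> None:
        """The code under test: should forward the value to the file writer."""
        writer.write("channel_1", value)

    spy = FileWriterSpy()
    log_value(spy, 1.0)
    assert spy.calls == [("channel_1", 1.0)]  # was it called as we expected?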

Brittle Tests Health Warning – Overuse of spies can create very brittle tests. The advantage of taking a high-level approach to testing is that your tests aren't coupled to implementation details, which change more often than behaviours. If you use spies too much and your tests see a lot of implementation details, you risk tests that frequently break and/or require more maintenance.

Spies In LabVIEW

So how do we do this in LabVIEW? Because LabVIEW is a compiled language we must specifically implement new components that contain the spy abilities.

Essentially this component must be able to store the details of each call in a way that it can be read back later. You can do this however you like! But I have created a library to try and take some of the hard work out of it.

LabSpy API

By building these spies into your class overrides you create a spy class that you can use in your testing.


Typically when we do these there are a few steps:

  1. Create setup and delete-spies methods which can be called in the setup and teardown of your test. Make sure the references they create are accessible (either output as an indicator or made accessible through another accessor).
  2. Create the override methods and add the register calls. If you want to track input parameters, create a type def for them and wire it into the parameters input of the register call function.
  3. Write the test and check the calls using the LabSpy API. The image below shows a simplified diagram of what this could look like.

Simplified Test View

Now you can check that your software is making the calls with the correct parameters.

Where To Get It

This project is released under an open source license and you can find it on my GitHub account at https://github.com/JamesMc86/LabSpy. There is also a VI Package, which you can download from https://github.com/JamesMc86/LabSpy/releases, so that you can install it into the palettes.

Feel free to download it, play with it, suggest fixes or add features that you would find useful.

How We Unit Test LabVIEW Code

So if you have been following me for a while you will know that one of my new year's resolutions was to really get to grips with unit testing LabVIEW code. It has been a successful year for this, and I thought I would share my results so far.

Result #1 – Always Unit Testing

The immediate question is whether it is worth it, and my experience says it is. There is a time investment, that much is obvious, but I have seen many benefits:

  • Better development process – Taking a component in isolation and being able to focus on and test it thoroughly on its own means less time running full applications and clicking buttons in the right order to recreate the scenario that proves the latest change works. This means more time writing code and less time debugging and testing (as well as less code coupling).
  • Higher confidence – When you have a good suite of tests around your code that you can run in an instant, you are more confident changing it without causing bugs, which looks good to customers and feels great. It also encourages refactoring, as it becomes a low-risk activity.
  • Smoother deployments – Compared to last year my commissioning visits have been far smoother, with fewer issues, and those that do come up are easier to narrow down.

This is, and will continue to be, core to how we develop software at Wiresmith Technology.

Result #2 – JKI VI Tester and LabVIEW OO are the Tools of Choice

As late as May I was trying to make NI UTF work for me. I like the concepts, and having the option of coverage testing is great, but the usability sucks, so I transitioned to VI Tester as my framework of choice.

VI Tester generates a lot of code as you go, but it lets you write very specific tests in whatever style you like and it follows standard testing conventions. My only concern is that it is unclear what the state of its development is and whether it will continue. For example, I would love to see “Before” and “BeforeEach” functions as opposed to a single “SetUp” VI, and it is also very clunky with multiple project targets, which I would love to see improved.

Slightly more controversially, I feel that to be really effective you need to be using OO. This is simply because OO and good design practices/patterns allow you to substitute test doubles where items are difficult to test (e.g. IO) or not relevant to the test (e.g. another QMH that you don't want to start, only to see what interface calls are made). I just don't see an effective way to do this with more traditional methods.

Result #3 – Test First not Test Driven

What I refer to here is purist Test Driven Development, which says that the tests drive the design of your code: you write just enough code to pass each test, refactor, and by the end you arrive at an optimal design.

I have not found much success with this. On a couple of projects I tried it instead of doing a proper up-front design and the resulting code did not feel very clear. Perhaps it is my style, or not enough refactoring, but it did not feel good to me.

What I will say is that I do follow a test-first process.

Unit Testing LabVIEW Process

The thing to remember with the test code is that we are trying to call it from as high a level as possible, substituting any test doubles as low as possible, so that we test as many of the real parts as possible, all the time trading off against the hours spent creating the test doubles or the test code itself.

Why test first? It has a couple of key benefits (a small illustration follows the list):

  • You make sure your test fails first; if the test passes before you have written the code, your testing is wrong!
  • It helps you consider the problem domain and which parts interact with the behaviour you are working on.
  • It feels like (not saying it is, it just feels like) taking two steps back when you write tests for code you have already written and know works.
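
To make the red-green rhythm concrete, here is a trivial example in pytest-style Python (a LabVIEW diagram won't paste into text; all names are hypothetical):

    # Step 1: write the test first and run it to watch it fail (red).
    def test_scale_reading_applies_gain_and_offset():
        assert scale_reading(raw=2.0, gain=10.0, offset=1.0) == 21.0

    # Step 2: write just enough code to make it pass (green), then refactor.
    def scale_reading(raw: float, gain: float, offset: float) -> float:
        return raw * gain + offset
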
So I think that is one of the first new year's resolutions I have ever kept! There is not a huge amount of information out there on unit testing with LabVIEW, so I hope this helps. I have a few specific posts in mind covering some techniques and tools that I have developed along the way, which I hope to put up over the next few months.

In the meantime I will once again plug the unit testing group on the community, which remains a great resource on this topic.
EDIT: I wrote this post before the announcement of JKI’s new testing framework. I will be looking for a project to evaluate this new approach on and will report back!

Floating Point Precision

The problem with numbers is they always look right.

If your DAQ card says that the temperature is 23.1 degrees, who are you to argue! All the way from the sensor to the screen, the quality of the information typically degrades as it is converted and recalculated.

One such source of degradation is rounding errors due to floating point precision.

Whilst floating point numbers look continuous, this is not true: they have rounding errors too. I'm not going to dig into how they work too much here, as there are plenty of resources on that (Wikipedia has more than I can bear to read right now!), but I do want to talk about how the format trades off precision against range.

LabVIEW helps us by using the double-precision format by default, which gives approximately 16 significant decimal figures, versus the standard float in many languages which only gives around 7.

But as with everything, there is a cost. Double values weigh in at 64 bits versus the single's 32 bits, which matters when you're storing a lot of data. I had such a case recently where I wanted to store timestamps in as small a space as possible with sub-millisecond precision, so the question arose: can it fit in a single?

Machine Epsilon

The first thing you will find when you go searching for floating point precision is the mystical machine epsilon.

This represents the smallest relative step a floating point number can take: the gap between 1 and the next representable number. There is a LabVIEW constant for this.

machine epsilon

This describes the best possible precision, but it can be worse. Floating point numbers represent values as a combination of a significand and an exponent (like scientific notation at school, e.g. 5.2 x 10^5), which lets the format trade off range against precision (hence the “floating” point). This means that as the magnitude of the number increases, the precision reduces.

For my example this was particularly important: timestamps in a floating point format are extremely large values (seconds since 1904), which means they lose precision. That makes this piece of code break the laws of maths:

timestamp with machine epsilon
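
The effect is easy to reproduce in any language. A LabVIEW timestamp counts seconds since 1904, roughly 3.5e9 today, and at that magnitude adding machine epsilon (or even a tenth of a microsecond) changes nothing. A quick Python check (the timestamp value is approximate):

    import sys

    t = 3.5e9                     # a timestamp-sized value (seconds since 1904)
    eps = sys.float_info.epsilon  # machine epsilon for a double, ~2.2e-16

    print(t + eps == t)   # True: the addition is lost entirely
    print(t + 1e-7 == t)  # True: the spacing at this magnitude is ~4.8e-7 s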

So I went hunting for a definition of how precise these numbers actually are, which was surprisingly difficult to find! I think there are two reasons why this doesn't appear to be defined in many places:

  1. Maybe it's just obvious to everyone else?
  2. The formula that follows makes assumptions about optimum representation; some numbers can be represented in multiple ways, which means there is no single answer.

Eventually I came across a Stack Overflow question which covered this.

In essence the rules are:

  1. For a given exponent, the error is the same for all numbers (i.e. if we are multiplying by 2^2, the smallest change for all such numbers would be 4).
  2. The exponent is set by the size of the number (i.e. if the number is 6, the exponent should be 3, as that gives us the best precision).
  3. Knowing the size of the number we can work out the exponent, and knowing the size of the floating point type and that exponent we can work out the smallest possible change.

The maths in the post is based on a function available in MATLAB that gives the epsilon (eps) value for a given number. Translated into LabVIEW, it looks like this:

calculate epsilon
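
For reference, the same calculation in text form (a Python sketch that assumes an optimally normalised representation, per caveat 2 above; a double has a 53-bit significand, a single 24 bits):

    import math

    def eps(x: float, significand_bits: int = 53) -> float:
        """Smallest representable change near x (53 bits: double, 24: single)."""
        exponent = math.floor(math.log2(abs(x)))
        return 2.0 ** (exponent - (significand_bits - 1))

    print(eps(3.5e9))      # double: ~4.8e-7 s, comfortably sub-millisecond
    print(eps(3.5e9, 24))  # single: 256 s(!) at timestamp-sized values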

With this I could see the answer to my problem: the resolution of time stored as singles is abysmal!

time precision

QMH’s Hidden Secret

Queued Message Handlers (QMH) are an extremely common design pattern in LabVIEW and sit at the heart of many of the different frameworks available for use.

At CSLUG (my local user group) we had something of a framework smackdown with Chris Roebuck and James Powell discussing a couple of frameworks and looking at some of the weaknesses of common patterns.

James’ argument highlighted one of the most common flaws with this pattern, which is clearly present in the shipping example in LabVIEW: when using a QMH you cannot guarantee that execution will happen in the order that you expect, on the data you expect.

The concept seems to work for many, though, with a QMH-style structure at the heart of most actor-oriented programming and driving some of the largest LabVIEW applications around. So what is the difference between success and failure?

A Thought Experiment

During James’ talk I had a bit of a personal epiphany about the QMH which involves a slightly different thought process.

This thought process starts by thinking about the QMH as a virtual machine or execution engine, not as part of your application. If this is the case, what are the parts? (Click to enlarge; a code sketch follows the list.)

QMH Virtual Machine

  1. The Instruction Set: The different cases of the case structure define the instruction set. These are all of the possible functions that the QMH can execute.
  2. The Program: This is the queue; it defines what the program executes and the order in which the instructions are executed.
  3. The Function Parameters: The data that is enqueued with each instruction.
  4. Global Memory: Any local or global variables used AND any shift registers on the loop (we will come back to this).
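
Written out in text, the execution-engine view looks something like this hypothetical Python sketch (the dictionary plays the part of the case structure):

    from queue import Queue

    state = {"count": 0}            # 4. global memory: the shift-register data

    def increment(data, state):     # one case of the case structure
        state["count"] += data

    def report(_data, state):
        print("count =", state["count"])

    instruction_set = {"increment": increment,  # 1. the instruction set
                       "report": report}

    program = Queue()               # 2. the program is the queue itself
    program.put(("increment", 5))   # 3. parameters travel with the instruction
    program.put(("report", None))

    while not program.empty():      # the QMH loop is the execution engine
        opcode, data = program.get()
        instruction_set[opcode](data, state)  # any instruction can touch the state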

It’s All About Scope

Scope is important; we all know that when it comes to things like global variables. Scope, however, is all about context and control, and there are two scoping concerns at the centre of many issues with the QMH pattern.

The Program: In the typical pattern, any code holding the queue reference can decide, at any time, to enqueue instructions.

Global Memory, and in particular the shift registers on the loop, also gives some global state. The shift registers are a big part of the dirty little secret. Common sense says anything on a wire is locally scoped: it cannot be modified outside of the wire. However, this is about context. To the QMH this is true; the shift register data is locally scoped. To a function/instruction inside the QMH, though, it is not. In the context of a function this data is global, since other functions can modify it, i.e. you cannot guarantee the state is the same as you left it.

So how do you use the QMH safely? You should reduce the scope of at least one of these two elements.

Reducing the Scope of the Queue

This is something that is beginning to emerge in a major way.

I first saw this pattern a couple of years ago in a framework called TLB’ that Norm Kirchner proposed. I have since seen at least two alternatives that follow a similar pattern (which I'm not sure are published, but you know who you are, thanks!).

The gist of the pattern is that we separate two structural elements of the QMH:

  1. An event handler that can take external events and determine what work needs to be done in reaction to each event.
  2. A work queue, which is something like a more traditional QMH except that only the event handler can add work items.

This could look something like this in LabVIEW:

This is vastly simplified to show the core structural elements

(If you look at TLB’ it has the same elements, but reversed on the screen.)
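
In text form the split might look like the following hypothetical Python sketch; the key property is that the work queue is created inside the event handler and its reference is never shared:

    import threading
    from queue import Queue

    def qmh(work: Queue):
        """The work loop: a traditional QMH, but nobody else can enqueue to it."""
        while True:
            item = work.get()
            if item == "shutdown":
                break
            print("executing:", item)

    def event_handler(events):
        work = Queue()  # created here and never shared outward
        worker = threading.Thread(target=qmh, args=(work,))
        worker.start()  # the two loops run in parallel
        for event in events:      # stands in for the LabVIEW event structure
            if event == "start":  # translate each event into work items
                work.put("configure")
                work.put("acquire")
            elif event == "stop":
                work.put("shutdown")
        worker.join()

    event_handler(["start", "stop"])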

This has some distinct advantages:

  1. As long as we don't share the original queue reference, only the event structure or the QMH itself can queue work items. This gives better control over race conditions in terms of order of execution.
  2. It overcomes another distinct drawback of the shipping QMH example: data can easily be shared between the event handler and the QMH on a wire, using the same shift register structure as before, removing the need for the various hacky ways of achieving this normally (again, credit to James Powell for this observation).

The disadvantages?

  1. Our event-handling response time is now limited by the time taken to complete the work backlog; we have made our program serial again. I suspect this is a cost most applications can bear in exchange for the simplicity.
  2. It doesn't deal naturally with time-based systems like DAQ, but does the standard QMH really?

I really like this structure. Parallel programming is hard, and this removes many of the complexities it introduces for event-response type applications in LabVIEW. I expect we may see more and more of these patterns emerge over the next couple of years.

Reducing the Scope of Instruction Data

The above is a nice solution to the issue of controlling execution order in a QMH, and I believe a distinct improvement that I've been hoping to write about for a while. However, I feel it addresses a symptom of a deeper root cause.

A robust implementation shouldn’t care about execution order. The fact that it does points to a more fundamental flaw of many QMH examples/implementations.

We should be used to this as a fundamental problem of parallel programming (the QMH execution-engine model really is a concurrent programming model). How do you ensure that a function, or in this case a QMH instruction, is safe to run in parallel without race conditions?

You never use data that can be modified outside of that function.

Global variables, local variables (in some instances) and Get/Set FGVs could all be modified at any time by another item, making them susceptible to race conditions.

This is all still true of a QMH instruction, but now we also add to our race-condition risks the cluster on the shift register, which could be modified by any instruction that runs between our instruction being queued and it actually executing.

I see two major solutions to avoid this:

  1. Pass all relevant data with the instruction (i.e. in the data part of the cluster); this ensures the integrity of the execution data.
  2. Don't use the QMH as a replacement for subVIs. This is common, and you can see it in the shipping example below.

NI Shipping QMH

I think this is a common source of problems. Sure, a subVI encapsulates functionality and so does a case of a QMH, but the QMH case is effectively an asynchronous call, which introduces so much more complexity.

The example with Initialize Data and Initialize Panel is typical. This functionality could easily be encapsulated in a subVI, allowing greater control over the data and over when the functions are executed. Instead we queue them for later and can't know what else might have been queued before them, or between them, creating a clear risk of race conditions.
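
Returning to point 1 above, the fix is mechanical: capture the data when the instruction is enqueued rather than reading shared state when it finally executes. A hypothetical Python sketch of the difference:

    def drive_output(value):
        print("output set to", value)

    # Risky: reads the shared cluster at execution time, so any instruction that
    # ran while this one sat in the queue may have changed "setpoint" under it.
    def apply_setpoint_from_state(_data, state):
        drive_output(state["setpoint"])

    # Safer: the value was captured at enqueue time and travels with the message,
    # so nothing can modify this instruction's data while it waits in the queue.
    def apply_setpoint(data, _state):
        drive_output(data)

    message = ("apply_setpoint", 42.0)  # the data rides along with the opcode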

Credits

This was a bit of a meaty post which was heavily inspired by others. I've tried to highlight their ideas throughout, but just to say thanks:

  • The CLA Summit – A couple of presentations and lots of discussion inspired the start of this thought process. It was great; if you're a CLA and want to improve, I cannot recommend it highly enough.
  • Central South User Group (CSLUG) – A local user group which triggered my epiphany with great presentations and discussions (see above about improving!).
  • Dr James Powell – Whose talk triggered said epiphany and highlighted some interesting flaws in the standard template.
  • Norm Kirchner – Who I'm going to credit as the first person I saw propose the isolated work queue model; if someone showed it to him first, all credit to them!


External Video – TDD, Where Did It All Go Wrong?

Things have been a bit quiet as we have been going through a number of changes at Wiresmith Technology that have been taking up my time. In the last month we have moved into our first offices, so much time has been spent on making the dream development cave!

We have also taken on a JavaScript contractor to help with some work, which has taken up time, but he is also keeping me busy with plenty of great resources that he has been using to push his own development skills on things like test driven development. Much of my spare time has therefore been spent with my head in books and YouTube videos.

So I don’t have anything new to say today as I'm still absorbing all of this information, and I hope to spit it out over the next few months in the form of various experiments, thoughts and translations to the LabVIEW world.

In the meantime, one of the great talks I have watched recently explains why Unit Testing != Testing Units, and I'm trying to understand how best to apply this. It's worth a watch and, don't worry, I have noted that it somewhat contradicts my last post! This is why I'm not adding anything until I have had a chance to process it properly.

Ian Cooper: TDD, where did it all go wrong from NDC Conferences on Vimeo.

Fixing a Simple Bug with Test Driven Development

So the CLA Summit last week gave me more opportunity to bang on about software testing even further, and it was great to discuss it with various people and see the themes coming out of it.

My common theme was that I really like the interactive nature of the Unit Test Framework. I think it plays to LabVIEW's strengths and allows for a nice workflow (for basic tests; using test vectors is far more long-winded than it needs to be!).

Another positive I took was from Steve Watts' talk on debugging and immediacy. He talked about the advantages of “runnable code”, that is, having logic contained in subVIs that can run independently, which aids the debugging process.

So as I worked this week I came across a bug which, in the process of being fixed, highlighted this well. I took a screencast of the process to show some of the benefits that I have found, including one of the most commonly cited benefits of testing: better code structure. (Go easy, I'm not as natural on camera!)

Chrome Won't Load LabVIEW Real-Time Web Configuration Pages

I spent a week at the CLA summit last week (more to follow) and got a nasty shock on my return.

On attempting to log in to configure a compactRIO system I was faced with a request to install Silverlight:

Install Silverlight


As I have used this every day for a long time, I somehow don’t believe this.

It turns out that Google has removed support for NPAPI, an API that many plugins, including Silverlight, require to run.

This finally explains why I have been seeing more security warnings about running Silverlight pages. It appears this has been coming for some time, in fact over a year! I wish someone had told me.

Fear not, there is a workaround, but it will only work until September.

  1. Navigate to chrome://flags/#enable-npapi in the browser.
  2. Click Enable under Enable NPAPI.
  3. A box will appear at the bottom of the screen with a Relaunch Now button. Press this to relaunch Chrome, ensuring you back up any work in the browser first.
  4. And back to work fixing compactRIOs.

And in September…

I guess if there is no change from the NI side it’s a move to Firefox or IE!

Strangers and Coding – Can Rock Help?

A bit of a break from the LabVIEW technical content but hopefully something many of you may find interesting.

I’ve been thinking a lot lately about the process of writing software for someone else. Both as someone that others look to for this and as someone currently looking for outside assistance on some JavaScript code, it is undeniable that this is often a scary process for everyone involved. Can rock help?

Taking on or outsourcing a project involves large unknowns. Unknowns generally require trust in your partners, but that trust has to be built first; it's a chicken-and-egg situation.

I have recently become a big fan of listening to podcasts; one of my favourites is NPR's TED Radio Hour. As someone who loves TED talks but never gets around to watching them, it is a great way to consume them.

This week’s theme was all about playing games. The interesting section for me was the first about strangers.

Research shows that just putting two strangers in a room together, with no tasks, just in each other's presence, raises their stress levels.

In fact the study goes on to show that this stress makes it very hard to empathise with the other person. In a related experiment on pain, participants tended to believe their pain was worse than the stranger's, even when it was inflicted in the same way (holding your hand in ice water; no developers were harmed in the making of this article!).

This reminded me immediately of the relationships involved in taking on a new project: everyone is on edge, forced to leap into the unknown with a stranger.

The solution? In this case it was shown that 15 minutes of playing Rock Band together eliminated this stress, causing participants to empathise with each other as much as with a good friend.

So as part of our on-boarding process, bring your singing voice! Not really, but it has certainly set my mind to work on what we can do to remove the unknown and make the process easier for everyone involved.

LabVIEW 2014 SP1 – Notice Anything New?

It’s spring! Which means it's time for the clocks to change, eclipses (well, that may have been a one-off) and a service pack release from National Instruments.

Although this is normally touted as a bug-fix release, if you dig into the readme you'll find they have snuck in a nice new feature.

The new feature is the Profile Buffer Allocations Window. This gives you a window into the run time performance of the LabVIEW memory manager that is otherwise hard to understand.

Previously we only had a couple of windows into the LabVIEW memory manager in the Tools -> Profile menu.

Show Buffer Allocations was the best way to understand where on the diagram memory could be allocated, but it doesn't tell us much about what actually happens at run time.

Performance and Memory shows us the run-time memory usage at a VI level, but with no way to track it down to the actual code execution.

profile performance and memory

But now we can see more of this at run time.

Step By Step

Launch the tool from a VI through Tools > Profile > Profile Buffer Allocations. Below you can see an example run of the Continuous Measurement and Logging sample project.

profile buffer allocations

  1. Profiling Control — To confuse things slightly, the workflow begins at the bottom! Set the minimum threshold you want to capture (the default is 20 kB) and press Start. Press Stop once you're satisfied you've captured the data you're interested in.
  2. Buffer Filters — The bar at the top controls the display of the buffers in the table, allowing you to filter by application instance, restrict the number of buffers shown and adjust the units.
  3. Buffer Table — The buffer table displays the buffers that were allocated during the run, as well as their vital stats. You can double-click a buffer to be taken to its location on the diagram.
  4. Time vs Memory Graph — One of the coolest features! Selecting a buffer in the table displays a time graph of the size of that buffer during the run. I can imagine this being great for understanding what causes dynamic buffer allocations, by showing the size and frequency of changes.

I think anything that gives us more of a view into some of the more closed elements of LabVIEW has got to be beneficial, so go and try it and learn something new about your code.

