
How to Learn or Teach LabVIEW OOP

Last week was the European CLA summit and it was fantastic. If you are an architect in Europe and you didn't go, make sure to get your boss to budget it in for next year – I find it hugely valuable every year.

An interesting conversation I had a few times was about how to learn, or train others in, using LabVIEW classes & OOP.

If you follow the NI training then you learn how to build a class on Thursday morning and by Friday afternoon you are introduced to design patterns.

Similarly, when I speak to people they seem keen to get others learning design patterns quickly – certainly in the earlier days of adoption this topic always came up very early.

I think this is too fast. It adds additional complexity to learning OOP and personally I got very confused about where to begin.

To some extent I started again with my understanding, following something like the process below (in fact I still have to finish step 6), and I feel like it has made me a stronger developer. That's why I share it here – it worked for me!

Step 1 – The Basics

Learn how to make a class and the practical elements like how private scope works. Use classes instead of whatever you used before for modules, e.g. action engines or libraries. Don't worry about inheritance or design patterns at this stage – that will come.

Step 2 – Practice!

Work with the encapsulation you now have and refine your design skills to make objects that are highly cohesive and easy to read. Does each class do one job? Great – you have learned the single responsibility principle, the first of the SOLID principles of OO design. Personally, I feel this is the most important one.

If your classes are large, make them smaller until they do just one job. Also pay attention to coupling. Try to design code that doesn't couple too many classes together – this can be difficult at first, but small, specific classes help.

Step 3 – Learn inheritance

Use dynamic dispatch methods to implement basic abstract classes when you need functionality that can be changed, e.g. a simulated hardware class or support for two types of data log. I'd look at the channeling pattern at this point too. It's a very simple pattern that uses inheritance and I have found it helpful in a number of situations. But no peeking at the others!
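
If inheritance is new to you, LabVIEW's dynamic dispatch plays the same role as method overriding in text languages. As a rough illustration only – a Python sketch of the data-log example above, with made-up names, not a LabVIEW implementation:

```python
from abc import ABC, abstractmethod


class DataLog(ABC):
    """Abstract parent - the equivalent of a LabVIEW class with dynamic dispatch methods."""

    @abstractmethod
    def write_record(self, timestamp: float, value: float) -> None:
        """Each child provides its own implementation."""


class CsvLog(DataLog):
    def write_record(self, timestamp: float, value: float) -> None:
        # Real file writing would go here.
        print(f"csv,{timestamp},{value}")


class TdmsLog(DataLog):
    def write_record(self, timestamp: float, value: float) -> None:
        print(f"tdms <- {timestamp}: {value}")


def log_sample(log: DataLog, timestamp: float, value: float) -> None:
    # The caller only knows about the abstract parent, so either child
    # can be supplied without changing this code.
    log.write_record(timestamp, value)


log_sample(CsvLog(), 0.0, 23.1)
log_sample(TdmsLog(), 0.1, 23.2)
```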

Step 4 – Practice!

This stage is harder than the last. You need to make sure:

  • Each child class should exactly reflect the abstract methods. If your calling code ever cares which sub-class it is calling, by using strange parameters or converting the type, then you are violating LSP – the Liskov substitution principle, the L of SOLID (see the sketch after this list).
  • Each child class should have something relevant to do in each of the abstract methods. If it has methods that make no sense for it, this is a violation of the interface segregation principle.
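
As a loose illustration of the first point – a Python sketch with hypothetical names, not taken from any real project – this is the kind of smell that signals an LSP violation:

```python
class Logger:
    """Abstract parent of a family of loggers."""

    def write(self, value: float) -> None:
        raise NotImplementedError


class FileLogger(Logger):
    def write(self, value: float) -> None:
        print(f"file <- {value}")


class DatabaseLogger(Logger):
    def write(self, value: float) -> None:
        print(f"database <- {value}")


def record(logger: Logger, value: float) -> None:
    # BAD: the calling code cares which child it was given.
    # If any child could be substituted freely, this branch would not exist.
    if isinstance(logger, DatabaseLogger):
        value = round(value, 3)  # special-case handling for one particular child
    logger.write(value)


record(FileLogger(), 1.23456)
record(DatabaseLogger(), 1.23456)
```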

Step 5 – Finish SOLID

Read about the open-closed principle and the dependency inversion principle and try it in a couple of sections of code.

Open-closed basically means that you expose interfaces (abstract classes in LabVIEW) and can then change the behaviour by creating a new child class (open for extension) without having to modify the original code (closed to modification).

This goes well with the dependency inversion principle, which says that higher-level classes should depend only on interfaces (again, abstract classes in LabVIEW). The lower-level code implements these interfaces, so the high-level code can call the lower-level code without a direct dependency. This can help in places where coupling is difficult to design out.
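
A compressed sketch of both ideas together (Python, with hypothetical names – not a LabVIEW implementation):

```python
from abc import ABC, abstractmethod


class Notifier(ABC):
    """The interface the high-level code depends on (dependency inversion)."""

    @abstractmethod
    def send(self, message: str) -> None:
        ...


class PopupNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"popup: {message}")


# Adding email support later means adding a new child class (open for extension);
# check_limit below never has to change (closed to modification).
class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"email: {message}")


def check_limit(reading: float, limit: float, notifier: Notifier) -> None:
    # High-level logic depends only on the Notifier abstraction,
    # never on a concrete implementation.
    if reading > limit:
        notifier.send(f"Reading {reading} exceeded limit {limit}")


check_limit(25.0, 20.0, PopupNotifier())
check_limit(25.0, 20.0, EmailNotifier())
```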

I leave these principles to the end because I think they are the easiest ones with which to write difficult-to-read code. I'm still trying to find a balance with them – following them wholeheartedly creates lots of indirection, which can affect readability. I also think we don't get as much benefit from them in LabVIEW since we don't tend to package code within projects in the same way as other languages (this may be a good topic for another post!).

Step 6 – Learn some design patterns

This was obviously part of the point of this article. When I came back to design patterns after understanding design better and the SOLID principles it allowed me to look at the patterns in a different way. I could relate them to the principles and I understood what problems they solved.

For example, the command pattern (where you effectively have a QMH that takes message classes) is a perfect example of a solution that meets the open-closed principle for an entire process. You can extend the message handler to support new message types by creating new message classes instead of modifying the original code. This is how the Actor Framework works; it has allowed the developers to provide a reliable implementation of actor control while still letting you add messages to define the functionality.
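
For anyone who hasn't met the pattern, its shape is roughly this – a Python sketch with hypothetical message names, not the Actor Framework's API:

```python
from abc import ABC, abstractmethod
from queue import Queue


class Message(ABC):
    """Abstract message - one child class per command the process understands."""

    @abstractmethod
    def execute(self, state: dict) -> None:
        ...


class StartAcquisition(Message):
    def execute(self, state: dict) -> None:
        state["acquiring"] = True


class StopAcquisition(Message):
    def execute(self, state: dict) -> None:
        state["acquiring"] = False


def message_handler(queue: Queue) -> None:
    """The QMH-style loop: it never changes when new message classes are added."""
    state = {"acquiring": False}
    while True:
        msg = queue.get()
        if msg is None:        # sentinel to shut the loop down
            break
        msg.execute(state)     # dynamic dispatch picks the right behaviour


q = Queue()
q.put(StartAcquisition())
q.put(StopAcquisition())
q.put(None)
message_handler(q)
```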

Once you understand why these design patterns exist you can then apply some critical thinking about why and when to use them. I personally dislike the command pattern in LabVIEW because I don’t think the additional overhead of a large number of message classes is worth the benefit of being able to add messages to a QMH without changing the original code.

I think this will help you to use them more effectively, and you are less likely to end up with a spaghetti of design patterns thrown together just because that is what everyone was talking about.

Urmm… so what do I do?

I know this doesn't give you the information you need to actually do all of this so much as set out a programme. That said, all the steps still follow the NI course on OOP, so you could simply self-pace that for the general learning material.

From memory, the course doesn't really cover SOLID very much, but if you google it you will find a huge number of resources on the topic, or you can get the book "Agile Software Development" by Robert Martin, which I believe is the text that first coined the term SOLID (the principles already existed but this brought them together).

Given, When, Then In LabVIEW Tests

A few months ago in the Austrian alps I was skiing, attempting my second ever slalom run of the trip. Those of you that I have seen recently will know it didn't end well!

I gathered too much speed, caught some ice and tore my ACL.

Since then I have had to do some exercises each morning to prepare it for surgery. I tend to try and watch some interesting talks while I do this and make use of my "bonus" time. They are often TED talks, but I had watched many of the latest ones so, super geeky, I switched to GOTO Conference talks, which are software talks – mainly based around web technologies.

I find it interesting to watch some of the talks that skirt the edge of the technical and understand how they can be applied to LabVIEW. This was certainly true of Level Up Your Unit Tests.

Descriptive Tests

The talk is somewhat the story of a developer's transition to a new testing tool, and there is one piece that really appealed to me.

There is a concept I have come across before for a structure for acceptance tests called Given, When, Then. The idea is it clearly describes every aspect of a test situation:

  • Given: The pre-conditions.
  • When: The trigger or action.
  • Then: What the software/system should do in response.

For example:

Given we have a high temperature alarm, when the user clears the alarm then an alarm should no longer show as active.

In the video, Trisha Gee describes a test framework that was new to her which actually describes the tests in this structure. This greatly helps with clarity and highlights problems with the code if any section gets too large. Ideally:

  • Given is small. If it starts to get quite big, the test starts to sound more like an integration test and less like a unit test. It should also contain no assertions – this is not the subject of the unit test.
  • When is tiny. This should ONLY be the code you are actually testing.
  • Then is tiny. This contains your actual tests and assertions. Since a unit test should test one thing, there should only be one assertion here, or a few tightly related assertions.
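
To make the structure concrete in a text language, here is a Python unittest sketch of the alarm example above (AlarmList is a hypothetical stand-in for the code under test, not a real library):

```python
import unittest


class AlarmList:
    """Minimal stand-in for the system under test."""

    def __init__(self):
        self._active = set()

    def raise_alarm(self, name):
        self._active.add(name)

    def clear(self, name):
        self._active.discard(name)

    def is_active(self, name):
        return name in self._active


class HighTemperatureAlarmTests(unittest.TestCase):

    def test_cleared_alarm_no_longer_active(self):
        # Given: a high temperature alarm has been raised (setup only, no assertions)
        alarms = AlarmList()
        alarms.raise_alarm("high temperature")

        # When: the user clears the alarm (only the code under test)
        alarms.clear("high temperature")

        # Then: the alarm should no longer show as active (one focused assertion)
        self.assertFalse(alarms.is_active("high temperature"))


if __name__ == "__main__":
    unittest.main()
```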

Why Looks Matter

What struck me was that my unit tests – while quite effective – are a mess compared to my normal code. I tend to rattle through them and not give them the full attention that they need. This can hurt me when I have to return to them to understand why they fail.

So I have experimented by taking the descriptive structure of the framework that Trisha describes in the video and implementing it in LabVIEW. The idea is that we want clearly separated sections with defined boundaries, so I found flat sequence structures work well.

Let me give you a (kind of) before and after.

Before:

[Image: old test style]

This is the old style. It works well. However, just looking at the code, there are test cases spread throughout (5 in total) and it isn't clear from the code alone what is being tested.

After:

[Image: given-when-then test]

(Yes this is a different test, I haven’t rewritten all of my tests to this format)

Here it is much clearer what is just setup code, what is the code under test and then what the conditions are that we really care about.

It also makes it really obvious if I have tests that are really just checking that the setup has worked, which is what some of the tests in the before case are doing. (Sometimes this can be really useful, but I think the answer is that this code should have been tested somewhere else – I need to think this through more.)

Now I really am running out of things to say on unit testing! I have a few more OO posts in the pipeline as well as a couple of tips & tricks posts that I hope to do this year. It has been very busy the past couple of months but I will be having some time off over the summer while they reconstruct my knee! So expect a few more posts then.

 

UPDATE:

Thanks to the commenters below I wanted to include a couple of links.

  1. This template is now built into the VITAC toolkit at https://github.com/WiresmithTech/VITAC/releases (thanks for the prompt Fab!)
  2. Ivan’s tip below for adding this to the default class is great and I have this on all my systems now – http://kosist.org/2018/08/modify-default-vi-tester-testcase-template/

Bringing the Command Line Interface to LabVIEW

Those of you that know me or have been following the blog will know that for a while now I have been practicing test driven development in LabVIEW.

This is great, most of my LabVIEW projects now have 50-100 tests attached to them that check various parts of the system are working, but of course, this is only when I remember to run them!

We are all fallible. When it comes to 6pm and I want to go home, I add my last flourish, commit to source control and go home, forgetting to test.

 

Well, with my JavaScript code I get a voice from the cloud that tells me off if I make a mistake. The voice is Jenkins – a build server which is used for continuous integration. Every time I check in my code, Jenkins tests and builds it to make sure nothing is broken.

(Well the voice is an email, but you get the idea)

I get no such prompt with LabVIEW.

 

There have been a number of projects over the years to do this. JKI have a system that we managed to set up and get working a couple of years ago, but it depended on some intermediate files and took a bit of fiddling.

In order to try and make it easier, a couple of years ago I learnt some Java to try and build a plugin for Jenkins which could talk to some corresponding LabVIEW code over TCP. But it was over-complicated and I was over-ambitious and never finished it.

When I then set this up for my JavaScript application, even though there is no built-in support for it, it was so easy! Why?

 

The all-powerful command line interface!

 

Although most people have long given up on it (if you are a LabVIEW developer, you're probably more of a GUI kind of person), it is still a common and straightforward way to get two programs to talk to each other (it forms the backbone of the Unix philosophy).

If we can use the command line, we can talk to Jenkins and any other technology which comes up for the next decade at least.

The problem is that LabVIEW's command line interface is quite basic. You can receive calling arguments but you can't receive or send text (stdin and stdout) and you can't return an exit code. That last bit is critical, since it is how one program determines whether another was successful, and therefore how we signal to Jenkins whether our tests have passed or failed.
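
For comparison, this is the contract a build server expects from any command line tool – sketched in Python purely to show the stdout/stderr/exit-code idea that a plain LabVIEW executable lacks (the test-run details are hypothetical):

```python
import sys


def main() -> int:
    # Arguments come in on the command line, results go to stdout, errors to stderr.
    args = sys.argv[1:]
    print(f"Running tests with arguments: {args}")  # stdout

    failures = 0  # imagine the test run fills this in
    if failures:
        print(f"{failures} tests failed", file=sys.stderr)
        return 1  # non-zero exit code tells Jenkins the build failed

    return 0      # zero means success


if __name__ == "__main__":
    sys.exit(main())
```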

[Image: LabVIEW command line support]

JKI solved this problem through the use of batch files and text files but I wanted to try a different method.

I created LabVIEW CLI. This has two components:

[Image: labview-cli]

  • A C# console application. This can run on the command line and basically proxies the interface through TCP to…
  • A corresponding LabVIEW library that gives us the ability to access the command line.

[Image: how the LabVIEW CLI looks]

The philosophy is that the C# application is very small but can be called directly by Jenkins or any other language and will then launch LabVIEW or our LabVIEW built application.

I’m hoping that by keeping it quite simple, it can become a building block for CI tools and others for LabVIEW.

So if you're interested you can check out the builds on GitHub and give it a go. I am not currently using this in anger, so please don't bet your job on it working! But early tests look good!

You will need to install the C# application with the installer, and then there is a VI Package for the LabVIEW library. Look at the readme on GitHub and happy building!

If you find any issues you can create bugs on the GitHub page or comment on the NI community page.

Labels, Labels, Everywhere

 

I’ve had a few long journeys over the last couple of weeks and managed to catch up on some reading.

In particular I have been jumping into some sections of one of my favourite programming books – Code Complete.

As with most books, the practical techniques directly apply to text based languages. Certain things like naming conventions for variables don’t readily apply to LabVIEW since everything is graphical.

However, I think we need to think about text as well. It can be fast to read and unambiguous in meaning (when done well).

LabVIEW supports text documentation, mostly you interact with it through the context help window.

However, as has been pointed out in the last couple of CLA Summits, we should look to reduce the amount we have to break out of our programming flow.

I’ve talked about comments before, but this triggered some other thoughts.

SubVI Names

Increasingly, certainly for internal APIs, I find myself mostly relying on text on the icon except where there is a well-established precedent (like init or close).

I like this because it makes my code faster to scan, I don’t have to do the mental translation from icon to meaning if it isn’t automatic.

[Image: example where I have used text for icons]

Think about the last time you drew a flow chart: you used text because it is quick and easy to get the meaning across.

(I know this has internationalisation implications, but this has not been a concern for me yet).

I am also aware of an advantage that we do have over C++ and its fellows when it comes to writing text: we can use descriptive names for our subVIs since we can include spaces.

C++ programmers have to use names like InitDaqCard() or ReadTemp() because they have to type them all the time. We can use full names for VIs, like Initialise DAQ Card.vi or Read Ambient Temperature.vi, and should take advantage of this.

Variables

Pah! Who needs them?

Well, we may not name variables (very often), but controls and indicators can and should follow good conventions, with unambiguous names and inclusion of units. As with function names, we are not limited by text-based languages' naming restrictions in the same way.

Reading through, the part that really struck a chord with me is about intermediate variables.

Code Complete advocates meaningful intermediate names (so not i, j for loop counters for example). This hugely impacts the readability of code.

It actually goes further and suggests forgoing the performance hit and even adding unneeded variables for complex routines where they improve the readability by giving meaning to intermediate values.

Again does this apply to LabVIEW? We don’t have variables?

Actually we do, they are called wires.

When was the last time you drew a block diagram on a whiteboard and left the arrows unlabeled?

It would be hard to read but I find I rarely label wires in LabVIEW (unless they are obviously unclear) and I suspect I’m not the only one.

[Image: some comments, but wires aren't clear or are ambiguous]

Context on these connections greatly improves readability. Again, we get some of this in LabVIEW through context help, as some names automatically propagate from functions.

But if we labeled wires more it could save you using context help, or even opening subVIs to understand what is going on. Do that a few times a day and the productivity adds up.

[Image: larger but clearer, with more wires labelled]

Oh, and unlike creating an intermediate variable for readability, this comes with no performance implications in LabVIEW!

I will be trying to use them more often over the next few weeks and see how much I can improve the readability of my code.

 

Does this make sense? Or am I completely wrong? Let me know in the comments.

European CLA Summit 2016

Wow.

I am sat in a hotel lobby after the European CLA summit blown away by the amount of talent coming through in the LabVIEW community.

The CLA summit is an event designed for LabVIEW Architects to come together and share concepts and ideas to continue learning after the end of the NI courses.

It was a huge event this year, with (I believe) around 140 attendees and a great proportion of new faces.

Some of my personal favourites:

  • James Powell presented the changes he would make to the standard QMH template to make it less likely to hit bugs, by asking "What would a subVI do?". An earlier form of this discussion changed the way I view this pattern and I think that everyone should see it!
  • The G Code Manager – A simple plugin tool that can greatly speed up managing some of the properties of VIs and classes. This is now online here and I am definitely going to be trying it out.
  • LabVIEW Channels – Jeff Kodosky presented these, and the more I learn about them the more intrigued I get. Especially when Stephen Loftus-Mercer illustrates a potential new design pattern using them.
  • LabVIEW Containers from Chris Cilino – Similarly, I have seen bits of these a few times but something resonated a lot more this time. I saw more of where they could save me time and will be looking to try them out.

But I have to say there was not one bad presentation. They all gave me ideas from looking at continuous integration in LabVIEW again to thinking about our role as a community and what we need to do to bring it forward.

Naturally, I couldn't help but get up there myself. I presented my unit testing methodology, which I have previously written about, and a short presentation about a command line tool I'm working on (expect to see something here on that soon!).

Finally, I find some of these things are best summarised by twitter!

 

 

How Much Should You Comment Your Code?

How much should you comment your code? As with many things, I'm not convinced there is a "right" answer, but it is something you need to consider for your situation.

The extremes

Initially, when we start coding, we are often told to write lots of comments to explain what the code does. Bad programmers don't comment, we are told. In fact, recently, when a non-programmer was trying to work out whether my code was good, the first question asked was whether it was commented (the second was whether I had used OO).

More recently – in generation Z, or whatever we are in now – comments are fought against, with some advocating that you shouldn't comment at all, since comments often fall out of date with the code and become misleading. The idea is that if you write good code (well-named variables and functions, clean structure) it can tell you more than comments anyway.

So Who’s Right?

As usual it isn’t black and white.

The idea of not having comments because they become out of date is rubbish. You are a professional; it is your job to make sure your code is readable and correct, so if your comments are regularly out of date you have failed – do better next time.

The reality, though, is that I think this standpoint comes from a reaction to overly commented code. Comments should not just describe the code in gritty detail – that is what the code is for! So let's not throw out the baby with the bathwater.

So what should you do?

As with anything step back and try to understand why you are commenting. What benefit are you looking for? Why is it hard without it? This should lead you to an answer.

For me comments should enhance the code and say something that the code doesn’t on its own. Off the top of my head these would be things like:

  • The overall intention of a block of code that doesn’t make sense as a subVI but works to a single goal. (This one is a little dubious, often a subVI might be the better choice)
    [Image: comment describing a code block]
  • Design decisions for why you have structured the code the way you have.
    [Image: design decision comment]
  • Translating code terms into real world terms. For example using sub diagram labels that says ‘this step is complete’ rather than the standard “true” or “false” of the case statement.
    [Image: subdiagram labels]
  • Units and derivations of constants and numbers – why are we dividing time by 3600? (Hint: seconds in an hour. See the sketch after this list.)
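
To show what I mean by that last point in a text language (a trivial Python sketch, nothing LabVIEW-specific):

```python
SECONDS_PER_HOUR = 3600  # 60 seconds * 60 minutes - the derivation, so 3600 isn't a magic number


def hours_elapsed(elapsed_seconds: float) -> float:
    # Units are in the names, so the division explains itself.
    return elapsed_seconds / SECONDS_PER_HOUR
```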

An interesting thing with these is that because they don’t just describe the code, most of them shouldn’t go stale over time since the outcomes of the code will often be the same. If all you do is say what the code is doing, every code change then means a comment change, and you can see how these can get missed.

Have I missed an obvious case? Let me know in the comments.

Why Software Should Be Like Onions

I have been writing full applications for well over 18 months now and I can honestly say that my development has changed (and improved!) significantly in this time.

When I look at my software now I see two major areas of improvement: it is far more readable and more decoupled than in the past. This is because I have made my software like onions – it has layers.

What Layers?

I will start by saying these are not rules and they are not fixed, but they are common.

  • “Process” or “Application” level – This is essentially your application framework. It handles connecting the dots, moving data between asynchronous processes, and is really the scaffolding of the application.
  • “Problem” level – Also often termed business logic. This is the part that is very application-specific and really defines the function and process of the application. This level should all be described in the terms of the problem you are trying to solve.
  • The IO layer – This is the layer that connects your problem to the outside world. The important distinction here is that the API/interface is defined by the problem level.
  • Reuse code and drivers.

Essentially each layer is composed of the elements below and the IO layer is composed of libraries, interfaces and reuse code.

Why It Works – Readability

If I gave you a blank VI, could you rewrite your application as it stands? But you've written it once already? The truth is we never write a whole application at once – we focus on one area at a time. The layers hide the complexity below them from view, allowing focus on the task at hand.

The process layer should only show how the problem level components talk to each other.

The problem layer should only show how data is taken from a input, processed and sent to the relevant outputs.

The IO layer should show the specific combinations of driver calls to achieve the desired input or output.

This works because we can only hold so much of the program in our heads at once. With driver calls and decision logic on one diagram, our mental model of what is happening has to be much larger, making us work slower and be more prone to mistakes.

Why It Works – Dependency Management

Similarly when there are different parts of the program coupled together it is hard for us to comprehend the knock on effects of changes.

By having these layers contained we have well defined interfaces and can track changes. Change the driver? We only need to look at the IO layer. Changing how processes communicate? You only need to worry about the process layer.

We can also identify uses of dependency inversion more easily. This means that rather than the problem domain being coupled to a driver directly, we define an interface (my implementation uses classes and dynamic dispatch) which is owned and designed from the perspective of the problem level. This is far less likely to change than the driver calls themselves and protects the problem logic from the IO logic.
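
Sketched in Python with hypothetical names (in LabVIEW this interface would be an abstract class with dynamic dispatch methods), the idea looks roughly like this:

```python
from abc import ABC, abstractmethod


# --- Problem level: owns and defines the interface it needs ---

class TemperatureInput(ABC):
    @abstractmethod
    def read_celsius(self) -> float:
        ...


def check_for_over_temperature(sensor: TemperatureInput, limit: float) -> bool:
    # Business logic depends only on the abstraction it owns, never on a driver.
    return sensor.read_celsius() > limit


# --- IO layer: implements the problem-level interface using the driver ---

class DaqThermocouple(TemperatureInput):
    def read_celsius(self) -> float:
        # Driver-specific calls would live here; the problem level never sees them.
        return 23.1


print(check_for_over_temperature(DaqThermocouple(), limit=50.0))
```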

The final benefit is more practical. I don't let dependencies cross multiple boundaries; classic cases are type defs that end up spreading throughout an application. Limiting this reduces the dependencies, which allows for faster loading and editing and fewer "random" recompiles in LabVIEW. Sometimes this can be hard – it feels inefficient to convert between very similar type defs – but generally the computer cycles are cheap and the benefits in the software are worth it.

 

For me, increased awareness of these layers has been the single biggest gain in the way I design my applications, and I hope it might help you as well. I suspect these ideas need a bit of refining and distilling, and I would love your feedback or thoughts.

Spies In LabVIEW

In my last post I covered how I am using unit testing in LabVIEW. In this post I want to share a useful tool on that journey.

Test Doubles

One of the first rules I operate under while testing is to use as much of the real application as possible when testing a behaviour. Each test then becomes more valuable, as it covers the integration of all the units as well as their contents.

Sometimes however this simply isn’t possible. You may need to talk to hardware that isn’t available, or a database or file that is hard to verify as part of the test (It’s normally possible somehow, but perhaps not worth it). This is where test doubles come in.

A test double is simply a substitute for a software component that you want to exclude from the test. In my last post I mentioned that OO makes testing easier – this is why. If these components are written to an abstracted interface, we can replace them with a double using dynamic dispatch.

[Image: sometimes JohnnyDepp.lvclass can't be used (source)]

There are different types of test doubles that are well covered in an article by Martin Fowler. Without getting bogged down into terminology these can return canned answers, simulate some behaviour or simply act as black holes for data!

Spies

One type is called a spy. A spy object is a way for us to peek inside an object. Essentially it is a double that stores information about how it was called so that you can query it later and make sure it was called in the way that you expected.

For example, if we are sending a logging library a value of 1 to log, we want to see that the file-writing function was called with a value of 1 and with the correct channel name.
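
In a text language the same idea looks roughly like this – a Python sketch with hypothetical names, not the LabSpy implementation described below:

```python
class FileWriter:
    """Real component that the code under test normally calls."""

    def write_value(self, channel: str, value: float) -> None:
        ...  # would write to disk


class SpyFileWriter(FileWriter):
    """Test double that records how it was called instead of writing a file."""

    def __init__(self):
        self.calls = []

    def write_value(self, channel: str, value: float) -> None:
        self.calls.append((channel, value))


def log_reading(writer: FileWriter, channel: str, value: float) -> None:
    """Code under test - it neither knows nor cares that it has been given a spy."""
    writer.write_value(channel, value)


# In the test: inject the spy, exercise the code, then inspect the recorded calls.
spy = SpyFileWriter()
log_reading(spy, "temperature", 1.0)
assert spy.calls == [("temperature", 1.0)]
```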

Brittle Tests Health Warning – Overuse of this can create very brittle tests. The advantage of taking a high-level approach to testing is that your tests aren't coupled to implementation details, which change more often than behaviours. If you use spies too much and assert on a lot of implementation details, you risk making tests that frequently break and/or require more maintenance.

Spies In LabVIEW

So how do we do this in LabVIEW? Because LabVIEW is a compiled language we must specifically implement new components that contain the spy abilities.

Essentially this component must be able to store the details of each call in a way that it can be read back later. You can do this however you like! But I have created a library to try and take some of the hard work out of it.

LabSpy API

By creating these spies inside of your class you can create a spy class that you can use in your testing. 

 

Typically when we do these there are a few steps:

  1. Create setup and delete spies methods which can be called in the setup and teardown of your test. Make sure the references they create are accessible (either output as an indicator or made accessible through another accessor).
  2. Create the override methods and add the register calls. If you want to track input parameters, create a type def for them and wire it into the parameters input of the register call function.
  3. Write the test and check the calls using the LabSpy API. The image below shows a simplified diagram showing what this could look like.

[Image: simplified test view]

Now you can check that your software is making the calls with the correct parameters.

Where To Get It

This project is released under an open source license and you can find it on my github account at https://github.com/JamesMc86/LabSpy. There is a VI Package so that you can install it into the palettes which you can download from https://github.com/JamesMc86/LabSpy/releases.

Feel free to download it, play with it, suggest fixes or add features that you would find useful.

How We Unit Test LabVIEW Code

So if you have been following me for a while, you will know that one of my New Year's resolutions was to really get to grips with unit testing LabVIEW code.

It has been a successful year for this and I thought I would share my results so far.

Result #1 – Always Unit Testing

The immediate question is whether it is worth it, and my experience says it is. There is a time investment, that much is obvious, but I have seen many benefits:

  • Better development process – Taking a component in isolation and being able to test it thoroughly on its own means less time running full applications and clicking buttons in the right order to create the scenario you need to check the latest change. This means more time writing code and less time debugging and testing (as well as less code coupling).
  • Higher confidence – When you have a good suite of tests around your code that you can run in an instant, you are more confident changing it without causing bugs, which looks good to customers and feels great. It also encourages refactoring, as it becomes a low-risk activity.
  • Smoother deployments – Compared to last year, my commissioning visits have been far smoother with fewer issues, and those that do come up are easier to narrow down.

This is, and will continue to be, core to how we develop software at Wiresmith Technology.

Result #2 – JKI VI Tester and LabVIEW OO are the Tools of Choice

As late as May I was trying to make NI UTF work for me. I like the concepts, and having the option of coverage testing is great, but the usability sucks, so I transitioned to VI Tester as the framework of choice.

VI Tester generates a lot of code as you go! But it lets you write very specific tests in whatever style you like and follows standard testing conventions. My only concern is that it seems unclear what the state of development is and whether it will continue. For example, I would love to see "Before" and "BeforeEach" functions as opposed to a single "SetUp" VI. It is also very clunky with multiple project targets, and I would love to understand what can be done about that.

Slightly more controversially, I feel that to be really effective you need to be using OO. This is simply because using OO and good design practices/patterns allows you to substitute test doubles where items are difficult to test (e.g. IO) or not relevant to the tests (e.g. another QMH that you don't want to start, you just want to see what interface calls are made). I just don't see an effective way to do this with more traditional methods.

Result #3 – Test First not Test Driven

What I refer to here is purist Test Driven Development. This says that the tests drive the design of your code: you write just enough code to pass each test, refactor the code, and by the end you end up with an optimal design.

I have not found much success with this. I have tried a couple of projects where I did this rather than having a proper up-front design, and the code did not feel very clear. Perhaps it is my style, or not enough refactoring, but it did not feel good to me.

What I will say is that I do follow a test-first process.

[Image: unit testing LabVIEW process]

The thing to remember with the test code is that we are trying to call it from as high a level as possible, substituting any test doubles as low as possible, to test as many of the real parts as possible – all the time trading this off against having to spend hours creating the test doubles or the test code itself.

Why test first? It has a couple of key benefits:

  • You make sure your test fails first. If the test passes when you haven't written the code, your testing is wrong!
  • It helps you consider the problem domain and what parts are interacting with the behaviour you are working on.
  • It feels like taking two steps back (not saying it is, it just feels like it) when you are writing tests for code you have already written and know works.

So I think that is one of the first New Year's resolutions I have ever kept! There is not a huge amount of information out there on unit testing with LabVIEW, so I hope this helps. I have a few specific posts in mind to cover some techniques and tools that I have developed along the way that I hope to put up over the next few months.

In the meantime, I will once again plug the unit testing group on the community, which remains a great resource on this topic.

EDIT: I wrote this post before the announcement of JKI's new testing framework. I will be looking for a project to evaluate this new approach on and will report back!

Floating Point Precision

The problem with numbers is they always look right.

If your DAQ card says that the temperature is 23.1 degrees, who are you to argue! All the way from the sensor to the screen, the quality of the information typically degrades as it is converted and recalculated.

One such source of degradation is rounding errors due to floating point precision.

Whilst floating point numbers look continuous, this is not true – they have rounding errors too. I'm not going to dig into how they work too much here; there are plenty of resources on that (Wikipedia has more than I can bear to read right now!). Instead, I want to talk about how the format trades off precision vs range.

LabVIEW helps us by using the double precision format by default, which gives precision to approximately 16 significant figures, vs the standard float in many languages, which only gives around 7.

But as with everything, there is a cost. Double values weigh in at 64 bits vs the single's 32 bits which, when you're storing a lot of data, comes at a cost. I had such a case recently where I wanted to store timestamps in as small a space as possible with sub-millisecond precision, so the question arose: can it fit in a single?

Machine Epsilon

The first thing you will find when you go searching for precision on floating point numbers is the mystical Machine Epsilon.

This represents the smallest relative change that a floating point number can represent (the gap between 1 and the next representable number); there is a LabVIEW constant for this.

[Image: machine epsilon constant]

This describes the best possible precision; however, it can be worse. Floating point formats represent numbers as a combination of a significand and an exponent (like scientific notation at school, i.e. 5.2 x 10^5), which allows them to trade off range vs precision (hence "floating" point). This means that as the size of the number increases, the precision reduces.

For my example this was particularly important, as timestamps in a floating point format are extremely large values (seconds since 1904), which means they lose precision. That makes this piece of code break the laws of maths:

[Image: timestamp with machine epsilon]

So I went in search of a definition of how precise these numbers are, which was surprisingly difficult! I think there are two reasons why this doesn't appear to be defined in many places:

  1. Maybe it’s just obvious to everyone else?
  2. A factor must be that the following formula makes assumptions about optimum representation; some numbers can be represented in multiple ways, which means that there is no single answer.

Eventually I came across a Stack Overflow question which covered this.

In essence the rules are:

  1. For a given exponent, the error is all the same (i.e. if we are multiplying by 2^2, the smallest change for all numbers would be 4).
  2. The exponent is set by the size of the number (i.e. if the number is 6, the exponent should be 3 as that gives us the best precision).
  3. Knowing the size of the number we can work out the exponent, and given the size of the floating point type and the exponent we can work out the smallest change.

The maths in the post is based on a function available in MATLAB that gives the epsilon (eps) value for a given number. Translated into LabVIEW, it looks like this:

[Image: calculate epsilon]
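
Since the LabVIEW diagram is only shown as an image, here is a rough text-language equivalent – a Python sketch of the same eps(x)-style calculation, assuming normalised values:

```python
import math


def float_epsilon(value: float, significand_bits: int = 52) -> float:
    """Smallest representable change near `value`.

    significand_bits: 52 for double precision, 23 for single precision.
    Assumes a normalised value (subnormals ignored).
    """
    if value == 0.0:
        return 2.0 ** -significand_bits
    exponent = math.floor(math.log2(abs(value)))  # the size of the number sets the exponent
    return 2.0 ** (exponent - significand_bits)


# A timestamp measured in seconds since 1904 is a very large number...
seconds_since_1904 = 3.6e9
print(float_epsilon(seconds_since_1904))       # double: ~5e-7 s, comfortably sub-millisecond
print(float_epsilon(seconds_since_1904, 23))   # single: 256 s - hopeless for timestamps
```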

With this I could see the answer to my problem: the resolution of time stored as a single is abysmal!

[Image: time precision]

