
Floating Point Precision

The problem with numbers is they always look right.

If your DAQ card says that the temperature is 23.1 degrees, who are you to argue! All the way from the sensor to the screen, the quality of the information typically degrades as it is converted and recalculated.

One such source of degradation is rounding errors due to floating point precision.

Whilst floating point numbers look continuous, they are not; they have rounding errors too. I’m not going to dig too deeply into how they work here as there are plenty of resources on that (Wikipedia has more than I can bear to read right now!), but I do want to talk about how the format trades off precision against range.

LabVIEW helps us by using the double-precision format by default, which gives approximately 16 significant decimal figures of precision, versus the standard single-precision float in many languages, which only gives around 7.
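As a quick illustration outside of LabVIEW (a sketch in Python, where the native float is a double and a struct round trip stands in for a single):

```python
import struct

# Python's float is a double; the struct round trip truncates the same value
# to a 32-bit single.
pi = 3.14159265358979323846
pi_single = struct.unpack("f", struct.pack("f", pi))[0]
print(pi)          # 3.141592653589793   (roughly 16 good decimal figures)
print(pi_single)   # 3.1415927410125732  (only about 7 good decimal figures)
```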

But as with everything, there is a cost. Double values weigh in at 64 bits versus the single’s 32 bits, which matters when you’re storing a lot of data. I had such a case recently where I wanted to store timestamps in as small a space as possible with sub-millisecond precision, so the question arose: can it fit in a single?

Machine Epsilon

The first thing you will find when you go searching for precision on floating point numbers is the mystical Machine Epsilon.

This represents the smallest change that a floating point number can represent (strictly, the gap between 1.0 and the next representable value), and there is a LabVIEW constant for it.

machine epsilon

This describes the best possible precision, but it can be worse. Floating point numbers are stored as a combination of a significand and an exponent (like scientific notation at school, i.e. 5.2 x 10^5), which allows the format to trade range against precision (hence the “floating” point). This means that as the size of the number increases, the precision reduces.

For my example this was particularly important, as timestamps in floating point format are extremely large values (seconds since 1904), which means they lose precision. This makes the following piece of code break the laws of maths:

timestamp with machine epsilon
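In text form the same surprise looks like this (a rough Python sketch, with 3.5e9 standing in for roughly the number of seconds since 1904; a Python float behaves like a LabVIEW DBL):

```python
t = 3.5e9                                  # roughly the number of seconds since 1904 today
machine_epsilon = 2.220446049250313e-16    # the double-precision machine epsilon
print(t + machine_epsilon == t)            # True: adding a non-zero value changes nothing
```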

So I went hunting for a definition of how precise these numbers are, which was surprisingly difficult! I think there are two reasons why this doesn’t appear to be defined in many places:

  1. Maybe it’s just obvious to everyone else?
  2. A factor must be that the following formula assumes the optimum representation; some numbers can be represented in multiple ways, which means there is no single answer.

Eventually I came across a Stack Overflow question which covered this.

In essence the rules are:

  1. For a given exponent, the error is all the same (i.e. if we are multiplying by 2^2, the smallest change for all numbers would be 4).
  2. The exponent is set by the size of the number (i.e. if the number is 6, the exponent should be 3 as that gives us the best precision).
  3. Knowing the size of the number we can work out the exponent, and given the size of the floating point type (single or double) and that exponent we can work out the smallest change.

The maths in the post is based on a function available in MATLAB that gives the epsilon (eps) value for a given number. Translated into LabVIEW, it looks like this:

calculate epsilon
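For reference, the same calculation as a rough text sketch (assuming a normal, non-denormal number; the LabVIEW VI above follows the same idea):

```python
import math

# significand_bits is 52 for a double (DBL) and 23 for a single (SGL).
def eps(x, significand_bits=52):
    exponent = math.floor(math.log2(abs(x)))
    return 2.0 ** (exponent - significand_bits)

print(eps(1.0))        # 2.22e-16, the machine epsilon
print(eps(3.5e9))      # ~4.8e-7 s for a timestamp stored as a double
print(eps(3.5e9, 23))  # ~256 s for a timestamp stored as a single
```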

With this I could see the answer to my problem: the resolution of time stored as a single is abysmal!

time precision

QMH’s Hidden Secret

Queued Message Handlers (QMH) are an extremely common design pattern in LabVIEW and sit at the heart of many of the different frameworks available for use.

At CSLUG (my local user group) we had something of a framework smackdown with Chris Roebuck and James Powell discussing a couple of frameworks and looking at some of the weaknesses of common patterns.

James’ argument highlighted one of the most common flaws of this pattern, which is clearly present in the shipping example in LabVIEW: when using a QMH you cannot guarantee that execution will happen in the order you expect, on the data you expect.

The concept seems to work for many though, with a QMH-style structure at the heart of most actor-oriented programming and driving some of the largest LabVIEW applications around. So what is the difference between success and failure?

A Thought Experiment

During James’ talk I had a bit of a personal epiphany about the QMH which involves a slightly different thought process.

This thought process starts by thinking about the QMH as a virtual machine or execution engine, not part of your application. So if this is the case, what are the parts?

QMH Virtual Machine

  1. The Instruction Set: The different cases of the case structure define the instruction set. This is all of the possible functions that the QMH can execute.
  2. The Program: This is the queue, this defines what the program executes and the order in which the instructions are executed.
  3. The Function Parameters: The data that is enqueued with the instruction.
  4. Global Memory: Any local or global variables used AND any shift registers on the loop (we will come back to this). A rough sketch of this model follows the list.
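Here is that sketch, a hypothetical one in Python rather than LabVIEW (none of the names come from any real framework):

```python
import queue

# The handler dictionary is the instruction set, the queue is the program, the
# message data are the parameters, and `state` is the global memory (the
# shift-register cluster).
def run_qmh(handlers, program):
    state = {}                              # the shift register "global" memory
    while True:
        instruction, data = program.get()   # dequeue the next "line" of the program
        if instruction == "Exit":
            break
        handlers[instruction](data, state)  # execute one case of the case structure

program = queue.Queue()
program.put(("Initialize", None))
program.put(("Exit", None))
run_qmh({"Initialize": lambda data, state: state.update(count=0)}, program)
```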

It’s All About Scope

Scope is important; we all know that when it comes to things like global variables. Scope, however, is all about context and control, and there are two scoping concerns at the centre of many issues with the QMH pattern.

The Program: In the typical pattern any code with the queue reference can at any time decide to enqueue instructions.

Global Memory, and in particular the shift registers on the loop, also gives us some global state. The shift registers are a big part of the dirty little secret. Common sense says anything on a wire is locally scoped and cannot be modified outside of the wire, but this is about context. To the QMH this is true: the shift register data is locally scoped. To a function/instruction inside the QMH it is not. In the context of a function this data is global, as other functions can modify it, i.e. you cannot guarantee the state is the same as you left it.

So how do you use the QMH safely? You should reduce the scope of at least one of these to ensure safety.

Reducing the Scope of the Queue

This is something that is beginning to emerge in a major way.

I first saw this pattern a couple of years ago in a framework called TLB’ that Norm Kirchner proposed. I have since seen at least two alternatives that follow a similar pattern (I’m not sure they are published, but you know who you are, thanks!).

The gist of the pattern is that we separate out two structural elements in the QMH:

  1. An event handler that can take external events and determine what work needs to be done in reaction to each event.
  2. A work queue, which is something like a more traditional QMH, except that only the event handler can add work items.

This could look something like this in LabVIEW:

This is vastly simplified to show the core structural elements

(If you look at tlb’ it has the same elements but reversed on the screen).
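As a very rough text analogue (hypothetical Python, not tlb’ or any published framework), the split looks something like this:

```python
import queue

# The work queue reference never leaves these two functions, so only the event
# handler decides what work gets queued and in what order.
def event_handler(event, work):
    if event == "Start Pressed":
        work.put(("Configure", {"rate_hz": 1000}))
        work.put(("Acquire", {"samples": 100}))

def work_loop(work, handlers):
    shared = {}                             # data shared on the wire between the two loops
    while not work.empty():
        instruction, params = work.get()
        handlers[instruction](params, shared)

work = queue.Queue()                        # private to this pair of loops
event_handler("Start Pressed", work)
work_loop(work, {
    "Configure": lambda p, s: s.update(rate=p["rate_hz"]),
    "Acquire":   lambda p, s: print("acquiring", p["samples"], "samples at", s["rate"], "Hz"),
})
```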

This has some distinct advantages:

  1. As long as we don’t share the original queue reference only the event structure or the QMH itself can queue work items. This gives better control over race conditions in terms of order of execution.
  2. This overcomes another distinct drawback of the shipping QMH example: data can easily be shared between the event handler and the QMH on the wire, using the same shift register structure as before, removing the need for the various hacky workarounds normally required (again credit to James Powell for this observation).

The disadvantages?

  1. Our event handling response time is now limited by the time taken to complete the work backlog; we have made our program serial again. I suspect that for the simplicity gained, this is a cost most applications can handle.
  2. This doesn’t really deal naturally with time based systems like DAQ, but does QMH really?

I really like this structure; parallel programming is hard! It removes many of the complexities that parallelism introduces for event-response type applications in LabVIEW. I expect we may see more and more of these come out over the next couple of years.

Reducing the Scope of Instruction Data

The above is a nice solution to the issue of controlling execution order in a QMH, and I believe a distinct improvement that I’ve been hoping to write about for a while. However, I feel it treats a symptom of a deeper root cause.

A robust implementation shouldn’t care about execution order. The fact that it does points to a more fundamental flaw of many QMH examples/implementations.

We should be used to this as a fundamental problem of parallel programming (the QMH execution engine really follows a concurrent programming model). If you have a function, or in this case a QMH instruction, how do you ensure it is safe to run in parallel without race conditions?

You never use data that can be modified outside of that function.

Global variables, local variables (in some instances) and Get/Set FGVs could all be modified at any time by another item, making them susceptible to race conditions.

This is all still true of a QMH function, but now we add to our race condition risks the cluster on the shift register, which could be modified by any instruction that runs between our instruction being queued and it actually executing.

I see two major solutions to avoid this:

  1. Pass all relevant data with the instruction (i.e. in the data part of the cluster); this ensures the integrity of the execution data (sketched at the end of this section).
  2. Don’t use it as a replacement for subVIs. This is common and you can see it in the shipping example below.

NI Shipping QMH

I think this is a common source of problems. Sure, a subVI encapsulates functionality and so does a case of a QMH. However, the QMH case is effectively an asynchronous call, which introduces far more complexity.

This example, with Initialize Data and Initialize Panel, is typical. This functionality could easily be encapsulated into a subVI, allowing greater control over the data and over when the functions are executed. Instead we queue them for later and can’t know what else might have been queued before them, or between them, creating a clear risk of race conditions.
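Going back to the first option, a hypothetical sketch of passing the data with the instruction rather than reading shared state at execution time:

```python
# The message carries everything the instruction needs, captured at enqueue
# time, rather than being read from shared state when it finally executes.
def handle_log(params, state):
    # params were frozen when the message was enqueued, so no instruction that
    # ran in between can have changed them
    print("logging", params["samples"], "to", params["path"])

message = ("Log", {"path": "run1.csv", "samples": [1.0, 2.0, 3.0]})
instruction, params = message
handle_log(params, state={})
```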

Credits

This was a bit of a meaty post which was heavily inspired by others. I’ve tried to highlight their ideas throughout the post, but just to say thanks:

  • The CLA Summit – A couple of presentations and lots of discussion inspired the start of this thought process. It was great; if you’re a CLA and want to improve I cannot recommend it highly enough.
  • Central South User Group (CSLUG) – A local user group which triggered my epiphany with great presentations and discussions – see above about improving!
  • Dr James Powell – Whose talk triggered said epiphany and highlighted some interesting flaws in the standard template.
  • Norm Kirchner – Who I’m going to credit as the first person I saw put forward the isolated work queue model; if someone showed it to him first, all credit to them!

 

External Video – TDD, Where Did It All Go Wrong?

Things have been a bit quiet as we have been going through a number of changes at Wiresmith Technology that have been taking up my time. In the last month we have moved into our first offices, so much time has been spent on making the dream development cave!

We have also taken on a JavaScript contractor to help with some work, which has taken up time, but he is also keeping me busy with plenty of great resources that he has been using to push his own development skills in things like test driven development, so much of my spare time has been spent with my head in books and YouTube videos.

So I don’t have anything new to say today as I’m still absorbing all of this information and I hope to spit it out over the next few months in the form of various experiments, thoughts and translation to the LabVIEW world.

In the meantime, one of the great talks I have watched recently explains why Unit Testing != Testing Units, and I’m trying to understand how best to apply this. It’s worth a watch and don’t worry, I have noted that it somewhat contradicts my last post! This is why I’m not adding anything until I have had a chance to process it properly.

Ian Cooper: TDD, where did it all go wrong from NDC Conferences on Vimeo.

Fixing a Simple Bug with Test Driven Development

So it was the CLA Summit last week, which gave me the opportunity to bang on about software testing even further; it was great to discuss it with various people and see the themes coming out of it.

My common theme was that I really like the interactive nature of the Unit Test Framework; I think it plays to LabVIEW’s strengths and allows for a nice workflow (for basic tests; using test vectors is far more long-winded than it needs to be!).

Another positive I took was from Steve Watts’ talk on debugging and immediacy. He talked about the advantages of ‘runnable code’, that is, having logic contained in subVIs that can run independently, which aids the debugging process.

So as I worked this week I came across a bug where the process of fixing it highlighted this well. I took a screencast of the process to show some of the benefits that I have found, and I think it highlights one of the most commonly cited benefits of testing: better code structure. (Go easy, I’m not as natural on camera!)

LabVIEW 2014 SP1 – Notice Anything New?

It’s Spring! Which means it’s time for clocks to change, eclipses (well, that may have been a one off) and a service pack release from National Instruments.

Although this is normally touted as a bug fix release, if you dig into the readme you’ll find they have snuck in a nice new feature.

The new feature is the Profile Buffer Allocations Window. This gives you a window into the run time performance of the LabVIEW memory manager that is otherwise hard to understand.

Previously we only had a couple of windows into the LabVIEW memory manager in the Tools -> Profile menu.

Show Buffer Allocations was the best way to understand where on the diagram memory could be allocated, but it doesn’t tell us much about what actually happens at run time.

Performance and Memory shows us the run-time memory usage at a VI level, but gives no way to track it down to the actual code execution.

profile performance and memory

But now we can see more of this at run time.

 Step By Step

Launch the tool from a VI through Tools > Profile > Profile Buffer Allocations. Below you can see an example run of the Continuous Measurement and Logging sample project.

profile buffer allocations

  1. Profiling Control — To confuse things slightly, the workflow begins at the bottom! Set the minimum threshold you want to capture (the default is 20 kB) and press Start. Press Stop once you’re satisfied you’ve captured the data you’re interested in.
  2. Buffer Filters — The bar at the top controls the display of the buffers in the table allowing you to filter by Application Instance, restrict the number of buffers in the table and adjust the units.
  3. Buffer Table — The buffer table displays the buffers that were allocated during the run, as well as their vital stats. You can double-click a buffer to be taken to its location on the diagram.
  4. Time vs Memory Graph — One of the coolest features! Selecting a buffer in the table will display a time graph of the size of that buffer during the run. I can imagine this would be great for understanding what is causing dynamic buffer allocations, by seeing the size and frequency of changes.

I think anything that gives us more of a view into some of the more closed elements of LabVIEW has got to be beneficial, so go and try it and learn something new about your code.

4 Lessons Learnt Unit Testing LabVIEW FPGA Code

After declaring my intent earlier in the year to move towards an increasingly test driven methodology, one of the first projects I aimed to use it on has been based on FPGA, which makes this less than trivial.

I was determined to find a way to make the Unit Test Framework work for this. I think it has a number of usability flaws and bugs, but if they are fixed it could be a great product and I want to make the most of it.

So can it be used for unit testing LabVIEW FPGA code? Yes but there are a few things to be aware of.

1. You Can’t Directly Create A Unit Test

Falling at the first hurdle: creating unit tests on VIs under an FPGA target isn’t an available option on the right-click menu, which is normally the easiest way to create a test.

Instead you must manually create a unit test under a supported target type, but as LabVIEW FPGA VIs are simply VIs, you can then point this at the FPGA code as long as it doesn’t contain any FPGA-only structures.

This has its frustrations though. You must manually link the VI (obviously) and rename the test. This bit is a pain, as I cannot seem to rename new tests without closing and re-opening the project. Re-arranging tests on disk can also be a frustrating task!

untitled-tests
Argh! Project Hell!

2. Test Vectors Are Your Friend

Unlike processor based code, FPGA logic generally works on data point by point and builds an internal history to perform processing. This means you may have to run the VI multiple times to test it well.

The test vectors feature of the Unit Test Framework allows you to do this, specifying inputs for several iterations of the VI so you can test its behaviour over time.

Mini-tip: Using Excel or a text editor to do large edits can save you losing your hair! (Right-Click and Open In External Editor)

3. Code Resets

Related to lesson 2: because FPGA code commonly uses feedback nodes/feedforward nodes/registers to store data, at some point that data has to be reset.

The lazy route is simply to wire inputs to the feedback nodes; however this defaults to a reset-on-first-call mode. This won’t work with test vectors, as each iteration is counted as a new call.

At a minimum you should change the feedback node behaviour to initialize on compile or load, but better practice on FPGA is to have a reset input on the VI to allow explicit control. Then you simply set this on the first element of the vector to get the expected behaviour.
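To make lessons 2 and 3 concrete, here is a hypothetical point-by-point function and a test-vector style test sketched in Python (the names are made up; the real thing would of course be a LabVIEW VI tested through the UTF):

```python
# A point-by-point function that keeps internal state between calls, plus a
# test vector with an explicit reset on the first iteration.
class RunningAverage:
    def __init__(self):
        self.total, self.count = 0.0, 0

    def process(self, sample, reset=False):
        if reset:                           # explicit reset input, not "first call"
            self.total, self.count = 0.0, 0
        self.total += sample
        self.count += 1
        return self.total / self.count

def test_running_average():
    vector   = [(2.0, True), (4.0, False), (6.0, False)]   # (sample, reset) per iteration
    expected = [2.0, 3.0, 4.0]
    dut = RunningAverage()
    for (sample, reset), exp in zip(vector, expected):
        assert dut.process(sample, reset) == exp

test_running_average()
```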

4. FPGA Specific Code or Integration Testing

There will be code that the UTF can’t test directly because it can’t run under a Windows target. In this case the best way to include it in automated testing is with a user-defined test. This allows you to create a custom VI for the test, which can include an FPGA desktop execution node to exercise the code under test. Potentially this will have to be accompanied by a custom FPGA VI as well, to be able to pipe the correct data into FIFOs or similar.

 

Hopefully this will help you get over the first few hurdles. I’m sure more will follow and I will try to document them here as I find them.

My 2015 LabVIEW Resolution

It’s a new year which means we must assume everything we did before last week was rubbish and change everything.

OK, that’s a slightly cynical view, but it does always amaze me how quickly advertising, blogs and news change at this time of year. That’s not to say that I haven’t joined in; as I ease back into work I have enjoyed looking back at 2014 as a year of great change (starting Wiresmith Technology) and now look forward to improving in 2015.

 Quality means doing it right when no one is looking – Henry Ford

One big focus for me is going to be software quality. As old projects finish and new ones start, I want to make sure I have done everything in my power to prevent old projects coming back with issues. Some problems are inevitable, but they cost time to fix and cost further time on other projects when I have to refocus.

I find it interesting that when it comes to software, bugs are considered inevitable; even as I started my business and got contracts drawn up, the templates all included clauses stating this. Whilst there is an element of truth to it (software is a complex system), I also think it can lead to a relaxed attitude towards defects that you wouldn’t find in other fields.

Software never was perfect and won’t get perfect. But is that a license to create garbage? The missing ingredient is our reluctance to quantify quality – Boris Beizer

When it comes to quality, I think the first step has to be testing. I have been using the Unit Test Framework from NI on a couple of projects now, as well as unit test frameworks in JavaScript, and I am convinced it is the way forward. By the end of the year I want to be doing something akin to TDD.

The key reason is simple: when I have been writing tests as I write code (not always strictly first, but as I am initially developing) I have found bugs. Therefore, there are only two possible outcomes if I don’t test:

  1. I discover that bug as I test the integration of that code into the main product. This could take a lot of time if it is not clear which subVI is the source of the bug and I have to go back and fix it. Even knowing the subVI, I have to get back into the same mentality as when I wrote it, which also takes time.
  2. I still miss it and the customer finds it instead. This is more costly, as it is much harder to work from the customer’s description down to a subVI, never mind the knock to the customer’s confidence in you.

To do this requires two things, the right mentality and the right tools.

For the mentality, discipline is the biggest requirement to begin with. I know the process will feel unnatural at first, but I hope to push through the initial pain to get to the rolling green pastures on the other side.

For the tools, there are really two in existence for LabVIEW: the Unit Test Framework (UTF) from NI and JKI’s VI Tester. I have tried UTF quite a bit and want to evaluate VI Tester over the next couple of months to understand its advantages.

For both of these, keep an eye out over the next few months when I hope to report back on my progress and findings with them. No doubt I will also be discussing this on the NI community as well. Check out the Unit Testing group over there if you want to learn more (and from more experienced people).

TDMS Fragmentation: Why Your TDMS Files Use Too Much Memory

Morning all!

It’s been pretty hectic around here, but nothing would have stopped me getting to the Central South LabVIEW User Group (CSLUG) this week. Since I started working on my own, these events have become even more valuable.

This time around I presented on TDMS files. Not the sexiest subject going, but by understanding how they work you can avoid some key pitfalls.

What is TDMS?

In summary, TDMS is a structured file format from National Instruments. It is used heavily in LabVIEW and also in DIAdem (which I’m a big fan of) because it allows you to save files with:

  • A similar footprint and precision to a binary file.
  • A self descriptive structure (can be loaded by LabVIEW, DIAdem, Excel or any other application with a TDMS library without knowing how it was produced).
  • The ability to be efficiently data mined through DIAdem or the LabVIEW datafinder toolkit.

If this is brand new to you I recommend you read this article first as we are going to jump in a little deeper.

Sounds Great, What are the Pitfalls?

In applications with a simple writing structure it is a fantastic format that lives up to what it promises. However, with more complex writing patterns you can end up with a problem called TDMS fragmentation. This can occur if you:

  • Write separate timestamp and data channels (or any pattern where you are writing multiple data types).
  • Write different channels to the same file alternately.
  • Write to multiple groups simultaneously.

To understand why we must look at the structure.

The TDMS Structure

TDMS Segments

The first thing to understand is that TDMS files allow streaming by using a segmented file structure (their predecessor, the TDM file, had to be written in one go). In essence, every time you call a TDMS write function a new segment is added to the file.

tdms segment contents

Each segment contains:

  1. Header or lead-in data – This describes what the segment contains, plus offset information to allow random access into the file.
  2. Meta Data – This states what channels are included in the segment, any new properties and a description of the raw data format.
  3. Raw Data – Binary data for the channels described.

So how does this impact our disk or memory footprint?

TDMS has a number of optimisations built in to try and bring the footprint as close to binary as possible.

When you write two segments which have the same channel list and meta data, the TDMS format will skip the meta data (and even the lead-in) for the second segment, meaning that the space used is only that of the raw data – effectively zero overhead compared with raw binary.

Taking the scenario where we write exactly the same channels repeatedly to the file, we only get one copy of meta data and all the rest is raw data, exactly what we want.

But consider this scenario:

tdms_alternate_writes

This is a common case where we write to the file from two places. Each TDMS write produces a segment, and because the writes alternate between the two channel layouts, the meta data changes every time and has to be re-written with every segment. This leads to a fragmented file.

TDMS Fragmentation Visualised

This will happen in any scenario where we are using multiple TDMS write nodes to a single file.

You can also see the level of fragmentation is going to depend on how much raw data is included in each write.

If we write 10,000 points each time the meta data will still be much smaller than the raw data and although fragmented, it is probably acceptable.

If however we write 1 sample each time, those green areas are going to shrink a lot and you could end up with more meta data than real data!
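A toy model (nothing to do with the real TDMS library, just illustrative numbers) shows how quickly the meta data can dominate:

```python
# Meta data is only skipped when a segment has the same channel layout as the
# segment before it.
def footprint(write_pattern, samples_per_write, meta_bytes=100, bytes_per_sample=8):
    meta, raw, previous = 0, 0, None
    for channels in write_pattern:
        if channels != previous:                  # layout changed: meta data re-written
            meta += meta_bytes * len(channels)
        raw += samples_per_write * bytes_per_sample * len(channels)
        previous = channels
    return meta, raw

# Same channels in every write: meta data is only written once.
print(footprint([("time", "data")] * 1000, samples_per_write=1))      # (200, 16000)
# Alternating single-sample writes: meta data is written with every segment.
print(footprint([("time",), ("data",)] * 1000, samples_per_write=1))  # (200000, 16000)
```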

We can measure the impact of fragmentation by looking at the size of the tdms_index files that are generated when the files are used. The index file is essentially all of the meta data extracted from the file.

tdms_fragmented_files

Here we can see that 2.tdms is exactly what we want: 1 kB of meta data for a 15 MB file. 0.tdms, however, is heavily fragmented; 12 MB of the 36 MB file is used by meta data (in this case files 0.tdms and 1.tdms actually contain exactly the same data but use some of the techniques mentioned later and demonstrated in the example linked at the end).

When working with fragmented files you will also see the memory usage of the library increase over time. This is because the TDMS library keeps a model of the file in memory, collating the meta data so that it can do things like perform random access. The more meta data, the more memory required.

(Contrary to some reports, this is not a “memory leak” in the strict sense of being unexpected; it’s entirely predictable, not that that makes you feel much better about it!)

To reduce the memory usage you either need to reduce fragmentation, or close the file and open a new one periodically.

So how do we avoid TDMS fragmentation?

  • Write a single datatype (i.e. use a single write node). This means that if we have timestamp data, we convert it to seconds first, or write an offset time (sketched after this list).
  • Write separate files and combine them later.
  • Write larger chunks of data. This will still give a fragmented file but the meta data is spread across much more raw data and the effect is not as pronounced.
  • Use TDMS buffering. There is a special property you can set in the TDMS file called “NI_minimumBufferSize”.
    When you write a numeric to this property, the library will buffer all data for a segment in memory until it has that many samples. This is the easiest solution but does mean:
    a) Additional RAM usage
    b) In the event of a crash/power loss you will lose the most recent data.
  • If disk space is the main concern, defragment the files before storage. There is a defragment function in the TDMS palette that can be used once the file is complete to reduce the size.
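As a tiny sketch of the first bullet (hypothetical data), converting timestamps to offsets keeps every write to a single data type:

```python
# Store t0 once (e.g. as a channel property) and write only the offsets, which
# are the same data type as the measurement data.
timestamps = [3.5e9, 3.5e9 + 0.001, 3.5e9 + 0.002]   # absolute seconds since 1904
t0 = timestamps[0]
offsets = [t - t0 for t in timestamps]               # approximately [0.0, 0.001, 0.002]
print(t0, offsets)
```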

Further Reading

Your homework to investigate this is an example I posted on the community, which demonstrates:

  1. the effect of fragmentation in the cases shown earlier and
  2. the effectiveness of the memory buffer in solving the problem.
  3. (it also creates the 0.tdms, 1.tdms and 2.tdms files I showed earlier)

Go take a look and keep an eye on the size of those index files!

You can also access my original slides on the CSLUG pages and read more about the internal structure in an NI whitepaper.

NI Week 2014 Highlights – New Releases

Obviously a big part of NI Week is getting to see the new releases. Whilst you can get this from the web, what I found useful was attending some of the sessions on the new products; many of the R&D team members attend, so there aren’t many questions that go unanswered!

Here are some of the new highlights I saw this year…

LabVIEW 2014

In my previous post I spoke of evolution, not revolution. On that theme, the LabVIEW 2014 release was a remarkably understated event at NI Week with few headline new features (though it was great to see the community highlighted again, with John Bergmans showing his Labsocket LabVIEW add-on in the keynote).

Having had the chance to review the release notes though, there are a few features that could be of benefit.

  • You can now select an input to a case structure and make that the case selector. This productivity gain will definitely build up, even if it’s only 20 seconds at a time.
  • New window icons to show the version and bitness of LabVIEW. A minor update but useful for those of us using multiple versions.
  • 64-bit support for Mac and Linux. I think the slow rollout of 64-bit LabVIEW is almost certainly hampering its image as a data processing platform in many fields, and this seems like a great commitment to moving it forward.

The others seem like changes you will find as you work in 2014 so let me know in the comments what you like.

What is great is having more tools rolled up into the base packages. I strongly believe a software engineering revolution is needed in LabVIEW to bring it to the next level, so putting these tools into the hands of more users is always good.

LabVIEW Professional now includes Database Connectivity, the Desktop Execution Trace Toolkit, Report Generation, Unit Testing and VI Analyzer. LabVIEW FPGA now includes the cloud compile service, which gives faster compiles than ever with the latest updates, or the compile farm toolkit if you want to keep your data on site.

VI Package Manager

One evening I was lucky enough to attend a happy hour hosted by JKI who, among other achievements, created VI Package Manager, which is by far the easiest way of sharing LabVIEW libraries.

They announced a beta release of VIPM in the browser. This allows you to search, browse and install packages from your browser, promising faster performance than doing the same in the desktop application. The bit I think will be hugely beneficial is the ability to star your favourite packages. I’m very excited about this as I hope it will make it easier to discover great packages, rather than just finding those you are already aware of.

vipm_homepage
You can browse the public repositories and find popular packages
Each package has its own page and can be installed from there (this launches the desktop app)

This is live now at vipm.jki.net. Don’t forget to leave any feedback on their ideas exchange; feedback makes moving things like this forward so much easier.

The CompactRIO Revolution Continues!

Two years ago CompactRIOs were fun as a developer, not so much if you were new to LabVIEW. They were powerful in the right hands, but seriously limited on resources compared to a desktop PC.

A few years ago the Intel i7 version was released, which offered huge increases in CPU performance but was big; embedded was a hard word to apply! (That’s not to say it wasn’t appreciated.)

Last year the first Linux RT based cRIO was released, built on the Xilinx Zynq chip; this year it feels like cRIO has made a giant leap forward with the new range.

When you see some of the specs jump like this, you can see why, as a cRIO geek, I am very excited!

| | cRIO-9025 + cRIO-9118 (top spec of previous rugged generation) | cRIO-9033 (new top dog) | Change |
|---|---|---|---|
| CPU | 800 MHz PowerPC | 1.3 GHz Dual Core Intel Atom | |
| CPU Usage on Control Benchmark | 64.1% | 10.9% | |
| RAM | 512 MB | 2 GB | +300% |
| FPGA Multipliers | 64 | 600 | +837%!! |
| FPGA LUTs | 69,120 | 162,240 | +135% |

These new controllers are no incremental upgrade; they are a leap forward. My only concern is that it will now be easier to make applications fit, which is (or was) a bit of a specialty of mine! The new generation of FPGAs drives a large part of this; the same difference is seen on the R-Series and FlexRIO ranges as well.

There is also a removable SD card slot, additional built-in I/O and the headline grabber: support for an embedded HMI.

At the session on this we got to see it a bit closer. The good news is that it uses the standard Linux graphics support, which means it should work with standard monitors and input devices rather than needing any specialist hardware.

Obviously it is going to have some impact on performance. In the benchmark I linked earlier they suggest you could see a 10% increase in CPU usage. I’m looking forward to trying this out; you could easily see a 50% increase on the old generation just by having graphs on a remote panel, so for many applications this seems acceptable.

There is also a KB detailing how to disable the built-in GPU. This suggests that there is extra jitter which becomes significant at loop rates above 5 kHz, so just keep an eye out for that.

Anyway, that got a little serious. I will be back with a final NI Week highlight later in the week, but for now I leave you with the cRIO team:

MVC In LabVIEW – Making More Modular Applications Easier

If you are reading around the internet on blogs like this, you are probably also searching for the Mecca of clean, readable, maintainable code which is also quick and easy to write.

OK, we all know that doesn’t exist, but I have been working on a new MVC library that has the potential to help.

Model View Controller, or MVC, architectures appear to be something of a staple of modern software design. The idea is that you divide your software into three parts that interact:
MVC Diagram (Model-View-Controller)
This helps to make your system more modular and easier to change; for example, you should be able to completely change the GUI (the View) without touching anything functional. There is a fairly well defined method of implementing MVC in LabVIEW using queued message handlers and user events.

I have been starting to work with Angular.js, an MVC (well, MV-whatever) framework for JavaScript. In this, the view is provided by HTML & CSS pages, and the controllers and models are written in JavaScript. To bring it together you simply reference items in the view that exist in the scope, and Angular.js does all of the binding to keep these in sync. I wanted something this simple to allow rapid and easy development of MVC applications in LabVIEW.

Luckily, whilst I was having these thoughts, the CLD summit happened here in the UK, so I proposed working through this idea as part of a code challenge section of the day and managed to find a group of (hopefully) willing programmers to explore it with.

The Model

So let’s start with the model.

JavaScript has the significant advantage of being a weakly typed language. To emulate this, and to avoid the headache of having loads of VIs to manage different datatypes, we defined a model which uses variants at its core to store the data.

This will have a performance impact, but you can’t have it all! To make things more interesting, I have often found it useful to refer into the model by name (this might be something we need later). Therefore the data points are stored in a variant dictionary so that we can recall them by name.

labview variant dictionary
Adding Data Items to a Variant Dictionary

Note I have also wrapped the data items and models in objects, which are in turn wrapped in DVRs: objects because that’s how I like to organise my code and it helps give these items a unique look, DVRs because fundamentally the model and data points should be shared, wherever you access them in the program.

Then, to access items, we update the item in the dictionary. Because we are using DVRs we actually only have to read the DVR back out, and then we can update through the data class (which exposes a data write through a property node).

Updating a Data Point (Get Variant Attribute is in the Get Item VI)

So the model is not too difficult, and to be fair there are several libraries with similar methods, as essentially this is a variable engine (the CVT library, for example).

The View

The bit that was bugging me was the data binding. There are ways it can be done, but I really didn’t want the developer to have to write any code for this; it should be as simple as naming controls in a given way, without having to add another whole process to your code.

There are two basic approaches possible:

  1. Polling: Angular.js actually uses a polling mechanism to check for value changes, and I have used a similar solution to bind shared variables to OPC UA tags. However, this involves spawning a separate process and the extra complexity of controlling it.
  2. Events: Events are highly efficient but, again, we want to avoid dealing with a parallel process. This is where event callbacks seemed to solve the problem.

Event callbacks allow you to register for an event but, rather than using an event structure to define the code, we just define a callback VI which is called every time the event fires. This happens in the background without needing a parallel process.

Despite what the help file says, these currently work without issue for front panel events as well as ActiveX or .NET events.

This allows us to bind data items to controls (by registering for the Value Change event) or to indicators (by registering for a user event which the data point fires every time it is updated).

Register Event Callback to Bind to Control
VI Which is Called on Value Change
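As a rough text sketch of the overall idea (hypothetical Python, not the packaged library): data points live in a dictionary keyed by name, and writing a value fires the callbacks bound to it, much like the user event updating a bound indicator.

```python
class DataPoint:
    def __init__(self):
        self.value = None
        self._callbacks = []                # the registered "value changed" callbacks

    def bind(self, callback):
        self._callbacks.append(callback)

    def write(self, value):
        self.value = value
        for callback in self._callbacks:    # fired on every update, no polling loop
            callback(value)

class Model:
    def __init__(self):
        self._items = {}                    # the variant-dictionary equivalent, keyed by name

    def add(self, name):
        self._items[name] = DataPoint()

    def find(self, name):
        return self._items[name]

model = Model()
model.add("temperature")
model.find("temperature").bind(lambda v: print("indicator shows", v))
model.find("temperature").write(23.1)       # prints: indicator shows 23.1
```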

The final step is to be able to bind to the front panel automatically; this is the simple bit. There is a VI that finds all of the control labels matching a {dataname} pattern and automates the binding process.

Where can I get it?

I have packaged this code on bitbucket. Please download the VI package and give it a try.

What Next?

Firstly, let me know what you think; it is much more rewarding to know people are trying it and (hopefully) enjoying it.

I have a few ideas for improvements including:

  • Working out a good error handling system on the callbacks.
  • Allowing the callback VIs to be replaced with custom versions.
  • Saving and loading the model to/from file.

You can see the plans, or add bugs or feature requests, in the issues section on Bitbucket.

