
MVC In LabVIEW – Making More Modular Applications Easier

If you are reading around the internet on blogs like this, you are probably also searching for the Mecca of clean, readable, maintainable code that is also quick and easy to write.

OK, we all know that doesn’t exist but I have been working on a new MVC library that has the potential to help.

Model View Controller (MVC) architectures appear to be something of a staple of modern software design. The idea is that you divide your software into three interacting parts:
MVC Diagram (Model-View-Controller)
This helps to make your system more modular and easier to change; for example, you should be able to completely change the GUI (the view) without touching anything functional. There is a fairly well-defined method of implementing MVC in LabVIEW using queued message handlers and user events.

I have been starting to work with Angular.js, an MVC (well, MVWhatever) framework for JavaScript. In Angular.js the view is provided by HTML and CSS pages, while the controllers and models are written in JavaScript. To bring it together you simply reference items in the view that exist in the scope, and Angular.js does all of the binding to keep them in sync. I wanted something this simple to allow rapid and easy development of MVC applications in LabVIEW.

Luckily, while I was having these thoughts the CLD summit happened here in the UK, so I proposed working through this idea as part of the code challenge section of the day and managed to find a group of (hopefully) willing programmers to explore it with.

The Model

So let's start with the model.

JavaScript has the significant advantage of being a weakly typed language. To emulate this, and to avoid the headache of having loads of VIs to manage different datatypes, we defined a model which uses variants at its core to store the data.

This will have a performance impact, but you can't have it all! To make things more interesting, I have often found it useful to be able to refer into the model by name (this might be something we need later). These data points are therefore stored in a variant dictionary so that we can recall them by name.

Adding Data Items to a Variant Dictionary

Note that I have also wrapped the data items and the model in objects, which are in turn held in DVRs: objects because that's how I like to organise my code and it helps give these items a distinct look, and DVRs because fundamentally the model and data points should be shared, wherever you access them in the program.

Then, to update an item, we could write it back into the dictionary, but because we are using DVRs we actually only have to read the DVR back out and we can then update through the data class (which exposes a data write through a property node).

Updating a Data Point
(Get Variant Attribute is in the Get Item VI)
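LabVIEW block diagrams do not paste into a blog post, so here is a rough TypeScript sketch of the structure described above. The names (DataPoint, Model, addItem, getItem) are invented for illustration and are not the VIs in the package; the point is just a type-erased value (the variant) held behind a shared reference (the DVR) and recalled by name from a dictionary.

```typescript
// Text-language analogy of the model, not the LabVIEW implementation.

class DataPoint {
  // `unknown` plays the role of the LabVIEW variant: any type fits.
  private value: unknown;

  constructor(initial: unknown) {
    this.value = initial;
  }

  // Property-node style read/write access to the data.
  get data(): unknown {
    return this.value;
  }
  set data(v: unknown) {
    this.value = v;
  }
}

class Model {
  // Name -> data point, standing in for the variant dictionary.
  private items = new Map<string, DataPoint>();

  addItem(name: string, initial: unknown): DataPoint {
    const dp = new DataPoint(initial);
    this.items.set(name, dp);
    return dp;
  }

  // Data points behave like DVRs here: they are shared by reference, so
  // reading one out once is enough and writes are seen everywhere.
  getItem(name: string): DataPoint | undefined {
    return this.items.get(name);
  }
}

// Usage
const model = new Model();
model.addItem("setpoint", 0);
model.getItem("setpoint")!.data = 42.5;       // update through the shared reference
console.log(model.getItem("setpoint")!.data); // 42.5 wherever it is read
```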

So the model is not too difficult, and to be fair there are several libraries and similar methods out there, as essentially this is a variable engine (the CVT library, for example).

The View

The bit that was bugging me was the data binding. There are ways it can be done, but I really didn't want the developer to have to write any code for this; it should be as simple as naming controls in a given way, without having to add another whole process to your code.

There are two basic approaches possible:

  1. Polling: Angular.js actually uses a polling mechanism to check for value changes, and I have used a similar solution to bind shared variables to OPC UA tags. This involves spawning a separate process, with the extra complexity of controlling it.
  2. Events: Events are highly efficient but, again, we want to avoid dealing with a parallel process. This is where event callbacks seemed to solve the problem.

Event callbacks allow you to register for an event but, rather than using an event structure to handle it, you define a callback VI which is called every time the event fires. This happens in the background without needing a parallel process.

Despite what the help file says, these currently work without issue for front panel events as well as ActiveX or .NET events.

This allows us to bind data items to controls (by registering for the value change event) or to indicators (by registering for a user event which the data point fires every time it is updated).

Register Event Callback to Bind to Control
VI Which is Called on Value Change
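To make the callback idea concrete, here is a small TypeScript analogy (again with invented names, not the LabVIEW code): the data point keeps a list of registered callbacks and fires them on every write, so a bound indicator stays in sync without a polling loop or a dedicated event-handling process.

```typescript
// Text-language analogy of event-callback binding.

type Callback = (value: unknown) => void;

class BoundDataPoint {
  private value: unknown;
  private callbacks: Callback[] = [];

  constructor(initial: unknown) {
    this.value = initial;
  }

  // Equivalent of registering an event callback VI for this data point.
  onChange(cb: Callback): void {
    this.callbacks.push(cb);
  }

  get data(): unknown {
    return this.value;
  }
  set data(v: unknown) {
    this.value = v;
    // The "user event" fired on every update: each registered callback runs
    // as part of the write, with no separate handler loop to manage.
    this.callbacks.forEach((cb) => cb(v));
  }
}

// Usage: bind a stand-in indicator to the data point.
const temperature = new BoundDataPoint(20.0);
temperature.onChange((v) => console.log(`indicator now shows ${v}`));
temperature.data = 21.5; // prints "indicator now shows 21.5"
```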

The final step is to be able to bind to the front panel automatically, and this is the simple bit. There is a VI that finds all of the control labels matching the pattern {dataname} and automates the binding process.
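A sketch of how that automatic binding could look in the same TypeScript analogy; autoBind, Control and the exact {dataname} matching below are illustrative assumptions rather than the package's actual API.

```typescript
// Text-language analogy of automatic binding by control label.

type Control = { label: string; update: (value: unknown) => void };
type Bindable = { onChange: (cb: (value: unknown) => void) => void };
type ModelLookup = (name: string) => Bindable | undefined;

function autoBind(controls: Control[], lookup: ModelLookup): void {
  const pattern = /^\{(.+)\}$/; // labels of the form {dataname}
  for (const control of controls) {
    const match = control.label.match(pattern);
    if (!match) continue; // controls without the naming pattern are left alone
    const dataPoint = lookup(match[1]);
    if (dataPoint) {
      // Push every model update straight into the matching control.
      dataPoint.onChange((v) => control.update(v));
    }
  }
}

// Usage with one fake control bound to a minimal data point.
let notify: (v: unknown) => void = () => {};
const speed: Bindable = { onChange: (cb) => { notify = cb; } };
const panel: Control[] = [
  { label: "{speed}", update: (v) => console.log("speed =", v) },
];
autoBind(panel, (name) => (name === "speed" ? speed : undefined));
notify(88); // prints "speed = 88"
```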

Where can I get it?

I have packaged this code on Bitbucket. Please download the VI package and give it a try.

What Next?

Firstly, let me know what you think; it makes this much more fruitful to know people are trying it and (hopefully) enjoying it.

I have a few ideas for improvements including:

  • Working out a good error handling system on the callbacks.
  • Allowing the callback VIs to be replaced with custom versions.
  • Saving and loading the model to/from file.

You can see the plans, or add bugs and feature requests, in the issues section on Bitbucket.

What to Expect for the CLED Exam

Last week I got my student head on again to take the Certified LabVIEW Embedded Systems Developer (CLED) exams, and I want to let you know what to expect if you are thinking about taking the exam, or if you are wondering whether it is for you.

The What and the Why

NI introduced the CLED certification last year as a pilot exam in the US, and it is now available in Europe (and possibly other regions). It's a two-part exam that tests your knowledge of using RT and FPGA technologies, with a massive emphasis on CompactRIO embedded applications.

As a self-proclaimed LabVIEW embedded developer I am very happy this has been introduced (as I can remove the “self-proclaimed”!).

There are a huge number of developers using LabVIEW under Windows for test and measurement applications who are very skilled and may well be CLDs or CLAs, but this doesn't mean too much when it comes to RT and FPGA. If you don't understand the differences and subtleties of these platforms you can get yourself in a mess.

The CLED allows embedded LabVIEW developers to differentiate themselves and differentiation means your next pay rise/promotion/job.

So what can you expect?

The Pre-Requisites

Before you even open the prep kit, make sure you meet the prerequisites, which are more involved for this exam than many. As I will explain, don't be frustrated by this: if you don't have these prerequisites you may well waste your time going through the process only to find you aren't ready yet.

From the NI site:

Prerequisites
– Valid Certified LabVIEW Developer (CLD) or Certified LabVIEW Architect (CLA) certification
– Completion of LabVIEW Real-Time 2: Architecting Embedded Systems or RIO Integrator’s Training course
– Completion of the LabVIEW FPGA training course 

Recommended Experience Level
– 18 to 24 months of experience in developing medium- to large-scale LabVIEW control and monitoring applications with NI CompactRIO, NI Single-Board RIO, and/or NI R Series hardware 

If you don’t meet these then there is your first task!

Part 1: Multiple Choice

Your first test is a one-hour multiple choice exam à la CLAD. You must pass this with a 70% result before attempting part 2.

It tests the theory of developing for RT and FPGA on CompactRIO, all the way from deciding how to configure the hardware to how loop priorities work. I did find it a little more conceptual than the CLAD: fewer "what is this function" questions and fewer triple negatives to decode in the answers.

Because of my time at NI I didn't find this element too stressful; having taught the courses for a few years you get familiar with the nuances of these things. There are, however, some resources I found very useful:

  • CLED Practice Exam: It goes without saying that the sample exam is useful. Also there is a link to a video of a prep presentation at NI Week last year.
  • cRIO Developer's Guide: I would suggest this is the bible for part 1. I think that probably every question could be answered from it (though I haven't checked).

I think the key challenge here is that you must make sure you have broad knowledge of cRIO. For example, there are questions in the sample exam on selecting whether to run FPGA mode or scan mode (or hybrid), which I expect many developers may not have come across too often. There are also questions on more niche topics such as FPGA optimisation and advanced shared variable settings, and there will certainly be questions around functions that cause or prevent dynamic memory allocation.

Part 2: Practical Exam

Yep, another four-hour practical exam, but somehow it is different again!

I must admit I was a little dismissive of this initially. Having done the CLD and CLA, as well as having trained many people to take the CLD exam through CLD prep days, I was fairly confident I knew how to tackle these exams. That lasted up until about an hour into my practice exam, when I realised the emphasis changes again.

The practical exam is an application development problem which you deploy to a single-board RIO connected to a simulator. You must write the host code (front panel provided), the RT code and the FPGA code according to the requirements provided; these are closer in scope to the CLD requirements than to the heavier CLA ones. The clue to the key difference is the marking scheme:

  • 50% Functionality
  • 30% Design
  • 15% Style
  • 5% Documentation

Compared to the other exams, functionality is massively more important: you could have rubbish style and documentation (although I hope you wouldn't at this stage) and still get 80% on the exam.

My Experience

Misjudging this balance was my first error: in my practice exam I was working to get my style to the level I would for the CLD exam, but there simply isn't time.

In fact time is incredibly tight; I think with another 30-60 minutes I might have had time to get the functionality done and tested. Having also looked at the sample solution, the attitude I took was to treat it as a well-designed prototype, by which I mean things like:

  • Design is important. Those 30% should be easy marks to pick up, as even just putting functionality in the right places with the right communication methods should win marks quickly.
  • Style is less important. With 4 hours to create an end-to-end solution something has to give, and this is it. Don't go back to pre-CLD habits: I still used non-default icons, modularised and commented where appropriate, but there isn't time to design nice libraries and nitpick.
  • Take every requirement shortcut. You can see this in the sample exam solution: if they don't explicitly ask for it, use whatever route makes your life easier. Obviously in real life there is some value in predicting what a user might ask for next and accounting for it, but that will never happen with the exam!

I think there are probably two different approaches to the exam:

The Horizontal Approach

horizontal approach CLED

This would mean completing the FPGA functionality, then building the RT on top of that and then the host.

There is some advantage in not having to switch context too often, which can be inefficient. You can also get your FPGA compile out of the way early, although I found mine took less than 10 minutes anyway.

It also means you hit some of the key aspects earlier. I think every exam is probably going to need a watchdog, and maybe safe states and recovery ability, and this is all lower-level functionality which you can get out of the way first.

My concern with this approach is that its success could depend on how the exam is marked. Many features need functionality at every level, which may make this approach riskier on time. You could probably mitigate this by laying out your design first (and getting those design points), so that even if you don't get the UI bit done it is clear how you would.

The Vertical Approach

vertical approach CLED

This is what I opted for. I took each section of the requirements document and implemented the full requirement from top to bottom before moving on to the next section.

This has the advantage that you can fully tick off requirements earlier, it is easier to test that things are working one step at a time, and I just find it a way of coding that builds confidence as you move forward. Don't think it worked flawlessly, though!

The first problem is those design points. I laid out much of the design at the start anyway so that those points were covered; this also helps move the approach forward, so you're not having to make higher-level design decisions as you go. I do wish I had jotted these down so that I could put the paper in the envelope to show the design (presuming these get looked at).

The main issue with this is deciding how to prioritise the features. I had one success with this and one failure.

The success was a feature that was very easy to implement and fitted my design well. Because of this I set it at a low priority and never implemented it. The reason I am happy with this is that the design for it was (I hope) obvious, I documented the changes needed for it, and I freed up my time for other tasks where this would have been harder. Hopefully I still got some marks for my intentions.

The failure was some of the core features, such as my watchdog. There was one chunk of the application that I worked on from start to finish (which took 1-1.5 hours, so a big piece). This didn't leave me much time when I moved on to the fault handling, including the watchdog. I got it in, but only just, and it was not well tested. I should have split the larger task, again documenting where the advanced features would go, and moved on to the more core watchdog functionality.

Did I Pass?

I don't know. Obviously I passed part 1 in order to attempt part 2; now I'm in the waiting game for the results. If I had to guess, I would put my chances of passing at about 75%, but let's see. I was much happier with the real exam than the practice, but I definitely ran out of time, which will affect my functionality marks, and I could have done with making my design decisions more obvious with documentation. We will see!

If you're attempting the exam, good luck, and I hope this was useful. Once you have tried it or got your results, please comment below on how you did and any more tips you may have; it would be great to hear what approaches other people took.

 

