I am sitting in a hotel lobby after the European CLA Summit, blown away by the amount of talent coming through in the LabVIEW community.
The CLA Summit is an event designed for LabVIEW Architects to come together and share concepts and ideas, continuing to learn after the end of the NI courses.
It was a huge event this year, with (I believe) around 140 attendees and a good proportion of new faces.
Some of my personal favourites:
James Powell presented the changes he would make to the standard QMH template to make it less likely to hit bugs, by asking “What would a subVI do?”. An earlier form of this discussion changed the way I view this pattern and I think that everyone should see it!
The G Code Manager – A simple plugin tool that can greatly speed up managing some of the properties of VIs and classes. This is now online here and I am definitely going to be trying it out.
LabVIEW Channels – Jeff Kodosky presented these, and the more I learn about them the more intrigued I get, especially when Stephen Loftus-Mercer illustrates a potential new design pattern using them.
LabVIEW Containers from Chris Cilino – Similarly, I have seen bits of these a few times, but something resonated a lot more this time: I saw more clearly where they could save me time and will be looking to try them out.
But I have to say there was not one bad presentation. They all gave me ideas, from looking at continuous integration in LabVIEW again to thinking about our role as a community and what we need to do to bring it forward.
Naturally, I couldn’t help but get up myself. I presented my unit testing methodology, which I have previously written about, and a short presentation about a command line tool I’m working on (expect to see something here on that soon!).
Finally, I find some of these things are best summarised by Twitter!
It is so great to meet up with a community of like-minded engineers. Many of you I had not seen for a couple of years, and I met many new people (as well as adding faces to avatars).
Talking of community, thanks to Mark Balla you can download videos of many sessions. Thanks again Mark, these are a great asset for many events where people can’t attend. Remind me, and everyone else, to buy you a beer the next time the opportunity arises.
Also, Fabiola De La Cueva of Delacor recorded her excellent session on unit testing; if you’re interested in the concept, I recommend a watch.
Obviously a big part of NI Week is getting to see the new releases. Whilst you can get this from the web, what I found useful was attending some of the sessions on the new products, as many of the R&D teams attend so there aren’t many questions that can go unanswered!
Here are some of the new highlights I saw this year…
In my previous post I spoke of evolution, not revolution. On that theme, the LabVIEW 2014 release was a remarkably understated event at NI Week with few headline new features (though it was great to see the community highlighted again with John Bergmans showing his LabSocket LabVIEW add-on in the keynote).
Having had the chance to review the release notes, though, there are a few features that could be of benefit.
You can now select an input to the case structure and make that the case selector. This productivity gain will definitely build up, even if it’s only 20 seconds at a time.
New window icons to show the version and bitness of LabVIEW. A minor update but useful for those of us using multiple versions.
64-bit support for Mac and Linux. I think the slow rollout of 64-bit LabVIEW is almost certainly hampering its image as a data processing platform in many fields, and this seems like a great commitment to moving it forward.
The others seem like changes you will find as you work in 2014 so let me know in the comments what you like.
What is great is having more stuff rolled up into base packages. I strongly believe there is a software engineering revolution needed in LabVIEW to bring it to the next level so putting these tools into the hands of more users is always good.
LabVIEW Professional now includes Database Connectivity, Desktop Execution Trace Toolkit, Report Generation, Unit Testing and VI Analyzer. LabVIEW FPGA also includes the cloud compile service, which gives faster compiles than ever with the latest updates, or the compile farm toolkit if you want to keep your data on site.
VI Package Manager
One evening I was lucky enough to attend a happy hour hosted by JKI, who, among other achievements, created VI Package Manager, by far the easiest way of sharing LabVIEW libraries.
They announced a beta release of VIPM in the browser. This allows you to search, browse and install packages in your browser, promising faster performance than doing the same in the standard application. The bit I think will be hugely beneficial is the ability to star your favourite packages. I’m very excited about this as I hope it will make it easier to discover great packages rather than just finding those you are already aware of.
This is live now at vipm.jki.net. Don’t forget to leave any feedback on their ideas exchange; feedback makes moving things like this forward so much easier.
The CompactRIO Revolution Continues!
Two years ago, CompactRIOs were fun as a developer, but not so much if you were new to LabVIEW. They were powerful in the right hands but seriously limited on resources compared to a desktop PC.
A few years ago the Intel i7 version was released, which offered huge increases in CPU performance but was big; “embedded” was a hard word to apply! (That’s not to say it wasn’t appreciated.)
Last year the first Linux RT-based cRIO was released, based on the Xilinx Zynq chip; this year it feels like cRIO has made a giant leap forward with the new range.
These new controllers are no incremental upgrade; they are a leap forward. My only concern is that it will now be easier to make applications fit, which is (or was) a bit of a specialty of mine! The new generation of FPGAs drives a large part of this; the same difference is seen on the R-Series and FlexRIO ranges as well.
There is also a removable SD card slot, additional built-in I/O and the headline grabber: support for an embedded HMI.
At the session on this we got to look a bit closer. The good news is that it is using the standard Linux graphics support. This means it should support standard monitors and input devices rather than needing any specialist hardware.
Obviously it is going to have some impact on performance. In the benchmark I linked earlier they suggest you could see a 10% increase in CPU usage. I’m looking forward to trying this out; you could easily see a 50% increase on the old generation just by having graphs on a remote panel, so for many applications this seems acceptable.
There is also a KB detailing how to disable the built-in GPU. This suggests that there is extra jitter which will become significant at loop rates of >5 kHz, so just keep an eye out for that.
Anyway, that got a little serious. I will be back with a final NI Week highlight later in the week, but for now I leave you with the cRIO team:
NI Week 2014 is unfortunately over (although it means I do get to return to temperatures I seem to be better built for!). I wanted to share some of my highlights, which will hopefully get you as excited as I feel and, who knows, even persuade you to come next year! As I started writing, this got longer and longer, so for now here is part 1:
1. Buzzwords Galore
This year was certainly prime for buzzword bingo with “Internet of Things” and “Big Data” flying around.
The thing that frustrates me about these is the image they produce of some magic black art, where you must pay thousands to get into the club and understand it.
The reality is we don’t wake up one morning and build an internet of things. It is a constant evolution of current technology towards blue sky thinking. As Jim Robinson from Intel said in the Wednesday Keynote,
[The internet of things] is the overnight sensation that’s been 30 years in the making.
The great thing about NI Week is that many of the people making those steps are around and it really makes you feel like progress is happening.
For me it was particularly exciting to have a customer of mine showcasing their work in these areas.
National Grid are working to connect 135 power quality monitors to substations in the UK, built with CompactRIO, with the goal of collating this data to ensure the stability of the power grid. In its processed form, we will be capturing >11 billion measurements per year from across the UK and connecting to the monitors live to allow power engineers to keep an eye on grid conditions.
You can see more by watching the video from the keynote. (Wednesday – The Internet of Things for Jim Robinson and Wednesday – SmartGrid for National Grid)
As for big data, I found this somewhat demystified by a great talk from an external speaker from Dell Software. Unfortunately I failed to take down his name, and I’m pretty certain it isn’t who is listed (if so, you need to update your LinkedIn profile pic!). I took away a few interesting points:
Big data is really all about analytics (which, by the way, has been done for years!).
He chose to define “extreme data” as data where this processing cannot be done at rest in the database; rather, it must be done as the data is captured.
There are multiple stages to these analytics, from simply dumping to a database for mining, working through more advanced structuring and management dashboards, up to neural networks and advanced analytics for decision making. Each step reduces the data and provides more insight.
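To make the difference between data at rest and data in motion concrete, here is a minimal sketch (my own illustration, not from the talk) of a streaming statistic: the summary is updated as each sample arrives, so nothing has to be mined back out of a database afterwards.

```python
# Minimal illustration of analytics on data "in motion": maintain a
# running mean and peak as each sample arrives, rather than storing
# everything and analysing it at rest in a database later.
class RunningStats:
    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0
        self.peak = float("-inf")

    def update(self, sample: float) -> None:
        # Incremental mean: no need to hold the full history in memory.
        self.count += 1
        self.mean += (sample - self.mean) / self.count
        self.peak = max(self.peak, sample)

stats = RunningStats()
for sample in [230.1, 229.8, 231.2, 230.5]:  # stand-in for a live acquisition
    stats.update(sample)
print(stats.count, stats.mean, stats.peak)
```

Each update reduces the raw stream to a small summary, which is exactly the "each step reduces the data" idea above.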
Next year I have learnt I must take more pictures to make describing sessions easier!
As a result of NI Week 2014, I finally have a better sense of what these mean to me and am excited that we are all part of this revolution evolution.
For Part 2 I will talk about some of the new products I am excited about…
I am very excited to have made it to Austin, Texas this week for the annual NI Week conference. Hosted by National Instruments, it represents a great opportunity to get together with like-minded LabVIEW developers and users of other NI technologies. It also heralds the release of many new products, including LabVIEW 2014. As well as this, I’m very excited that one of our projects will be on the Wednesday keynote stage.
No doubt I will be posting on here over the next week or so about some of the most exciting things that come out of the conference. In the meantime, this is one of the few times I actually pick up Twitter, so feel free to follow @JamesMc86 and/or @WiresmithTech, and if you’re at the conference feel free to come say hello; I’m best described as the hairy one!
I gave a presentation at the CLD summit last week talking about some of the design considerations and a few ideas for techniques that can help when it comes to high throughput applications. If you click the cog you can also open the speaker notes which elaborate on some of the points.
The problem with long term waveform data storage
This was a key item that I didn’t manage to get to. Currently there are two prevalent techniques I would look at:
Relational Databases
These are your SQL-based databases, whether MySQL, MS SQL Server or similar.
The idea in these is that all of the data is stored in tables. The columns available are fixed in the design of the database and you add data by filling rows. The relational element comes from the fact that each row has a unique identifier which can be referenced in other tables.
The challenge with waveform data is understanding how this translates to a table.
You could store the entire waveform in a single field as a binary blob, but this limits the searchability (which I think is a word!).
Alternatively you must create a new row for every data point, and each row would need a timestamp, seriously increasing the storage required and reducing the performance of searches. This is before you get into working out the correct design for optimum performance.
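To make that trade-off concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the table and column names are my own illustration, not a recommended schema.

```python
import sqlite3

conn = sqlite3.connect("waveforms.db")

# Option 1: one row per waveform, samples stored as a binary blob.
# Compact and quick to write, but SQL can only search the metadata
# columns - the samples themselves are opaque to the database.
conn.execute("""
    CREATE TABLE IF NOT EXISTS waveform_blobs (
        id      INTEGER PRIMARY KEY,
        channel TEXT,
        t0      TEXT,  -- start timestamp of the waveform
        dt      REAL,  -- sample interval in seconds
        samples BLOB   -- raw sample data
    )
""")

# Option 2: one row per data point. Every sample is now searchable,
# but each one carries a row id and timestamp, inflating storage and
# slowing queries at high sample rates.
conn.execute("""
    CREATE TABLE IF NOT EXISTS waveform_points (
        id        INTEGER PRIMARY KEY,
        channel   TEXT,
        timestamp TEXT,
        value     REAL
    )
""")
conn.commit()
```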
Datafinder
Datafinder is National Instruments’ solution to this. Datafinder is a file indexer: you store all of the files that you want to make searchable in a common place, which Datafinder indexes.
Through DataPlugins, multiple file types can be supported, but they all get translated to the TDMS-style structure to make the properties searchable. You then use the toolkit for LabVIEW or DIAdem to mine through the data.
This has some appealing characteristics. It is ridiculously simple to set up compared to a database: just put your files in the right place. This also makes it quite flexible, being able to take data from different sources and still keep it easily searchable.
The main issue with this is that it is file based: if the data is continuous across files and the section you’re interested in spans files, then you have to code around this to load from multiple files.
However, I am wondering whether there may be a new kid on the block that could overcome some of these issues:
NoSQL and MongoDB
So I’m pretty sure every credible writer starts with a Wikipedia quote:
A NoSQL or Not Only SQL database provides a mechanism for storage and retrieval of data that is modelled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, horizontal scaling and finer control over availability.
In short NoSQL is one of those wonderful buzzwords which doesn’t mean anything specific, just different!
I was quite intrigued though by some of the different data models. The one that stands out is the document model used by MongoDB among others.
This means that instead of defining tables you add data as documents. These documents contain fields which can be indexed, but the model is quite flexible: different documents don’t have to exactly match in structure. At the very least this will match the structure of Datafinder very nicely and could be a viable alternative where the file-based management is unappealing.
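As a rough sketch of how a waveform chunk might map onto a document (using pymongo; the database, collection and field names here are hypothetical, not from any real system):

```python
import struct
from datetime import datetime, timezone

from bson.binary import Binary
from pymongo import ASCENDING, MongoClient

# Hypothetical database and collection names.
chunks = MongoClient("localhost", 27017)["measurements"]["waveform_chunks"]

# Each waveform chunk becomes one document: searchable metadata fields
# plus the raw samples as an opaque binary payload. There is no fixed
# schema, so documents from other sources can carry different fields.
samples = [0.0, 0.1, 0.2, 0.15]
chunks.insert_one({
    "channel": "PQ-Monitor-01",
    "t0": datetime(2014, 8, 6, 12, 0, tzinfo=timezone.utc),
    "dt": 1e-3,
    "n_samples": len(samples),
    "samples": Binary(struct.pack(f"<{len(samples)}d", *samples)),
})

# Index the metadata so searches stay fast as the collection grows.
chunks.create_index([("channel", ASCENDING), ("t0", ASCENDING)])
```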
My next step, though, is to investigate the capabilities of spreading data across documents. Most databases allow you to define database-side functions for how to retrieve data. This is typically high performance and allows the logic to be used from any language that has a database driver. I’m planning to investigate whether this will allow for a structure that can retrieve continuous data out of files and make some of our “big data” challenges go away.
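As a first approximation of that idea, here is how the chunks from the sketch above could be reassembled into one continuous waveform on the client side. This is a simplification: a real version would also need to catch a chunk that starts just before the requested window, and would ideally push this logic server-side.

```python
import struct
from datetime import datetime, timezone

from pymongo import MongoClient

# Same hypothetical collection as the sketch above.
chunks = MongoClient("localhost", 27017)["measurements"]["waveform_chunks"]

t_start = datetime(2014, 8, 6, 12, 0, tzinfo=timezone.utc)
t_end = datetime(2014, 8, 6, 12, 5, tzinfo=timezone.utc)

# Fetch every chunk starting inside the window, oldest first.
cursor = chunks.find(
    {"channel": "PQ-Monitor-01", "t0": {"$gte": t_start, "$lt": t_end}}
).sort("t0", 1)

# Concatenate the binary payloads back into one continuous sample list,
# hiding the chunk boundaries from the caller - the work that
# Datafinder's file boundaries currently force you to code by hand.
samples = []
for doc in cursor:
    samples.extend(struct.unpack(f"<{doc['n_samples']}d", doc["samples"]))
```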
For the last two days I have been at the inaugural UK CLD Summit at the NI offices in Newbury (conveniently a 15-minute walk away, so no excuses not to attend!).
It was an excellent event where I got to meet developers that I hadn’t met before and catch up with those I had. My big takeaways were:
Working through an MVC framework idea with a group of developers in the developer jam, inspired by multiple presentations on the subject. You can find the results on Bitbucket; they are still rough, but watch this space: this is a project that I want to continue after the event with some community backing.
I’m convinced about the idea of working with open document formats to reduce dependencies, which Steve discusses in his blog post and covered in the Central South User Group.
Monkeys and teddy bears will help my business!
I need to get better at talking to people between these events.
What’s more, it has continued to convince me that as a community we achieve so much more together than apart. Over various discussions the topic came up of how much we build from the ground up when, between us, many people have all of the pieces. It must be possible to increase productivity in LabVIEW if the community can bridge some of the gaps or just build on top of the existing offering from NI. I feel some rambling coming, which I will save for another post for now!