External Video – TDD, Where Did It All Go Wrong?

Things have been a bit quiet as we have been going through a number of changes at Wiresmith Technology that have been taking up my time. In the last month we have moved into our first offices, so much time has been spent on making the dream development cave!

We have also taken on a JavaScript contractor to help with some work, which has taken up time, but he is also keeping me busy with plenty of great resources he has been using to push his own development skills in areas like test-driven development. Much of my spare time has been spent with my head in books and YouTube videos.

So I don’t have anything new to say today as I’m still absorbing all of this information. I hope to spit it out over the next few months in the form of various experiments, thoughts and translations to the LabVIEW world.

In the meantime, one of the great talks I have watched recently explains why Unit Testing != Testing Units, and I’m trying to understand how best to apply this. It’s worth a watch and don’t worry, I have noted that it somewhat contradicts my last post! This is why I’m not adding anything until I have had a chance to process it properly.

Ian Cooper: TDD, where did it all go wrong from NDC Conferences on Vimeo.

Fixing a Simple Bug with Test Driven Development

So the CLA Summit last week gave me the opportunity to bang on about software testing even further, and it was great to discuss it with various people and see the themes coming out of it.

My common theme was that I really like the interactive nature of the Unit Test Framework; I think it plays to LabVIEW’s strengths and allows for a nice workflow (for basic tests, that is; using test vectors is far more long-winded than it needs to be!).

Another positive I took was from Steve Watts’ talk on debugging and immediacy. He talked about the advantages of ‘runnable code’, that is, having logic contained in subVIs that can run independently, which aids the debugging process.

So as I worked this week I came across a bug which, in the process of fixing, highlighted this well. I took a screencast of the process to show some of the benefits I have found, and I think it highlights one of the most commonly cited benefits of testing: better code structure. (Go easy, I’m not as natural on camera!)

Chrome Won't Load LabVIEW Real-Time Web Configuration Pages

I spent a week at the CLA summit last week (more to follow) and got a nasty shock on my return.

On attempting to log in to configure a CompactRIO system I was faced with a request to install Silverlight:

Install Silverlight

 

As I have used this every day for a long time, I somehow don’t believe this.

It turns out that Google has removed support for a cryptic API called NPAPI that many plugins, including Silverlight, require to run.

This finally explains why I have been seeing more security warnings about running Silverlight pages. It appears this has been coming for some time, in fact over a year! Wish someone had told me.

Fear not, there is a workaround, but it will only work until September.

  1. Navigate to chrome://flags/#enable-npapi in the browser
  2. Click Enable under Enable NPAPI
  3. A box will appear at the bottom of the screen with a Relaunch Now button. Press this to relaunch Chrome, ensuring you back up any work in the browser first
  4. And you're back to work fixing CompactRIOs.

And in September…

I guess if there is no change from the NI side it’s a move to Firefox or IE!

Strangers and Coding – Can Rock Help?

A bit of a break from the LabVIEW technical content but hopefully something many of you may find interesting.

I’ve been thinking a lot lately about the process of writing software for someone. Both as someone that others look to and as someone currently looking for outside assistance on some JavaScript code, it is undeniable that this is often a scary process for everyone involved. Can rock help?

Taking on or outsourcing a project involves large unknowns. Unknowns generally require trust in your partners, but that has to be built first; it’s a chicken-and-egg situation.

I have recently become a big fan of listening to podcasts; one of my favourites is NPR’s TED Radio Hour. As someone who loves TED talks but never gets around to watching them, it is a great way to consume them.

This week’s theme was all about playing games. The interesting section for me was the first, about strangers.

Research shows that just putting two strangers in a room together, with no tasks, just in each other’s presence, raises their stress levels.

In fact the study goes on to show that this stress makes it very hard to empathise with the other person. In a related experiment on pain, participants tended to believe their pain was worse than the stranger’s, even when inflicted in the same way (holding your hand in ice water; no developers were harmed in the making of this article!).

This reminded me immediately of the relationships in taking on a new project, everyone is on edge being forced to leap into the unknown with a stranger.

The solution? In this case it was shown that 15 minutes of playing Rock Band together eliminates this stress, causing participants to empathise with each other as much as with a good friend.

So as part of our on-boarding process, bring your singing voice! Not really, but it has certainly set my mind to work on what we can do to remove the unknowns and make the process easier for everyone involved.

LabVIEW 2014 SP1 – Notice Anything New?

It’s Spring! Which means it’s time for clocks to change, eclipses (well, that may have been a one-off) and a service pack release from National Instruments.

Although this is normally touted as a bug-fix release, if you dig into the readme you'll find they have snuck in a nice new feature.

The new feature is the Profile Buffer Allocations Window. This gives you a window into the run time performance of the LabVIEW memory manager that is otherwise hard to understand.

Previously we only had a couple of windows into the LabVIEW memory manager in the Tools > Profile menu.

Show Buffer Allocations was the best way to understand where on the diagram memory could be allocated, but it doesn't tell us much about what actually happens at run time.

Performance and Memory shows us the run-time memory usage at a VI level but gives no way to track it down to the actual code execution.

profile performance and memory

But now we can see more of this at run time.

Step By Step

Launch the tool from a VI through Tools > Profile > Profile Buffer Allocations. Below you can see an example run of the Continuous Measurement and Logging sample project.

profile buffer allocations

  1. Profiling Control — To confuse things slightly, the workflow begins at the bottom! Set the minimum threshold you want to capture (the default is 20 kB) and press Start. Press Stop once you're satisfied you've captured the data you're interested in.
  2. Buffer Filters — The bar at the top controls the display of the buffers in the table, allowing you to filter by Application Instance, restrict the number of buffers in the table and adjust the units.
  3. Buffer Table — The buffer table displays the buffers that were allocated during the run as well as their vital stats. You can double-click a buffer to be taken to its location on the diagram.
  4. Time vs Memory Graph — One of the coolest features! Selecting a buffer in the table will display a time graph of the size of that buffer during the run. I can imagine this being great for understanding what is causing dynamic buffer allocations by seeing the size and frequency of changes (see the analogy sketched after this list).
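The usual culprit behind a constantly changing buffer is an array that grows on every iteration instead of being allocated once up front. As a rough analogy (sketched in Python/NumPy, since LabVIEW diagrams don't paste into text), this is the pattern the graph makes visible:

```python
# Illustrative analogy, not LabVIEW code: a buffer that grows on every
# iteration versus one allocated once up front.
import numpy as np

# Growing buffer: each append may reallocate and copy the whole array,
# so the buffer's size changes on every iteration.
data = np.array([])
for i in range(1000):
    data = np.append(data, i)

# Pre-allocated buffer: one allocation, then in-place writes.
# The buffer's size stays constant for the whole run.
data = np.empty(1000)
for i in range(1000):
    data[i] = i
```

In LabVIEW terms this is the difference between Build Array inside a loop and initialising the array once and using Replace Array Subset.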

I think anything that gives us more of a view into some of the more closed elements of LabVIEW has got to be beneficial, so go and try it and learn something new about your code.

4 Lessons Learnt Unit Testing LabVIEW FPGA Code

After declaring my intent earlier in the year to move towards an increasingly test-driven methodology, one of the first projects I aimed to use it on has been based on FPGA, which makes this less than trivial.

I was determined to find a way to make the Unit Test Framework work for this. I think it has a number of usability flaws and bugs, but if they are fixed it could be a great product and I want to make the most of it.

So can it be used for unit testing LabVIEW FPGA code? Yes but there are a few things to be aware of.

1. You Can’t Directly Create A Unit Test

Falling at the first hurdle: creating unit tests on VIs under an FPGA target isn't an available option on the right-click menu, which is normally the easiest way to create a test.

Instead you must manually create a unit test under a supported target type, but as LabVIEW FPGA VIs are simply VIs, you can then point this at the FPGA code as long as it doesn't contain any FPGA-only structures.

This has its frustrations though. You must manually link the VI (obviously) and rename the test. This bit is a pain as I cannot seem to rename new tests without closing and re-opening the project. Re-arranging tests on disk can also be a frustrating task!

Argh! Project Hell!

2. Test Vectors Are Your Friend

Unlike processor-based code, FPGA logic generally works on data point by point and builds an internal history to perform processing. This means you may have to run the VI multiple times to test it well.

The test vectors function of the Unit Test Framework allows you to do this, specifying inputs for several iterations of the VI so you can test it over time.

Mini-tip: Using Excel or a text editor to do large edits can save you losing your hair! (Right-Click and Open In External Editor)

3. Code Resets

Related to point 2: because FPGA code commonly uses feedback nodes, feedforward nodes or registers to store data, at some point that state has to be reset.

The lazy route to this is to simply wire inputs to the feedback nodes; however, this defaults to a reset-on-first-call mode. This won't work with test vectors as each iteration is counted as a new call.

At a minimum you should change the feedback node behaviour to initialise on compile or load, but better practice on FPGA is to have a reset input to the VI to allow explicit control of this. Then you simply set this on the first element of the vector to get the expected behaviour, as sketched below.
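Since LabVIEW diagrams don't translate to text, here is a minimal sketch in Python of the pattern points 2 and 3 describe: a point-by-point unit holding internal history (standing in for a feedback node) with an explicit reset input, exercised over a test vector. The names are illustrative; nothing here is the UTF API.

```python
import unittest

class RunningSum:
    """Stands in for a subVI whose feedback node carries state between calls."""
    def __init__(self):
        self.acc = 0

    def call(self, x, reset=False):
        # Explicit reset input, rather than relying on reset-on-first-call
        if reset:
            self.acc = 0
        self.acc += x
        return self.acc

class TestRunningSum(unittest.TestCase):
    def test_vector(self):
        # One row per iteration of the VI: (reset, input, expected output).
        # Reset is asserted on the first element, as described above.
        vector = [(True, 1, 1), (False, 2, 3), (False, 3, 6), (True, 5, 5)]
        dut = RunningSum()
        for reset, x, expected in vector:
            self.assertEqual(dut.call(x, reset), expected)

if __name__ == "__main__":
    unittest.main()
```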

4. FPGA Specific Code or Integration Testing

There will be code that the UTF can't test directly if it can't run under a Windows target. In this case the best way to include it in automated testing is with a user-defined test. This allows you to create a custom VI for the test, which can include an FPGA Desktop Execution Node to allow for testing the VI. Potentially this will have to be accompanied by a custom FPGA VI as well, to be able to pipe the correct data into FIFOs or similar.

 

Hopefully this will help you get over the first few hurdles, I’m sure more will follow and I will try to update them here as I find them.

Applying Joel’s 12 Steps To Better Software to LabVIEW


Continuing the theme from my last post, I wanted to talk about what goes on around writing great software, other than the software itself. How can you improve your coding process?

I have seen in a couple of places references to Joel’s 12 Steps to Better Software and had to look a little closer.

This consists of 12 questions and you should be aiming to answer yes to 10 or more of these.

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

Stop and have a think about these with your team, what do you score?

Now, what if I told you this article is 15 years old! I find this interesting for two reasons:

  • Some of these points seem quite out of date, with newer agile methods of software development building on many of them. There is a good article which reviews these points here.
  • I still know lots of developers that haven’t adopted many of these practices.

I must confess there are differences for most developers in the LabVIEW way of working. While many techniques are aimed towards 4-8 developers working on software for hundreds/thousands of users, many LabVIEW applications are developed by individuals or small teams for internal projects where you are probably already speaking to all of the end users.

Can we still infer great tips for those teams? Let's look at what we can do with LabVIEW to add some more yeses.

1. Do you use source control?

If you aren’t using this now, stop what you’re doing and go learn about it!

There are so many resources and technologies out there. SVN is extremely popular in the LabVIEW community, although I would probably recommend trying Git or Mercurial as there are more services available to use now.

If you're interested in these, try the Atlassian SourceTree client, which solved my previous concerns about a good Windows client for Git.

Credit: geek and poke

2. Can you make a build in one step?
3. Do you make daily builds?

'Builds' is a funny word in the LabVIEW world; this differs from many languages, as LabVIEW is constantly compiling.

The key point behind this is that you should be continuously making sure the application can run. In LabVIEW this boils down to being able to run the application consistently; after all, if you can't run it, you can't test it.

I make a point of running the full application after every change worth a commit into source code control or a bug in the bug database. This saves context switching of having to go back to old features because you didn’t test them at the time.

4. Do you have a bug database?

This should be step 2 after getting your source code control set up.

If you haven’t used these before, I find them invaluable as a means of tracking the state of the application.

Not just bugs but feature requests too can go in the database and be assigned things like priorities, developers, due dates etc., and it becomes where you go when you need to know what to do next.

They also typically integrate with source code control. When I make a commit I mention a bug ID in the commit message and the system links the two.

So I now have a system that tracks what needs doing, who needs to do it and when they have done it; you can even find the exact commit to identify what code was changed to complete it.

Again, in this world of cloud computing this doesn’t even require any outlay or time to set up. Most source code hosting services have this built in.

Bitbucket is an easy one to start with as you can have free, private projects. I use a hosted Redmine server from Planio (which costs 9 euros/month); others use GitHub (free for public projects).

Go set up a trial account and take a look around. As with many things it will take a bit of getting used to but I find it a far better way to work.

9. Do you use the best tools money can buy?

I’m skipping a few that aren’t specific to LabVIEW here, so we land on the best tools.

Firstly you chose LabVIEW! I think most people reading this will hopefully agree this is a good choice, but it doesn’t stop there.

There are probably two key things to consider with LabVIEW from a hardware point of view: a) it isn't exactly lightweight, and b) it uses a lot of windows!

You don’t want to be trying to code LabVIEW on a 13″ laptop with a track pad. Get a mouse, at least one large monitor (preferably two) and a decent, modern machine.

My setup- Filling the desk with monitors!

 

The extra cost will soon be recovered in the productivity gained by not having to wait for the software to load a new library, or not having to flick between a couple of VIs and the probe window when debugging.

Also consider this from a software perspective. It is difficult, but try to get familiar with some of the toolkits on the Tools Network or LAVA forums, as they could save you a lot of time over writing features yourself.

There are many free tools, but even where there is a cost you have to factor in your own time to design, develop and maintain that extra code (which is probably not the area you are an expert in).

10. Do you have testers?

This is one that is falling out of vogue at the minute as Test Driven Development (TDD) becomes popular (see my previous post), which means developers tend to write the tests themselves. That being said, it is still useful to have people available for higher-level testing, but as mentioned previously we tend to have smaller teams in LabVIEW.

I think the key point right now is to have a plan for testing. Likely this should be a mix of automated testing and having some test procedure/schedule for higher level testing.

One of my first projects at Wiresmith Technology was where this failed me. I didn't re-test the software thoroughly enough after changes (normally only the specific section I had changed), which meant more problems went to the customer than I would like. They all got fixed, but each problem means time wasted on communicating it, as well as a knock to the customer's confidence.

Since then I keep a procedure for testing the various areas of the software, which I run through before I send a release to the customer. This has improved my hit rate as well as saving time in the long run.

As with bringing in anything like this, there is always some adjustment pain, but pick one, push through and you will come out writing better software.

My 2015 LabVIEW Resolution

It’s a new year which means we must assume everything we did before last week was rubbish and change everything.

OK, that’s a slightly cynical view, but it does always amaze me how quickly advertising, blogs and news change at this time of year. That’s not to say that I haven’t joined in; as I ease back into work I have enjoyed looking back at 2014 as a year of great change (starting Wiresmith Technology) and now look forward to improving in 2015.

Quality means doing it right when no one is looking – Henry Ford

One big focus point for me is going to be software quality. As old projects finish and new ones start, I want to make sure I have done everything in my power to prevent old projects coming back with issues. Some problems are inevitable but it costs time to fix and lost time on other projects to have to refocus.

I find it interesting that when it comes to software, bugs are considered inevitable; even as I started my business and got contracts drawn up, the templates all included clauses stating this. Whilst there is an element of truth to it (software is a complex system), I also think it can lead to a relaxed attitude towards defects that you wouldn't find in other fields.

Software never was perfect and won’t get perfect. But is that a license to create garbage? The missing ingredient is our reluctance to quantify quality – Boris Beizer

When it comes to quality I think the first step has to be testing. I have been using the Unit Test Framework from NI on a couple of projects now, as well as unit test frameworks in JavaScript, and I am convinced it is the way forward. By the end of the year I want to be doing something akin to TDD.

The key reason is simple: when I have been writing tests as I write code (not always strictly first, but as I am initially developing), I have found bugs. Therefore, there are only two possible outcomes to me not testing:

  1. I discover the bug as I test the integration of that code into the main product. This could take a lot of time if it is not clear which subVI is the source of the bug, and I have to go back and fix it. Even knowing the subVI, it means I have to get back into the same mentality as when I wrote it, which also takes time.
  2. I still miss it and the customer finds it instead. This is more costly, as you are likely to find it harder to debug from the customer's descriptions down to a subVI, never mind the knock to the customer's confidence in you.

To do this requires two things: the right mentality and the right tools.

For the mentality, discipline is the biggest requirement to begin with. I know the process and it will feel unnatural at first but I hope to push through the initial pain to get to the rolling green pastures on the other side.

For the tools, there are really two in existence for LabVIEW. The Unit Test Framework (UTF) from NI and JKI’s VI Tester. I have tried UTF quite a bit and want to return and evaluate VI Tester over the next couple of months to understand its advantages.

For both of these, keep an eye out over the next few months when I hope to report back on my progress and findings with them. No doubt I will also be discussing this on the NI community as well. Check out the Unit Testing group over there if you want to learn more (and from more experienced people).


What Tools Do You Use?

So last week I bought something I would never have dreamed there was any use for: a gaming mouse mat. I've always been a little suspicious that gaming hardware is normal hardware in angry cases with a nice mark-up, but it has been fantastic.

For those that haven’t tried one, it now feels like my mouse is running on an air hockey table, making it faster and more responsive.

What I want to do today is ask you, what tools must you have for LabVIEW development?

I want to write these up into an article so feel free to comment below, tweet them (@wiresmithtech) or get them to me in any way.

The more unexpected or unusual the better as I want to make my desk a LabVIEW developers paradise!

TDMS Fragmentation: Why Your TDMS Files Use Too Much Memory

Morning all!

It’s been pretty hectic around here, but nothing would have stopped me getting to the Central South LabVIEW User Group (CSLUG) this week. Since going out on my own, these events are even more valuable.

This time around I presented on TDMS files. Not the sexiest subject going, but by understanding how they work you can avoid some key pitfalls.

What is TDMS?

In summary, TDMS is a structured file format from National Instruments. It is used heavily in LabVIEW and also DIAdem (which I'm a big fan of) because it allows you to save files with:

  • A similar footprint and precision to a binary file.
  • A self-descriptive structure (files can be loaded by LabVIEW, DIAdem, Excel or any other application with a TDMS library without knowing how they were produced).
  • The ability to be efficiently data-mined through DIAdem or the LabVIEW DataFinder Toolkit.

If this is brand new to you I recommend you read this article first as we are going to jump in a little deeper.

Sounds Great, What are the Pitfalls?

In applications with a simple writing structure it is a fantastic format that lives up to what it promises. However when you get to more complex writing patterns we can end up with a problem called TDMS fragmentation. This can occur if you:

  • Write separate timestamp and data channels (or any pattern where you are writing multiple data types).
  • Write different channels to the same file alternately.
  • Write to multiple groups simultaneously.

To understand why we must look at the structure.

The TDMS Structure

TDMS Segments

The first thing to understand is that TDMS files allow streaming by using a segmented file structure (their predecessor, TDM files, had to be written in one go). In essence, every time you call a TDMS write function, a new segment is added to the file.

tdms segment contents

Each segment contains:

  1. Header or Lead-In data – This describes what the segment contains and gives offset information to allow random access of the file (a sketch of parsing it follows this list).
  2. Meta Data – This states what channels are included in the segment, any new properties and a description of the raw data format.
  3. Raw Data – Binary data for the channels described.
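To make the lead-in concrete, here is a short Python sketch of parsing it, based on my reading of NI's whitepaper on the internal structure (linked at the end of this post). Treat the layout as an assumption for illustration rather than a reference implementation.

```python
import struct

def read_lead_in(f):
    """Parse the 28-byte lead-in at the start of a TDMS segment."""
    tag = f.read(4)
    if tag != b"TDSm":                   # every segment starts with this tag
        raise ValueError("not at the start of a TDMS segment")
    # The ToC bitmask flags what the segment contains (meta data, raw data, ...)
    toc_mask, version = struct.unpack("<II", f.read(8))
    # Offsets allow a reader to skip to the next segment or to this raw data
    next_segment_offset, raw_data_offset = struct.unpack("<QQ", f.read(16))
    return toc_mask, version, next_segment_offset, raw_data_offset

# Example: peek at the first segment of a file
with open("0.tdms", "rb") as f:
    toc, version, next_offset, raw_offset = read_lead_in(f)
    # Bit 1 of the ToC is the meta data flag, per the whitepaper
    print(f"TDMS version {version}, has meta data: {bool(toc & 0x2)}")
```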

So how does this impact our disk or memory footprint?

TDMS has a number of optimisations built in to try and bring the footprint as close to binary as possible.

When you write two segments which have the same channel list and meta data, the TDMS format will skip the meta data (and even the lead-in) for the later segment, meaning that the space used is only that of the raw data, giving an effective “compression ratio” of 100%.

Taking the scenario where we write exactly the same channels repeatedly to the file, we only get one copy of meta data and all the rest is raw data, exactly what we want.

But consider this scenario:

tdms_alternate_writes

This is a common case where we may want to write twice to the file. Each TDMS write is going to add a segment, and because it alternates between the two writes, the meta data does change and has to be written every time. This leads to a fragmented file.

TDMS Fragmentation Visualised

This will happen in any scenario where we are using multiple TDMS write nodes to a single file.

You can also see the level of fragmentation is going to depend on how much raw data is included in each write.

If we write 10,000 points each time the meta data will still be much smaller than the raw data and although fragmented, it is probably acceptable.

If however we write 1 sample each time, those green areas are going to shrink a lot and you could end up with more meta data than real data! The sketch below shows the difference.
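You can try this for yourself. Below is a sketch using the third-party npTDMS Python library (pip install npTDMS); the exact sizes will differ from LabVIEW's writer, but the assumption is the trend holds in any compliant writer: alternating small writes force the meta data to be rewritten constantly, while combining channels into one write keeps it constant.

```python
import os
import numpy as np
from nptdms import TdmsWriter, ChannelObject

samples = np.zeros(10)  # deliberately tiny writes to exaggerate the effect

# Fragmented: every write_segment call adds a segment, and alternating
# channels means the meta data changes (and is rewritten) every time.
with TdmsWriter("fragmented.tdms") as writer:
    for _ in range(1000):
        writer.write_segment([ChannelObject("group", "time", samples)])
        writer.write_segment([ChannelObject("group", "data", samples)])

# Combined: both channels in a single write, so the channel list, and
# therefore the meta data, never changes between segments.
with TdmsWriter("combined.tdms") as writer:
    for _ in range(1000):
        writer.write_segment([ChannelObject("group", "time", samples),
                              ChannelObject("group", "data", samples)])

print("fragmented:", os.path.getsize("fragmented.tdms"), "bytes")
print("combined:  ", os.path.getsize("combined.tdms"), "bytes")
```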

We can measure the impact of fragmentation by looking at the size of the tdms_index files that are generated when the files are used. This is essentially all of the meta data extracted from the file.

tdms_fragmented_files

Here we can see file 2.tdms is exactly what we want: 1 kB of meta data for a 15 MB file. 0.tdms, however, is heavily fragmented: 12 MB of the 36 MB file is used by meta data. (In this case 0.tdms and 1.tdms actually contain exactly the same data, but 1.tdms uses some of the techniques mentioned later; both are created by the example mentioned at the end.)

When working with fragmented files you will also see the memory usage of the library increase over time. This is because the TDMS library keeps a model of the file in memory, collating the meta data so that it can do things like perform random access. The more meta data, the more memory required.

(Contrary to some reports, this is not a “memory leak” in the strict definition of being unexpected; it's entirely predictable, not that that makes you feel much better about it!)

To reduce the memory you either need to reduce fragmentation, or close and open a new file periodically.

So how do we avoid TDMS fragmentation?

  • Write as a single data type (i.e. a single write node). This means if we have timestamp data, converting it to seconds first, or writing an offset time.
  • Write separate files and combine them later.
  • Write larger chunks of data. This will still give a fragmented file, but the meta data is spread across much more raw data so the effect is not as pronounced.
  • Use TDMS buffering. There is a special property you can set in the TDMS file called “NI_minimumBufferSize”. When you write a numeric to this property, the library will buffer all data for a segment in memory until it has that many samples (the sketch after this list models the idea). This is the easiest solution but does mean: a) additional RAM usage and b) in the event of a crash or power loss you will lose the most recent data.
  • If disk space is the main concern, defragment the files before storage. There is a defrag function in the TDMS palette that can be used once the file is complete to reduce its size.
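To illustrate what the buffering option does for you, here is a rough model of the idea: hold samples in RAM and only emit a segment once enough have accumulated. The BufferedChannel class is a hypothetical helper for illustration, not NI's implementation; in LabVIEW you only need to write the NI_minimumBufferSize property and the library handles the rest.

```python
import numpy as np
from nptdms import TdmsWriter, ChannelObject

class BufferedChannel:
    """Hypothetical helper modelling NI_minimumBufferSize behaviour."""
    def __init__(self, writer, group, channel, min_buffer_size):
        self.writer, self.group, self.channel = writer, group, channel
        self.min_buffer_size = min_buffer_size
        self.pending = []

    def write(self, sample):
        # Buffer in RAM; only write a segment once the threshold is reached,
        # so 1000 samples cost one meta data block instead of 1000.
        self.pending.append(sample)
        if len(self.pending) >= self.min_buffer_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.writer.write_segment(
                [ChannelObject(self.group, self.channel, np.array(self.pending))])
            self.pending = []  # anything still buffered is lost on a crash

with TdmsWriter("buffered.tdms") as writer:
    channel = BufferedChannel(writer, "group", "data", min_buffer_size=1000)
    for i in range(10_000):
        channel.write(float(i))
    channel.flush()  # don't lose the tail when closing the file
```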

Further Reading

Your homework to investigate this is an example I posted on the community, which demonstrates:

  1. the effect of fragmentation in the cases shown earlier and
  2. the effectiveness of the memory buffer in solving the problem.

It also creates the 0.tdms, 1.tdms and 2.tdms files I showed earlier.

Go take a look and keep an eye on the size of those index files!

You can also access my original slides on the CSLUG pages and read more about the internal structure in an NI whitepaper.

