LabVIEW 2014 SP1 – Notice Anything New?

It's Spring! Which means it's time for clocks to change, eclipses (well, that may have been a one-off) and a service pack release from National Instruments.

Although this is normally touted as a bug-fix release, if you dig into the readme you'll find they have snuck in a nice new feature.

The new feature is the Profile Buffer Allocations Window. This gives you a window into the run time performance of the LabVIEW memory manager that is otherwise hard to understand.

Previously we only had a couple of windows into the LabVIEW memory manager in the Tools -> Profile menu.

Show Buffer Allocations is the best way to understand where on the diagram memory could be allocated, but it doesn't tell us much about what actually happens at run time.

Performance and Memory shows us the run-time memory usage at a VI level but gives us no way to track it down to the actual code execution.

profile performance and memory

But now we can see more of this at run time.

Step By Step

Launch the tool from a VI through Tools > Profile > Profile Buffer Allocations. Below you can see an example run of the Continuous Measurement and Logging sample project.

profile buffer allocations

  1. Profiling Control — To confuse things slightly, the workflow begins at the bottom! Set the minimum threshold you want to capture (default is 20 kB) and press Start. Press Stop once you're satisfied you've captured the data you're interested in.
  2. Buffer Filters — The bar at the top controls the display of the buffers in the table, allowing you to filter by Application Instance, restrict the number of buffers in the table and adjust the units.
  3. Buffer Table — The buffer table displays the buffers that were allocated during the run as well as their vital stats. You can double-click a buffer to be taken to its location on the diagram.
  4. Time vs Memory Graph — One of the coolest features! Selecting a buffer in the table will display a time graph of the size of that buffer during the run. I can imagine this would be great for understanding what is causing dynamic buffer allocations by seeing the size and frequency of changes (the sketch below gives a rough analogy).
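As a rough analogy (in Python rather than LabVIEW, purely for illustration), this is the kind of behaviour the time graph makes visible: a buffer that grows inside a loop is repeatedly reallocated, while a preallocated buffer keeps a constant size for the whole run.

```python
# Illustrative sketch only - Python standing in for a LabVIEW diagram.
# A buffer built up inside a loop is repeatedly reallocated, whereas a
# preallocated buffer keeps one size; this is the pattern the
# Profile Buffer Allocations graph lets you spot on a real diagram.
import sys

def growing_buffer(n):
    data, sizes = [], []
    for i in range(n):
        data.append(i)                    # like Build Array inside a loop
        sizes.append(sys.getsizeof(data))
    return sizes

def preallocated_buffer(n):
    data = [0] * n                        # like Initialize Array + Replace Array Subset
    sizes = []
    for i in range(n):
        data[i] = i
        sizes.append(sys.getsizeof(data))
    return sizes

print("distinct sizes while growing:    ", len(set(growing_buffer(10_000))))
print("distinct sizes when preallocated:", len(set(preallocated_buffer(10_000))))
```

On a LabVIEW diagram the fix follows the same idea: allocate the array once up front and replace elements, rather than building it up point by point inside the loop.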

I think anything that gives us more of a view into some of the more closed elements of LabVIEW has got to be beneficial, so go and try it and learn something new about your code.

4 Lessons Learnt Unit Testing LabVIEW FPGA Code

After declaring my intent earlier in the year to move towards an increasingly test-driven methodology, one of the first projects I aimed to use it on is based on FPGA, which makes this less than trivial.

I was determined to find a way to make the Unit Test Framework work for this. I think it has a number of usability flaws and bugs, but if they are fixed it could be a great product and I want to make the most of it.

So can it be used for unit testing LabVIEW FPGA code? Yes, but there are a few things to be aware of.

1. You Can’t Directly Create A Unit Test

Falling at the first hurdle: creating unit tests on VIs under an FPGA target isn't an available option on the right-click menu, which is normally the easiest way to create a test.

Instead you must manually create a unit test under a supported target type. Because LabVIEW FPGA VIs are simply VIs, you can then point this at the FPGA code, as long as it doesn't contain any FPGA-only structures.

This has its frustrations though. You must manually link the VI (obviously) and rename the test. This bit is a pain as I cannot seem to rename new tests without closing and re-opening the project. Re-arranging tests on disk can also be a frustrating task!

untitled-tests
Argh! Project Hell!

2. Test Vectors Are Your Friend

Unlike processor-based code, FPGA logic generally works on data point by point and builds an internal history to perform processing. This means you may have to run the VI multiple times to test it well.

The test vectors feature of the Unit Test Framework allows you to do this, specifying inputs for several iterations of the VI so you can test its behaviour over time.
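Since LabVIEW code is graphical, a text-language analogy may help. The sketch below (Python, names entirely my own) shows the idea behind a test vector: drive a stateful, point-by-point process with one input per iteration and check the output each time.

```python
# Test-vector idea in miniature: call the unit repeatedly, one sample per
# iteration, and check the output at every step. Names are illustrative only.

class MovingAverage:
    """Two-point moving average with internal history, like a feedback node."""
    def __init__(self):
        self.last = 0.0

    def process(self, x):
        y = (x + self.last) / 2.0
        self.last = x
        return y

def run_test_vector(dut, inputs, expected):
    for i, (x, exp) in enumerate(zip(inputs, expected)):
        y = dut.process(x)
        assert abs(y - exp) < 1e-9, f"iteration {i}: got {y}, expected {exp}"

run_test_vector(MovingAverage(), inputs=[2.0, 4.0, 6.0], expected=[1.0, 3.0, 5.0])
print("test vector passed")
```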

Mini-tip: Using Excel or a text editor to do large edits can save you losing your hair! (Right-Click and Open In External Editor)

3. Code Resets

Related to point 2: because FPGA code commonly uses feedback nodes, feedforward nodes or registers to store data, at some point that data has to be reset.

The lazy route to this is to simply wire inputs to the feedback nodes; however, this defaults to a reset-on-first-call mode. This won't work with test vectors, as each iteration is counted as a new call.

At a minimum you should change the feedback node behaviour to initialize on compile or load, but better practice on FPGA is to have a reset input to the VI to allow explicit control of this. Then you must simply set this on the first element of the vector to get the expected behaviour.
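To illustrate the difference (again in Python purely as an analogy, with names of my own), an explicit reset input gives the test vector full control of the internal state, rather than relying on first-call behaviour.

```python
# Explicit reset input, analogous to adding a reset terminal to an FPGA VI.
# The first element of the test vector asserts reset; later iterations build history.

class Accumulator:
    """Running sum with internal state and an explicit reset input."""
    def __init__(self):
        self.total = 0.0

    def process(self, x, reset=False):
        if reset:
            self.total = 0.0
        self.total += x
        return self.total

dut = Accumulator()
outputs = [dut.process(x, reset=(i == 0)) for i, x in enumerate([1.0, 2.0, 3.0])]
assert outputs == [1.0, 3.0, 6.0]
print("reset-controlled vector passed")
```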

4. FPGA-Specific Code or Integration Testing

There will be code that the UTF can't test directly if it can't run under a Windows target. In this case the best way to include it in automated testing is with a user-defined test. This allows you to create a custom VI for the test, which can include an FPGA desktop execution node to allow for testing the VI. Potentially this will have to be accompanied by a custom FPGA VI as well to be able to pipe the correct data into FIFOs or similar.

 

Hopefully this will help you get over the first few hurdles. I'm sure more will follow, and I will try to update them here as I find them.

Applying Joel’s 12 Steps To Better Software to LabVIEW


Continuing the theme from my last post, I wanted to talk about what goes on around writing great software, other than the software itself. How can you improve your coding process?

I have seen references to Joel's 12 Steps to Better Software in a couple of places and had to take a closer look.

This consists of 12 questions and you should be aiming to answer yes to 10 or more of these.

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

Stop and have a think about these with your team: what do you score?

Now what if I told you this article was 15 years old! I find this interesting for two reasons:

  • Some of these points seem quite out of date with new developments around agile methods of software development building on many of these points. There is a good article which reviews these points here.
  • I still know lots of developers that haven’t adopted many of these practices.

I must confess there are differences for most developers in the LabVIEW way of working. While many techniques are aimed towards 4-8 developers working on software for hundreds/thousands of users, many LabVIEW applications are developed by individuals or small teams for internal projects where you are probably already speaking to all of the end users.

Can we still infer great tips for those teams? Let's look at what we can do with LabVIEW to add some more yeses.

1. Do you use source control?

If you aren't using this now, stop what you're doing and go learn about it!

There are so many resources and technologies out there. SVN is extremely popular in the LabVIEW community, although I would probably recommend trying Git or Mercurial as there are more services available to use now.

If you're interested in these, try the Atlassian SourceTree client, which solved my previous concerns about a good Windows client for Git.

Credit: geek and poke

2. Can you make a build in one step?
3. Do you make daily builds?

Builds is a funny word in the LabVIEW world; the process differs from many languages because LabVIEW is constantly compiling.

The key point behind this is that you should be continuously making sure the application can run. In LabVIEW this can boil down to being able to run the application consistently, especially as if you can’t run it, you can’t test it.

I make a point of running the full application after every change that is worth a commit into source code control or an entry in the bug database. This saves the context switching of having to go back to old features because you didn't test them at the time.

4. Do you have a bug database?

This should be step 2 after getting your source code control set up.

If you haven’t used these before, I find them invaluable as a means of tracking the state of the application.

Not just bugs but feature requests can go in the database and get assigned things like priorities, developers, due dates etc. and it becomes where you go when you need to know what to do next.

They also typically integrate with source code control. When I make a commit I mention a bug ID in the commit message and the system links the two.

So I now have a system that tracks what needs doing, who needs to do it, when they have done it and you can even find the exact commit so you can identify what code was changed to complete it.

Again, in this world of cloud computing this doesn’t even require any outlay or time to set up. Most source code hosting services have this built in.

Bitbucket is an easy one to start with as you can have free, private projects. I use a hosted Redmine server from Planio (which costs 9 Euro/month), others use Github (free for public projects).

Go set up a trial account and take a look around. As with many things it will take a bit of getting used to but I find it a far better way to work.

9. Do you use the best tools money can buy?

I’m skipping a few that aren’t specific to LabVIEW here so we fall on the best tools.

Firstly you chose LabVIEW! I think most people reading this will hopefully agree this is a good choice, but it doesn’t stop there.

There are probably two key things to consider with LabVIEW from a hardware point of view: a) it isn't exactly lightweight and b) it uses a lot of windows!

You don’t want to be trying to code LabVIEW on a 13″ laptop with a track pad. Get a mouse, get at least one large monitor, preferably two and a decent, modern machine.

My setup - filling the desk with monitors!

 

The extra costs will soon be recovered in productivity gains of not having to wait for the software to load a new library or trying to switch between a couple of VIs and the probe window when debugging.

Consider this from a software perspective as well. It is difficult, but try to get familiar with some of the toolkits on the Tools Network or LAVA forums, as they could save you a lot of time over writing features yourself.

There are many free tools but even if there is a cost associated you have to factor in your time to design, develop and maintain that extra code (which is probably not the area you are expert in).

10. Do you have testers?

This is one that is falling out of vogue at the minute as Test Driven Development (TDD) becomes popular (see my previous post), which means developers tend to write the tests themselves. That being said, it is still useful to have people available for higher-level testing, but as mentioned previously we tend to have smaller teams in LabVIEW.

I think the key point right now is to have a plan for testing. Likely this should be a mix of automated testing and having some test procedure/schedule for higher level testing.

One of my first projects at Wiresmith Technology was where this failed me. I didn't re-test the software thoroughly enough after changes (normally only the specific section I had changed), which meant more problems went to the customer than I would like. They all got fixed, but each problem means time wasted communicating the problem as well as denting the confidence of the customer.

Since then I keep a procedure for testing the various areas of the software so I can run through it before I send a release to the customer, which has improved my hit rate as well as saving time in the long run.

Bringing in things like this always involves some pain adjusting, but pick one, push through and you will come out writing better software.

My 2015 LabVIEW Resolution

It’s a new year which means we must assume everything we did before last week was rubbish and change everything.

OK, that's a slightly cynical view, but it does always amaze me how quickly advertising, blogs and news change at this time of year. That's also not to say I haven't joined in; as I ease back into work I have enjoyed looking back at 2014 as a year of great change (starting Wiresmith Technology) and now look forward to improving in 2015.

 Quality means doing it right when no one is looking – Henry Ford

One big focus point for me is going to be software quality. As old projects finish and new ones start, I want to make sure I have done everything in my power to prevent old projects coming back with issues. Some problems are inevitable, but they cost time to fix and lose time on other projects through having to refocus.

I find it interesting that when it comes to software, bugs are considered inevitable; even as I started my business and got contracts drawn up, the templates all included clauses stating this. Whilst there is an element of truth to that (software is a complex system), I also think it can lead to a relaxed attitude towards defects that you wouldn't find in other fields.

Software never was perfect and won’t get perfect. But is that a license to create garbage? The missing ingredient is our reluctance to quantify quality – Boris Beizer

When it comes to quality I think the first step has to be testing. I have been using the Unit Test Framework from NI on a couple of projects now, as well as unit test frameworks in JavaScript, and I am convinced it is the way forward. By the end of the year I want to be doing something akin to TDD.

The key reason is simple: when I have been writing tests as I write code (not always strictly first, but as I am initially developing), I have found bugs. Therefore, there are only two possible outcomes if I don't test:

  1. I discover the bug as I test the integration of that code into the main product. This could take a lot of time if it is not clear which subVI is the source of the bug and I have to go back and fix it. Even knowing the subVI, it means I have to get back into the same mentality as when I wrote it, which also takes time.
  2. I still miss it and the customer finds it instead. This is more costly to debug, as you are likely going to find it harder to work back from the customer's descriptions down to a subVI, never mind knocking the customer's confidence in you.

To do this requires two things, the right mentality and the right tools.

For the mentality, discipline is the biggest requirement to begin with. I know the process and it will feel unnatural at first but I hope to push through the initial pain to get to the rolling green pastures on the other side.

For the tools, there are really two in existence for LabVIEW. The Unit Test Framework (UTF) from NI and JKI’s VI Tester. I have tried UTF quite a bit and want to return and evaluate VI Tester over the next couple of months to understand its advantages.

For both of these, keep an eye out over the next few months when I hope to report back on my progress and findings with them. No doubt I will also be discussing this on the NI community as well. Check out the Unit Testing group over there if you want to learn more (and from more experienced people).

labview tools

What Tools Do You Use?

So last week I bought something I would never have dreamt there was any use for: a gaming mouse mat. I've always been a little concerned that gaming hardware is normal hardware in angry cases with a nice mark-up, but it has been fantastic.

For those that haven’t tried one it now feels like my mouse is running on an air hockey table making it faster and more responsive.

What I want to do today is ask you, what tools must you have for LabVIEW development?

I want to write these up into an article, so feel free to comment below, tweet them (@wiresmithtech) or get them to me in any way.

The more unexpected or unusual the better, as I want to make my desk a LabVIEW developer's paradise!

TDMS Fragmentation: Why Your TDMS Files Use Too Much Memory

Morning all!

It’s been pretty hectic around here but nothing would have stopped me getting to the Central South LabVIEW User Group (CSLUG) this week. Since working on my own these events are even more valuable.

This time around I presented on TDMS files. Not the sexiest subject going, but by understanding how they work you can avoid some key pitfalls.

What is TDMS?

In summary, TDMS is a structured file format by National Instruments. It is used heavily in LabVIEW and also DIAdem (which I’m a big fan of) because it allows you to save files with:

  • A similar footprint and precision as a binary file.
  • A self-descriptive structure (it can be loaded by LabVIEW, DIAdem, Excel or any other application with a TDMS library without knowing how it was produced).
  • The ability to be efficiently data mined through DIAdem or the LabVIEW DataFinder toolkit.

If this is brand new to you I recommend you read this article first as we are going to jump in a little deeper.

Sounds Great, What are the Pitfalls?

In applications with a simple writing structure it is a fantastic format that lives up to what it promises. However, when you get to more complex writing patterns you can end up with a problem called TDMS fragmentation. This can occur if you:

  • Write separate timestamp and data channels (or any pattern where you are writing multiple data types).
  • Write different channels to the same file alternately.
  • Write to multiple groups simultaneously.

To understand why we must look at the structure.

The TDMS Structure

TDMS Segments

The first thing to understand is that TDMS files allow streaming by using a segmented file structure (their predecessor, TDM files, had to be written in one go). In essence, every time you call a TDMS write function, a new segment is added to the file.

tdms segment contents

Each segment contains:

  1. Header or Lead In data – This describes what the segment contains and offset information to allow random access of the file.
  2. Meta Data – This states what channels are included in the segment, any new properties and a description of the raw data format.
  3. Raw Data – Binary data for the channels described.

So how does this impact our disk or memory footprint?

TDMS has a number of optimisations built in to try and bring the footprint as close to binary as possible.

When you write two segments which have the same channel list and meta data, the TDMS format will skip the meta data (and even the lead-in) for the later segment, meaning that the space used is only that of the raw data and the footprint is effectively the same as plain binary.

Taking the scenario where we write exactly the same channels repeatedly to the file, we only get one copy of meta data and all the rest is raw data, exactly what we want.

But consider this scenario:

tdms_alternate_writes

This is a common case where we may want to write twice to the file. Each TDMS write is going to write a segment to the file; because it alternates between the two writes, the meta data does change and has to be written every time. This leads to a fragmented file.

TDMS Fragmentation Visualised

This will happen in any scenario where we are using multiple TDMS write nodes to a single file.

You can also see the level of fragmentation is going to depend on how much raw data is included in each write.

If we write 10,000 points each time the meta data will still be much smaller than the raw data and although fragmented, it is probably acceptable.

If however we write 1 sample each time, those green areas are going to shrink a lot and you could end up with more meta data than real data!
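To get a feel for the numbers, here is a rough back-of-the-envelope sketch (in Python, with made-up but representative sizes for the lead-in and meta data, not exact TDMS figures) comparing the overhead of 1 sample per write against 10,000 samples per write when the meta data cannot be skipped.

```python
# Rough model of fragmentation overhead. The lead-in/meta data sizes below are
# illustrative guesses, not exact TDMS values.
LEAD_IN_BYTES = 28        # fixed-size segment header (approximate)
META_DATA_BYTES = 200     # channel list + data layout, rewritten when it changes
SAMPLE_BYTES = 8          # one double-precision value

def overhead_fraction(samples_per_write):
    raw = samples_per_write * SAMPLE_BYTES
    segment = LEAD_IN_BYTES + META_DATA_BYTES + raw
    return (segment - raw) / segment

for n in (1, 100, 10_000):
    print(f"{n:>6} samples/write -> {overhead_fraction(n):.1%} of the file is overhead")

# With these numbers, 1 sample per write is roughly 97% overhead,
# while 10,000 samples per write is well under 1%.
```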

We can measure the impact of fragmentation by looking at the size of the tdms_index files that are generated when the files are used. This is essentially all of the meta data that has been extracted from the file.

tdms_fragmented_files

Here we can see that 2.tdms is exactly what we want: 1 kB of meta data for a 15 MB file. 0.tdms, however, is heavily fragmented; 12 MB of the 36 MB file is used by meta data (in this case 0.tdms and 1.tdms actually contain exactly the same data but use some of the techniques mentioned later, demonstrated in the example linked at the end).

When working with fragmented files you will also see the memory usage of the library increase over time. This is because the TDMS library keeps a model of the file in memory, collating the meta data so that it can do things like perform random access. The more meta data, the more memory required.

(Contrary to some reports this is not a “memory leak” in the strict definition of being unexpected; it’s entirely predictable, not that it makes you feel much better about it!)

To reduce the memory you either need to reduce fragmentation, or close and open a new file periodically.

So how do we avoid TDMS fragmentation?

  • Write as a single data type (i.e. a single write node). This means if we have timestamp data, converting it to seconds first, or writing an offset time.
  • Write separate files and combine them later.
  • Write larger chunks of data. This will still give a fragmented file but the meta data is spread across much more raw data and the effect is not as pronounced.
  • Use TDMS buffering. There is a special property you can set in the TDMS file called “NI_minimumBufferSize”.
    When you write a numeric to this property, the library will buffer all data for a segment in memory until it has that many samples. This is the easiest solution (a rough sketch of the same idea follows this list) but does mean:
    a) Additional RAM usage
    b) In the event of a crash/power loss you will lose the most recent data.
  • If disk space is the main concern, defragment the files before storage. There is a defrag function in the TDMS palettes that can be used once the file is complete to reduce the size.
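In LabVIEW the NI_minimumBufferSize property handles this buffering for you; purely as an illustration of the idea (Python, names of my own choosing), the sketch below accumulates samples in memory and only commits a segment once enough have arrived, so each segment's meta data is amortised over many samples.

```python
# Illustrative sketch of the buffering idea behind NI_minimumBufferSize:
# hold samples in RAM and only write a segment once the buffer is full.

class BufferedChannelWriter:
    def __init__(self, write_segment, minimum_buffer_size=10_000):
        self.write_segment = write_segment        # callback performing the real write
        self.minimum_buffer_size = minimum_buffer_size
        self.buffer = []

    def write(self, samples):
        self.buffer.extend(samples)
        if len(self.buffer) >= self.minimum_buffer_size:
            self.flush()

    def flush(self):
        # Anything still in self.buffer at a crash/power loss is gone -
        # the same trade-off as the TDMS property.
        if self.buffer:
            self.write_segment(list(self.buffer))
            self.buffer = []

segments = []
writer = BufferedChannelWriter(segments.append, minimum_buffer_size=5)
for point in range(12):
    writer.write([float(point)])    # point-by-point writes from the acquisition loop
writer.flush()                      # commit the remainder when closing the file
print(len(segments), "segments instead of 12")   # -> 3 segments
```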

Further Reading

Your homework to investigate this is in an example I posted on the community, which demonstrates:

  1. the effect of fragmentation in the cases shown earlier and
  2. the effectiveness of the memory buffer in solving the problem.
  3. (also creates the 0,1,2.tdms files I showed earlier)

Go take a look and keep an eye on the size of those index files!

You can also access my original slides on the CSLUG pages and read more about the internal structure in an NI whitepaper.

NI Week 2014 Highlights – All Y’All

Part 3. All Y’All

It is so great to meet up with a community of like-minded engineers. Many of you I would not have seen for a couple of years and I met many new people (as well as adding faces to avatars).

Talking of community, thanks to Mark Balla you can download videos of many sessions. Thanks again Mark, these are a great asset for many events where people can’t attend. Remind me, and everyone else, to buy you a beer the next time the opportunity arises.

Also, Fabiola De La Cueva of Delacor recorded her excellent session on unit testing; if you're interested in the concept I recommend a watch.

I don't get on Twitter very often, but I find it great with events like NI Week, so I will leave you with some of my favourites; consider it a #ff post!

Hope to see you all at another NI Week/NI Days/CLA Summit soon!

NI Week 2014 Highlights – New Releases

Obviously a big part of NI Week is getting to see the new releases. Whilst you can get this from the web, what I found useful was attending some of the sessions on the new products; many of the R&D teams attend, so there aren't many questions that can go unanswered!

Here are some of the new highlights I saw this year…

LabVIEW 2014

In my previous post I spoke of evolution not revolution. On that theme, the LabVIEW 2014 release was a remarkably understated event at NI Week with few headline new features (though it was great to see the community highlighted again with John Bergmans showing his Labsocket LabVIEW add-on in the keynote).

Having had the chance to review the release notes, though, there are a few changes that could be of benefit.

  • You can now select an input to the case structure and make that the case selector. This productivity gain will definitely build up, even if it's only 20 seconds at a time.
  • New window icons to show the version and bitness of LabVIEW. A minor update but useful for those of us using multiple versions.
  • 64-bit support for Mac and Linux. I think the slow rollout of 64-bit LabVIEW is almost certainly hampering its image as a data processing platform in many fields and this seems like a great commitment to moving it forward.

The others seem like changes you will find as you work in 2014 so let me know in the comments what you like.

What is great is having more stuff rolled up into base packages. I strongly believe there is a software engineering revolution needed in LabVIEW to bring it to the next level so putting these tools into the hands of more users is always good.

LabVIEW Professional now includes Database Connectivity, Desktop Execution Trace Toolkit, Report Generation, Unit Testing and VI Analyzer. LabVIEW FPGA also includes the cloud compile service which gives faster compiles than ever with the latest updates or the compile farm toolkit if you want to keep your data on site.

VI Package Manager

One evening I was lucky enough to attend a happy hour hosted by JKI, who, among other achievements, created VI Package Manager, which is by far the easiest way of sharing LabVIEW libraries.

They announced a beta release of VIPM in the browser. This allows you to search, browse and install packages in your browser, promising faster performance than doing the same in the standard application. The bit I think will also be hugely beneficial is bringing in the ability to star your favorite packages. I’m very excited about this as I hope it will make it easier to discover great packages rather than just finding those you are already aware of.

vipm_homepage
You can browse the public repositories and find popular packages
Each package has its own page and can be installed from here (launches the desktop app)

This is live now at vipm.jki.net. Don't forget to leave any feedback on their ideas exchange; feedback makes moving things like this forward so much easier.

The CompactRIO Revolution Continues!

Two years ago compactRIOs were fun as a developer, not so much if you were new to LabVIEW. They were powerful in the right hands but seriously limited on resources compared to a desktop PC.

A few years ago the Intel i7 version was released, which offered huge increases in CPU performance, but it was big; "embedded" was a hard word to apply! (That's not to say it wasn't appreciated.)

Last year the first Linux RT based cRIO was released, built on the Xilinx Zynq chip; this year it feels like cRIO has made a giant leap forward with the new range.

When you see some of the specs jump like this you can see why as a cRIO geek I am very excited!

| Spec | cRIO-9025 + cRIO-9118 (top spec of previous rugged generation) | cRIO-9033 (new top dog) | Change |
| CPU | 800 MHz PowerPC | 1.3 GHz dual-core Intel Atom | |
| CPU usage on control benchmark | 64.1% | 10.9% | |
| RAM | 512 MB | 2 GB | +300% |
| FPGA multipliers | 64 | 600 | +837%!! |
| FPGA LUTs | 69,120 | 162,240 | +135% |

These new controllers are no incremental upgrade, they are a leap forward. My only concern is that it will now be easier to make applications fit, which is/was a bit of a specialty of mine! The new generation of FPGAs really drives part of this; the same difference is seen on the R-Series and FlexRIO ranges as well.

There is also a removable SD card slot, additional built in I/O and the headline grabber, support for an embedded HMI.

At the session on this we got to see it a bit closer. The good news is that it is using standard Linux graphics support. This means it should support standard monitors and input devices rather than needing any specialist hardware.

Obviously it is going to have some impact on performance. In the benchmark I linked earlier they suggest you could see a 10% increase in CPU usage. I'm looking forward to trying this out; you could easily see a 50% increase on the old generation just by having graphs on a remote panel, so for many applications this seems acceptable.

There is also a KB detailing how to disable the built-in GPU. This suggests that there is extra jitter which will become significant at loop rates of >5 kHz, so just keep an eye out for that.

Anyway, that got a little serious, I will be back with a final NI Week highlight later in the week but for now I leave you with the cRIO team:

NI Week 2014 Highlights – Buzzwords Galore

NI Week 2014 is unfortunately over (although it means I do get to return to temperatures I seem to be better built for!). I wanted to share some of my highlights which will hopefully get you as excited as I feel and who knows, even persuade you to come next year! As I started writing, this got longer and longer so for now here is part 1:

1. Buzzwords Galore

This year was certainly prime for buzzword bingo with “Internet of Things” and “Big Data” flying around.

The thing that frustrates me about these is the image they produce of some magic black art, where you must pay thousands to get in the club and understand it.

The reality is we don’t wake up one morning and build an internet of things. It is a constant evolution of current technology towards blue sky thinking. As Jim Robinson from Intel said in the Wednesday Keynote,

[The internet of things] is the overnight sensation that’s been 30 years in the making.

The great thing about NI Week is that many of the people making those steps are around and it really makes you feel like progress is happening.

For me it was particularly exciting to have a customer of mine showcasing their work in these areas.

National Grid are working to connect 135 power quality monitors to substations in the UK, built with compactRIO, with the goal to collate this data to ensure the stability of the power grid. In the processed form, we will be capturing >11 Billion processed measurements per year from across the UK and connecting to the monitors live to allow power engineers to keep an eye on grid conditions.

You can see more by watching the video from the keynote.  (Wednesday – The Internet of Things for Jim Robinson and Wednesday – SmartGrid for National Grid)

As for big data, I found this to be somewhat demystified by a great talk from an external speaker from Dell Software. Unfortunately I failed to take down his name and I'm pretty certain it isn't who is listed (if so, you need to update your LinkedIn profile pic!). I took away a few interesting points:

  • Big data is really all about analytics (which by the way has been done for years!).
  • He chose to define “extreme data” as when this processing cannot be done on data at rest in the database. Rather it must be done as the data is captured.
  • There are multiple stages to these analytics, from simply dumping to a database for mining, working through more advanced structuring and deriving management dashboards, up to neural networks and advanced analytics for decision making. Each step reduces the data and provides more insight.

Next year I have learnt I must take more pictures to make describing sessions easier!

As a result of NI Week 2014, I definitely feel I finally have a better sense of what these mean to me and am excited that we are all part of this revolution evolution.

For Part 2 I will talk about some of the new products I am excited about…

NI Week 2014!

I am very excited to have made it to Austin, Texas this week for the annual NI Week conference. Hosted by National Instruments it represents a great opportunity to get together with like-minded LabVIEW developers and users of other NI technologies. It also heralds the release of many new products including LabVIEW 2014. As well as this I’m very excited that one of our projects will be on the Wednesday keynote stage.

No doubt I will be posting on here over the next week or so about some of the most exciting things that come out of the conference. In the meantime, this is one of the main times I actually pick up Twitter, so feel free to follow @JamesMc86 and/or @WiresmithTech, and if you're at the conference feel free to come say hello; I'm best described as the hairy one!

Headshot Jul 2014

