Implications of WannaCry on NI Based Systems

What do problems like WannaCry mean for us?

The more I learn about cyber security, the more I realise how much it feels like we are on the back foot.

Fundamentally, the issue is that the tactics and techniques used by hackers seem to move forward much faster than technology at large, with many of the things we depend on having been designed before security was such a significant consideration.

WannaCry certainly brought these concerns to the forefront again, with legacy systems making the front page. The media scoffed at hospitals still using Windows XP, but in our industry we know that it is not a simple job to keep complex and custom systems up to date. So what might this mean for the LabVIEW community?

Working with IT More

Antivirus and automatic updates can cause havoc with operational systems, but as WannaCry showed, having insecure devices on the network can provide a weak link for exploitation. So while IT departments can be a pain to work with on these systems, we must understand their wider concerns.

We probably need to develop some best practices for system updates – is there a way we can schedule updates to minimise impact? Or can we guarantee the system stays off the network, so it doesn’t risk spreading malicious software? Alternatively, can critical elements be run on LabVIEW RT which will likely require less frequent updates than desktop systems?
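On a Linux RT target, one way to schedule updates for minimum impact is a cron entry confined to a planned maintenance window. A hypothetical sketch (opkg is the package manager on NI Linux RT, but the schedule and log path here are assumptions to adapt to your own system):

```shell
# Hypothetical crontab entry for an NI Linux RT target: fetch and apply
# package updates only in a planned maintenance window (Sundays, 02:00).
0 2 * * 0  opkg update && opkg upgrade >> /var/log/maintenance-update.log 2>&1
```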

Stuxnet showed that you must also consider offline threats. USB sticks will continue to threaten offline systems, and if users transfer data to and from systems with them, they must be educated about the risks of using un-vetted USB sticks.

Minimum System Access

I always think one of the best and most basic security practices is that of minimal access. If you don’t need the web server, disable it. Firewalls should only allow access to required systems, and we now have the option to install them on Linux RT targets.
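As a sketch of what minimal access can look like on a Linux RT target, here is a hypothetical iptables rules file: default-deny inbound, with only SSH and an assumed application port (5000) opened. The file path and port number are illustrative choices, not NI-recommended values — check the documentation for your target before applying firewall rules.

```shell
# Hypothetical /etc/iptables.rules -- load with: iptables-restore < /etc/iptables.rules
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT                                   # local traffic
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT    # replies to our own connections
-A INPUT -p tcp --dport 22 -j ACCEPT                       # SSH for maintenance
-A INPUT -p tcp --dport 5000 -j ACCEPT                     # assumed application port
COMMIT
```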

Critical to this are things like VI Server remote access. This allows arbitrary code execution, which is a hacker's dream! Make sure you turn it off if you don’t need it. If you do need it, make sure you protect it well.
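VI Server is configured through INI tokens on the development machine (labview.ini) or the RT target (ni-rt.ini). A hedged sketch of what disabling or locking it down might look like — these token names follow LabVIEW's VI Server settings, but the hostname is made up, and you should verify the tokens against your LabVIEW version:

```ini
; Disable VI Server TCP access entirely if it is not needed.
server.tcp.enabled=False
; If it must stay on, deny all machines by default and allow only named hosts
; (testmanager.local is a hypothetical example).
server.tcp.access="+testmanager.local;-*"
server.vi.access="-*"
```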

If you have a multi-device system such as a test rack, then including a router that provides an internal network with wider access while restricting the external network would be a sensible approach.


Minimum access also means only the required permissions for any given user. You should ideally never be running as an administrator as standard. I know it’s easier! But it also makes things much easier for malicious code. When you hit a permissions error, grant the standard user just the permissions it requires rather than elevating. Using Linux trains you well in this, and that is one of the benefits of learning it. (I know Steve has found it worthwhile)

An example of where these principles are important is the new Petya variant. The malware spreads through various means. This includes the SMB flaw that WannaCry used, but it will also sniff the machine for administrator credentials. If it finds them, it will use these to remotely access other systems that the account has access to, spreading further.

I also have it on my list to look more into the write filters on Windows Embedded systems, which make anything written to the disk temporary so that every reboot returns it to its original state. The system can still get infected, but this makes recovery much easier.

Thinking About Recovery

One thing I have learnt over the past couple of years is a backup is only as good as the recovery. If a customer had a machine infected and was losing money while it was down, how fast could you recover it?

I take images of all RT systems, but I am considering whether Windows-based systems should also have an image taken and a recovery disk created on delivery. Then if a machine does get infected (and doesn’t store critical data that has to be recovered first), it can be up and running again in hours instead of days.
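For Linux-based systems, even plain dd can produce a restorable image. A minimal sketch, demonstrated here on a small dummy file so it is safe to run — on a real machine you would boot from live media and point if= at the actual device (e.g. /dev/sda); the filenames are placeholders.

```shell
# Safe demonstration of dd-based imaging using a dummy 4 MB "disk" file.
dd if=/dev/urandom of=disk.bin bs=1M count=4 2>/dev/null    # stand-in for the system disk
dd if=disk.bin of=image.img bs=1M conv=noerror 2>/dev/null  # take the image
cmp disk.bin image.img && echo "image verified"             # byte-for-byte check
```

Restoring is the same dd command with if= and of= swapped, which is what makes the hours-not-days recovery possible.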

I know there are a lot more questions than answers there! But I think it is an interesting discussion to have and something I aim to improve on over time.

2 Comments

  • Jonathan Hird

    July 3, 2017

Great post as always James! I think PXIs being dispatched with Acronis is great, and deploying an easily distributed installer again makes it simple to recover. But one thing I’ve been thinking about is customer data, test results etc.

9/10 times it needs to be sent to a networked database, some generic location on disk, or even a separate file server. For me this raises the question of where the line is between customer-driven security and ensuring your own deployed system is safe!? I guess you could write a whole other blog post on that bit! But really I think we can only be sensible, logical and reasonable with how we deploy systems!

    • James McNally

      July 4, 2017

      That’s a good question in its own right.

Partly it comes down to deciding on a level of trust. Downloading UUT details is probably low risk, while downloading code updates requires a high level of trust.

I find in those cases the external system may be owned by another project or department, so you have to educate your customers in the right questions and do what you can. It is often hard to find the line between causing unnecessary concern and making sure they are fully informed of the risks.


