E.U. Safe Harbor Deadline Nears – What Will Happen?

As the E.U.'s self-imposed January 31st deadline for coming up with a replacement for Safe Harbor looms, we don't really know what is going to happen. My guess is not much, but stay tuned.

The background is that when the European Court of Justice struck down Safe Harbor last year, Working Party 29, the group responsible for cleaning up the mess in the aftermath of the ruling, set a deadline of January 31 of this year for a new agreement to be in place, or else. Or else what? That isn't really clear. What could happen is that ALL the data transfers done under the old Safe Harbor agreement stop. I don't believe that will happen.

There are a lot of negotiations happening behind the scenes.

One critical piece, a U.S. law that gives E.U. residents the right to sue for redress in U.S. courts for privacy violations (a right they do not have today, and one the E.U. said was critical to not shutting down data transfers), passed a vote in a Senate committee. Typically, there is a long and winding path between a committee vote and the President signing a bill into law, but still, this is a move in the right direction. Do I think this will get signed by January 31? No.

On the other side of the coin are the data sharing provisions (what used to be called CISA) in the recent budget bill. Since the Senate took out many of the privacy provisions, some say that even if an agreement is signed, the ECJ might rule that CISA is a huge hole in E.U. citizens' privacy rights, since the law says that you can't sue companies for sharing your private data with the NSA. Oh, wait: companies share it with Homeland Security, which is free to share it with the NSA, FBI, DoJ and a whole raft of other three-letter agencies.

The E.U. has basically approved the new data protection regulation for Europe, called the General Data Protection Regulation or GDPR. Its provisions are actually much stricter than the old law's.

I think February could be very interesting.

Information for this post came from The Register and Dark Reading.


4 Health Care Breaches Reported This Week Alone

The Examiner reported on 4 health care data breaches on the 20th. See if you can find the common element.

Information on 21,000 California Blue Shield customers, including health care information, was compromised when a vendor call center employee was socially engineered, the employee's login information was compromised, and the customer data was stolen.

Last week Montana's New West Health Services said an unencrypted laptop with data on 25,000 patients was stolen. It included patient information, bank account information, health information and more. On an unencrypted laptop out in the field.

Also last week, at St. Luke's Cornwall Hospital in New York, a USB drive was stolen with information on 29,000+ patients, including patient names, services received and other information. The drive, it would appear, was not encrypted. The reason I assume it was not encrypted is that, if it had been encrypted and the encryption key was not taped to the device, the hospital would not have to report this event.

Finally, Indiana University Health Arnett Hospital lost a "storage device" with information on 29,324 patients, including patient name, birth date, diagnosis, treating physician and other information. Again, this information was likely not encrypted.

Anyone figure out the common element?  All of these events would have been non-events if these companies had reasonable cyber security practices in place.

The call center employee was socially engineered.

An unencrypted laptop was stolen (where was it left?). Why was it unencrypted? Why did it have patient data on it?

A flash drive with patient data was lost. Why was it not encrypted, and did the data need to be on the flash drive at all?

And, a storage device was stolen.  That happens.  Why was it not encrypted?

How much training did the call center give its employees about social engineering? Why was the laptop not encrypted? Why was the flash drive not encrypted? And why was the storage device not encrypted?

I keep pointing to encryption because if you have a breach but the data is not readable by the thief, you don’t have to warn customers.  It is a very simple step to take.  JUST DO IT.

Only in the flash drive case could encryption cause a problem, and only if you need to be able to share the drive with someone else. In the other two situations, the encryption would be transparent to the user.
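
For the flash drive case, file-level encryption with a key you keep off the drive is enough to turn a lost drive into a non-event. Here is a minimal sketch (my own illustration, not any of these organizations' actual processes), using Python's widely used cryptography package; the file names are made up:

```python
# Minimal sketch: encrypt a file before it ever touches the flash drive.
# Assumes the "cryptography" package (pip install cryptography); file names are made up.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key anywhere EXCEPT on the drive itself
fernet = Fernet(key)

with open("patient_export.csv", "rb") as plaintext:
    encrypted = fernet.encrypt(plaintext.read())

with open("patient_export.csv.enc", "wb") as out:   # the only file that goes on the drive
    out.write(encrypted)

# Whoever you share the key with (separately!) recovers the data with fernet.decrypt(encrypted).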

Especially when it comes to health data, you need to be careful. AND this does not only mean hospitals and doctors. Sony lost protected health information when they were hacked. PHI has been lost in other hacks too. Most organizations store PHI somewhere (often in HR or risk management).

While some things in cyber security are hard to do, many things are not hard to do.  If we start with the easy stuff, we do make the job harder for the bad guys.  Not impossible, just harder.  Let’s start doing the simple stuff.  We can worry about the hard stuff a little later.

Information for this post came from The Examiner.



When Will They Ever Learn?

As the folk group Peter, Paul and Mary sang in 1962 – about a completely different subject – when will they ever learn? It appears that, for software companies, the answer is a big question mark.

First Juniper got caught with a hard-coded back door of unknown origin in its routers and firewalls. Then Cisco got in trouble for hard-coded credentials. Now it is Fortinet.

The interesting thing is that these three companies are all security vendors.  If they can’t figure it out, is it likely that the rest of the software community has it figured out?

In Fortinet's case, it wasn't a back door in the sense of something designed to allow unauthorized people to log in to its firewalls, switches and other devices. But the effect is the same. Fortinet makes a central management application that allows a company to manage its Fortinet security appliances and switches remotely. That management console needs to exchange information with the devices in order to let a network administrator manage them all from one place.

Fortinet, of course, wants to make this easy for administrators. What better way to do that than to hard code a set of credentials (userid and password) shared between the management console and the devices to be managed?

What could go wrong with that?
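
To make the anti-pattern concrete, here is a purely hypothetical sketch of what a baked-in management credential looks like in device code. The names and logic are invented for illustration and are not taken from Fortinet's firmware:

```python
# Hypothetical illustration of the hard-coded credential anti-pattern.
# Nothing here is Fortinet's actual code; names and values are invented.
MGMT_USER = "mgmt_console"
MGMT_PASSWORD = "same-secret-on-every-unit-shipped"

def check_local_account(username, password):
    # Stand-in for a real lookup against the device's configured accounts.
    return False

def authenticate(username, password):
    # Normal users are checked against the local account database...
    if check_local_account(username, password):
        return True
    # ...but the management console gets a shortcut that ships in every device.
    # Anyone who pulls this string out of the firmware can log in to any unit.
    return username == MGMT_USER and password == MGMT_PASSWORD
```

The convenience is real – the console never has to be provisioned with per-device credentials – but so is the exposure, because the secret is identical everywhere and cannot be changed.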

Vulnerable products are FortiAnalyzer releases 5.0 and 5.2, FortiSwitch 3.3, FortiCache 3.0, and FortiOS 4.1, 4.2, 4.3 and 5.0.

Obviously this is a problem for Fortinet customers, but there is a bigger issue here.

If security product vendors are not smart enough to figure out that hard coding credentials, no matter how well intentioned, is a problem, what are millions of other vendors doing?  Likely the same thing.  Or, MUCH WORSE!

And do I think hackers are smart enough to look for those hard coded credentials? Probably.  No, definitely.

The systems that are probably at the biggest risk are those that are remotely managed and/or managed by a third party. Examples of both are many point-of-sale cash register systems, such as some of those that have been hacked in the last few years. For systems to be managed remotely, especially by third parties, it is a whole lot easier if every system can be accessed using a single userid and password.

If you have one or more systems (such as a POS or Alarm system), you should ask the vendor about how credentials work and how you can periodically change the password to comply with your company’s security policy.  If the answer is that you can’t change the password, then what you have is a backdoor.  Maybe an authorized one, but still a backdoor.

If you do have a back door, then you need to figure out how to mitigate the risk. Many years ago I had a high-end phone system that could be remotely managed, via modem, by the vendor. I had a simple answer to hackers: I unplugged the modem unless I was talking to the vendor and they said they needed remote access. Simple. But effective.

For more information on the Fortinet problem, read their blog post here.


Why Turning Over Thumb Drives to the Cops Can Be Hazardous To Your Defense

There is an article in Cyber Security Docket talking about the SEC’s new strategy of issuing subpoenas for electronic storage devices or ESDs. Rather than asking for documents, they are asking for devices.

Without getting into a legal argument about whether the Securities Exchange Act of 1934 (now more than 80 years old) contemplated thumb drives (I will leave that argument to the author of that article, who thinks the answer is no), I think the columnist makes an important point for everyone.

Assuming the thumb drive is not encrypted – or, if it is encrypted, that you cannot avoid turning over the password – electronic devices contain a wealth of information that is not obvious. That is why the current rules of civil procedure require a party to a lawsuit to turn over evidence in the form requested by the other party during discovery, as long as that form exists. If they want to discover a Word document, you cannot convert it to a PDF or print it out and be compliant if the other side says they want the Word-formatted document.

Turning over a device during discovery is even worse. Not only do you hand over all of the artifacts inside each document from its old versions, but now you also hand over artifacts on the disk – deleted files, for example – to the other side.
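
As a small illustration of how much a single file reveals beyond what is on screen: a modern Word document is just a ZIP archive, and its metadata part lists the author, the last editor, the revision count and edit times. A few lines of Python show it (the file name here is made up):

```python
# A .docx file is a ZIP archive; docProps/core.xml holds metadata that never
# appears in the printed or PDF version of the document.
import zipfile

with zipfile.ZipFile("board_memo_draft.docx") as docx:   # hypothetical file name
    print(docx.read("docProps/core.xml").decode("utf-8"))
```

Multiply that by every deleted file, browser cache and swap file on a turned-over device and you get a sense of what handing over the ESD really means.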

The bad news is that you cannot just go off and start wiping disks when you get notified that you may be party to a lawsuit. Companies have tried that, and judges don't take too kindly to it. They often instruct the jury to assume that the deleted evidence would likely have supported the other side – otherwise the party would not have deleted it. If it supported the party's case, they would want it in evidence.

On the other hand, if, as a matter of corporate practice, you have a document retention policy, document destruction policy, media destruction policy, media wiping policy, etc. and you regularly follow those policies, then the company cannot be accused of spoliation – the intentional destruction of evidence.  One caveat to that – once you have been notified that you are likely party to a lawsuit or likely to be charged with a crime, you have to suspend those policies if it is possible that following those policies will destroy evidence.

Still, you greatly reduce the chances of the wrong stuff falling into the wrong hands – including hackers – if you have and follow these policies religiously.  If you don’t have these policies, you should.

During the Microsoft antitrust trial, Microsoft turned over LOTS of emails that hurt its case. If those things had never been said in email in the first place or, at least, had been expunged in a timely manner as part of Microsoft's document retention and destruction policies, they would not have existed and Microsoft would not have had to turn them over.

And this applies to turning devices over to business partners as well.  Splurge.  Unwrap a new flash drive if that is how you are distributing the content to partners.  They are VERY cheap. Just put the content that you want to share on the drive.  If the partner gives the drive back to you, destroy it.  DO NOT REUSE it.  Trust me, this could be way cheaper than the consequences of saving a few bucks by reusing flash drives.


Information for this post came from Cyber Security Docket.


Washington Beginning To Look At Smart Car Cyber Security

The National Highway Traffic Safety Administration (NHTSA) held a forum yesterday to discuss cyber security and cars. The conclusion of the author of the article covering it is that cars will never be secure.

I don’t know if I am THAT pessimistic, but it is certainly a difficult problem because of conflicting requirements that cannot be easily satisfied at the same time.

That being said, at least people are starting to talk about it publicly, formally and seriously.  As in other 12 step programs, admitting that you have a problem is a critical first step.  Unfortunately, I think that the automakers are only at step one of their twelve step program.

The article summarizes the day in 7 points:

  1. Serviceability – the right of a car owner or third-party mechanic to fix the car makes securing it much harder. If you could just snap a padlock closed over it, securing it would be a lot easier.
  2. Software updates are a fact of life. In a high-end car with millions of lines of code, the likelihood of zero errors is, well, zero. If the car company can send you a flash drive to patch your car, like Dodge recently did with its trucks, what is to stop a hacker from doing the same thing and owning your car?
  3. Should software updates be mandatory or optional? The consensus of the group, apparently, was that not applying updates should void your warranty. Of course, that addresses cars in their first two or three years of life (I own 3 cars, none of which are under any warranty), but ignores the majority of the cars on the road. And that doesn't address the security of the update process (see the sketch after this list).
  4. Auto makers need to support the security research community. In 2015 there were several very public examples of researchers working with automakers, but my guess is that we only saw the tip of the iceberg. We need to do a lot more work in this area.
  5. The OBD-II port is a gaping security hole. Enough said. Gaping hole is polite. This HAS to be fixed, and soon.
  6. Car makers are conflicted over spending money on securing your car because there is no “car security certification” that would get them off the legal hook, and security is not like tailfins or power windows – you can't charge extra for it. It is a war that carmakers can't win and can't even charge to fight.
  7. Car makers don’t really comprehend the scope of the problem.  I liken this to PC security about 15 years ago – that is the level of appreciation of the problem.  Car makers figure that automobile cyber security hasn’t affected sales yet, but they do understand the potential for that and also the potential for Washington to “help them” run their business.  Still they are grappling with what and how much to do.
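
On the update-security point in item 3: the standard fix is for the manufacturer to cryptographically sign every update and for the vehicle to refuse anything that does not verify, no matter what flash drive it arrived on. Here is a minimal sketch of the verification side only, assuming an RSA public key already stored in the car and a detached SHA-256 signature shipped alongside the update (the names and key handling are illustrative, not any automaker's actual process):

```python
# Sketch of update signature verification; assumes the "cryptography" package
# and a manufacturer RSA public key already provisioned in the vehicle.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def update_is_genuine(update_path, signature_path, pubkey_path):
    public_key = serialization.load_pem_public_key(open(pubkey_path, "rb").read())
    update = open(update_path, "rb").read()
    signature = open(signature_path, "rb").read()
    try:
        public_key.verify(
            signature,
            update,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True          # install the update
    except InvalidSignature:
        return False         # reject the flash drive, whoever mailed it
```

None of that helps, of course, if the signing key leaks or the verification code itself has a bug, which is part of why item 4 (supporting researchers) matters.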

My assertion that automakers are, today, where PC software makers were 10 or 15 years ago, combined with the fact that each year there is more software in every new car, means that we should expect to see more demonstrations like the one we saw roll out in real time on national prime time TV (60 Minutes) last year.

May we live in interesting times.  We do.

Information for this post came from SemiWiki.


The Security Challenges Of WiFi Protected Setup (WPS)

If you have followed me for any time, you know that I often say that you can pick security or convenience, but not both.  Here is another example of that.

WiFi Protected Setup is a mechanism created by the manufacturers because users were having too much trouble setting up WiFi connections, which hurt sales.

In its most common configuration, typically enabled by default, the WiFi router or access point has a PIN which is printed on a label on the device. When you want to connect a new device, you enter this PIN and the WiFi router delivers the password to the device. Supposedly, only the owner (or anyone who has seen the router) would know the PIN.

In another mode, intended to improve security, the user has to press a button on the WiFi router to enable the feature. Not all devices support this mode – it was an enhancement added after people found out about the weakness of PIN mode.

A design defect in the specification allows an attacker to very quickly try all possible combinations. While an 8-digit PIN allows for 100,000,000 combinations, the last digit is a check digit, so there are really only 10,000,000. But it gets worse: the router will tell an attacker whether the first four digits match on their own – 10,000 possibilities. Then all that is left is four digits less the check digit, or three free digits, meaning another 1,000 combinations.

To add insult to injury, you can try all 10,000 + 1,000 = 11,000 combinations without the router shutting the attack down. That won't take very long.
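
If you want to check the arithmetic, the widely documented check-digit formula and the resulting search space fit in a few lines:

```python
def wps_checksum(first_seven):
    """Check digit for the first 7 digits of a WPS PIN (the commonly published algorithm)."""
    accum, n = 0, first_seven
    while n:
        accum += 3 * (n % 10)   # digits in odd positions are weighted 3
        n //= 10
        accum += n % 10         # digits in even positions are weighted 1
        n //= 10
    return (10 - accum % 10) % 10

pin = 12345670                          # example PIN that satisfies the check digit
assert pin % 10 == wps_checksum(pin // 10)

first_half  = 10_000   # the router confirms or denies the first 4 digits by themselves
second_half = 1_000    # 3 free digits; the 8th is forced by the checksum
print(first_half + second_half)         # 11,000 worst-case guesses
```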

Bell Canada announced this week that a couple of the routers that they give to their customers have an option to turn off WPS.  Only it doesn’t really turn WPS off.  It only turns off announcing WPS.  An attacker can still try the 11,000 possible combinations.

The real questions are (a) Does your WiFi device support WPS?  (b) Is WPS on?  (c) Can it be turned off?

The best answer is to use a WiFi router that doesn’t have WPS in it at all.
