This Week In Hacks and Breaches

Too many attacks to write about individually, so I am just going to write a short blurb on each with a link.  Oh, My!

British Airways – hackers accessed “tens of thousands” of frequent flyer accounts, forcing BA to lock down the system, denying users access and requesting that they change their passwords (see link).  This does not appear to be a hack of the BA system itself; rather, the accounts were accessed using compromised credentials (possibly harvested from compromised PCs or phones?).

Puush, the screenshot sharing platform, was hacked, and the Puush update process told users to uninstall the old version and install the new (infected) version (see article).  Puush is now telling users to install a newer, uninfected version.  Puush says that passwords stored locally and in your browser – all of them – may be compromised, so change them all.

GitHub, the open source code hosting site, was hit by the largest denial of service attack it has ever seen.  After 4 days, they seem to have gotten the attack under control (see article).  The good news is that GitHub’s defenses seem to be holding.  It is believed that the Chinese are mad that GitHub hosts programs that help users access banned sites.

The Indiana State Medical Association reported on March 26th that two backup drives with policy information for 40,000 people were stolen on February 13th.  Why they waited six weeks to report this is unclear.  The drives contained all the usual stuff – names, addresses, socials, and medical history.  The article does not say, but we should assume the drives were not encrypted (see article).

The Hill is reporting that thousands of Uber customer passwords are showing up for sale on the dark web.  They are cheap – selling for as little as a dollar each.

Uber says they were not breached.  Still, somehow, the userids and passwords are for sale.  The fact that Uber can’t find a breach also does not mean there wasn’t one.  Uber is particularly sensitive since the personal information for 50,000 of their drivers WAS taken from their servers last month.  That breach was not the work of a smart hacker, but rather of an employee (or ex-employee?) who posted the database credentials online.

A hacked Uber account is of limited value – you can use it to get an Uber cab, check a customer’s history and get their home address, among a few other things.

St. Mary’s Health reported that several employees’ userids and passwords were compromised as a result of an email hacking attempt (it sounds like it was not merely an attempt but a successful attack).  St. Mary’s said they found out about the breach on Dec 3, 2014 and on Jan 8, 2015 discovered that the email accounts of these employees contained protected health information for 4,400 patients.

This is small enough that I would not write about it normally, but it raises some questions.  It is vague, but it appears that protected health information was found in email.  Was it encrypted?  Is this a HIPAA violation on top of everything else?  Did they disclose this within the 60-day HIPAA requirement?  That is not clear.

I assume the data was not encrypted, but even if it was encrypted transparently, the hackers knew the userids and passwords of the users, so the encryption would not help you in the least.  This is why one has to be very careful when implementing encryption – it may give you some protection or just the illusion of protection.

In the “This is embarrassing” column, the Department of Justice is charging two former agents – one from the Secret Service and one from the DEA – with money laundering and wire fraud for stealing cryptocurrency (bitcoins) related to the Silk Road darknet takedown.  Both were involved in the investigation (see article).


Government “Equities Process” For Zero Day Vulnerabilities

Wired reported on the process that the U.S. Federal Government, and more specifically, the Intelligence Community, uses to decide when to keep bugs secret and use them against systems they want to attack and when to reveal them to the vendors to fix.

The bugs, known as zero days or 0-days, are ones that the vendor does not yet know about, so no patch exists.  According to the article, the government spends about $25 million to buy zero days, but there is not a lot of transparency in the process, so we don’t know if they are buying them from hackers or from services or a combination.  $25 Mil sounds like a lot to you and me, but to big companies, it is just a cost of doing business.

As early as 2008, the intelligence community figured out that they needed to have a policy regarding how they handle these bugs.  Since the NSA wears two hats – attacking systems and protecting systems – they have to decide whether to reveal a bug or keep it secret.  They decided to create a group inside the NSA’s Information Assurance Division to make these decisions.

Last year, the government intelligence reform committee reported that this process was flawed and needed to be rethought.  This goes back to reports from last year that the government knew about the OpenSSL Heartbleed bug for several years and used it rather than reveal it.  The government denied that, but doubts remain.

At that time, Michael Daniels, the President’s cybersecurity advisor, said that the government had a rigorous process for deciding which bugs to keep secret and which ones to reveal, but didn’t offer any details on that process or how many bugs they revealed vs. kept secret.

Last year, Daniels told Wired that the Equities Process had not been implemented to the degree that it should have been, and the process was moved out of the NSA and into the National Security Council.

The Wired article is an interesting insight into the challenges that the Intelligence Community has to face – choosing between protecting us and hacking into bad guys.

Retailers Ask Congress To Fix The Cyber Security Problem

The National Retail Federation, in testimony before Congress (see article), said that the government should expand protections for debit card users (Federal protections for debit card users are less than for credit card users), pass a national breach notification law and boost prosecution for cyber crimes.

The harder question is who is responsible for breaches.  Is it the software companies that make buggy software?  Is it the businesses that don’t install patches or take aggressive measures to protect consumers’ information?  Or is it consumers who choose passwords like 123456?

The answer to this is that all of these parties share blame and all of these parties need to take action to fix the problem.  Absent that, the bad guys will likely continue to win.  While consumers are not liable for more than $50 when hackers use their credit cards, those costs show up somewhere.  That somewhere is higher bank fees and prices at stores.

Would changing laws on debit cards have stopped the Target attack?  Would a national breach notification law have protected Sony or its employees?  Will more prosecutors or different laws stop the Chinese (if it is them) from attacking Anthem?  Unfortunately, the answer to all of these questions is no.

The only way we are going to make any impact on hacking is if we – businesses, software makers and consumers – start taking the right actions.

The article points out that some retailers, like Target, are swapping out mag stripe credit card readers for chip and PIN based readers.  These cards – already in use in many countries but not widely used in the United States – will, the article says and I agree, reduce credit card fraud because they are harder to counterfeit.

Let’s examine why those stores are doing that.

Merchants don’t want to get new credit card readers because they have to pay for them and train both employees and customers on how to use them.  This is especially painful for older people who did not grow up in the digital world.

So if this is true, why are businesses starting to replace their credit card readers?

Mastercard and Visa have changed the rules.  Effective October of this year, if credit card fraud takes place and the store does not use chip based credit card readers, the store eats the fraud rather than Mastercard and Visa (this is a slight simplification, but basically accurate).

You draw your own conclusions.

I suggest that people – software developers, businesses and consumers – will change their ways only when it is more painful or expensive not to change than to change.  Unfortunate but true.

My two cents.



Want To Hack Into A Car? Got $60?

Yup, that is all it takes.

Eric Evenchick will present at Blackhat Asia a $60, open source, car hacking tool (see article).  You have to provide your own USB and OBD2 cables.  With Eric’s CANCard and his library of Python based scripts, you can hack around in your car (or maybe someone else’s) and see what kind of havoc you can wreak.
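To get a flavor of what Python-based car hacking scripts actually do, here is a minimal, hypothetical sketch – not Eric’s actual code, and the function names are my own invention – that builds a standard OBD-II request frame and decodes a sample response.  This part is pure software; a real script would also push the frame onto the CAN bus through the adapter.

```python
# Hypothetical sketch of the kind of Python CAN/OBD-II scripting such a
# tool enables.  Pure software here - a real script would also send the
# frame on the bus through the hardware adapter.

def build_obd2_request(mode: int, pid: int) -> bytes:
    """Build the 8-byte data payload of a standard OBD-II request:
    byte 0 = number of meaningful bytes, then mode and PID, zero padded."""
    return bytes([2, mode, pid] + [0x00] * 5)

def decode_rpm(payload: bytes) -> float:
    """Decode engine RPM from a mode 0x01 / PID 0x0C response payload
    (the bytes after the length byte): RPM = (256 * A + B) / 4."""
    assert payload[0] == 0x41 and payload[1] == 0x0C  # echoed mode + PID
    a, b = payload[2], payload[3]
    return (256 * a + b) / 4

# 0x7DF is the standard OBD-II broadcast arbitration ID a real script
# would send this frame to; replies typically come back on 0x7E8.
frame = build_obd2_request(0x01, 0x0C)               # mode 01, PID 0x0C = RPM
print(frame.hex())                                    # 02010c0000000000
print(decode_rpm(bytes([0x41, 0x0C, 0x1A, 0xF8])))   # 1726.0
```

Reading engine RPM is about the most benign thing you can do; the same frame-building technique is what lets researchers poke at door locks and window controls.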

Before you panic, your car is not likely to be hacked because the car companies have one thing going for them.  Diversity.

Unlike your Windows computer or iPhone, there is a huge amount of variability between cars – between cars from different companies, between cars of the same company but different models, and between cars of the same model but different years.

That means that any hack you make might only work on a 2014 Ford Taurus – and not on a 2013 Taurus or 2014 Ford Escape and certainly not on a 2010 Chrysler 300.  Or it might.  It’s a crapshoot.

That also probably explains why it takes so long to get a new car from design to production – the designers insist on reinventing the wheel with every car.  Ever notice how many auto light bulbs or wiper blades there are in an auto parts store?

Still, for $60 plus a couple of cables you too can mess with someone’s car.  That has to increase the likelihood of people messing around.  And when they mess around they will find stuff.

Depending on the car companies’ attitude when hackers tell them about the problems they find, this could enhance reliability and security.

On the other hand, it may be hard for auto makers to patch your window control without having you bring the car into the dealership, which is expensive.  BMW very proudly patched a security hole in their telematics system (a fancy term for a cell phone built into your car and all the stuff connected to it – like GM OnStar or Ford Sync) without having owners bring their cars in.  High end cars are more likely to have telematics – but it is still an option in most cases.

And, if car companies can call your car and patch your window control, can hackers do it also?

Or maybe the hackers will decide to publicly disclose the security hole to embarrass the car companies into action.

Or maybe, they will report what they find to the National Transportation Safety Board.

These last two options probably will keep car executives up at night.

A bit scary.


Stingray Tracking Devices – Who’s Got Them

The ACLU put together an interesting web page (see here).  By surfing the web, they have put together a map with information – as best they have at the moment – on which states are using Stingrays to track citizens and which are not.  I say citizens and not crooks because a Stingray will collect data on every cell phone in, say, a 1 or 2 square mile area, as long as those phones are turned on.  What we don’t know are the specifics.  For example, does it collect data for just one carrier at a time, or for any phone on any carrier?



The map is interactive – if you click on a state, it will give you links to web pages with articles about some agency’s use of Stingrays.

In addition to listing what state agencies are using Stingrays, the web page also links to federal agencies (such as the FBI, DEA and Secret Service, among others) that have solicitations for procuring Stingray devices.

I think the cat is out of the bag.  I am sure that there is some crook somewhere who does not know about the use of cell phone trackers, AKA Stingrays, but certainly every big time crook is aware of them.  And I think most citizens also understand that a cell phone is a homing beacon and that the only way to stop it is to remove the battery (yes, turning it off doesn’t work – the baseband radio may still be on; sorry, iPhone users).

Amazon has a couple of dozen different Faraday bags to stick your phone and other electronic goodies in to contain the radio waves and shield them from EMPs (electromagnetic pulses).  I guess it is a big business.

It would be nice if departments and agencies would explain how they use Stingrays and how they manage the data they capture on citizens who are not suspected of any wrongdoing.

Android Allows App Hijacking On Install

A couple of months ago I wrote about an iPhone bug that allows users to unintentionally install rogue iPhone Apps (see post).

Well now Android users are getting hit with a similar attack.  Ars Technica is reporting on a newly found Android installer hijacking vulnerability (see article).

Like the iPhone bug, it only works if you install an app from somewhere other than the Google Play store.  Like the iPhone bug, the vulnerability lets the user think they are installing App A when in fact they are installing App B.  The mechanics are different from the Apple bug, but both stem from inadequate validation of the installer package at install time.
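At heart, this is a time-of-check / time-of-use problem: the package the user reviews and approves is not guaranteed to be the package that actually gets installed.  Here is a deliberately simplified, hypothetical Python sketch of that class of bug – this is NOT the actual Android PackageInstaller code, just an illustration of the pattern.

```python
# Hypothetical sketch of a time-of-check / time-of-use installer flaw.
# NOT the actual Android PackageInstaller logic - just the pattern.
import hashlib
import os
import tempfile

def vulnerable_install(path, expected_sha256, between_check_and_use=None):
    """Check-then-use installer: verifies the package hash, then re-reads
    the file to 'install' it.  The gap between the two reads is the bug."""
    with open(path, "rb") as f:                       # CHECK the package
        if hashlib.sha256(f.read()).hexdigest() != expected_sha256:
            raise ValueError("package failed verification")
    if between_check_and_use:                         # the attacker's window
        between_check_and_use(path)
    with open(path, "rb") as f:                       # USE whatever is here NOW
        return f.read()

# Simulate: the user approves "app A"; a malicious process swaps in "app B"
# between the verification step and the install step.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".apk")
tmp.write(b"app A")
tmp.close()

swap = lambda p: open(p, "wb").write(b"app B")        # the hijack
payload = vulnerable_install(
    tmp.name, hashlib.sha256(b"app A").hexdigest(), swap)
print(payload)                                        # b'app B'
os.unlink(tmp.name)
```

The fix, in general, is to verify and install the same bytes – for example, by copying the package somewhere an attacker cannot write before checking it.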

The bug was patched in Android 4.3_r0.9, but apparently some versions of 4.3 are still vulnerable.  Android 4.4 and Lollipop (5.0) are not vulnerable.

Unfortunately like some other Android bugs, this means about 900 million phones or 49% of all Android users are vulnerable.

If you steer clear of third party app stores you will not have a problem, even if you are running a vulnerable version of the Android OS.