Lenovo Settles With FTC Over Superfish

Some of you will remember that back in mid 2014 Lenovo added software called Visual Discovery by Superfish to hundreds of thousands of computers.  The purpose of Visual Discovery is to “help” you by intercepting your browser communications and either inserting ads into your web traffic or redirecting you to web sites that Superfish thinks you need to visit.

If the traffic to the original web site is encrypted, Superfish decrypts that traffic without telling you so that it can “help” you, and then re-encrypts it, often less securely due to flaws in the Visual Discovery software.
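Conceptually, this kind of interception is detectable from the client side: the certificate the browser sees is no longer signed by the site’s real certificate authority but by the proxy’s own root.  A minimal sketch of that check, where the host and issuer names are hypothetical examples and not real certificate pins:

```python
# Illustrative sketch: an interception proxy such as Visual Discovery
# re-signs HTTPS traffic with its own root certificate, so the issuer
# the browser sees changes. Comparing the presented issuer against the
# issuers you expect for a site exposes the swap.
# The host and issuer names below are hypothetical, not real pins.

EXPECTED_ISSUERS = {
    "example.com": {"Example Trusted Root CA"},
}

def looks_intercepted(hostname: str, issuer_cn: str) -> bool:
    """Return True if the presented issuer is not one we expect."""
    expected = EXPECTED_ISSUERS.get(hostname)
    if expected is None:
        return False  # no pin recorded for this host
    return issuer_cn not in expected

print(looks_intercepted("example.com", "Superfish, Inc."))          # True
print(looks_intercepted("example.com", "Example Trusted Root CA"))  # False
```

This is exactly the kind of mismatch that tipped off the researchers: every site, bank or hospital alike, suddenly chained up to the same Superfish root.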

In early 2015, the cat was let out of the bag by researchers and the media started reporting on what Lenovo was doing.  Lenovo tried, unsuccessfully, to do damage control and eventually released a utility that allowed people to uninstall the Superfish software.  Without this utility, there was, literally, no way to uninstall it.

Since they were intercepting users’ encrypted traffic, they likely had access to medical, financial and other sensitive information, all without obvious notice to the consumer.

It is likely that Lenovo didn’t think too much about what their partner, Superfish, was doing, didn’t think much about the security implications, apparently did not look at the coding techniques that Superfish had used and was likely only interested in the size of the commission checks they were cashing.  This is all speculation on my part, but I doubt that Lenovo gave Superfish access to hundreds of thousands of their customers for free.

Well the fallout has finally happened.  It took over two years, but Lenovo and the Federal Trade Commission have come to an agreement in the form of a consent decree.  A few of the highlights of the agreement:

  • Lenovo does not have to admit any guilt.  This is pretty typical.
  • Lenovo agrees that if they ever do anything that even remotely looks like this again – which I doubt, but you never know – they will make a clear and conspicuous disclosure and require the consumer to OPT-IN, not opt-out.
  • Again, if they do this again, they will give the consumer the ability to opt out at any point in time and also the ability to uninstall the software.  Neither of these was possible with Superfish, although there was a brief blurb when the browser was first fired up.
  • Lenovo is prohibited from making misleading representations regarding promotions like this.
  • Lenovo will implement and maintain a software security program designed to address software security risks and protect customers’ information.
  • They will identify a point person – the proverbial one throat to choke (or jail) to manage the program.
  • They will hire an outside expert to conduct software security audits every two years for the next twenty years.  That is a long time to have the FTC breathing down your neck.

Suffice it to say, this is a large pile of turds; Lenovo will spend millions of dollars and the FTC will be watching closely.  FOR THE NEXT TWENTY YEARS.

All this trouble to make a few bucks from ads to their customers.

The moral of this story is to think through the security implications of programs that hijack users’ traffic and carry significant privacy risks.

More than likely, any company that was considering doing something similar to what Lenovo was doing is reconsidering that plan.  It is just not worth the risk.

Information for this post came from the FTC web site.

Why Browser Extensions Are a Security Risk

Web browsers have become the center of our daily Internet universe.  But browsers, by themselves, are often not sufficiently powerful to do what people want them to do.  Enter the world of plugins, addins or browser extensions.  These little bits of code allow a browser to do things the browser vendor never designed it to do.

One of the most well known browser plugins is Adobe Flash.  Flash was invented years ago to handle video and to interact with the browser user in ways that were not otherwise possible at the time.  Now, HTML5 does most of what Flash can do and does it much more securely.

But Flash has been the subject of so many security holes (see this post for June’s bug patch fest – 36 in one month alone, and that was not unusual; in fact, we used to joke about the morning Flash patches followed by the evening Flash patches) that many people removed it from their browsers.  Last month Adobe announced the long awaited death of Flash, but not until the end of 2020.

But this post is not about Flash;  it is about browser plugins in general.

This problem is the kissing cousin to installing “apps” on your phone or tablet.  Both of them add security vulnerabilities to your computing environment.

In January Cisco revealed a security hole (see announcement) that would allow an attacker to use the Cisco Webex browser plugin to execute ANY ARBITRARY CODE on the user’s machine that the attacker wanted to execute.  Cisco released a patch in February for this hole.

Assuming that every single user who has installed the Webex extension in Chrome, Firefox or Internet Explorer (there are tens of millions of them) has patched it in each browser in which it is installed, they are safe from this particular bug.  But I am sure there are a lot of people who have not installed the new version, or maybe don’t even realize that they have the plugin installed.

Fast forward to July and Cisco revealed another security hole (see announcement) that allowed an attacker – drum roll please – to execute arbitrary code.  Cisco says this only affected Chrome and Firefox users and not Internet Explorer users.  In August Cisco released another patch.

This post is not about beating up Cisco for poor software security design.  It is about users understanding the security risks of installing software.  Every piece of software you install adds attack surface.  If you don’t patch each and every one of those plugins, you have made it easy for the bad guys.

So, there are two take-aways from this:

1a. Don’t install software, including browser plugins, unless you need them.

1b. If you don’t need the software anymore, uninstall it.

2. If you have the software installed then make sure that you patch it regularly.  Including browser plugins.
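Rule 2 is, at its core, a version comparison between what is installed and the latest known release.  A minimal sketch, where the plugin names and version numbers are made up for illustration:

```python
# Sketch for rule 2: a patch check compares the installed version of
# each plugin against the latest known release. The plugin names and
# version numbers below are hypothetical.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def needs_patch(installed: str, latest: str) -> bool:
    return parse_version(installed) < parse_version(latest)

inventory = {
    "webex-plugin": ("1.0.5", "1.0.12"),
    "flash":        ("26.0.0.131", "26.0.0.151"),
}

for name, (installed, latest) in inventory.items():
    if needs_patch(installed, latest):
        print(f"{name}: {installed} is out of date, patch to {latest}")
```

Note that the tuple comparison matters: a naive string comparison would wrongly rank “1.0.5” above “1.0.12”.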

Do you even know what browser plugins employees in your company may have installed?
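Answering that question starts with an inventory.  A minimal sketch that walks a browser extensions folder and reads each manifest for a name and version – the directory layout mirrors Chrome’s on-disk format (extension-id/version/manifest.json), but treat the paths and layout as assumptions to verify for the browsers you actually deploy:

```python
# Sketch of a plugin inventory: walk an extensions folder and read each
# manifest.json for a name and version. The layout assumed here
# (extension-id/version/manifest.json) mirrors Chrome's on-disk format;
# verify it for your own browsers before relying on it.

import json
from pathlib import Path

def list_extensions(extensions_dir: str) -> list:
    found = []
    for manifest in Path(extensions_dir).glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # skip unreadable or malformed manifests
        found.append((data.get("name", "?"), data.get("version", "?")))
    return sorted(found)
```

On a Linux machine, a Chrome profile typically keeps these under something like ~/.config/google-chrome/Default/Extensions (again, an assumption to verify for your install), and each browser and profile on each machine has to be scanned separately – which is exactly why this is hard to do completely.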

Many companies have software that looks for security updates.  The problem is that there are so many software products out there that no product provides 100% protection.  There are also products that do software asset tracking, but again, those don’t provide 100% coverage either.  This doesn’t mean that these software products are worthless.  What it does mean is that they are not a silver bullet.

For personal users, the free products cover a very limited subset of the software out there, much less than the paid products do.

Bottom line – see the two rules above.

Patching IoT Gets Out of Hand

In what may be a first-of-its-kind event, the FDA recalled a pacemaker from St. Jude, now owned by Abbott Labs.

Researchers discovered the flaws prior to Abbott’s acquisition of St. Jude and reported them to both the FDA and St. Jude.  Both decided to do nothing about it until the researchers went public.

In April of this year, the FDA put out a “warning” – also likely a first of its kind – that the devices, which can be controlled remotely, were likely hackable and also had a battery problem that could cause them to go dead – possibly along with the patient – before they were supposed to.  At that time Abbott said that they took security seriously and had fixed all the problems (see Fox Business).

Fast forward to this week and the FDA has now issued a recall of close to a half million of the supposedly fixed devices.

Since the devices are implanted inside people, the plan is NOT to perform a half million surgeries to remove them, but rather to go to their doctor to have the firmware in the device updated.

As I recall, one of the problems WAS this update capability.  The researchers were able, I think, to buy pacemaker programmers on eBay and reprogram any pacemaker from that manufacturer without authentication.  All they had to do was be in radio range of it.
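The missing control here was any proof that the programmer was authorized.  A minimal challenge-response sketch of the kind of check that was absent – this is illustrative only, not the actual St. Jude protocol, and the shared key stands in for a hypothetical provisioning step:

```python
# Illustrative challenge-response sketch: the device only accepts a
# reprogramming command if the programmer proves knowledge of a shared
# secret. This is NOT the actual St. Jude protocol, just the kind of
# authentication the researchers found missing.

import hashlib
import hmac
import os

SHARED_KEY = b"device-provisioned-secret"  # hypothetical provisioning step

def device_challenge() -> bytes:
    return os.urandom(16)  # fresh nonce per session, prevents replay

def programmer_response(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def device_accepts(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = device_challenge()
print(device_accepts(nonce, programmer_response(SHARED_KEY, nonce)))   # True
print(device_accepts(nonce, programmer_response(b"attacker", nonce)))  # False
```

With a scheme like this, merely being in radio range with an eBay programmer would not be enough; the attacker would also need the device’s key.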

Obviously, being able to reprogram the pacemaker (which has to be done in a facility that can control a patient’s heart rhythm while the pacemaker is being hacked.  Err, patched.  Err, upgraded) is a LOT safer than a half million surgeries, but still it is not without risk.

No clue what the cost of this little adventure will be, but it won’t be cheap.  Even if each doctor visit costs a hundred bucks – which is highly unlikely – that would still be a cost of $50 million for half a million devices.  If the cost is $500 per visit, the total would likely be in the $250 to $500 million range when you add legal fees, fines and support costs.
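The back-of-the-envelope math here is easy to check:

```python
# The arithmetic behind the cost estimates above: roughly half a
# million recalled devices, priced per doctor visit.
devices = 500_000
print(devices * 100)  # 50000000  -> $50 million at $100 per visit
print(devices * 500)  # 250000000 -> $250 million at $500 per visit
```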

One other interesting feature:  the researchers approached St. Jude about paying them a bug bounty, which is common in the tech world, and St. Jude decided not to.  Instead, the researchers approached Muddy Waters Capital, who sold the stock short and then announced the vulnerabilities.  When the stock price went down, which it did, Muddy Waters covered their short position and made out very nicely.  Muddy Waters and the researchers had a deal to split the profits in some fashion.  Some people thought that was a bit too capitalistic, but it is not illegal.  Maybe next time, companies will work with researchers when they approach them.

Information for this post came from The Guardian.

Courts Easing on Requirements For “Standing” in Breach Cases?

One of the things that has always been a barrier for people whose data was compromised during a breach is what lawyers call “standing”.  Standing derives from Article III of the U.S. Constitution.  The courts have said that there are three requirements for “standing” to bring an action against another party – injury in fact, causation and redressability.  I am not going to even try to pretend that I am a lawyer, but basically, it says that you have to suffer harm, that the harm can be reasonably linked to the action of the defendant and that a favorable court decision will reasonably redress the situation (Wikipedia).

For the most part, the courts have ruled that, most of the time, people do not have standing and therefore cannot sue.

In February, the Fourth Circuit Court of Appeals made it harder to show standing by ruling that plaintiffs have to show that the data thieves intentionally targeted the personal information that was stolen in the breach.  The decision centers on hypothetical future harm and whether you were actually injured.  There have been a number of court rulings like this (Fenwick and West).

However, there are more cases that are starting to rule in the other direction.  Not overwhelmingly, and ultimately it will likely have to be decided by the Supremes.

Earlier this week U.S. District Court Judge Lucy Koh ruled that a case against Yahoo due to the breaches in 2013, 2014, 2015 and 2016 can proceed, in part due to the actions of Yahoo in not disclosing for years that the breaches occurred.

Before this is blown out of proportion, Judge Koh is only a District Court judge.  On the other hand, she was the presiding judge in Apple v. Samsung and made companies like Adobe, Google and Intel bow to her will, so her opinion is not like that of some guy in a diner.

Verizon, who bought Yahoo, had hoped that this case would just go away, but at least, for right now, the case will move forward.

Judicial doctrine takes years, even decades, to create.  The doctrine in this case is no different.  When it comes to determining standing with respect to the Constitution, it will take time.  This is just another building block as the courts continue to figure this out.

When companies reimburse people after a credit card breach or offer them credit monitoring, it is to reduce the injury-in-fact part. This, in turn, makes it harder for people to have standing.

The Yahoo case is a little different.  Since they kept the breaches secret for years, didn’t offer to reimburse people and didn’t offer credit monitoring, they did little to reduce the injury-in-fact part.  In fact, they didn’t even tell people, so people could not do these things for themselves.

Companies have to make this particular decision all the time.  Do we disclose a breach or keep it secret?  Do we endure the bad P.R. or do we hope that word doesn’t get out?  In Yahoo’s case, as a result of that decision the shareholders got to take a $350 million haircut in the form of a reduced purchase price, along with having to own responsibility for certain legal costs associated with the breach.

As this case moves forward, other companies will be watching closely.  Again, this is just one piece in a very large puzzle.

Information for this post came from Reuters.