Insurance Companies Deny Cyber Insurance Claims

As I predicted (which did not require a large amount of clairvoyance) after the Cottage Health fiasco, insurance companies prefer depositing premium checks to writing claim checks, and they have begun to fight cyber insurance claims.  Since most people don't read their insurance policies and even fewer make sure that they are in compliance with the terms of the policy, this is kind of like taking candy from a baby – an unfair fight.

In the Cottage Health case, Cottage was breached and its cyber insurance carrier, a division of CNA, paid the $4 million claim.  CNA later said that Cottage was not in compliance with the terms of its policy and is suing to get its money, legal fees and other costs back.  That suit is currently withdrawn pending back-room negotiations between the two parties.

There are now two new lawsuits.

Ameriforge Group is suing Chubb after being suckered by a business email compromise (where a hacker convinces someone in the company to wire money somewhere because of a secret deal the CEO is supposedly working on).  Chubb says that the policy covers fraud (where someone writes a bogus check or wire, for example), but in this case an authorized employee got suckered and, sorry to be impolite, there is no sucker coverage in the policy.  The loss was around $500,000.

The second case is similar.

Last year, Chubb was sued by Medidata Solutions after it was suckered out of about $5,000,000 in a similar "super secret" deal.  Even though in this case the company said there was some hacking involved, Chubb said the employee voluntarily sent the money, so no coverage.

The moral in this story is that companies need to understand what coverage they have and what coverage they do not have.  Cyber risk insurance is not a standard form of insurance, so policy coverages vary significantly.

And, as Cottage Health discovered, even if you have coverage you have to make sure that you follow the rules if you want to get paid.

Information for this post came from Krebs on Security.

 

U.S. Discloses Zero-Day Exploitation Practices

The U.S. government acknowledged that it uses zero-day bugs not only for espionage and intelligence gathering, but also for law enforcement.  What else it uses them for is still unknown.

Last November, the government released a document titled Vulnerabilities Equities Process.  The document describes the policy, dating back to 2010, that allows agencies to decide whether to tell vendors about bugs they know about or to use those bugs as they see fit.

The document was redacted as the government claimed that confirming what everyone already knows – that they don’t always report bugs that they know about – would damage national security.  Not sure how that could possibly be, but that is what they claimed.

The government has removed some of those redactions and thereby confirmed what everyone already knew – that the government uses zero-day exploits so that the FBI and other agencies can hack into U.S. citizens' computers, hopefully with appropriate oversight – although the oversight process, if it exists, is still unknown.

The document says that there is a group within the government that reviews zero-days and decides how they will be handled and to whom they will be distributed.  The NSA, not surprisingly, is in charge of this group.

Before we beat up the U.S. government too much, likely every other government on the planet does the same thing – likely with similar rules of engagement.

Still, this release of information does settle the question of whether "We're from the government, we're here to help you" always holds.

Not always.

Open Source Software Does Not Solve All Of The World’s Problems

While I am not a Linux user personally, I am a big fan of it.  However, I am not delusional enough to think that just because a piece of software is open source, it is secure and bug free.

Anyone who thought that should have had those delusions ripped away when the Heartbleed bug was publicized.  For those readers not familiar with it, Heartbleed is the name given to a bug in OpenSSL, the wildly popular open source software that implements SSL/TLS, the protocol used to secure many HTTPS web sites.

It was thought that the bug affected around a half million to one million ecommerce web sites, many of which still have not been fixed 18 months later.

As popular as this software is, many, many people looked at it and even made contributions to it.  Still, this bug lived in the software from December 31, 2011 until a fix was released on April 7, 2014 (and of course "released" does not mean that everyone integrated it into the software that used the flawed version).
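If you want to check whether the machine you are on still links against a vulnerable OpenSSL, here is a minimal sketch (mine, not from any article).  It assumes your Python interpreter is linked against the system's OpenSSL and only flags the Heartbleed-affected range, 1.0.1 through 1.0.1f (fixed in 1.0.1g).

```python
# Minimal sketch: report the linked OpenSSL version and flag the
# Heartbleed-affected range (1.0.1 through 1.0.1f, fixed in 1.0.1g).
import ssl

print(ssl.OPENSSL_VERSION)                      # e.g. "OpenSSL 1.0.1f 6 Jan 2014"
major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO

# For the 1.0.1 branch, patch level 7 corresponds to the "g" release.
vulnerable = (major, minor, fix) == (1, 0, 1) and patch < 7
print("Potentially Heartbleed-vulnerable" if vulnerable
      else "Not in the vulnerable 1.0.1-1.0.1f range")
```

This only tells you what your own system links against; the servers you talk to are a separate question.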

To me, this proves that open source software, no matter the goals and desires of developers, may have security holes in it.

Fast forward to this week.

All versions of Linux released since kernel version 3.8 (released in early 2013, about three years ago) have a bug in the OS keyring, where encryption keys, security tokens and other sensitive security data are stored.

Whether hackers and foreign intelligence agents knew about this over the last few years or not is unknown, but we expect many Linux variants will release a patch this week.
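As a quick triage step, here is a minimal sketch (mine, not from the researchers) that checks whether a machine's kernel is in the affected range, 3.8 and later.  It cannot tell you whether your distribution has already shipped the keyring fix; for that you need your distro's advisory.

```python
# Minimal sketch: is this machine's kernel in the affected range (3.8+)?
import platform
import re

release = platform.release()                     # e.g. "4.2.0-23-generic"
match = re.match(r"(\d+)\.(\d+)", release)
if match:
    major, minor = int(match.group(1)), int(match.group(2))
    if (major, minor) >= (3, 8):
        print(f"Kernel {release} is in the affected range - check for the keyring patch")
    else:
        print(f"Kernel {release} predates 3.8 and is outside the affected range")
else:
    print(f"Could not parse kernel release string: {release}")
```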

More importantly, at least some versions of Android, which is based on Linux, also have this bug.  The researchers who found the bug said it affected tens of millions of Linux PCs and servers and 66% of all Android phones and tablets.

Google says that it does not think Android devices are vulnerable to this bug being exploited by third parties and that the total number of devices impacted is significantly smaller than the researchers thought.  In this case, I trust Google's researchers.  Google will have a patch available within 60 days, but getting that patch through the phone carrier release process could take a while.  I call this patch process TOTALLY BROKEN.  The only phones that we know will be patched quickly are Google Nexus phones, because Google releases those patches directly.

So, one more time, a major and highly visible piece of open source software is found to have a significant security hole for years.  This post talks about two examples, but there are many, many others.

If open source software as popular as Linux and OpenSSL has security holes, imagine the holes that MIGHT live in other, less popular open source software.  Some open source software might only be used by tens of people and only be looked at by one person.

The moral of this story is NOT that you should not use open source software;  it is no more or less risky than closed source software.  The moral is that you should ALWAYS consider the potential risks in using software and, to the maximum degree possible, test for and mitigate potential security bugs.  And be ready to deal with the new ones when they are found.

Information on the OS Keyring bug can be found here.

Information on Heartbleed can be found here.

Do We Really Know How Successful Hackers Are? No!

As we see the news of attacks day after day and think “can it get any worse?”, the reality is that likely, it is much worse than we think.

Buried in the mass of data released by the Sony hackers were some emails from VP of legal compliance Courtney Schaberg telling people inside the company, including chief counsel Leah Weil, that Sony had suffered another hack: credentials had been obtained by hackers, the affected accounts were now disabled, the attackers had uploaded malware, and some data had been compromised.

A follow-up email describes the data that was taken, along with an assessment that Brazil, where the attack took place, does not have a breach notification law, although it does have other privacy-related laws.

The email, labelled Privileged and Confidential, goes on to say that she recommends against telling people that their data was compromised because the law doesn't require it, the data taken wasn't terribly sensitive, and telling the people whose data was hacked wouldn't help them much in mitigating the damage.

Part of the logic in deciding whether to disclose or not was whether the media would out them.  In this case, the Brazilian media had not mentioned Sony by name after a reporter contacted them, so maybe they could squeak by.

In the U.S. only certain kinds of breaches (such as credit card data and health care data, among a few others) REQUIRE disclosure, and even then, many of the laws allow businesses to weigh the risk of the compromise to the victim when deciding whether to tell people that their data was hacked.

In the absence of a Federal law requiring all companies to fess up to breaches all the time, breaches will be under-disclosed.  After all, disclosing a breach that a company might otherwise be able to sweep under the rug will definitely cost the company more money and cause more problems, including lawsuits.

As a result, in addition to those breaches where the company doesn’t realize they have been hacked, there are likely many other breaches that go undisclosed following this very same logic that Sony used.

What percentage of the leaks are disclosed?  No one knows.  But probably way less than we think.

Information for this post came from Gawker.

Dell, Lenovo, AOL and Shodan Make Life Easy For Hackers and Foreign Intelligence Services

Here is an interesting group of vulnerabilities that make life easy for hackers and the Chinese (or Russians, or Ukrainians or pick your country).

  1. Dell has a couple of features in Dell Foundation Services.  One allows an unauthenticated user to get the Service Tag (Dell's version of a serial number) over the net.  With that, you can go to Dell's web site and get the complete hardware and software configuration of the computer – useful to hackers, intelligence agencies and scammers.  Another bug allows an attacker to remotely execute Windows WMI commands, which give access to the system configuration – including running processes and the file system – and the ability to remotely run programs.  Dell's service runs on port 7779 and provides a SOAP interface – for ease of exploit.  Err, ease of use.
  2. Lenovo has a bug in Lenovo Solution Center.  It listens on port 55555 and allows a remote attacker to execute any program with SYSTEM privileges, based on a whole series of flaws described in the article below.  It could also allow a local attacker to execute programs with more privileges than the user has.

Both of these, most likely, are done to make support easier for either the vendor or enterprise users – without regard to the security consequences.

In theory these ports should be closed from the Internet – but not always – read below.  Still, if an attacker gets onto your local network some other way, this is an easy way to increase the attacker’s footprint in your network.

  3. AOL Desktop, an absolutely antique piece of software from the early 1990s, is still being run by some users.  It was an early attempt to access the web graphically when the only connectivity most users had was slow dialup.  It uses a proprietary language called DFO, which allows AOL's servers to execute functions remotely on a user's desktop.  Given that this was written more than two decades ago, no one thought about requiring authentication, and it does not use SSL to protect the data stream.  This means that all an attacker needs to do is find a system that is still running this antique, and they can own it in a heartbeat.

Potentially, attacks from the outside should be mitigated by the user’s firewall, but apparently not always.
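If you want to see whether a machine on your own network is exposing the Dell or Lenovo support services, here is a minimal audit sketch (mine, not from the advisories) that probes the two ports named above.  The address is just an example; ideally both ports come back closed from anywhere other than the machine itself.

```python
# Minimal local audit sketch: do the support-service ports named above answer
# on a given host?
import socket

SUSPECT_PORTS = {
    7779: "Dell Foundation Services (SOAP)",
    55555: "Lenovo Solution Center",
}

def check_host(host: str, timeout: float = 2.0) -> None:
    for port, service in SUSPECT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            is_open = sock.connect_ex((host, port)) == 0   # 0 means the TCP connect succeeded
            print(f"{host}:{port} ({service}): {'OPEN' if is_open else 'closed/filtered'}")

check_host("192.168.1.50")                       # example address on a local network
```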

John Matherly of Shodan, the search engine for Internet-connected devices, did a quick search to see if he could find systems that responded.  For the Dell feature, he found around 12,800 web servers that responded on that port.  Of those, about 2,300 are running software that looks like it is from Dell.  He ran a quick script and was able to collect about 1,000 Dell service tags – quickly.  He didn't try this for the other exploits – that I know about.
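For the curious, the same kind of search can be reproduced with Shodan's official Python library (pip install shodan).  This is a minimal sketch, the API key is a placeholder, and the numbers you get back will differ from Matherly's over time.

```python
# Minimal sketch: reproduce the port-7779 search described above via the Shodan API.
import shodan

api = shodan.Shodan("YOUR_API_KEY")              # placeholder - substitute your own key

try:
    results = api.search("port:7779")            # hosts answering on the Dell Foundation Services port
    print("Total hosts responding:", results["total"])
    for match in results["matches"][:10]:        # first few matches
        print(match["ip_str"], match.get("org", ""))
except shodan.APIError as exc:
    print("Shodan error:", exc)
```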

Obviously, we did not know, until now, about these wonderful Dell, Lenovo and AOL features.  That doesn’t mean that hackers and foreign (or domestic) intelligence agencies didn’t know about them.

Why bother with really obscure and hard attacks to get into the computers you want to compromise when you can just, basically, walk in the front door?

The big question is how many more of these features exist that we have not found.

And since manufacturers have no liability as a result (other than getting a little bad press that blows over quickly), they have no incentive to do things securely.  And also, since they don’t even tell you that they are doing it, you as a user cannot make an educated decision as to whether you want the manufacturer’s “help” in this manner.

Soooooo, HOW MANY MORE FEATURES ARE THERE?  Features that are here today, or that will show up tomorrow, as vendors try to help users without considering the security implications.  And this is just from a quick roundup of the news that I happened to hear about today.

 

Information on the Shodan search can be found here.

For information on the Dell feature, go to LizardHQ.

For information the Lenovo feature, go to PC World.

Android.Bankosy Malware Defeats Two Factor Authentication

As businesses up their security act, the hackers are upping their act too.

Banks, for example, have added two factor authentication to make logins more secure.  In fact, companies from Amazon to PayPal have added optional two factor authentication.

So the hackers decided to up their game too.

Welcome, Bankosy malware.  This malware intercepts the text messages from the target web site, forwards them to Outer Timbuktu and then deletes them.  That way you don't even know that someone attempted to log in to the site.

So now the banks added voice authentication – instead of sending a text, they call you and a computer speaks the two factor code.

So the malware puts your phone on silent, locks the screen and forwards your calls to Outer Timbuktu, grabs the code and un-forwards your calls.  All while your phone is locked and on silent.

I have never tried to forward my cell phone, but after doing some research, I did find the codes to do that.  Sprint, for example, charges 20 cents a minute for forwarded calls – I am not sure what the justification for that is.  So not only do you lose your money, but you get to pay for the call as well.

This is the downside of using your phone for the second factor.  It is very convenient, but not so secure.  If you use a standalone RSA key fob, which generates the code locally and on which you cannot install or run other software, it is virtually impossible to hack.
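To illustrate what "generates the code locally" means, here is a minimal sketch of a time-based one-time password generator (RFC 6238 TOTP) – not RSA's proprietary SecurID algorithm, but the same idea: the code is computed on the device rather than delivered by text or voice call, so there is nothing for the malware to intercept in transit.  The Base32 secret below is just an example value.

```python
# Minimal sketch of local one-time-code generation (RFC 6238 TOTP).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval       # current 30-second time step
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                  # prints a 6-digit code that changes every 30 seconds
```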

If you put the second factor on a general purpose computing device, it is convenient, but, apparently, hackable.

Which means that if you do online banking, you should be careful about what apps you install on your phone – even if you do the online banking from another computer.

That is why we say that cyber security is like peeling an onion – when you peel away a layer you can’t see any difference and it is often accompanied by some crying.  But eventually, you do get the result that you want.  It just takes a while.

Information for this post came from Symantec.