Category Archives: Business Continuity

Why Hackers Win So Easily

It might be a fair fight if companies did just a few of the right things, but many of them do not.

There is a form of ransomware going around now that attacks web sites rather than workstations.  Encrypting all the data on your web site will probably make you willing to pay a bigger ransom than Joe’s PC in the marketing department.  This particular attack doesn’t try to compromise the operating system; it goes after the buggy plugins and add-ons that companies don’t seem to be able to patch.  In this case, described by Brian Krebs on his blog, one of the targets is the Magento shopping cart platform.  A patch was released in February of this year and the vulnerability was publicly disclosed in April, but many web sites still haven’t installed the patch.  PATCHING JUST THE OPERATING SYSTEM IS NOT ENOUGH – YOU HAVE TO PATCH EACH AND EVERY TOOL THAT YOU USE, AND YOU HAVE TO DO IT QUICKLY.
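To make that last point concrete, here is a minimal sketch of the kind of automated version check that catches an unpatched add-on before the attackers do.  Everything in it – the add-on names, version numbers and inventory – is hypothetical, and real patch tracking would pull from vendor advisories rather than a hard-coded list.

    # Minimal sketch: compare installed web site add-on versions against the
    # earliest versions known to contain the relevant security patches.
    # The add-on names, versions and inventory below are made up for illustration.

    def parse(version: str) -> tuple:
        """Turn a simple dotted version string like '1.9.2' into a comparable tuple."""
        return tuple(int(part) for part in version.split("."))

    # Hypothetical inventory of what is actually installed on the site.
    installed = {
        "shopping-cart": "1.9.0",
        "image-gallery": "2.4.1",
    }

    # Hypothetical "first patched release" list, maintained from vendor advisories.
    first_patched = {
        "shopping-cart": "1.9.2",
        "image-gallery": "2.4.1",
    }

    def unpatched(installed: dict, first_patched: dict) -> list:
        """Return the add-ons that are older than the first patched release."""
        return [name for name, version in installed.items()
                if name in first_patched and parse(version) < parse(first_patched[name])]

    if __name__ == "__main__":
        for name in unpatched(installed, first_patched):
            print(f"ALERT: {name} {installed[name]} is below patched version "
                  f"{first_patched[name]} - update it now")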

For some ransomware victims, the problem is even bigger.  Apparently the Power Worm ransomware has a bug in it so that even if you pay the ransom, the attacker is unable to decrypt your files.

Given that this is your web site, having it offline, even for hours (and if you don’t have good backups, then maybe for days or more) is likely a problem for your business.

Now back to how the hackers get in.

They use the unpatched vulnerabilities to hack your own company’s web site.  Then they add a new page that looks like all of the other pages on your site.  Finally, they phish your employees to get them to click on a link to a web page on your own company’s web site and poof, they are in.  Once they control one machine, they escalate their permissions and propagate themselves all over your network.  It can happen VERY quickly.
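One simple counter to the “new page that looks like all the other pages” trick is a file integrity check: take a baseline of what is supposed to be on the web server and alert on anything new or changed.  Below is a minimal sketch; the document root and baseline paths are placeholders you would replace with your own, and a real deployment would store the baseline somewhere the attacker can’t rewrite it.

    # Minimal sketch of a web-root integrity check: hash every file under the
    # document root and compare against a previously saved baseline, so a page
    # an attacker quietly adds shows up as "new". Paths here are illustrative.

    import hashlib
    import json
    from pathlib import Path

    WEB_ROOT = Path("/var/www/html")                 # assumption: your document root
    BASELINE = Path("/var/lib/site-baseline.json")   # assumption: baseline location

    def snapshot(root: Path) -> dict:
        """Map each file's relative path to its SHA-256 hash."""
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()
        }

    def compare(baseline: dict, current: dict):
        new = sorted(set(current) - set(baseline))
        changed = sorted(f for f in baseline if f in current and baseline[f] != current[f])
        return new, changed

    if __name__ == "__main__":
        current = snapshot(WEB_ROOT)
        if not BASELINE.exists():
            BASELINE.write_text(json.dumps(current))   # first run: save the baseline
        else:
            new, changed = compare(json.loads(BASELINE.read_text()), current)
            for f in new:
                print(f"NEW file on the web server: {f}")
            for f in changed:
                print(f"CHANGED file on the web server: {f}")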

So, what mistakes do companies make?

  1. Underestimating the risk of insecure web applications. This means that you have to have a security development life cycle, test your applications and apply patches, among other things.
  2. Lack of continuous monitoring.  If you are not watching in real time what is going on in your network, you have made it pretty easy for the attackers (a small sketch of what that kind of watching might look like follows this list).   Testing your web site once or even twice a year is a guaranteed fail.
  3. Lack of a disaster recovery, business continuity and incident response plan.   If you don’t plan for it and don’t test the plan, then when the kaka hits the rotating-air-movement-devices (aka when the sh*t hits the fan), you will be that proverbial deer in the headlights.
  4. Letting convenience or features that marketing wants always win out over security.  That gives the hackers a free pass.  It does not mean that security should always win, but you need a clear process for evaluating security issues and deciding which risks you are willing to accept and which you are not.
  5. Not dealing with third party security issues.  Whether it is vendor risk management (think the Target, Home Depot or OPM breaches) or third party software bugs (like the Magento bug described above), problems with a third party are your problems, and if your contracts are not written correctly, you probably don’t even have any recourse to go after them for damages.  Most software licenses say that the vendor does not warrant that the software works correctly – you use it at your own risk.  If that software (or a third party vendor) lets a hacker in, good luck getting any money out of them.
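On the continuous monitoring point (item 2 above), the idea is simply that something is watching the logs all the time instead of twice a year.  The sketch below tails a web server log and alerts on a burst of failed logins from one address; the log path, log format, threshold and alert mechanism are all simplified stand-ins for whatever your environment actually uses.

    # Minimal sketch of continuous monitoring: follow the web server log and
    # alert when one source IP generates a burst of failed logins.
    # Log path, log format, threshold and alerting are illustrative only.

    import re
    import time
    from collections import Counter, deque

    LOG_FILE = "/var/log/nginx/access.log"      # assumption: your web server log
    FAILED_LOGIN = re.compile(r'"POST /login[^"]*" 401')
    WINDOW_SECONDS = 300
    THRESHOLD = 20                               # failures per IP per window

    def follow(path):
        """Yield lines as they are appended to the log file, like `tail -f`."""
        with open(path) as f:
            f.seek(0, 2)                         # start at the end of the file
            while True:
                line = f.readline()
                if line:
                    yield line
                else:
                    time.sleep(1)

    def monitor():
        events = deque()                         # (timestamp, ip) pairs
        for line in follow(LOG_FILE):
            if not FAILED_LOGIN.search(line):
                continue
            ip = line.split()[0]
            now = time.time()
            events.append((now, ip))
            while events and events[0][0] < now - WINDOW_SECONDS:
                events.popleft()                 # drop events outside the window
            counts = Counter(addr for _, addr in events)
            if counts[ip] >= THRESHOLD:
                print(f"ALERT: {counts[ip]} failed logins from {ip} "
                      f"in the last {WINDOW_SECONDS // 60} minutes")

    if __name__ == "__main__":
        monitor()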

So this is an opportunity to tighten things up and make it a little harder for the bad guys.  Maybe they will go after some other company rather than yours.

Information for this post came from Krebs on Security and CSO.


Your Air Safety Is Dependent on Windows 3.1 – And Vacuum Tubes

As if Paris didn’t have enough problems, Paris’ Orly Airport had to close briefly last week because a Windows 3.1 system that sends Runway Visual Range information to pilots failed.  Windows 3.1 dates back to 1992.  The French air traffic control union said that Paris airports run systems on four operating systems, including Windows 3.1 and XP, all of which are between 10 and 20 years old.  The systems should be upgraded sometime between 2017 and 2021, depending on who you talk to.

But don’t beat up the French too much.  Until the late 1990s or early 2000s, the FAA was still using systems running with VACUUM TUBES.  Seriously.  For a while, the U.S. Government was the largest user of vacuum tubes, which had to be specially made for them.

And many of you probably remember last year when a mentally ill technician attempted suicide after setting fire to an Air Route Traffic Control Center outside Chicago.  Air traffic around the country was screwed up for weeks.

Fundamentally, there is a lot of critical infrastructure in the U.S. and around the world that is older than most of the readers of this blog.  Software that is 20, 30 or even 40 years old is not likely to be as secure, reliable or robust as software built today.  However, whether it is inside power plants, trains, or air traffic control systems, it is what we have.

From a hacker standpoint, that is a dream.  Much of the software was designed and built pre-Internet, but much of it is connected to the Internet anyway.  Which is why Admiral Rogers, head of the NSA, told Congress recently that he is convinced that there are several countries that have the ability to take out pieces of our critical infrastructure.  Several today.  Probably more soon.

Unfortunately, there is so much of it and the critical points are almost all under private ownership.  Nationwide, we are talking hundreds of thousands of pieces of infrastructure – drinking water, gas, electric, waste water, etc.

Unless we get serious about upgrading it, some hacker is going to get there first.  That is not a comforting thought.

Information for this post came from ARS Technica, Baseline and Wired.


4 in 10 Businesses With Cyber Insurance Have Filed A Claim

A Wells Fargo survey of 100 large and mid-market companies found that 85% have purchased cyber insurance and that more than 4 in 10 have already filed a cyber insurance claim.

While that survey didn’t ask how much the claims were for, a NetDiligence study says the average claim is about $5 million.

There are a lot of factors that affect the cost of cyber insurance, but a realistic guideline is about $2,000 per million dollars of coverage; the actual number can vary a lot depending on those factors.
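As a rough worked example of that rule of thumb (assuming, as is typical, that the figure is an annual premium – real quotes vary widely):

    # Rough premium estimate using the ~$2,000 per $1 million of coverage
    # guideline mentioned above. Real quotes depend on many factors.

    RATE_PER_MILLION = 2_000          # dollars per year per $1M of coverage (assumption)

    def estimated_premium(coverage_millions: float) -> float:
        return coverage_millions * RATE_PER_MILLION

    for coverage in (1, 5, 10):
        print(f"${coverage}M of coverage -> roughly ${estimated_premium(coverage):,.0f} per year")
    # $1M  -> roughly $2,000 per year
    # $5M  -> roughly $10,000 per year
    # $10M -> roughly $20,000 per year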

As insurers pay more claims, they are also raising premiums.  Insurance companies raised premiums 32% in the first half of 2015 alone for high-risk businesses such as retailers.  They are increasing deductibles as well – Anthem had to agree to a $25 million deductible to get its policy renewed.  Businesses that do make a claim may discover that their policy won’t be renewed at all, or that the renewal price is out of their budget.

All of the breach-related lawsuits are not making insurance companies happy either.  They get to pay the legal fees in addition to the damages and judgments.  For the bigger policies, legal fees are above and beyond the policy limits – on a $10 million policy the insurer might have to pay out $10 million for remediation and recovery and maybe another $10 million for legal fees.

Another scary statistic: Lloyd’s of London modeled an attack on the power grid that left 93 million people in the NY-DC corridor in the dark.  The estimated cost ranged from $250 BILLION to $1 TRILLION.  That is based on a hack that causes an extended outage.  If generation and distribution equipment is damaged to the extent that it has to be replaced, it could take a year or more to order and install new equipment – most of it is custom built, you have to wait in line, and almost none of it is built in the U.S.

Admiral Mike Rogers, head of the NSA, said that there are several countries that already have the ability to shut down the computers that manage the U.S. power grid.  Depending on how much damage is done, it could take months, or even a few years, to repair it all.

In 2007, the Idaho National Laboratory demonstrated the ability to make a generator destroy itself by hacking it.  The video is available on YouTube.

Unfortunately, this is only going to get worse before it gets better.

Information for this post came from NBC.


What Happens When Online Services Go Down?

This afternoon, Google Apps went down for a few hours.  Judging by the activity in the Twitterverse, you would have thought the world had ended.  You can see the outage for yourself on Google’s Apps Status page (google.com/appsstatus).

[Image: Google tweet about the outage]

It appears that Google Docs, Sheets, Drive and other parts of the Google Apps universe were down for 2-4 hours this afternoon, depending on which app and which user.

While that is not the end of the world, it certainly is inconvenient and if you needed to either work on or deliver a file which is stored in the cloud, it was probably a problem for you.

Most users probably just left early on a Friday, especially on the East Coast, where sanity didn’t return until 5 PM.

There is a moral here.  Having a business continuity plan is always a good thing.

While storing things in the cloud is convenient – I do it myself – it does mean that if the vendor has an outage – and every one of them will at some point in time – you may well not be able to get to that file or service until it is repaired.

This is true for Amazon Web Services, Google Apps, Microsoft Azure, Salesforce and everyone else – nothing is 100% available.

Also remember that the cloud is likely more reliable than your own internal servers.  If your laptop, tablet or server crashes, assuming a reboot doesn’t fix it, how long will you have to go without?  For most vendors, if you pay a lot, you may get a technician on site in, say, 4 hours.  That does NOT mean the part you need will arrive with him – it might not show up until tomorrow or the next day.

So this doesn’t mean that the cloud is bad.  Or good.  It means that technology is imperfect and you need to consider the consequences of an outage, assume that it is going to happen and have a “Plan B”.
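As a tiny, concrete example of a “Plan B” for the file-in-the-cloud case: before depending on the cloud copy, check whether the service is reachable and fall back to a locally cached copy if it is not.  The URL and paths below are placeholders, not a real service, and the caching itself has to happen while the cloud is still up.

    # Minimal "Plan B" sketch: try the cloud copy of a critical file first and
    # quietly fall back to a locally cached copy when the service is unreachable.
    # The URL and paths are placeholders for illustration only.

    import shutil
    import urllib.error
    import urllib.request
    from pathlib import Path

    CLOUD_URL = "https://example.com/shared/critical-proposal.docx"   # placeholder
    LOCAL_CACHE = Path.home() / "offline-cache" / "critical-proposal.docx"

    def fetch_with_fallback(url: str, cache: Path) -> Path:
        """Return a path to the freshest copy of the file we can get."""
        cache.parent.mkdir(parents=True, exist_ok=True)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp, open(cache, "wb") as out:
                shutil.copyfileobj(resp, out)       # refresh the local cache
            print("Using the cloud copy (local cache refreshed).")
        except (urllib.error.URLError, TimeoutError):
            if not cache.exists():
                raise RuntimeError("Cloud is down and there is no local copy - no Plan B.")
            print("Cloud unreachable - falling back to the cached copy.")
        return cache

    if __name__ == "__main__":
        print(fetch_with_fallback(CLOUD_URL, LOCAL_CACHE))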

For some people, Plan B might mean call it a day.  However, if the outage affects the way that your customers connect with you or how your team supports your customers, that particular Plan B might not be the best answer.

THAT is why you need a business continuity PLAN.  For some applications, waiting is probably a perfectly acceptable plan – for a certain amount of time.  An hour.  A day. A week.  Likely not a month.  For other applications, that might be a terrible plan.

And planning is usually way better than running around the house or office doing your best Chicken Little imitation.  No, the sky is not falling.  But it might be very cloudy.  Or not cloudy enough.
