Why Crisis Communications is Important

It used to be that large companies could control the news cycle.  Used to be, that is.

Now, with social media, no one is in control of the news cycle.

Dow Jones, the parent company of the Wall Street Journal, which you would think would know a thing or two about the news cycle, apparently has not sorted this out for itself yet.

So, what happened?

On May 30th, Upguard researcher Chris Vickery, who has been in the news regularly lately due to his findings, found a dataset in the Amazon cloud with incorrect permissions on it.  The dataset contained Dow Jones customer information and, due to this error, it was accessible for download by anyone who had an Amazon Web Services account – likely millions of people.  Vickery says that based on his analysis, he thinks data on around 4 million customers was exposed.  Dow Jones says that it wasn't that bad; their guess is that it only exposed data on 2.2 million customers.
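
For anyone wondering what "incorrect permissions" looks like in practice: the bucket granted access to Amazon's "Authenticated Users" group, which means any AWS account holder at all.  Below is a minimal sketch – assuming boto3 and a hypothetical bucket name, not Dow Jones' actual setup – of how you could check a bucket's ACL for that kind of grant.

```python
# Minimal sketch: flag S3 bucket ACL grants to "everyone" or "any AWS account".
# Assumes boto3 is installed and credentials with s3:GetBucketAcl are configured.
import boto3

RISKY_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",            # anyone on the internet
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",  # anyone with an AWS account
}

def audit_bucket_acl(bucket_name: str) -> list[str]:
    """Return a list of risky permissions granted on the bucket's ACL."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    findings = []
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in RISKY_GRANTEES:
            findings.append(f"{grantee['URI']} has {grant['Permission']}")
    return findings

if __name__ == "__main__":
    for finding in audit_bucket_acl("example-customer-data-bucket"):  # hypothetical bucket name
        print("RISK:", finding)
```

Running a check like that against every bucket you own takes minutes.  It is a lot cheaper than a week of bad press.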

For some reason, it took Dow Jones a week to change the permissions on this file.  A week.  Why did it take a week?  One possible reason might be tied to their head of communications' explanation that this wasn't really a big deal.  Just customer information.  Nothing to see, keep moving.

In this Amazon S3 bucket were multiple files.  Looking at the data, Chris found customer names, home and work addresses, Dow Jones account numbers, account details, the last four digits of credit card numbers, email addresses and other information.  There were many files in this bucket and Chris didn't download all of them, so who knows what else was there.

Dow Jones said that it wasn't a breach.  True, it wasn't.  Then again, no one said that it was a breach, only that people who should not be able to read the data could read the data.

Dow Jones called it a data over-exposure.  Well, certainly true – even though I have never heard that term used before.  Over-exposure is what happens when you stay out in the sun too long or set the controls on your camera incorrectly.  I have never heard anyone refer to leaking private customer information as a data over-exposure.

Dow Jones Director of Communications Steve Severinghaus said that the data was over-exposed only on Amazon and not on the Internet.  I guess we should feel better that only a few million people could download it rather than a few billion people.  There is some validity to that, but a few million is a large number in its own right.

Dow Jones said that they were not going to issue a public announcement (not to worry, it is all over the media, so an announcement is not really needed) because passwords and credit cards weren't leaked.  Probably, also, because they were hoping they could sweep this incident under the rug.

While Dow Jones’ Wall Street Journal may have a paywall to stop nosy people from reading about the breach, The Register, The Inquirer, SC Magazine, and Upguard do not have paywalls.

These are just a few of the things that Dow Jones did wrong.  You would think that they would have a crisis communications team.  We certainly tell our customers that they need to have one.  Maybe they do have one, but this incident just got out of control.

Any crisis communications team worth anything will tell you that hunkering down and hoping that no one will notice is a risky proposition.  It did not work here and likely won’t work for you.

The odd thing is that the WSJ ought to know better.  After all, they break embarrassing news stories for breakfast.  And lunch.  Even for dinner.

What were they thinking?

Information for this post came from SC Magazine, Upguard and The Register.

 


Anatomy of a Ransomware Attack

Lately we have had the opportunity to see inside some ransomware attacks and what the cost has been to businesses.  For example, I wrote about the Petya malware and what it did to the shipping giant Maersk and the law firm giant DLA Piper.

Now we get to find out what happened inside a different ransomware attack at KQED TV and Radio in San Francisco.

As you will see, the impact to this organization has been profound and is still not over.  They have chosen to make a number of security changes – after the horses are out of the barn and the barn has burned to the ground.  Probably not the best strategy, but better late than never.

The value in reading about their misery is to learn from their experience – so that you don’t have to repeat it.  Here goes:

On June 15th, more than a month ago, KQED was hit with a ransomware attack.  After consulting with the FBI, they decided not to pay the ransom.  They have been – slowly – rebuilding their entire infrastructure, piece by piece, and it is not done yet.

Now was the time to roll out the Incident Response Program, their Disaster Recovery program and their Business Continuity program.  Oh, wait, they didn’t have any of those.

Other than their Internet stream being down for half a day, they have not lost any broadcast time.  The pain, however, has been non-stop.

One of their reporters said it was like being bombed back 20 years, technology-wise.

The article says that they had up-to-date security systems – whatever that means – and that they reported about cyberattacks frequently and still got hacked.  It is important to understand what their definition of up-to-date security systems actually means, and reporting about cyberattacks as a concept is very different from practicing what you preach, as you will see below.  Still, there is some validity to the point.  Everyone has to up their game if they want to stay safe.

Having Incident Response, Disaster Recovery and Business Continuity programs would be a good start.

After the attack, email was down and so were all network connected devices.  Wireless was down for several days and email was down for two weeks.  What would that do to your company?

The day after the attack, reporters had to show up at 5 AM to redo a broadcast that had been recorded earlier, but lost in the attack.

For two weeks they had to record broadcasts at the University of California Hastings since their studio wasn’t operational.  At least they were still able to broadcast.

Even now, scripts are printed out on an old ink jet printer and placed in a box in the studio so that everyone can find them.

Timing of segments is not done by computer any more – now they are using a stopwatch.

Even getting in and out of the building was a challenge since the badge system was not working.

At the time, every computer was on the same network.  Now they are segmenting computers so that attacks that take out reporters' laptops cannot take out the studio.  That is considered normal best practice, but they were not doing it before the attack.

Just to be clear, no one thinks KQED was targeted.  It was, as the cops say, a crime of opportunity.  A crime which the employees, a month later, are still dealing with.

On the other hand, the staffers have gotten very creative.

Translate this to your company – think about what you would do if this was you rather than KQED.

Information for this post came from the San Francisco Chronicle.


Great Questions For Your Board to be Asking

If you don’t have a board, then the CEO would be a great person to ask these questions.  The key thing is that the CIO and CISO need to be able to answer them.  The questions came from (Dell) Secureworks.

If you are the CIO or CISO, you should ask and answer these questions before your CEO reads this post.

1. Do we have the visibility to detect the threats most relevant to us, whether that be everyday malware, nation states, cyber criminals, insiders or hacktivists?

So many times we hear about attackers that have been inside a company's systems for months or even years.  We have to get that dwell time down to days or even hours.

2. What do you assess our main cyber risks to be, how well protected against them are we and how are they changing? What gaps exist in current strategies and budgets?

The only way to deal with these threats is to put them out on the table.  Once we know what we are dealing with, we can begin to handle it.  The CEO and Board need to be on the hook for this – if they don't make this a priority and fund and staff it, then the breach is on their hands.

3. Are we prepared with a plan to deal with a breach? Do we know when this gets triggered and where responsibilities lie? Has it been tested?

The company's incident response program prevents an incident from becoming a crisis.  No program, no training, no team – that makes it very unlikely that you will avoid a crisis.

4. Do you feel security training is tailored and delivered to ensure that each workforce segment is aware of threat actors and their CURRENT tactics?

We still hear companies say that they get people into a dark room once a year and watch them fall asleep over PowerPoints.  Training has to be interactive, ongoing and engaging.  Do something every month.  Phish your employees every week.  The old methodology doesn't work any more.

Wherever you fit in the corporate or IT food chain, these are great questions to be considering.  While this is not a silver bullet, it will start some very useful conversations.

Information for this post came from Secureworks.


MasterPrints

Until a few days ago, I had never heard of MasterPrints.  Of course, there are many things that I have never heard about, but MasterPrints have to do with security.  Have you ever heard of them?

Here is the story.  Everyone is familiar with fingerprint sensors on cell phones and other devices.  They are used to make credit card payments, unlock phones, disable alarms and perform other sensitive transactions.

But how do these fingerprint sensors work?

Well it turns out that the sensor is so small that it cannot capture the entire fingerprint, so, instead, it captures multiple partial pieces of the fingerprint – say maybe 6.  If the system allows you to “enroll” more than one finger, you might have 12 or 18 partials.

Since fingerprint security is more about convenience than it is about security, the system will consider it a match if any piece matches any one of the stored pieces.

The Apple Touch ID is said to have a 1 in 50,000 chance of a false match.  That makes it five times more secure than the old, discarded 4-digit Apple PIN – and twenty times less secure than a 6-digit PIN.
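
The arithmetic behind those comparisons fits on a napkin; here it is spelled out (the figures are the published ones, the code is just the division):

```python
# Back-of-the-envelope comparison of false-match odds.
touch_id = 1 / 50_000       # Apple's stated Touch ID false-match rate
pin_4    = 1 / 10_000       # odds of guessing a 4-digit PIN in one try
pin_6    = 1 / 1_000_000    # odds of guessing a 6-digit PIN in one try

print(f"Touch ID vs 4-digit PIN: {pin_4 / touch_id:.0f}x more secure")   # 5x
print(f"Touch ID vs 6-digit PIN: {touch_id / pin_6:.0f}x less secure")   # 20x
```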

People have found many ways to attack password authentication over the years, but attacking fingerprint authentication this way is new.

What if one partial fingerprint matched pieces of many different fingerprints – maybe some of which are yours and some are not?  That is where the MasterPrint concept is born.

Not much research has been done regarding these small fingerprint sensors on phones.  Yet.

What if an attacker was able to lift a partial fingerprint from the owner of the device – say off a glass?  What happens to the probability of success then?

What if a researcher – or a hacker – could synthesize a MasterPrint – kind of like a skeleton key, but for fingerprints?  What then?

The team ran a series of tests and was able to produce matches where none should have existed in a significant percentage of their tests (around 7 percent).  That is a much higher false positive rate than the 1 in 10,000 chance of guessing a 4-digit PIN or the 1 in 1,000,000 chance of guessing a 6-digit PIN.

This area of fingerprint science is relatively new.  Compared to traditional fingerprint forensics, where the examiner has most or all of the fingerprint, this is very different.  In one test they used 12 partials, which means that matching just one of them – only about 8% of the enrolled fingerprint – counts as a match.
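
To see why storing lots of partials hurts, here is a rough model – my simplification, not the researchers' exact method – that assumes each stored partial has an independent chance of falsely matching:

```python
# Rough model (an assumption for illustration): if each stored partial has an
# independent false-match probability p, then matching ANY of k stored partials
# succeeds with probability 1 - (1 - p)**k.
def any_of_k_false_match(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

# Example: a 0.6% per-partial rate compounds to roughly 7% across 12 partials,
# in the same ballpark as the figure reported in the study.
print(f"{any_of_k_false_match(0.006, 12):.1%}")   # ~7.0%
```

The more partials the system stores, the more chances a MasterPrint gets.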

Of course you could make the matching threshold stricter; that would improve security, but it would also increase the rejection of fingerprints that should have matched.  Security or convenience – pick one.  Likely the smart thing to do in high-security situations is to tighten the threshold – assuming the system even allows the user to control it.  Most do not.

Alternatively, vendors could build larger and more precise fingerprint sensors.  That likely will happen, but it will take time.

In the meantime, users and system security pros need to consider the consequences of a false match and make decisions based on those consequences.  Security is never simple.

Information for this post came from a team at Michigan State University.

 


Who Controls The Cloud?

This story is fictitious, but something very similar has probably happened way too many times to someone.

The CEO/CIO of the company tries to log in to one of the cloud services that the company uses and it says that the password is incorrect.  He or she calls customer service and explains the situation.  "I was able to log in two days ago," the CEO says.

The rep says that  Mr. Disgruntled, the person listed as the primary contact, logged in two days ago and changed the master password.

The CEO says that Mr. Disgruntled was fired six months ago.  I own the company.  Make me the primary contact and remove Mr. Disgruntled.

The rep says he can't do that unless Mr. Disgruntled approves it.  In fact, since you are not the authorized owner, I can't really say anything else to you, but have a nice weekend.

What the rep might have said but never would say is try not to worry about Mr. Disgruntled destroying your data, contacting your customers and stealing your proprietary data.

But that is the reality of it, and we do see cases of this across the country.  Maybe Mr. Disgruntled told customers that he is still associated with the company, accepting orders and promising deliveries.  However, he is not, and the orders did not arrive.  And, not surprisingly, customers are not happy.

We have also seen cases of Mr. Disgruntled destroying data and costing companies tens to hundreds of thousands of dollars to recover.

You say that you are good because you have backups.  Maybe so, but does Mr. Disgruntled control those backups?  Could they be deleted?

The FBI and DHS issued an alert this week saying that there has been an increase in disgruntled ex-employees exploiting networks and disrupting them.

The article says that the Computer Fraud and Abuse Act protects employers in this case.  In concept that is true.  Employers can sue their former employees.  Maybe in a few years it will come to court.  Let's say you win – how exactly are you going to recover that money from someone who is likely "judgment proof"?  In theory criminal charges could be filed, but that still won't get your networks working again, and those cases take a long time as well.

All that assumes that you can prove that Mr. Disgruntled did what you think he did.  Juries are unpredictable.

The FBI says they have seen costs as high as $5,000,000 to recover.

So what is a person to do?

First, work with your cloud service providers to make sure that you can't be locked out of your own accounts.  That is often harder than you think, because the cloud provider will say that if you make Mr. Disgruntled an administrator, he can remove you from the account.  As I say, it may be hard, but it likely is possible.  Remember that you have to do this for every critical account.  And every important account.  Any account that you DON'T do this with is an account that you could possibly get locked out of.  Make sure that is an outcome you can live with.  Remember that Mr. Disgruntled might not be who you think it is.

Next, manage passwords and permissions.  That means tracking accounts and permissions for every service that you use.  That list has to be kept current.  Making sure that Mr. Disgruntled is not in charge of recording Mr. Disgruntled’s permissions is also difficult – but important.

When Mr. Disgruntled leaves, make sure that you disable his accounts but also change passwords on other accounts.  The challenge is knowing EVERY password that Mr. Disgruntled might know.  Two factor authentication might make this easier, but it is not a silver bullet.
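
Some of this can be scripted for cloud services that expose an API.  As one example only – AWS, using boto3, with a hypothetical username – here is a minimal sketch of disabling a departed user's access keys and console login.  Every other service you use needs its own equivalent.

```python
# Minimal, AWS-specific offboarding sketch: deactivate API keys, remove console login.
# Assumes boto3 and administrative IAM credentials.
import boto3

def offboard_iam_user(username: str) -> None:
    iam = boto3.client("iam")

    # Deactivate every API access key the user holds.
    for key in iam.list_access_keys(UserName=username)["AccessKeyMetadata"]:
        iam.update_access_key(UserName=username,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")
        print(f"Deactivated access key {key['AccessKeyId']}")

    # Remove the console password, if one exists.
    try:
        iam.delete_login_profile(UserName=username)
        print("Console login removed")
    except iam.exceptions.NoSuchEntityException:
        print("No console login profile found")

offboard_iam_user("mr.disgruntled")  # hypothetical username
```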

Make sure that any vendors with whom Mr. Disgruntled might have a relationship know that he has been terminated.

The article has even more suggestions, but this is not a simple problem.

Remember, too, that Mr. Disgruntled might not show up on your radar screen until he wipes out your accounting data.

You can deal with this now or deal with it later – your choice.

Information for this post came from JDSupra.


Verizon Loses Control of Customer Information

Different sources are reporting different numbers, but the personal information on between 6 million and 14 million Verizon Wireless customers has been exposed.

[Image: Verizon Store by Mike Mozart, Flickr, Creative Commons commercial license]

The information includes name, address, phone number, general information on calls made to customer service and, in some cases, the user’s security PIN.

The details of this are going to sound all too familiar.

  1. The data was stored in the Amazon cloud
  2. The data was not password protected
  3. The data was not encrypted
  4. The data was not stored there by Verizon, but rather by a third-party business partner.

The partner, Nice Systems of Israel, said that the data was exposed as a result of a configuration error.  I am reasonably confident that this is true, but that doesn’t seem to make any difference, really.
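
For what it is worth, the correct configuration is not complicated.  Here is a minimal sketch – boto3, with hypothetical bucket and file names – of uploading a data export privately and encrypted at rest, rather than relying on whatever the bucket happens to default to.

```python
# Sketch of the safer upload: explicit private ACL plus server-side encryption (SSE-S3).
# Bucket and file names are hypothetical.
import boto3

s3 = boto3.client("s3")

with open("customer-export.csv", "rb") as export_file:
    s3.put_object(
        Bucket="example-partner-export-bucket",  # hypothetical bucket
        Key="exports/customer-export.csv",
        Body=export_file,
        ACL="private",                  # no AllUsers or AuthenticatedUsers grants
        ServerSideEncryption="AES256",  # encrypted at rest with S3-managed keys
    )
```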

Like the recent discovery of the large Republican voter data leak, this leak was also discovered by Upguard; specifically researcher Chris Vickery.

Unlike some of the other leaks which got taken down immediately, it took Verizon 9 days to lock up this data.

Verizon is claiming that no data was “stolen”, but Vickery says that due to the nature of this Amazon S3 service, there is no way that Verizon could know that.  While both sides have a vested interest in this fight, I would tend to side with Upguard in this case.

This seems like a broken record to me –

What do you need to be doing –

#1 – You've got to set up a third party cyber risk management program.  Verizon is going to take the heat in this case, but it is NICE's screw-up.  The third party risk management program is designed to make sure that vendors have security controls in place.

Verizon is taking the heat because the customers have the relationship with Verizon, not NICE.  In fact, until today, most customers have never heard of NICE.  This is Verizon’s problem and they have to own it.  So far, all I have heard is a bit of spin – not to worry; nothing to see – keep moving.  That does not inspire confidence.

#2 – Amazon. Amazon. Amazon.  While this is definitely not Amazon’s fault, at this point, every company that uses any cloud services – or allows their business partners to use cloud services – needs to be checking cloud permissions very carefully.  With great power comes great responsibility.
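
If you want to know where you stand, the check is scriptable.  Here is a small sketch – boto3 again, read-only credentials assumed – that sweeps every bucket in an account and flags ACL grants to "everyone" or "any AWS account":

```python
# Sweep every bucket in the account and flag public or any-AWS-account ACL grants.
import boto3

PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",            # anyone at all
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",  # any AWS account
)

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    for grant in s3.get_bucket_acl(Bucket=name)["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            print(f"{name}: {grant['Permission']} granted to {grantee['URI']}")
```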

#3 – Have an incident response plan in place.  Verizon saying that there was nothing to worry about, without any explanation, isn't very comforting.  They need to work on their bedside manner (or in this case, their cloudside manner).  You have to give people a better story than "don't worry."

Why did it take Verizon 9 days to lock down this data?  Sounds like their incident response program needs some work.

While this could have happened to anyone – and has happened to several companies just in the last month – given all the occurrences that we have seen recently, companies need to step up their game or they will get skewered in the court of public opinion.

Information for this post came from Slashgear.

 
