Tag Archives: incident response

Dolce and Gabbana Needs a Better Incident Response Program

Stefano Gabbana is known for very edgy ads and posts on social media.  Some people say over the edge – way over the edge.

The brand ran a series of commercials showing Chinese people eating pizza and other Italian foods with chopsticks on the eve of a star-studded fashion show in Shanghai.  I suspect someone thought that it was something the Chinese would find funny (?).

Then Gabbana’s Instagram account sent out racist taunts to people who were complaining about the ad campaign.

The company’s response was to claim that both Stefano’s and the company’s Instagram accounts were hijacked.  Few people believed that.  Stefano posted a note making that claim on his Instagram account afterward.

If there is one thing the Chinese are, it is loyal to their country.  Models pulled out of the show.  Then celebrity guests pulled out.  The show was cancelled less than 24 hours before it was scheduled to go on.

Now D&G merchandise is being pulled from store shelves and removed from web sites.  A full-scale disaster for the company.

So what lessons are there to learn from this?

The obvious one is that if your strategy for getting attention is edgy commercials and racist social media posts, you might want to rethink that, especially in certain countries.

In reality, most companies don’t do that, at least not on purpose.

The bigger issue is how to respond to cyber incidents.

Let’s assume their accounts were hijacked.  It is certainly possible.  Obviously, you want to beef up your social media security if you are doing things that might attract attackers, but more importantly, nothing is bulletproof in cyberspace, so you need an incident response program to deal with it.

That incident response program needs to deal with the reputational fallout of events that may or may not be in the company’s control.  Crisis communications is a key part of incident response.

The incident response team needs to be identified and then the team members need to be trained.  That can be done with “table-top” exercises.

Bottom line: prepare for the next cyber event.

Information for this post came from SC Magazine and the New York Times.

 


Cathay Pacific is Beginning to Fess Up and it Likely Won’t Help Their GDPR Fine

As a reminder, Cathay Pacific Airways recently admitted it was hacked and lost data on over 9 million passengers.  Information taken includes names, addresses, passport information, birth dates and other personal information.

They took a lot of heat for waiting 6 months to tell anyone about it (remember that GDPR requires you to tell the authorities within 72 hours).

Now they are reporting on the breach to Hong Kong’s Legco (their version of Parliament), and they admitted that they knew they were under attack in March, April and May AND that it continued after that.  So now, instead of waiting 6 months to fess up, it turns out that they waited 9 months.

They also admitted that they really didn’t know what was taken and they didn’t know if the data taken would be usable to a hacker as it was pieces and parts of databases.

Finally, they said that after all of that, they waited even longer to make sure that the information they were telling people was precisely accurate.

Now they have set up a dedicated website at https://infosecurity.cathaypacific.com/en_HK.html for people who think their data has gone “walkies”.

So what lessons can you take away from their experience?

First of all, waiting 6 months to tell people their information has gone walkies is not going to make you a lot of friends with authorities inside or outside the United States.  9 months isn’t any better.

One might suggest that if they were fighting the bad guys for three months, they probably didn’t have the right resources, or enough resources, on the problem.

It also means that they likely did not have an adequate incident response program.

Their business continuity program was also lacking.

None of these facts will win them brownie points with regulators, so you should review your programs and make sure that you could effectively respond to an attack.

Their next complaint was that they didn’t know what was taken.  Why?  Inadequate logs.  You need to make sure that you are logging what you should be in order to respond to an attack.
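To make that concrete, here is a minimal sketch of what application-level audit logging might look like, written in Python.  The field names and the local log file are purely illustrative; they are not a reference to anything Cathay Pacific actually runs.

```python
# A minimal sketch of structured audit logging, assuming a Python application.
# The field names (user, action, record_id, source_ip) are illustrative.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
# In real life, ship these records to a central log server the attacker can't
# easily erase; a local file is used here only to keep the example small.
audit_log.addHandler(logging.FileHandler("audit.log"))

def audit(user: str, action: str, record_id: str, source_ip: str) -> None:
    """Write one machine-parseable audit record."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,          # e.g. "read", "export", "delete"
        "record_id": record_id,
        "source_ip": source_ip,
    }))

# Example: record that a passenger record was read.
audit("agent42", "read", "passenger/123456", "10.0.0.7")
```

The point is not the specific code; it is that the records exist, are kept somewhere safe and can answer “who touched what, and when” months later.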

They said that they wanted to make sure that they could tell people exactly what happened.  While that is a nice theory, if you can’t do that within the legally required time, that bit of spin will cost you big time.

Clearly there is a lot that they could have done better.

While the authorities in Europe may fine them for this transgression, in China they have somewhat “harsher” penalties.  Glad I am not in China.

Information for this post came from The Register.

 

 


Incident Response 101 – Preserving Evidence

A robust incident response program and a well trained incident response team know exactly what to do and what not to do.

One critical task in incident response is to preserve evidence.  Evidence may need to be preserved based on specific legal requirements, such as for defense contractors.  In other cases, evidence must be preserved in anticipation of being sued.

In all cases, if you have been notified that someone intends to sue you or has actually filed a lawsuit against you, you are required to preserve all relevant evidence.

This post is the story of what happens when you don’t do that.

In this case, the situation is a lawsuit resulting from the breach of one of the Blue Cross affiliates, Premera.

The breach was well covered in the press; approximately 11 million customers’ data was impacted.

In this case, based on forensics, 35 computers were infected by the bad guys.  In the grand scheme of things, that is a very small number of computers to be impacted by a breach; sometimes a breach infects thousands of computers in a big organization.  The fact that we are not talking about thousands of computers may not make any difference to the court, but it does make what comes next more embarrassing for Premera.

The plaintiffs in this case asked to examine these 35 computers for signs that the bad guys exfiltrated data.  Exfiltrated is a big word for stole (technically, uploaded to the Internet in this case).  Premera was able to produce 34 of the computers but, curiously, not the 35th.  They also asked for the logs from the data protection software that Premera used, called Bluecoat.

This 35th computer is believed to be ground zero for the hackers and may well have been the computer where the data was exfiltrated from.  The Bluecoat logs would have provided important information regarding any data that was exported.
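As an aside, here is the kind of question those proxy logs exist to answer.  This is only a sketch; it assumes a simplified CSV export with made-up column names (client_ip, destination, bytes_out), not Bluecoat’s actual log format.

```python
# A rough sketch of hunting for exfiltration in proxy logs: add up the bytes
# sent out per (internal machine, destination) pair and look at the biggest
# talkers. The column names are hypothetical, not Bluecoat's real schema.
import csv
from collections import Counter

uploads = Counter()
with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        uploads[(row["client_ip"], row["destination"])] += int(row["bytes_out"])

# Show the ten pairs that pushed the most data out of the network.
for (client, dest), total in uploads.most_common(10):
    print(f"{client} -> {dest}: {total / 1_000_000:.1f} MB uploaded")
```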

Why are these two crucial pieces of evidence missing?  No one is saying, but if they contained incriminating evidence, or evidence that might have cast doubt on the story that Premera is putting forth, making that evidence disappear might seem like a wise idea.

Only one problem.  The plaintiffs are asking the court to sanction Premera and prohibit them from producing any evidence or experts to claim that no data was stolen during the hack.

The plaintiffs claim that Premera destroyed the evidence after the lawsuit was filed.

In fact, the plaintiffs are asking the judge to instruct the jury to assume that data was stolen.

Even if the judge agrees to all of this, it doesn’t mean that the plaintiffs are going to win, but it certainly doesn’t help Premera’s case.

So what does this mean to you?

First you need to have a robust incident response program and a trained incident response team.

Second, the incident response plan needs to address evidence preservation, and that includes a long-term plan to catalog and preserve evidence.
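One small, concrete piece of that cataloging work is recording a cryptographic hash of every artifact you collect, so you can later show that the evidence has not been altered since it was preserved.  The sketch below assumes the artifacts simply sit in an evidence/ directory; the manifest format is an illustration, not a forensic standard.

```python
# A minimal sketch of cataloging preserved evidence: hash every collected
# artifact into a manifest so you can later demonstrate it hasn't changed.
# The directory layout and manifest columns are assumptions for illustration.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_DIR = Path("evidence")          # e.g. disk images, exported logs
MANIFEST = Path("evidence_manifest.csv")

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large disk images don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

with MANIFEST.open("w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["collected_at_utc", "file", "size_bytes", "sha256"])
    for item in sorted(EVIDENCE_DIR.rglob("*")):
        if item.is_file():
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),
                str(item),
                item.stat().st_size,
                sha256_of(item),
            ])
```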

Evidence preservation is just one part of a full incident response program.  That program could be the difference between winning and losing a lawsuit.

Information for this post came from ZDNet.

 

 


Why Your Incident Response Program is Critical

Police think that hackers hacked the pumps at a Detroit-area gas station, allowing drivers to get free gas.

Ten cars figured it was okay to steal gas from “The Man” to the tune of about 600 gallons.  While 600 gallons of gas is not the end of the world, it does make a point.

The article said that the pump gave away free gas for 90 minutes before the gas station attendant was able to shut it off using something called an emergency kit.

This happened at 1:00 in the afternoon – in broad daylight, a few minutes from downtown Detroit, so this is not an “in the dark of night in the middle of nowhere” kind of attack.

One industry insider said that it is possible that the hackers put the pump into some kind of diagnostic mode that had the pump operate without talking to the system inside the booth.

In the grand scheme of things, this is not a big deal, but it does make a point.

If the gas station owner had an incident response plan, then it would not have taken 90 minutes to turn off the pump.

For example, the circuit breakers that power the pumps in the tanks are in the booth where the attendant is.  I PROMISE that if you turn off the power to the pumps, you will stop the flow of free gas.  Then you can put a sign on the pumps saying that you are sorry, but the pumps are not working right now.

This time it was a gas station, but next time, it could be much worse.

But the important part is that you need to have an incident response plan.

The article said that the attendant didn’t call the police until after he figured out how to turn off the pump, 90 minutes later.  Is that what the owner wants to happen?

It doesn’t say if he talked to the owner during that 90 minutes.

Is there a tech support number he should have called to get help?

Bottom line is that even a low tech business like a gas station needs a plan.

You have to figure out what the possible attacks are.  That is the first step.

Then you have to figure out what the course of action should be for each scenario.

After that, you can train people.

Oh yeah, one last thing.  How do you handle the scenario that you didn’t think about?

That is why incident response plans need to be tested and modified.  Nothing is forever.

Information for this post came from The Register.

 

 


The Fallout From a Ransomware Attack

We have heard from two big-name firms that succumbed to the recent Petya/NotPetya ransomware attack, and their experiences provide interesting insights into dealing with it.

First, a quick background.  A week ago the world was coming to grips with a new ransomware attack.  It was initially called Petya because it looked like a strain of the Petya ransomware, but was then dubbed NotPetya when it became clear that it was built to look like Petya but really was not the same malware.

One major difference is that it appears that this malware was just designed to inflict as much pain as possible.  And it did.

While we have no idea of all the pain it inflicted, we do have a couple of very high profile pain points.

The first case study is DLA Piper.  DLA Piper is a global law firm with offices in 40 countries and over 4,000 lawyers.

However, last week, what employees saw on their screens was the malware’s ransom demand instead of their work.

When employees came to work in the London office, they were greeted with a sign in the lobby telling them that network services were down and that they should not turn on their computers.

Suffice it to say, this is not what attorneys in the firm needed when they had trials to attend to, motions to file and clients to talk to.

To further their embarrassment, DLA Piper had jumped on the WannaCry bandwagon, telling everyone how wonderful their cyber security practice was and that people should hire them.  Now they were on the other side of the problem.

In today’s world of social media, that sign in the lobby of DLA Piper’s London office went viral instantly and DLA Piper was not really ready to respond.  Their response said that client data was not hacked.  No one said that it was.

As of last Thursday, 3+ days into the attack, DLA Piper was not back online. Email was still out, for example.

If client documents were DESTROYED in the attack because they were sitting on staff workstations which were attacked, then they would need to go back to clients and tell them that their data wasn’t as safe as the client might have thought and would they please send them another copy.

If there were court pleadings due, they would have to beg the mercy of the court – and their adversaries – and ask for extensions.  The court likely would grant them, but it certainly wouldn’t help their case.

The second very public case is the Danish mega-shipping company A.P. Moller-Maersk.

They also were taken out by the NotPetya malware but in their case they had two problems.

Number one was that the computer systems that controlled their huge container ships were down, making it impossible to load or unload ships.

The second problem was that another division of the company runs many of the big ports around the world, and those port operations were down as well.  That means that even container ships of competing shipping companies could not unload at those ports.  Ports affected were located in the United States, India, Spain and The Netherlands.  The South Florida Container Terminal, for example, said that it could not deliver dry cargo and no containers would be received.  At the JNPT port near Mumbai, India, they said that they did not know when the terminal would be running smoothly.

Well, now we do have more information.  As of Monday (yesterday), Maersk said it had restored its major applications.  Maersk said on Friday that it expected client-facing systems to return to normal by Monday and was resuming deliveries at its major ports.

You may ask why I am spilling so much virtual ink on this story (I already wrote about it once).  The answer is that if these mega-companies were not prepared for a major outage, then smaller companies are likely not prepared either.

While we have not seen financial numbers from either of these firms as to the cost of recovering from these attacks, it is likely in the multiple millions of dollars, if not more, for each of them.

And, they were effectively out of business for a week or more.  Notice that Maersk said that major customer-facing applications were back online after a week.  What about the rest of their application suite?

Since ransomware – or in this case destructoware since there was no way to reverse the encryption even if you paid the ransom – is a huge problem around the world, the likelihood of your firm being hit is much higher than anyone would like.

Now is the time to create your INCIDENT RESPONSE PLAN, your DISASTER RECOVERY PLAN and your BUSINESS CONTINUITY PLAN.

If you get hit with an attack and you don’t have these plans in place, trained and tested, it is not going to be a fun couple of weeks.  Assuming you are still in business.  When Sony got attacked it took them three months to get basic systems back online.  Sony had a plan – it just had not been updated in six years.

Will you be able to survive the effects of this kind of attack?

Information for this post came from Fortune, Reuters and another Reuters article.


OneLogin Cloud Identity Manager Has Critical Breach

Onelogin, a cloud-based identity and access manager, reported being hacked on May 30th.  This is the challenge with cloud-based IDaaS managers.

WARNING: Normally I try to make my posts non-techie.  I failed at this one.  Sorry!  If the post stops making sense, then just stop reading.  I promise that tomorrow’s post, whatever it is, will be much less techie.

Onelogin’s blog post on the subject of the breach said that an attacker obtained a set of Amazon authentication keys and created some new instances inside of their Amazon environment.  From there the attackers did reconnaissance.  This started around 2 PM.  By 9 PM the attackers were done with their reconnaissance and started accessing databases.

The information the attackers accessed included user information, applications and various kinds of keys.

Onelogin says that while they encrypt certain sensitive data at rest, at this time they cannot rule out the possibility that the hacker also obtained the ability to decrypt the data.  Translating this into English, since Onelogin could decrypt the data, it is possible or even likely that the hacker could also decrypt the data.

That is all that Onelogin is saying at this time.

Motherboard says that they obtained a copy of a message that Onelogin sent to their customers.  They count around 2,000 companies in 44 countries as customers.  The message gave instructions on how to revoke cloud keys and OAuth tokens.  For Onelogin customers, this is about as bad as it can get.  Of course, Onelogin is erring on the side of caution.  It is possible – but no one knows – that all the attackers got was encrypted data before they were shut down.  It is also possible that they did not have time to send the data home.  But if they did get the data home, they have the luxury of time to decrypt it, hence the reason that Onelogin is telling customers to expire anything and everything from keys to certificates to secret phrases – everything.
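To give a feel for what “revoke your cloud keys” actually involves, here is a minimal sketch of rotating one AWS IAM user’s access keys with boto3.  The user name is a placeholder, and in real life you would push the new key out to every system that uses it before deactivating and deleting the old one.

```python
# A minimal sketch of rotating an IAM user's AWS access keys with boto3.
# "some-service-user" is a placeholder. Note that AWS allows at most two
# access keys per user, so an unused old key may need to be deleted first.
import boto3

iam = boto3.client("iam")
user = "some-service-user"

# 1. Create the replacement key first so the service is never without credentials.
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print("New key id:", new_key["AccessKeyId"])   # store the secret in a vault, never in logs

# 2. After the new key has been deployed everywhere, retire the old ones.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive")
        iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])
```

Multiply that by every key, token and certificate the service touched and you get a sense of the cleanup Onelogin’s customers were facing.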

The way Onelogin works, once the customer logs into Onelogin’s cloud, Onelogin has all the passwords needed to be able to manage (aka log in to) all of a company’s cloud instances and user accounts.  In fact, one of the reasons that you use a system like Onelogin is that it can keep track of tens or hundreds of thousands of user passwords, but to do that, it needs to be able to decrypt them.  Needless to say, if they are hacked, it spells trouble.

One important thing to distinguish: consumer password managers like LastPass also store your passwords in the cloud to synchronize them between devices, but those applications NEVER have the decryption keys.  If the encryption algorithm is good and the passphrase protecting them is well chosen, then even with a lot of resources it will take a long while to decrypt.
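To illustrate the difference, here is a minimal sketch of that model using Python’s cryptography library: the key is derived from the user’s passphrase on the device, so a sync server only ever sees ciphertext.  The salt handling and iteration count are illustrative; this is not a description of LastPass’s actual design.

```python
# A minimal sketch of client-side encryption: derive the key from a passphrase
# on the device and only ever sync the ciphertext. Parameters are illustrative.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)        # stored alongside the vault; it does not need to be secret
key = key_from_passphrase("correct horse battery staple", salt)

ciphertext = Fernet(key).encrypt(b"example.com: hunter2")   # this is all the server sees
print(Fernet(key).decrypt(ciphertext))                      # only the passphrase holder can do this
```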

For those people (like me) who are extra paranoid, the simple answer to that problem is to not let the password manager sync into the cloud.  It still works as a local password manager, it just won’t synchronize between devices.

Gartner vice president and distinguished analyst Avivah Litan says that she has discouraged the practice of using services like that for years because it is like putting all of your eggs in one basket.  I certainly agree with that.  However, it also is convenient.  A lesser risk scenario would be to have the system that manages those passwords completely under your control.  You get to control the security and if an instance is attacked, only one customer is affected, instead of thousands.

This does not spell the end of cloud computing as we know it.

It is, however, a reminder that you are ultimately responsible for your security, and if you choose to put something in the cloud (or even in your own data center), you need to first understand the risks and accept them, and then put together and test an incident response plan so that when the worst-case scenario happens, you can respond.

For a customer of Onelogin with, say, even just a thousand users, and say those users only have ten passwords each, that means that 10,000 passwords across hundreds of systems likely have to be changed.  Many of their customers are ten or fifty times that size.  And those changes have to be communicated to the users.

Incident response.  Critical when you need it. Unimportant the rest of the time.

Information for this post came from Onelogin’s blog and Krebs on Security.
