
Incident Response 101 – Preserving Evidence

A robust incident response program and a well-trained incident response team know exactly what to do and what not to do.

One critical task in incident response is preserving evidence. Evidence may need to be preserved to satisfy specific legal requirements, such as those that apply to defense contractors. In other cases, evidence must be preserved in anticipation of being sued.

In all cases, if you have been notified that someone intends to sue you or has actually filed a lawsuit against you, you are required to preserve all relevant evidence.

This post is the story of what happens when you don’t do that.

The case in question is a lawsuit resulting from the breach of Premera, one of the Blue Cross affiliates.

The breach was well covered in the press; approximately 11 million customers' data was affected.

In this case, based on the forensics, 35 computers were infected by the bad guys. In the grand scheme of things, that is a very small number of computers to be impacted by a breach; in a big organization, an attack might infect thousands of machines. The fact that we are not talking about thousands of computers may not make any difference to the court, but it does make the situation all the more embarrassing for Premera.

The plaintiffs in this case asked to examine these 35 computers for signs that the bad guys exfiltrated data. Exfiltrated is a big word for stole (technically, uploaded to the Internet in this case). Premera was able to produce 34 of the computers but, curiously, not the 35th. The plaintiffs also asked for the logs from the data protection software that Premera used, called Bluecoat.

This 35th computer is believed to be ground zero for the hackers and may well have been the computer from which the data was exfiltrated. The Bluecoat logs would have provided important information about any data that was exported.

Why are these two crucial pieces of evidence missing? No one is saying, but if they contained incriminating evidence, or evidence that might cast doubt on the story Premera is putting forth, making them disappear might have seemed like a wise idea.

Only one problem.  The plaintiffs are asking the court to sanction Premera and prohibit them from producing any evidence or experts to claim that no data was stolen during the hack.

The plaintiffs claim that Premera destroyed the evidence after the lawsuit was filed.

In fact, the plaintiffs are asking the judge to instruct the jury to assume that data was stolen.

Even if the judge agrees to all of this, it doesn't mean that the plaintiffs are going to win, but it certainly doesn't help Premera's case.

So what does this mean to you?

First, you need to have a robust incident response program and a trained incident response team.

Second, the incident response plan needs to address evidence preservation, including a long-term plan to catalog and preserve that evidence.
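What does cataloging evidence actually look like? Here is a minimal sketch of my own (not any official forensic tool, and the paths are made up): it records a cryptographic hash, size, and timestamp for every file collected as evidence, so you can later demonstrate that nothing changed after collection.

```python
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def catalog_evidence(evidence_dir: str, catalog_file: str) -> None:
    """Hash every file under evidence_dir and write a JSON catalog."""
    entries = []
    for path in sorted(pathlib.Path(evidence_dir).rglob("*")):
        if not path.is_file():
            continue
        entries.append({
            "file": str(path),
            "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
            "size_bytes": path.stat().st_size,
            "cataloged_at": datetime.now(timezone.utc).isoformat(),
        })
    pathlib.Path(catalog_file).write_text(json.dumps(entries, indent=2))

# Hypothetical usage:
# catalog_evidence("/evidence/case-001", "/evidence/case-001-catalog.json")
```

A real program would also record chain-of-custody details such as who collected each item and where it is stored, but even a simple hash catalog makes it much easier to show later that the evidence was preserved intact.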

Evidence preservation is just one part of a full incident response program.  That program could be the difference between winning and losing a lawsuit.

Information for this post came from ZDNet.

 

 


Why Your Incident Response Program is Critical

Police think that hackers compromised the pumps at a Detroit-area gas station, allowing drivers to get free gas.

The drivers of ten cars figured it was okay to steal gas from “The Man,” to the tune of about 600 gallons.

The article said that for 90 minutes the gas station attendant was unable to shut off the pump that was giving away free gas, until he finally used something called an emergency kit.

This happened at 1:00 in the afternoon, in broad daylight, a few minutes from downtown Detroit, so this is not an “in the dark of night in the middle of nowhere” kind of attack.

One industry insider said that it is possible the hackers put the pump into some kind of diagnostic mode that let it operate without talking to the system inside the booth.

In the grand scheme of things, this is not a big deal, but it does make a point.

If the gas station owner had an incident response plan, then it would not have taken 90 minutes to turn off the pump.

For example, the circuit breakers that power the pumps in the tanks are in the booth where the attendant sits. I PROMISE that if you turn off the power to the pumps, you will stop the flow of free gas. Then you can put a sign on the pumps that says you are sorry, but the pumps are not working right now.

This time it was a gas station, but next time it could be much worse.

But the important part is that you need to have an incident response plan.

The article said that the attendant didn't call the police until after he figured out how to turn off the pump, 90 minutes later. Is that what the owner wants to happen?

The article doesn't say whether he talked to the owner during those 90 minutes.

Is there a tech support number he should have called to get help?

The bottom line is that even a low-tech business like a gas station needs a plan.

You have to figure out what the possible attacks are.  That is the first step.

Then you have to figure out what the course of action should be for each scenario.

After that, you can train people.

Oh yeah, one last thing.  How do you handle the scenario that you didn’t think about?

That is why incident response plans need to be tested and modified. Nothing is forever.

Information for this post came from The Register.

 

 


The Fallout From a Ransomware Attack

We have heard from two big-name firms that succumbed to the recent Petya/NotPetya ransomware attack, and they provide interesting insights into dealing with it.

First, a quick background. A week ago the world was coming to grips with a new ransomware attack. It was initially called Petya because it looked like a strain of the Petya ransomware, but was later dubbed NotPetya when it became clear that it was designed to look like Petya but was not actually the same malware.

One major difference is that it appears that this malware was just designed to inflict as much pain as possible.  And it did.

While we have no idea of all the pain it inflicted, we do have a couple of very high profile pain points.

The first case study is DLA Piper.  DLA Piper is a global law firm with offices in 40 countries and over 4,000 lawyers.

However, last week employees were greeted by the ransomware's message on their screens, and those arriving at the London office were met by a sign in the lobby warning them not to turn on their computers.

Suffice it to say, this is not what attorneys in the firm needed when they had trials to attend to, motions to file and clients to talk to.

Adding to the embarrassment, DLA Piper had jumped on the WannaCry bandwagon, telling everyone how wonderful their cyber security practice was and that people should hire them. Now they were on the other side of the problem.

In today’s world of social media, that sign in the lobby of DLA Piper’s London office went viral instantly and DLA Piper was not really ready to respond.  Their response said that client data was not hacked.  No one said that it was.

As of last Thursday, 3+ days into the attack, DLA Piper was not back online. Email was still out, for example.

If client documents were DESTROYED in the attack because they were sitting on staff workstations that were hit, the firm would need to go back to its clients, tell them that their data wasn't as safe as they might have thought, and ask them to please send another copy.

If there were court pleadings due, they would have to beg the mercy of the court – and their adversaries – and ask for extensions.  The court likely would grant them, but it certainly wouldn’t help their case.

The second very public case is the Danish mega-shipping company A.P. Moller-Maersk.

They were also taken out by the NotPetya malware, but in their case there were two problems.

Number one, the computer systems that controlled their huge container ships were down, making it impossible to load or unload ships.

The second problem was that another division of the company runs many of the big ports around the world, and those port operations were down as well. That meant that even container ships belonging to competing shipping companies could not unload at those ports. Affected ports were located in the United States, India, Spain and the Netherlands. The South Florida Container Terminal, for example, said that it could not deliver dry cargo and would not receive containers. The JNPT port near Mumbai, India, said it did not know when the terminal would be running smoothly.

Well, now we do have more information. As of Monday (yesterday), Maersk said it had restored its major applications. Maersk had said on Friday that it expected client-facing systems to return to normal by Monday and that it was resuming deliveries at its major ports.

You may ask why I am spilling so much virtual ink on this story (I already wrote about it once). The answer is that if these mega-companies were not prepared for a major outage, then smaller companies are likely not prepared either.

While we have not seen financial numbers from either of these firms as to the cost of recovering from these attacks, it is likely in the multiple millions of dollars, if not more, for each of them.

And they were effectively out of business for a week or more. Notice that Maersk said only that major customer-facing applications were back online after a week. What about the rest of their application suite?

Since ransomware – or in this case destructoware since there was no way to reverse the encryption even if you paid the ransom – is a huge problem around the world, the likelihood of your firm being hit is much higher than anyone would like.

Now is the time to create your INCIDENT RESPONSE PLAN, your DISASTER RECOVERY PLAN and your BUSINESS CONTINUITY PLAN.

If you get hit with an attack and you don’t have these plans in place, trained and tested, it is not going to be a fun couple of weeks.  Assuming you are still in business.  When Sony got attacked it took them three months to get basic systems back online.  Sony had a plan – it just had not been updated in six years.

Will you be able to survive the effects of this kind of attack?

Information for this post came from Fortune, Reuters and another Reuters article.


Onelogin Cloud Identity Manager Has Critical Breach

Onelogin, a cloud-based identity and access manager, reported being hacked on May 30th. This is the challenge with cloud-based IDaaS managers.

WARNING: Normally I try to make my posts non-techie.  I failed at this one.  Sorry!  If the post stops making sense, then just stop reading.  I promise that tomorrow’s post, whatever it is, will be much less techie.

Onelogin's blog post on the breach said that an attacker obtained a set of Amazon authentication keys and created some new instances inside their Amazon environment. From there the attackers did reconnaissance. This started around 2 PM; by 9 PM the attackers were done with their reconnaissance and started accessing databases.
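The blog post does not say how the rogue instances were eventually spotted, but for illustration, here is a minimal sketch (my own, not Onelogin's method) of the kind of monitoring that can catch this: it uses AWS CloudTrail via the boto3 library to list recent EC2 instance launches so they can be compared against expected deployment activity. The one-day window is arbitrary.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Pull the last 24 hours of EC2 instance-launch events from CloudTrail.
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
)

for event in response["Events"]:
    # Anything launched by an unfamiliar user or access key deserves a closer look.
    print(event["EventTime"], event.get("Username", "unknown"), event["EventId"])
```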

The information the attackers accessed included user information, applications and various kinds of keys.

Onelogin says that while they encrypt certain sensitive data at rest, they cannot at this time rule out the possibility that the hacker also obtained the ability to decrypt that data. Translating this into English: since Onelogin can decrypt the data, it is possible, or even likely, that the hacker can decrypt it too.

That is all that Onelogin is saying at this time.

Motherboard says that they obtained a copy of a message that Onelogin sent to its customers. Onelogin counts around 2,000 companies in 44 countries as customers. The message gave instructions on how to revoke cloud keys and OAuth tokens. For Onelogin customers, this is about as bad as it can get. Of course, Onelogin is erring on the side of caution. It is possible, though no one knows, that all the attackers got was encrypted data before they were shut down. It is also possible that they did not have time to send the data home. But if they did get the data home, they have the luxury of time to decrypt it, hence the reason Onelogin is telling customers to expire anything and everything: keys, certificates, secret phrases, everything.
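I have not seen Onelogin's actual instructions, but to give a flavor of what revoking and replacing cloud keys involves, here is a minimal sketch using Amazon's boto3 library to rotate one IAM user's access keys. The user name is hypothetical, and creating the replacement key before deleting the old one lets you re-point applications without an outage.

```python
import boto3

iam = boto3.client("iam")
USER = "example-service-account"  # hypothetical IAM user

# Create a replacement key first so applications can be switched over.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New access key id:", new_key["AccessKeyId"])

# Then disable and delete every pre-existing (potentially compromised) key.
for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    if key["AccessKeyId"] == new_key["AccessKeyId"]:
        continue
    iam.update_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"], Status="Inactive")
    iam.delete_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"])
```

Now multiply that by every key, token and certificate a customer has registered and you can see why the cleanup is so painful.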

The way Onelogin works, once the customer logs into Onelogin’s cloud, Onelogin has all the passwords needed to be able to manage (aka log in to) all of a company’s cloud instances and user accounts.  In fact, one of the reasons that you use a system like Onelogin is that it can keep track of tens or hundreds of thousands of user passwords, but to do that, it needs to be able to decrypt them.  Needless to say, if they are hacked, it spells trouble.

One important distinction: consumer password managers like LastPass also store your passwords in the cloud to synchronize them between devices, but those applications NEVER have the decryption keys. If the encryption algorithm is good and the passphrase protecting the vault is well chosen, it will take an attacker a very long time to decrypt it, even with a lot of resources.

For those people (like me) who are extra paranoid, the simple answer to that problem is to not let the password manager sync into the cloud at all. It still works as a local password manager; it just won't synchronize between devices.
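To make the distinction concrete, here is a toy sketch (not LastPass's actual code) of the client-side approach, using Python's cryptography package. The encryption key is derived from the master password on your device; only the ciphertext and the salt would ever be uploaded for syncing, so the provider never holds anything it can decrypt.

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    """Stretch the master password into a 32-byte key; this never leaves the device."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

salt = os.urandom(16)                      # random salt, stored next to the vault
key = derive_key(b"correct horse battery", salt)

# Encrypt the (made-up) password vault locally before it is synced anywhere.
vault_ciphertext = Fernet(key).encrypt(b'{"example.com": "p@ssw0rd"}')

# Only the salt and vault_ciphertext get uploaded; decryption happens locally.
print(Fernet(key).decrypt(vault_ciphertext))
```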

Gartner vice president and distinguished analyst Avivah Litan says that she has discouraged the use of services like this for years because it is like putting all of your eggs in one basket. I certainly agree with that. However, it also is convenient. A lower-risk approach would be to have the system that manages those passwords completely under your control. You get to control the security, and if an instance is attacked, only one customer is affected instead of thousands.

This does not spell the end of cloud computing as we know it.

It is, however, a reminder that you are ultimately responsible for your own security. If you choose to put something in the cloud (or even in your own data center), you need to first understand and accept the risks, then put together and test an incident response plan so that when the worst-case scenario happens, you can respond.

For a Onelogin customer with, say, just a thousand users who each have only ten passwords, that means 10,000 passwords across hundreds of systems likely have to be changed. Many of their customers are ten or fifty times that size. And those changes have to be communicated to the users.

Incident response.  Critical when you need it. Unimportant the rest of the time.

Information for this post came from Onelogin’s blog and Krebs on Security.


Data Breach Incident Response: Questions and New Laws

As more and more breaches happen every month, businesses everywhere need to consider what would happen if their company had a breach.  Here is advice from the national law firm of Perkins Coie.

  1. Is the breach reportable? The list of data items which, when compromised, trigger a reportable breach keeps growing. For example, this year Illinois and Nebraska joined a number of other states in dictating that compromised account credentials are reportable. This year Tennessee also removed language which used to say that if the data was encrypted, the breach is not reportable. In some states now, if the data was encrypted but the keys were likely compromised, the breach is reportable. And remember, what matters is where the owners of the data reside, not where your office is.
  2. How fast should you notify? That is not a simple question to answer. While different laws come into play for different groups, caution is advisable. If the data lost was covered by HIPAA, you have a specific amount of time to notify. If you are a defense contractor, you have a different, VERY SHORT amount of time to notify the Department of Defense. What we saw earlier this year in the P.F. Chang's breach is that the company over-disclosed; when it was later discovered that relatively few customers were impacted and they tried to get the lawsuits dismissed, the court pointed out that they had told everyone that they were at risk.
  3. What should the notice look like? Some states, like Rhode Island, specify in significant detail what needs to be in the letter, but this language can get you in trouble later. Judges are sometimes not very good at understanding the laws of other states. When Neiman Marcus told customers, after their breach, to check their credit reports, even though the breach did not reveal any information that would allow a hacker to open a new account, the judge discounted Neiman's claim that the reason they told people to check their credit reports was that they were legally required to say so in some states. Eventually, the courts and the legislatures will get in sync, but not as long as the legislatures keep tinkering with the laws.
  4. Who receives notice?  Well, besides the affected people, in some states, the state Attorney General must be notified.  For HIPAA breaches of over 500 records, the Secretary of Health and Human Services must be notified and for defense contractors, the DoD must be notified.  These are just SOME parties that have to be notified.  And, of course, you must use the approved, state specific form.
  5. Should we offer credit monitoring services? Credit monitoring and credit repair services seem to be the norm these days, at least for big breaches, but even this can come back to haunt you. In the Neiman Marcus breach mentioned above, the court said that because the company offered credit monitoring, there must have been a risk of fraud, even though there wasn't any beyond someone using your Neiman's card.

All this says that the landscape is filled with landmines and you MUST have an attorney experienced in cyber breach litigation in your camp from the VERY FIRST MOMENTS. As you can tell from the items above, even simple decisions have the potential to backfire.

So if you do not have a cyber incident response plan written, approved, disseminated and tested, I recommend that it be added to the high-priority to-do list.

Information for this post came from JDSupra.

 
