Tag Archives: incident response

Norsk Hydro Ransomware Attack Impacts Price of Aluminum

Update:  The Washington Post pointed out that the malware probably did not spread from Norsk’s IT network to its plant floor or OT network, since they were able to run some plants manually.  This is where network segmentation is really important, even within the IT network.  They also pointed out that Norsk was very public about what was going on, even though it had a (likely) short-term impact on their stock price.  They definitely should get gold stars for that.  Source: The Washington Post.

Aluminum giant Norsk Hydro was hit with a ransomware attack this week.

The attack has forced the company to shut down several plants and take other plants offline to stop the spread of the attack.

Other plants were operating in “manual” mode.

The Norwegian company has 35,000 employees in 40 countries.  They report that their entire worldwide network is down, affecting production and office operations.

While some smelting operations can run manually, the company has had to shut down some of its extrusion plants.

The company says that it doesn’t plan to pay the ransom and plans to restore its systems from backups.

One expert suggested that the attacker(s) might have gained domain admin access and then installed a malicious executable on the domain controllers.  From there it gets downloaded to any machine that logs on to the network – workstation or server.  That is why they had to completely shut down the network.
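
If that theory is right, one of the first things a responder would check is what is sitting in the domain controllers’ logon script share.  Here is a purely hypothetical sketch (Norsk has not published technical details) of how a responder might sweep a SYSVOL scripts share for recently changed executables; the share path, the file extensions and the seven-day window are all assumptions for illustration.

```python
# Hypothetical sketch: flag recently modified executables in a domain
# controller's SYSVOL scripts share (a common spot for logon-script abuse).
# The UNC path, extensions and 7-day window are illustrative assumptions.
import os
import time
import hashlib

SYSVOL_SCRIPTS = r"\\dc01.example.local\SYSVOL\example.local\scripts"  # assumed path
MAX_AGE_DAYS = 7
SUSPECT_EXTENSIONS = (".exe", ".dll", ".bat", ".ps1", ".vbs")

def sha256_of(path):
    """Hash the file so it can be compared against known-good baselines."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

cutoff = time.time() - MAX_AGE_DAYS * 86400
for root, _dirs, files in os.walk(SYSVOL_SCRIPTS):
    for name in files:
        if not name.lower().endswith(SUSPECT_EXTENSIONS):
            continue
        full = os.path.join(root, name)
        if os.path.getmtime(full) >= cutoff:
            print(f"RECENTLY CHANGED: {full}  sha256={sha256_of(full)}")
```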

The interesting thing is that they said that this attack is so big that it is affecting the spot price of aluminum on the world market.

So what does this have to do with you?

Let’s assume that you got hit with a ransomware attack.  Not a great thought but not impossible either.

Now assume that you had to shut down the entire company network.   Maybe computers can be powered up, but maybe not.  Since the network is down, the cloud-based phone system doesn’t work.  There is no email, and your cell is only useful as a phone – as long as it doesn’t need WiFi access to work.

How will your company operate?

Are you prepared for an event like this?

Do you have a plan?  Have you tested it?  When?

This is not an isolated event.  We hear about it all the time.  Most of the time it doesn’t affect the spot price of materials on the world market.  That doesn’t mean that it won’t hurt you.

Your cyber incident response plan, program and training are critical.  Are the external third party resources that you may need identified?  Have you reviewed the contracts that will need to be signed?

Do you have backup plans for how your business will operate when you no longer have a network or an Internet connection?

What happens when your web site goes down?  Will visitors just get a message that your site can’t be found?  What will they think if that happens?

In the case of Norsk it was a ransomware attack, but it could be a failure of your Internet provider, a fire in your building, a burst water pipe in your data center or any number of other possible situations.

In their case, they can afford the millions of dollars they are spending to deal with the situation.  Can you afford that?

Will your cyber risk insurance cover all of this?  Many times companies come to us after discovering that their insurance won’t cover the loss, and when we look at the policy, the insurance company is right – it doesn’t cover it.  That is because cyber insurance is like the Wild West: if your agent does not write a lot of this coverage, you may or may not get what you need.  This is very different from almost EVERY other form of insurance.  In Colorado and many (most) other states, cyber risk insurance is not regulated by the Department of Insurance.

If you are not prepared then now is the time to get prepared, because it is not a matter of if, but rather how, how bad and when.  

Plan now or deal with it later and dealing with it later will not be pretty.  Take it from someone who knows.

Information for this post came from Threatpost.

 


Dolce and Gabbana Needs a Better Incident Response Program

Stefano Gabbana is known for very edgy ads and posts on social media.  Some people say over the edge – way over the edge.

The brand ran a series of commercials showing Chinese people eating pizza and other Italian foods with chopsticks on the eve of a star-studded fashion show in Shanghai.  I suspect someone thought that it was something the Chinese would find funny (?).

Then Gabbana’s Instagram account sent out racist taunts to people who were complaining about the ad campaign.

The company’s response was to claim that both Stefano’s and the company’s Instagram accounts were hijacked.  Few people believed that.  Stefano posted a note about it on his Instagram account afterward.

If there is one thing the Chinese are, it is loyal to their country.  Models pulled out of the show.  Next, celebrity guests pulled out.  The show was cancelled less than 24 hours before it was scheduled to go on.

Now D&G merchandise is being pulled from store shelves and removed from web sites.  A full scale disaster for the company.

So what lessons are there to learn from this?

The obvious one is that if your strategy for getting attention is edgy commercials and racist social media posts, you might want to rethink that, especially in certain countries.

In reality, most companies don’t do that, at least on purpose.

The bigger issue is how to respond to cyber incidents.

Let’s assume their accounts were hijacked.  It is certainly possible.  Obviously, you want to beef up your social media security if you are doing things that might attract attackers, but more importantly, nothing is bulletproof in cyberspace, so you need an incident response program to deal with it.

That incident response program needs to deal with the reputational fallout of events that may or may not be in the company’s control.  Crisis communications is a key part of incident response.

The incident response team needs to be identified and then the team members need to be trained.  That can be done with “table-top” exercises.

Bottom line – prepare for the next cyber event.

Information for this post came from SC Magazine and the New York Times.

 


Cathay Pacific is Beginning to Fess Up and it Likely Won’t Help Their GDPR Fine

As a reminder, Cathay Pacific Airlines recently admitted it was hacked and lost data on over 9 million passengers.  Information taken includes names, addresses, passport information, birth dates and other personal details.

They took a lot of heat for waiting 6 months to tell anyone about it (remember that GDPR requires you to tell the authorities within 72 hours).

Now they are reporting on the breach to Hong Kong’s Legco (their version of Parliament) and they admitted that they knew they were under attack in March, April and May AND that it continued after that.  So now, instead of waiting 6 months to fess up, it is coming out that they waited 9 months.

They also admitted that they really didn’t know what was taken and they didn’t know if the data taken would be usable to a hacker as it was pieces and parts of databases.

Finally, they said after all that, they waited some more to make sure that the information that they were telling people was precisely accurate.

Now they have set up a dedicated website at https://infosecurity.cathaypacific.com/en_HK.html for people who think their data has gone “walkies”.

So what lessons can you take away from their experience?

First of all, waiting 6 months to tell people their information has gone walkies is not going to make you a lot of friends with authorities inside or outside the United States.  9 months isn’t any better.

One might suggest that if they were fighting the bad guys for three months, they probably either didn’t have the right resources or sufficient resources on the problem.

It also means that they likely did not have an adequate incident response program.

Their business continuity program was also lacking.

None of these facts will win them brownie points with regulators, so you should review your programs and make sure that you could effectively respond to an attack.

Their next admission was that they didn’t know what was taken.  Why?  Inadequate logs.  You need to make sure that you are logging what you should be in order to respond to an attack.
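
As a hedged illustration of “logging what you should be,” here is a minimal sketch of an application audit log that records who touched what data and when, in a structured form that a forensics team could actually query months later.  The field names and log destination are assumptions for illustration, not anything Cathay Pacific uses.

```python
# Minimal sketch of structured audit logging so "what was taken" can be
# answered later. Field names and the log destination are illustrative.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("audit.log"))

def log_data_access(user, record_type, record_id, action, source_ip):
    """Write one queryable audit event per sensitive-data access."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,            # e.g. "read", "export"
        "record_type": record_type,  # e.g. "passenger_profile"
        "record_id": record_id,
        "source_ip": source_ip,
    }))

# Example: every bulk export of passenger records leaves a trace.
log_data_access("svc-reporting", "passenger_profile", "batch-2018-10-24",
                "export", "10.20.30.40")
```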

They said that they wanted to make sure that they could tell people exactly what happened.  While that is a nice theory, if you can’t do that within the legally required time, that bit of spin will cost you big time.

Clearly there is a lot that they could have done better.

While the authorities in Europe may fine them for this transgression, in China they have somewhat “harsher” penalties.  Glad I am not in China.

Information for this post came from The Register.

 

 


Incident Response 101 – Preserving Evidence

A robust incident response program and a well trained incident response team know exactly what to do and what not to do.

One critical task in incident response is to preserve evidence.  Evidence may need to be preserved based on specific legal requirements, such as for defense contractors.  In other cases, evidence must be preserved based on the presumption of being sued.

In all cases, if you have been notified that someone intends to sue you or has actually filed a lawsuit against you, you are required to preserve all relevant evidence.

This post is the story of what happens when you don’t do that.

In this case, the situation is a lawsuit resulting from the breach of one of the Blue Cross affiliates, Premera.

The breach was well covered in the press; approximately 11 million customers’ data was impacted.

In this case, based on forensics, 35 computers were infected by the bad guys.  In the grand scheme of things, this is a very small number of computers to be impacted by a breach – in a big organization, an attack can infect thousands of computers.  The fact that we are not talking about thousands of computers may not make any difference to the court, but it does make what follows more embarrassing for Premera.

The plaintiffs in this case asked to examine these 35 computers for signs that the bad guys exfiltrated data.  Exfiltrated is a big word for stole (technically, uploaded to the Internet in this case).  Premera was able to produce 34 of the computers but, curiously, not the 35th.  They also asked for the logs from the data protection software that Premera used, called Bluecoat.

This 35th computer is believed to be ground zero for the hackers and may well have been the computer where the data was exfiltrated from.  The Bluecoat logs would have provided important information regarding any data that was exported.

Why are these two crucial pieces of evidence missing?  No one is saying, but if there was incriminating evidence on them, or evidence that might have cast doubt on the story that Premera is putting forth, making that evidence disappear might seem like a wise idea.

Only one problem.  The plaintiffs are asking the court to sanction Premera and prohibit them from producing any evidence or experts to claim that no data was stolen during the hack.

The plaintiffs claim that Premera destroyed the evidence after the lawsuit was filed.

In fact, the plaintiffs are asking the judge to instruct the jury to assume that data was stolen.

Even if the judge agrees to all of this, it doesn’t mean that the plaintiffs are going to win, but it certainly doesn’t help Premera’s case.

So what does this mean to you?

First you need to have a robust incident response program and a trained incident response team.

Second, the incident response plan needs to address evidence preservation, and that includes a long-term plan to catalog and preserve evidence.
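
As one small, hypothetical example of what “catalog and preserve” can mean in practice, a responder might hash every collected file and write a simple manifest, so that months later in litigation you can show the evidence has not changed.  The folder path below is a placeholder, and a real program would also cover chain-of-custody records and forensic disk images.

```python
# Hypothetical evidence-manifest sketch: hash every collected file so its
# integrity can be demonstrated later. The folder path is a placeholder.
import csv
import hashlib
import os
from datetime import datetime, timezone

EVIDENCE_DIR = "/evidence/incident-2025-001"   # assumed collection folder
MANIFEST = os.path.join(EVIDENCE_DIR, "manifest.csv")

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

with open(MANIFEST, "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["collected_at_utc", "path", "size_bytes", "sha256"])
    for root, _dirs, files in os.walk(EVIDENCE_DIR):
        for name in files:
            if name == "manifest.csv":   # don't hash the manifest itself
                continue
            full = os.path.join(root, name)
            writer.writerow([datetime.now(timezone.utc).isoformat(), full,
                             os.path.getsize(full), sha256_file(full)])
```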

Evidence preservation is just one part of a full incident response program.  That program could be the difference between winning and losing a lawsuit.

Information for this post came from ZDNet.

 

 


Why Your Incident Response Program is Critical

Police think that hackers hacked the pumps at a Detroit-area gas station, allowing drivers to get free gas.

Ten cars figured it was okay to steal gas from “The Man” to the tune of about 600 gallons.  While 600 gallons of gas is not the end of the world, it does make a point.

The article said that the gas station attendant was unable to shut off the pump that was giving away free gas for 90 minutes, until he used something called an emergency kit.

This happened at 1:00 in the afternoon – in broad daylight, a few minutes from downtown Detroit – so this is not an “in the dark of night in the middle of nowhere” kind of attack.

One industry insider said that it is possible that the hackers put the pump into some kind of diagnostic mode that had the pump operate without talking to the system inside the booth.

In the grand scheme of things, this is not a big deal, but it does make a point.

If the gas station owner had an incident response plan, then it would not have taken 90 minutes to turn off the pump.

For example, the circuit breakers that power the pumps in the tanks are in the booth where the person is.  I PROMISE that if you turn off the power to the pumps, you will stop the flow of free gas.  Then you can put a sign on the pumps that says you are sorry, but the pumps are not working right now.

This time it was a gas station, but next time, it could be much worse.

But the important part is that you need to have an incident response plan.

The article said that the attendant didn’t call the police until after he figured out how to turn off the pump, 90 minutes later.  Is that what the owner wants to happen?

It doesn’t say if he talked to the owner during that 90 minutes.

Is there a tech support number he should have called to get help?

Bottom line is that even a low tech business like a gas station needs a plan.

You have to figure out what the possible attacks are.  That is the first step.

Then you have to figure out what the course of action should be for each scenario.

After that, you can train people.

Oh yeah, one last thing.  How do you handle the scenario that you didn’t think about?

That is why incident response plans need to be tested and modified.  Nothing is forever.

Information for this post came from The Register.

 

 


The Fallout From a Ransomware Attack

We have heard from two big name firms who succumbed to the recent Petya/NotPetya ransomware attack and they provide interesting insights into dealing with the attack.

First, a quick background.  A week ago the world was coming to grips with a new ransomware attack.  It was initially called Petya because it looked like a strain of the Petya ransomware, but was later dubbed NotPetya when it became clear that it was an attempt to look like Petya but really was not the same malware.

One major difference is that it appears that this malware was just designed to inflict as much pain as possible.  And it did.

While we have no idea of all the pain it inflicted, we do have a couple of very high profile pain points.

The first case study is DLA Piper.  DLA Piper is a global law firm with offices in 40 countries and over 4,000 lawyers.

However, last week, what employees saw on their screens was a ransom demand, not their normal work.

When employees came to work in the London office, they were greeted with a sign in the lobby telling them not to turn on their computers.

Suffice it to say, this is not what attorneys in the firm needed when they had trials to attend to, motions to file and clients to talk to.

To further their embarrassment, DLA Piper had jumped on the WannaCry bandwagon, telling everyone how wonderful their cyber security practice was and that people should hire them.  Now they were on the other side of the problem.

In today’s world of social media, that sign in the lobby of DLA Piper’s London office went viral instantly and DLA Piper was not really ready to respond.  Their response said that client data was not hacked.  No one said that it was.

As of last Thursday, 3+ days into the attack, DLA Piper was not back online. Email was still out, for example.

If client documents were DESTROYED in the attack because they were sitting on staff workstations that were attacked, then the firm would need to go back to clients, tell them that their data wasn’t as safe as they might have thought, and ask them to please send another copy.

If there were court pleadings due, they would have to beg the mercy of the court – and their adversaries – and ask for extensions.  The court likely would grant them, but it certainly wouldn’t help their case.

The second very public case is the Danish mega-shipping company A.P. Moller-Maersk.

They also were taken out by the NotPetya malware but in their case they had two problems.

Number one was that the computer systems that controlled their huge container ships were down, making it impossible to load or unload ships.

The second problem was that another division of the company runs many of the big ports around the world, and those port operations were down as well.  That means that even container ships of competing shipping companies could not unload at those ports.  Ports affected were located in the United States, India, Spain and The Netherlands.  The South Florida Container Terminal, for example, said that it could not deliver dry cargo and no containers would be received.  At the JNPT port near Mumbai, India, they said that they did not know when the terminal would be running smoothly.

Well, now we do have more information.  As of Monday (yesterday), Maersk said it had restored its major applications.  Maersk said on Friday that it expected client-facing systems to return to normal by Monday and was resuming deliveries at its major ports.

You may ask why I am spilling so much virtual ink on this story (I already wrote about it once).  The answer is that if these mega-companies were not prepared for a major outage, then smaller companies are likely not prepared either.

While we have not seen financial numbers from either of these firms as to the cost of recovering from these attacks, it is likely in the multiple millions of dollars, if not more, for each of them.

And they were effectively out of business for a week or more.  Notice that Maersk said that major customer-facing applications were back online after a week.  What about the rest of their application suite?

Since ransomware – or in this case destructoware since there was no way to reverse the encryption even if you paid the ransom – is a huge problem around the world, the likelihood of your firm being hit is much higher than anyone would like.

Now is the time to create your INCIDENT RESPONSE PLAN, your DISASTER RECOVERY PLAN and your BUSINESS CONTINUITY PLAN.

If you get hit with an attack and you don’t have these plans in place, trained and tested, it is not going to be a fun couple of weeks.  Assuming you are still in business.  When Sony got attacked it took them three months to get basic systems back online.  Sony had a plan – it just had not been updated in six years.

Will you be able to survive the effects of this kind of attack?

Information for this post came from Fortune, Reuters and another Reuters article.
