Tag Archives: disaster recovery

News Bites for the Week Ending November 30, 2018

Microsoft Azure and Office 365 Multi-Factor Authentication Outage

Microsoft’s cloud environment suffered a worldwide outage this week for the better part of a day.  The failure stopped users who had turned on two factor authentication from logging in.

This is not a “gee, Microsoft is bad” or “gee, two factor authentication is bad” problem.  All systems have failures, especially the ones that businesses run internally.  Unfortunately cloud systems fail occasionally too.

The bigger question is: are you prepared for that failure, which is guaranteed to happen at some point in the future?

It is a really bad idea to assume cloud systems will not fail, whether they run an industry-specific application or a generic platform like Microsoft’s or Google’s.

What is the longest outage you can accept?  How much data are you willing to lose?

More importantly, do you have a plan for what to do in case you pass those points of no return and have you recently tested those plans?
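
To make those questions concrete, it helps to write down your recovery time objective (how long you can afford to be down) and recovery point objective (how much data you can afford to lose), then compare them to what your backups and plans can actually deliver.  Here is a minimal sketch of that check in Python; the targets and example inputs are placeholders, not recommendations.

    from datetime import datetime, timedelta, timezone

    # Placeholder targets - agree on these with the business, don't guess.
    RTO = timedelta(hours=4)           # longest outage you can tolerate
    RPO = timedelta(hours=1)           # oldest acceptable age of recovered data
    MAX_TEST_AGE = timedelta(days=90)  # how stale a DR test is allowed to be

    def readiness_gaps(last_backup, last_dr_test, estimated_restore_time):
        """Return a list of gaps between your targets and your current state."""
        now = datetime.now(timezone.utc)
        gaps = []
        if now - last_backup > RPO:
            gaps.append(f"Last backup is {now - last_backup} old; RPO is {RPO}.")
        if estimated_restore_time > RTO:
            gaps.append(f"Estimated restore of {estimated_restore_time} exceeds RTO of {RTO}.")
        if now - last_dr_test > MAX_TEST_AGE:
            gaps.append(f"DR plan last tested {(now - last_dr_test).days} days ago; retest it.")
        return gaps

    # Example: nightly backups, a 12 hour restore, and a plan last tested a year ago.
    now = datetime.now(timezone.utc)
    for gap in readiness_gaps(now - timedelta(hours=20),
                              now - timedelta(days=365),
                              timedelta(hours=12)):
        print(gap)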

Failures usually happen at inconvenient times, and planning is critical to dealing with them.  Dealing with an outage without a well thought out and tested plan is likely to be a disaster. Source: ZDNet.

 

Moody’s is Going to Start Including Cyber Risk in Credit Ratings

We have said for a long time that cyber risk is a business problem.  Business credit ratings reflect the overall risk a business represents.

What has been missing is connecting the two.

Now Moody’s is going to do that.

While details are scarce, Moody’s says that it will soon evaluate organizations’ risk from a cyber attack.

Moody’s has even created a new cyber risk group.

While they haven’t said so yet, likely candidates for initial cyber risk scrutiny are defense contractors, financial services, health care and critical infrastructure.

If your company cares about its risk rating, make sure your cybersecurity is in order along with your finances.  Source: CNBC.

 

British Lawmakers Seize Facebook Files

In what has got to be an interesting game, full of innuendo and intrigue, British lawmakers seized documents sealed by a U.S. court when the CEO of a company that had access to them visited England.

The short version of the back story is that the Brits are not real happy with Facebook and were looking for copies of documents that had been part of discovery in a lawsuit between app maker Six4Three and Facebook that has been going on for years.

So, when Ted Kramer, the company’s founder, visited England on business, Parliament’s Serjeant at Arms literally hauled him into Parliament and threatened to throw him in jail if he did not produce the documents sealed by the U.S. court.

So Ted is between a rock and a hard place;  the Brits have physical custody of him;  the U.S. courts could hold him in contempt (I suspect they will huff and puff a lot, but not do anything) – so he turns over the documents.

Facebook has been trying to hide these documents for years.  I suspect that Six4Three would be happy if they became public.  Facebook said, after the fact, that the Brits should return the documents.  The Brits said go stick it.  You get the idea.

Did Six4Three play a part in this drama in hopes of getting these emails released?  Don’t know but I would not rule that out.  Source: CNBC.

 

Two More Hospitals Hit By Ransomware

The East Ohio Regional Hospital (EORH) and Ohio Valley Medical Center (OVMC) were both hit by a ransomware attack.  The hospitals reverted to using paper patient charts and are sending ambulances to other hospitals.  Of course they are saying that patient care isn’t affected, but given that staff have no information available regarding patients currently in the hospital, their diagnoses, tests or prior treatments, that seems a bit optimistic.

While most of us do not deal with life and death situations, it can take a while – weeks or longer – to recover from ransomware attacks if the organization is not prepared.

Are you prepared?  In this case, likely one doctor or nurse clicked on the wrong link;  that is all it takes.  Source: EHR Intelligence.

 

Atrium Health Data Breach – Over 2 Million Customers Impacted

Atrium Health announced a breach of the personal information of over 2 million customers, including Social Security numbers for about 700,000 of them.

However, while Atrium gets to pay the fine, it was actually the fault of one of its vendors, AccuDoc, which does billing for Atrium’s 44 hospitals.

Atrium says that the data was accessed but not downloaded and did not include credit card data.  Of course if the bad guys “accessed” the data and then screen scraped it, it would not show as downloaded.

One more time – VENDOR CYBER RISK MANAGEMENT.  It has to be a priority.   Unless you don’t mind taking the rap and fines for your vendor’s errors.   Source: Charlotte Observer.


Is Your DR Plan Better Than London Gatwick Airport’s?

Let’s assume that you are a major international airport that moves 45 million passengers and 97,000 tons of cargo a year.

Then let’s say you have some form of IT failure.  How do you communicate with your customers?

At London’s Gatwick airport, apparently your DR plan consists of trotting out a small white board and giving a customer service agent a dry erase marker and a walkie-talkie.

On the bright side, they are using black markers for on time flights and red markers for others.

Gatwick is blaming Vodafone for the outage.  Vodafone does contract with Gatwick for certain IT services.

You would think that an organization as large as Gatwick would have a well planned and tested Disaster Recovery strategy, but it would appear that they don’t.

Things, they say, will get back to normal as soon as possible.

Vodafone is saying: “We have identified a damaged fibre cable which is used by Gatwick Airport to display flight information.  Our engineers are working hard to fix the cable as quickly as possible.  This is a top priority for us and we are very sorry for any problems caused by this issue.”

But who is being blasted in social media as “absolute shambles”, “utter carnage” and “huge delays”?  Not Vodafone.

Passengers are snapping cell phone pictures and posting to social media with snarky comments.

Are you prepared for an IT outage?

First of all, there are a lot of possible failures that could happen.  In this case, it was a fiber cut that somehow took everything out.  Your mission, should you decide to accept it, is to identify all the possible failure modes.  Warning: if you do a good job of brainstorming, there will be a LOT.

Next you want to triage those modes.  Some of them will have a common root cause or a common possible fix.  For others, you won’t really know what the fix is.

You also want to identify the impact of each failure.  In Gatwick’s case, the failure of all of the sign boards throughout the airport, while extremely embarrassing and sure to generate a lot of ridicule on social media, is probably less critical than a failure of the gate management software, which would basically stop planes from landing because there would be no way to assign them to a gate.  A failure of the baggage automation system would stop them from loading and unloading bags, which is a big problem.

Once you have done all that, you can decide which failures you are willing to live with and which ones are a problem.
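
One simple way to do this triage is to score each failure mode for likelihood and impact and rank the results: anything above a threshold needs a real mitigation, anything below it you consciously accept.  A rough sketch in Python, with made-up entries and scores rather than Gatwick’s actual inventory:

    # Score each failure mode 1-5 for likelihood and impact, then rank them.
    # The entries below are illustrative, not a real inventory.
    failure_modes = [
        ("Flight information displays down", 3, 3),
        ("Gate management system down",      2, 5),
        ("Baggage automation down",          2, 4),
        ("Payroll system down",              2, 2),
    ]

    ACCEPT_BELOW = 8  # placeholder threshold for "we can live with this"

    for name, likelihood, impact in sorted(failure_modes,
                                           key=lambda f: f[1] * f[2],
                                           reverse=True):
        score = likelihood * impact
        decision = "needs a mitigation plan" if score >= ACCEPT_BELOW else "accept the risk"
        print(f"{score:>2}  {name}: {decision}")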

Then you can brainstorm ways to mitigate the failure.  Apparently, in Gatwick’s case, rounding up a few white boards, felt tip markers and walkie talkies was considered acceptable.

After the beating they took today on social media, they may be reconsidering that decision.

In some cases you may want an automated disaster recovery solution; in other cases, a manual one may be acceptable; and in still others, living with the outage until it is fixed may be OK.

Time may also factor into the answer.  For example, if the payroll system goes down but the next payroll isn’t due for a week, it MAY not be a problem at all, but if payroll has to be produced today or tomorrow, it could be a big problem.

All of this will be part of your business continuity and disaster recovery program.

Once you have this disaster recovery and business continuity program written down, you need to create a team to run it, train them and test it.  And test it.  And test it.  When I was a kid there was a big power failure in the Northeast.  A large teaching hospital in town lost power, but, unfortunately, no one had trained people on how to start the generators.  That meant that for several hours, until they found the only guy who knew how to start the generators, nurses were running heart-lung machines and other critical patient equipment by hand.  They fixed that problem immediately after the blackout, so the next time it happened, all people saw was a blink of the lights.  Test.  Test.  Test!

If this seems overwhelming, please contact us and we will be pleased to assist you.

Information for this post came from Sky News.

 


Davidson County, NC Hit By Ransomware – Reverts to Paper

While yet another local government being shut down by a ransomware attack is old news these days, it still can point to a few valuable things.

This time it is Davidson County, NC, near Greensboro.

At 2:00 in the morning the county’s CIO was woken up – there was something strange going on with the 911 system.

What they figured out was that ransomware had compromised 70 servers and an unknown number of desktops and laptops.

Oh, yeah, and the phones weren’t working, which is sort of a problem for the 911 dispatchers.

The county manager said it could take weeks or months to fully resolve.  He also said that this kind of attack is common in Europe.  It is, but it is equally common in the U.S.  Just recently, nearby Mecklenburg County had the same problem.

One bit of good news is that they have cyber insurance.  That likely will help them pay for some of the costs.  At the time of the first article, they had not decided if they were going to pay the ransom.

By Monday the county said that 911 was working as was the tax collector.  You can see why both of these are important to the county.

They continue to work on the restoration, but did not give a time when things would be back to normal – just soon.

So what are the takeaways here?

  • Have a disaster recovery plan – it sounds like they did have one of these.
  • Have a business continuity plan – how do we keep the doors open or keep answering the phone?  And if you are a web based business and your web site is down, now what?
  • Have cyber insurance; it will help pay for all of this.
  • Make sure you have backups, and make sure they cover ALL of your data and systems.
  • Figure out how long it will take to restore those backups (see the sketch after this list).  For nearby Mecklenburg, it was a couple of months.  Is that OK?  If not, what is plan B?
  • How are you going to communicate about it?
  • MUTUAL AID – this one is easier for non-profits and the public sector, but it is still worth considering.  Davidson County received offers of assistance from the nearby City of Lexington and from Rowan County, as well as the North Carolina Association of County Commissioners.  And they are talking with Mecklenburg County, which went through the same ordeal recently.  When I was in college in upstate New York (this was in the dark ages before the Internet), the volunteer fire departments up and down the Finger Lakes would invoke mutual aid using fog horns whose sound traveled across the lakes for miles.  A particular blast pattern meant that this fire department or that one needed help.  It was a life saver, literally.  For a business, maybe the mutual aid comes from a customer, a business partner or an investor.  You may not need the aid, but having it available could make a huge difference.
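
As a concrete starting point for the backup and restore-time bullets above, even a crude inventory check beats discovering the gaps during an incident.  Here is a minimal sketch; the systems, numbers and policy thresholds are made up for illustration, not taken from Davidson County.

    # For each system you depend on: is it backed up, when was a restore last
    # tested, and how long did that restore take? These entries are examples.
    systems = {
        "911 dispatch":   {"backed_up": True,  "last_restore_test_days": 30,   "restore_hours": 6},
        "tax collection": {"backed_up": True,  "last_restore_test_days": 400,  "restore_hours": 48},
        "email":          {"backed_up": False, "last_restore_test_days": None, "restore_hours": None},
    }

    MAX_TEST_AGE_DAYS = 90   # placeholder policy
    MAX_RESTORE_HOURS = 24   # placeholder recovery time target

    for name, info in systems.items():
        if not info["backed_up"]:
            print(f"{name}: NOT backed up - fix this first")
        elif info["last_restore_test_days"] > MAX_TEST_AGE_DAYS:
            print(f"{name}: restore not tested in {info['last_restore_test_days']} days")
        elif info["restore_hours"] > MAX_RESTORE_HOURS:
            print(f"{name}: restore takes {info['restore_hours']}h, target is {MAX_RESTORE_HOURS}h - need a plan B")
        else:
            print(f"{name}: OK")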

Ultimately, having a plan and testing that plan is hugely important.  Don’t just hope it won’t happen to you.  It might not, but then again, it might.  Will you be ready if it happens to you?

Information for this post came from the Dispatch and Greensboro.com


What if Your Payment Processor Shuts Down?

What would happen to your business if your credit card processor shut down?  If you do online bill pay, what would happen if it shut down?

Millions of people and businesses got to figure that one out this month when Paypal’s TIO Networks unit suddenly shut down.  TIO does payment processing, both for merchants and for consumers who use it to pay bills at kiosks in malls, at grocery stores and other locations.

Paypal paid over $230 million for the company earlier this year.

The shutdown came after the discovery of a data breach; whether Paypal was aware of the breach at the time it bought the company is not clear.

In fact, all that is clear is that over a million and a half users had their information compromised.

Paypal’s decision, on November 10th, was to shut the unit down until it could fix the problems.

The impact of this shutdown varied from group to group.

If you are using the bill pay service at the grocery store, you are likely to go to another location.  Unfortunately, for TIO Networks, many of those customers won’t come back.  While this may be annoying for customers, the annoyance was likely manageable.

For a merchant who uses the vendor as a payment processing service and suddenly, with no notice, finds the service shut down, that could be a big problem.

This is especially a problem for organizations that depend on credit cards such as retail or healthcare or many other consumer services.

We often talk about business continuity and disaster recovery plans, but if you operate a business and credit cards are important to you, then your plan needs to deal with how you would handle an outage of your credit card processing service.

In the case of TIO, after about a week they started bringing the service back online for a few people who were most dependent on it.

Things get a bit complicated here.  Most of the time merchant payment processors require businesses to sign a contract for some number of years.  Since the contract was written by lawyers who work for the credit card processor, it likely says that they aren’t responsible if they shut down for a week or two without notice.  It probably even says that they aren’t liable for your losses and you are still required to pay on your contract.

If you switch to a new processor, you may end up paying on two contracts.  Now what do you do?

To make things more complicated, if your payment processor is integrated with other office systems or point of sale systems, switching to a new provider is even more difficult.

I don’t have a magic answer for you – unfortunately – but the problem is solvable.  It just requires some work.  Don’t wait until you have an outage – figure it out NOW!
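
One piece of that work is making sure your systems are not hard-wired to a single processor, so a pre-contracted backup can be switched on quickly.  Here is a hedged sketch of the idea; the processor classes and calls below are hypothetical stand-ins, not any real vendor’s SDK, and it ignores the contractual and PCI details discussed above.

    # Hypothetical sketch: route charges through an ordered list of processors
    # so an outage at the primary falls back to a pre-arranged backup.

    class ProcessorUnavailable(Exception):
        pass

    class PrimaryProcessor:
        name = "primary"
        def charge(self, amount_cents, card_token):
            # Simulate the primary being down, as in the TIO shutdown.
            raise ProcessorUnavailable("primary processor offline")

    class BackupProcessor:
        name = "backup"
        def charge(self, amount_cents, card_token):
            return f"receipt-{self.name}-{card_token}"

    def charge_with_fallback(amount_cents, card_token):
        for processor in (PrimaryProcessor(), BackupProcessor()):
            try:
                return processor.charge(amount_cents, card_token)
            except ProcessorUnavailable:
                continue  # try the next processor on the list
        # Last resort: fall back to the manual part of the continuity plan.
        raise RuntimeError("no processor available - record the sale offline")

    print(charge_with_fallback(1999, "tok_example"))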

This is why you need to have a written and tested business continuity and disaster recovery program.

Information for this post came from USAToday.


Do You Have a Disaster Recovery Plan for Your Front Door?

The Internet of Things never fails to amaze me.  And make us think outside of the box.

As the British publication The Register said, your smart lock may be knackered.  Google says that knackered means damaged severely, and I think they are right.

Here is the story.

For AirBnB hosts, one security challenge is how to get keys to their one-night renters in a secure manner and how to stop those renters from making a copy of the key to rob the place later.

There is an answer.  AirBnB has actually partnered with a company that makes smart locks (hence the Internet of Things tie-in).  These smart locks have a keypad on the front, so you can set a code 5 minutes before your overnight guest arrives, if you want, tell them what it is, and change it after they leave.

Ignoring for the moment all the security holes in many of these smart locks, in concept it makes perfect sense.

So much sense that AirBnB recommends these $469 locks (and, maybe, gets a cut of the action;  I don’t know).

For AirBnB homeowners, this makes life easier.  The lock connects to WiFi, which allows the owner to reset the code remotely, which is convenient.

It also allows the manufacturer to download new firmware automatically (because, after all, patching your door, err, door lock, is not high on your priority list).

Again, in concept, I think this automatic patching is THE WAY TO GO.  People are, in general, horrible about patching software.  Whether we are talking about their computer or their phone, they just don’t do it.  So when it comes to the Internet of Things – your dishwasher, refrigerator or front door, it is pretty unlikely that you are going to patch it with any regularity, so automatic patching is good.

EXCEPT … when the manufacturer screws it up.

In this case Lockstate, who makes this formerly smart and now knackered lock, sent the wrong firmware update to some of its locks.  The company claims it was only 500 locks, but that is small comfort when you are standing on the front step of a home you rented for hundreds of dollars a night and you can’t get in.

Apparently, they sent the firmware for their 7000i model lock to some of their 6000i model locks and, not surprisingly, it knackered the lock (I like that word).
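
A basic guard on the device side, refusing firmware built for a different model and keeping the old image so it can roll back, would have caught this.  A rough sketch of the idea follows; the field names and models are illustrative and have nothing to do with Lockstate’s actual update mechanism.

    # Hypothetical firmware update guard for an IoT device.
    DEVICE_MODEL = "6000i"

    def self_test(image):
        # Stand-in for a real post-update health check.
        return len(image) > 0

    def apply_update(update, current_image):
        """Return the image the device should run after the update attempt."""
        # Refuse firmware built for a different model outright.
        if update.get("target_model") != DEVICE_MODEL:
            raise ValueError(f"firmware targets {update.get('target_model')}, "
                             f"this device is a {DEVICE_MODEL}")
        # A real device would also verify a cryptographic signature here.
        new_image = update["image"]
        # Keep the old image so a failed self-test rolls back instead of bricking.
        return new_image if self_test(new_image) else current_image

    # The bad push: 7000i firmware arriving at a 6000i lock gets rejected.
    try:
        apply_update({"target_model": "7000i", "image": b"\x01"}, b"\x00")
    except ValueError as e:
        print("rejected:", e)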

Lockstate sent an email to the owners of these formerly smart locks and told them that they had two choices.

Option 1 was to take the back of the lock off (where I assume the smart part is) and send it back to the factory and they would either replace it or put the right software in it, making it UNknackered.  This option, they say, would take 5-7 business days.

Option 2 was for the homeowner to ask Lockstate to send a new lock and then, once it arrives, send back the old one.  This option would take 14-18 days to ship.

In the meantime, you get to camp out on your front doorstep, I guess.

For AirBnB home owners who may have new guests every night, this could be a problem.  Especially if the owner does not live in the same town in which the home is located.

Ultimately, the AirBnB home owners (and, apparently, they are the only ones affected because this lock was made specifically for AirBnB), will deal with it and in a week or three they will all be laughing about it.

Now to circle around to the title of the post.

As we integrate more so-called smart devices into our lives, we are going to have to create disaster recovery plans and business continuity plans for what happens when these smart devices are not so smart.

For example, let’s assume this was your house and not a rental.  The lock does have a physical key, but since you go in and out all the time using the buttons on the front (or maybe, with different locks, your smart phone), the key is in a junk drawer somewhere inside the house.  And you are standing on the front step.  What do you do?  What is your disaster recovery plan?  How do you get in and out of your house until you can get your lock repaired or replaced?

How long are you willing to be locked out of your house?

Of course, this is only a placeholder for the 20 billion smart Internet of Things devices that we, supposedly, will be using in the next few years.

What happens if they update the software in all of your smart light bulbs and they won’t turn on any more?  Or, maybe, they won’t turn off.  What if a hacker updates your light bulbs and each one of them starts calling 911 continuously (a variant of this actually happened already, so don’t call it far fetched)?

These are maybe simplistic things, but it can get more real.  Your smart car has millions of lines of software in it and it also can update itself.  The possibilities of what an errant or malicious update might do are endless.

Right now we don’t even know what these 20 billion smart devices that we are going to be using ARE, never mind how to deal with all of the potential failure modes.

I can see it now.  You buy your smart light bulb and you open the manual.  In it, in addition to the 40 safety warnings, you find, included at no extra charge, a 20-page disaster recovery plan for dealing with all of the possible disasters that could happen to you and this light bulb.

The possibilities boggle the mind.

Let’s assume that, in a few years, you might have a hundred smart devices in your home or apartment.  Along with, of course, a hundred disaster recovery plans.  OMG!

Unfortunately, since cost is the driver in IoT devices, the manufacturers will not put in manual controls to be used in case of emergency.  And, if current IoT security is any harbinger of the future, we know security will be terrible.

So here is one scenario.  A hacker or nation state actor decides to wreak havoc, hacks into some major vendor’s IoT devices and knackers them.  Maybe all of the smart light bulbs in the country turn off.  And won’t turn on.

OK, everybody.  Where is your light bulb disaster recovery manual?  Have you practiced your light bulb disaster recovery plan?  Have you implemented your light bulb business continuity plan?

While I am doing this partly tongue in cheek, maybe it isn’t as far fetched as we would like to think.

As hundreds of AirBnB home owners discovered recently, it isn’t that far fetched.

By the way, Lockstate says that they have fixed 60 percent of the dead locks.  I guess the other 40 percent of the home owners are still standing on their front porch.

Information for this post came from The Register.


The Fallout From a Ransomware Attack

We have heard from two big name firms who succumbed to the recent Petya/NotPetya ransomware attack and they provide interesting insights into dealing with the attack.

First, a quick background.  A week ago the world was coming to grips with a new ransomware attack.  It was initially called Petya because it looked like a strain of the Petya ransomware, but was then renamed NotPetya when it became clear that it was an attempt to look like Petya but really was not the same malware.

One major difference is that it appears that this malware was just designed to inflict as much pain as possible.  And it did.

While we have no idea of all the pain it inflicted, we do have a couple of very high profile pain points.

The first case study is DLA Piper.  DLA Piper is a global law firm with offices in 40 countries and over 4,000 lawyers.

However, last week, employees’ screens showed a ransom note instead of their normal desktops, and when employees came to work in the London office, they were greeted with a sign in the lobby telling them not to turn on their computers.

Suffice it to say, this is not what attorneys in the firm needed when they had trials to attend to, motions to file and clients to talk to.

To add to their embarrassment, DLA Piper had jumped on the WannaCry bandwagon, telling everyone how wonderful their cybersecurity practice was and that people should hire them.  Now they were on the other side of the problem.

In today’s world of social media, that sign in the lobby of DLA Piper’s London office went viral instantly and DLA Piper was not really ready to respond.  Their response said that client data was not hacked.  No one said that it was.

As of last Thursday, 3+ days into the attack, DLA Piper was not back online. Email was still out, for example.

If client documents were DESTROYED in the attack because they were sitting on staff workstations which were attacked, then they would need to go back to clients and tell them that their data wasn’t as safe as the client might have thought and would they please send them another copy.

If there were court pleadings due, they would have to beg the mercy of the court – and their adversaries – and ask for extensions.  The court likely would grant them, but it certainly wouldn’t help their case.

The second very public case is the Danish mega-shipping company A.P. Moller-Maersk.

They also were taken out by the NotPetya malware but in their case they had two problems.

Number one was that the computer systems that controlled their huge container ships were down, making it impossible to load or unload ships.

The second problem was that another division of the company runs many of the big ports around the world, and those port operations were down as well.  That means that even container ships of competing shipping companies could not unload at those ports.  Affected ports were located in the United States, India, Spain and The Netherlands.  The South Florida Container Terminal, for example, said that it could not deliver dry cargo and would not receive containers.  At the JNPT port near Mumbai, India, officials said they did not know when the terminal would be running smoothly.

Well now we do have more information.  As of Monday (yesterday), Maersk said it had restored its major applications.  Maersk said on Friday that it expected client facing systems to return to normal by Monday and was resuming deliveries at its major ports.

You may ask why I am spilling so much virtual ink on this story (I already wrote about it once).  The answer is that if these mega companies were not prepared for a major outage, then smaller companies are likely not prepared either.

While we have not seen financial numbers from either of these firms as to the cost of recovering from these attacks, it is likely in the multiple millions of dollars, if not more, for each of them.

And, they were effectively out of business for a week or more.  Notice that Maersk said that major customer facing applications were back online after a week.  What about the rest of their application suite?

Since ransomware – or in this case destructoware since there was no way to reverse the encryption even if you paid the ransom – is a huge problem around the world, the likelihood of your firm being hit is much higher than anyone would like.

Now is the time to create your INCIDENT RESPONSE PLAN, your DISASTER RECOVERY PLAN and your BUSINESS CONTINUITY PLAN.

If you get hit with an attack and you don’t have these plans in place, trained and tested, it is not going to be a fun couple of weeks.  Assuming you are still in business.  When Sony got attacked it took them three months to get basic systems back online.  Sony had a plan – it just had not been updated in six years.

Will you be able to survive the effects of this kind of attack?

Information for this post came from Fortune, Reuters and another Reuters article.
