Tag Archives: Business Continuity

Country of Georgia Hacked

Well, it seemed like the whole damn country.

Over 15,000 websites were hacked, including, not surprisingly, newspapers, government offices and TV stations.

After the sites were defaced by the hackers, they were taken offline.

Newspapers said it was the biggest attack in the country’s history, even bigger than the 2008 attack by Russia.

This attack even affected some of the country’s courts and banks.

Needless to say, given the history with Russia, there was some panic.

However, a web hosting company, Pro-service, admitted that it was their network that had been attacked.

By late in the day more than half of the sites were back online and they were working on the rest.

The hackers defaced the sites with a picture of former president Mikheil Saakashvili, with the text “I’ll be back” overlaid on top.

Saakashvili is now in exile in Ukraine and was generally seen as anti-corruption, so it is unlikely that Russia did it this time, but the attack does appear to be politically motivated.

At least two TV stations went off the air right after the attack.

Given that Georgia (formerly known as the Republic of Georgia) is not vital to you and me on an everyday basis, why should we care?

The answer is that if it could be done there, it could be done here too.  Oh.  Wait.  It already has been (see here).  In that case, it was the Chinese and the damage was much greater.

The interesting part for both the Chinese attack on us and the <whoever did it> attack on Georgia is that one attack on a piece of shared infrastructure can do an amazing amount of damage.

Think about what happens when Amazon, Microsoft or Google go down – even without a cyberattack.

The folks in DC are already planning how to respond to an attack on shared infrastructure like banking, power, water, transportation and other critical infrastructure.  You and I don’t have much ability to impact that part of the conversation, but we do have impact on our own infrastructure.

Apparently this attack was pretty simple and didn’t do much damage, but that doesn’t mean that the next attack will also be low tech or do little damage.  What if an attack disabled one or a few Microsoft or Amazon data centers?  Microsoft is already rationing VMs in US East 2 due to lack of capacity.  What would happen if they lost an entire data center?

This falls under the category of disaster recovery and business continuity.  Hackers are only one case, but the issue of shared infrastructure makes the impact much greater.  If all of your servers were in your office like they used to be, then attacks would be more localized.  But there are many advantages to cloud infrastructure, so I am not suggesting going back to the days of servers in a closet.

Maybe Microsoft or Amazon are resilient enough to withstand an attack (although it seems like self-inflicted wounds already do quite a bit of damage without the help of outside attackers), but what about smaller cloud providers?

What if one or more of your key cloud providers had an outage?  Are you ready to handle that?  As we saw with the planned power outages in California this past week, stores that lost power had to lock their doors because their cash registers didn’t work.  Since nothing has a price tag on it any more, they couldn’t even take cash – assuming you could find a working gas station to fill your car or an ATM to get that cash in the first place.
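
One way to make that question concrete is to keep a simple, written inventory of the cloud services you depend on and the fallback you have already agreed on for each.  Below is a minimal sketch in Python; the service names, fallbacks and tolerances are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: inventory your critical cloud dependencies and the
# fallback you have decided on for each one. Names and fallbacks below
# are hypothetical examples, not recommendations.
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str              # service you rely on
    function: str          # what breaks for the business if it is down
    fallback: str          # the pre-agreed manual or alternate process
    max_outage_hours: int  # how long you can tolerate before invoking it

DEPENDENCIES = [
    Dependency("Example-CRM", "sales can't see customer history",
               "export nightly CSV snapshot to a shared drive", 4),
    Dependency("Example-Payments", "can't take card payments",
               "record orders on paper and invoice later", 1),
    Dependency("Example-Email", "no inbound or outbound mail",
               "published phone line and status page", 8),
]

def outage_checklist(down_services: set) -> None:
    """Print the pre-planned response for whatever is currently down."""
    for dep in DEPENDENCIES:
        if dep.name in down_services:
            print(f"{dep.name}: {dep.function}")
            print(f"  -> fallback: {dep.fallback} "
                  f"(invoke within {dep.max_outage_hours}h)")

if __name__ == "__main__":
    outage_checklist({"Example-Payments"})
```

Even a list this crude forces the useful conversation: what, exactly, do we do today if this service disappears for a few hours?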

Bottom line is that shared infrastructure is everywhere and we need to plan for what we are going to do when – not if – that shared infrastructure takes a vacation.

Plan now.  The alternative may be to shut the doors until the outage gets fixed, and if that takes a while, those doors may be locked forever.


News Bites for the Week Ending November 30, 2018

Microsoft Azure and Office 365 Multi-Factor Authentication Outage

Microsoft’s cloud environment had an outage this week for the better part of a day, worldwide.  The failure stopped users who had turned on two-factor authentication from logging in.

This is not a “gee, Microsoft is bad” or “gee, two-factor authentication is bad” problem.  All systems have failures, especially the ones that businesses run internally.  Unfortunately, cloud systems fail occasionally too.

The bigger question is: are you prepared for that guaranteed, sometime-in-the-future failure?

It is a really bad idea to assume cloud systems will not fail, whether they are industry-specific applications or generic ones like Microsoft’s or Google’s.

What is your acceptable length for an outage?  How much data are you willing to lose?

More importantly, do you have a plan for what to do in case you pass those points of no return and have you recently tested those plans?
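
Those two questions are usually formalized as a recovery time objective (RTO – how long you can afford to be down) and a recovery point objective (RPO – how much data you can afford to lose).  Here is a minimal sketch of checking yourself against them; the thresholds and timestamps are made-up examples.

```python
# Minimal sketch of turning the two questions above into numbers:
# RTO = how long you can afford to be down, RPO = how much data you
# can afford to lose. The thresholds and example values are illustrative.
from datetime import datetime, timedelta, timezone

RTO = timedelta(hours=4)   # acceptable length of an outage
RPO = timedelta(hours=1)   # acceptable data loss window

def check_objectives(last_backup: datetime, last_restore_test: timedelta) -> None:
    backup_age = datetime.now(timezone.utc) - last_backup
    if backup_age > RPO:
        print(f"RPO at risk: newest backup is {backup_age} old (limit {RPO})")
    if last_restore_test > RTO:
        print(f"RTO at risk: last tested restore took {last_restore_test} "
              f"(limit {RTO})")

# Example: newest backup is 3 hours old, last full restore test took 6 hours.
check_objectives(
    last_backup=datetime.now(timezone.utc) - timedelta(hours=3),
    last_restore_test=timedelta(hours=6),
)
```

If either check fires and there is no plan B, that is the gap to close before the outage, not during it.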

Failures usually happen at inconvenient times, and planning is critical to dealing with them.  Dealing with an outage absent a well thought out and tested plan is likely to be a disaster. Source: ZDNet.

Moody’s is Going to Start Including Cyber Risk in Credit Ratings

We have said for a long time that cyber risk is a business problem.  Business credit ratings are meant to reflect the overall risk a business represents.

What has been missing is connecting the two.

Now Moody’s is going to do that.

While details are scarce, Moody’s says that it will soon evaluate an organization’s risk from a cyber attack.

Moody’s has even created a new cyber risk group.

While they haven’t said so yet, likely candidates for initial cyber risk scrutiny are defense contractors, financial services, health care and critical infrastructure.

If your company cares about its risk rating, make sure that your cybersecurity is in order along with your finances.  Source: CNBC.


British Lawmakers Seize Facebook Files

In what has got to be an interesting game, full of innuendo and intrigue, British lawmakers seized documents sealed by a U.S. court when the CEO of a company that had access to them visited England.

The short version of the back story is that the Brits are not real happy with Facebook and were looking for copies of documents that had been part of discovery in a lawsuit between app maker Six4Three and Facebook that has been going on for years.

So, when Ted Kramer, founder of the company, visited England on business, Parliament’s Serjeant at Arms literally hauled Ted into Parliament and threatened to throw him in jail if he did not produce the documents sealed by the U.S. court.

So Ted was between a rock and a hard place: the Brits had physical custody of him, and the U.S. courts could hold him in contempt (I suspect they will huff and puff a lot but not do anything) – so he turned over the documents.

Facebook has been trying to hide these documents for years.  I suspect that Six4Three would be happy if they became public.  Facebook said, after the fact, that the Brits should return the documents.  The Brits said go stick it.  You get the idea.

Did Six4Three play a part in this drama in hopes of getting these emails released?  Don’t know but I would not rule that out.  Source: CNBC.


Two More Hospitals Hit By Ransomware

The East Ohio Regional Hospital (EORH) and Ohio Valley Medical Center (OVMC) were both hit by a ransomware attack.  The hospitals reverted to using paper patient charts and are sending ambulances to other hospitals.  Of course they are saying that patient care isn’t affected, but given that staff have no information available about patients currently in the hospital – their diagnoses, tests or prior treatments – that seems a bit optimistic.

While most of us do not deal with life and death situations, it can take a while – weeks or longer – to recover from ransomware attacks if the organization is not prepared.

Are you prepared?  In this case, likely one doctor or nurse clicked on the wrong link;  that is all it takes.  Source: EHR Intelligence.


Atrium Health Data Breach – Over 2 Million Customers Impacted

Atrium Health announced a breach of the personal information of over 2 million customers, including Social Security numbers for about 700,000 of them.

However, while Atrium gets to pay the fines, the breach was actually the fault of one of their vendors, Accudoc, which does billing for their 44 hospitals.

Atrium says that the data was accessed but not downloaded and did not include credit card data.  Of course if the bad guys “accessed” the data and then screen scraped it, it would not show as downloaded.

One more time – VENDOR CYBER RISK MANAGEMENT.  It has to be a priority.   Unless you don’t mind taking the rap and fines for your vendor’s errors.   Source: Charlotte Observer.


Is Your DR Plan Better Than London Gatwick Airport’s?

Let’s assume that you are a major international airport that moves 45 million passengers and 97,000 tons of cargo a year.

Then let’s say you have some form of IT failure.  How do you communicate with your customers?

At London’s Gatwick airport, apparently your DR plan consists of trotting out a small white board and giving a customer service agent a dry erase marker and a walkie-talkie.

On the bright side, they are using black markers for on time flights and red markers for others.

Gatwick is blaming Vodafone for the outage.  Vodafone does contract with Gatwick for certain IT services.

You would think that an organization as large as Gatwick would have a well-planned and well-tested Disaster Recovery strategy, but it would appear that they don’t.

Things, they say, will get back to normal as soon as possible.

Vodafone is saying:

“We have identified a damaged fibre cable which is used by Gatwick Airport to display flight information.

“Our engineers are working hard to fix the cable as quickly as possible.

“This is a top priority for us and we are very sorry for any problems caused by this issue.”

But who is being blasted on social media with “absolute shambles”, “utter carnage” and “huge delays”?  Not Vodafone.

Passengers are snapping cell phone pictures and posting to social media with snarky comments.

Are you prepared for an IT outage?

First of all, there are a lot of possible failures that could happen.  In this case, it was a fiber cut that somehow took everything out.  Your mission, should you decide to accept it, is to identify all the possible failures.  Warning: if you do a good job of brainstorming, there will be a LOT.

Next you want to triage those failure modes.  Some of them will have a common root cause or a common possible fix.  For others, you won’t know yet what the fix is.

You also want to identify the impact of each failure.  In Gatwick’s case, the failure of all of the sign boards throughout the airport – while extremely embarrassing and sure to generate a lot of ridicule on social media – is probably less critical than a failure of the gate management software, which would basically stop planes from landing because there would be no way to assign those planes to gates.  A failure of the baggage automation system would stop them from loading and unloading bags, which is a big problem.

Once you have done all that, you can decide which failures you are willing to live with and which ones are a problem.
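
The identify/triage/impact steps above amount to a simple risk register.  A minimal sketch, with made-up failure modes and scores drawn from the examples in this post, might look like this:

```python
# Minimal sketch of the identify/triage/impact steps as a risk register.
# The failure modes and scores below are made-up examples; yours will differ.
failure_modes = [
    # (failure, likelihood 1-5, business impact 1-5, known mitigation)
    ("Fiber cut takes out flight info displays", 2, 3, "whiteboards + radios"),
    ("Gate management system down",              2, 5, "none identified"),
    ("Baggage automation down",                  3, 4, "manual sort (slow)"),
    ("Payroll system down",                      2, 2, "run payroll a day late"),
]

# Triage: rank by likelihood x impact, highest risk first.
for failure, likelihood, impact, mitigation in sorted(
        failure_modes, key=lambda f: f[1] * f[2], reverse=True):
    score = likelihood * impact
    print(f"[risk {score:2d}] {failure} -> mitigation: {mitigation}")
```

Sorting by likelihood times impact is crude, but it is usually enough to show which failures deserve a real mitigation and which you can consciously accept.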

Then you can brainstorm ways to mitigate the failure.  Apparently, in Gatwick’s case, rounding up a few white boards, felt tip markers and walkie talkies was considered acceptable.

After the beating they took today on social media, they may be reconsidering that decision.

In some cases you may want an automated disaster recovery solution; in other cases a manual one may be acceptable; and in still others, living with the outage until it is fixed may be OK.

Time may also play into the answer.  For example, if the payroll system goes down but the next payroll isn’t for a week, it MAY not be a problem at all, but if payroll has to be produced today or tomorrow, it could be a big problem.

All of this will be part of your business continuity and disaster recovery program.

Once you have this disaster recovery and business continuity program written down, you need to create a team to run it, train them and test it.  And test it.  And test it.

When I was a kid there was a big power failure in the northeast.  A large teaching hospital in town lost power, but, unfortunately, no one had trained people on how to start the generators.  That meant that for several hours, until they found the only guy who knew how to start the generators, nurses were keeping heart-lung machines and other critical patient equipment running by hand.  They fixed that problem immediately after the blackout, so the next time it happened, all people saw was a blink of the lights.

Test.  Test.  Test!

If this seems overwhelming, please contact us and we will be pleased to assist you.

Information for this post came from Sky News.



Davidson County, NC Hit By Ransomware – Reverts to Paper

While yet another local government being shut down by a ransomware attack is old news these days, it still can point to a few valuable things.

This time it is Davidson County, NC, whose county seat is Lexington.

At 2:00 in the morning the county’s CIO was woken up – there was something strange going on with the 911 system.

What they figured out was that ransomware had compromised 70 servers and an unknown number of desktops and laptops.

Oh, yeah, and the phones weren’t working, which is sort of a problem for the 911 dispatchers.

The county manager said it could take weeks or months to fully resolve.  He also said that this kind of attack is common in Europe.  It is, but it is equally common in the U.S.  Just recently neighboring county Mecklenburg had the same problem.

One bit of good news is that they have cyber insurance.  That likely will help them pay for some of the costs.  At the time of the first article, they had not decided if they were going to pay the ransom.

By Monday the county said that 911 was working as was the tax collector.  You can see why both of these are important to the county.

They continue to work on the restoration, but did not give a time when things would be back to normal – just soon.

So what are the takeaways here?

  • Have a disaster recovery plan – it sounds like they did have one of these.
  • Have a business continuity plan – how do we keep the doors open and keep answering the phone?  And if you are a web-based business and your web site is down, now what?
  • Having cyber insurance will help pay for all this.
  • Make sure you have backups.  Make sure they cover ALL of your data and systems.
  • Figure out how long it will take to restore those backups.  For nearby Mecklenburg, it was a couple of months.  Is that OK?  If not, what is plan B?  (See the sketch after this list.)
  • How are you going to communicate about it?
  • MUTUAL AID – this one is easier for non-profits and the public sector, but it is still worth considering.  Davidson County received offers of assistance from the City of Lexington and from Rowan County, as well as from the North Carolina Association of County Commissioners.  And they are talking with Mecklenburg County, which went through the same ordeal recently.  When I was in college in upstate New York (this was in the dark ages before the Internet), the volunteer fire departments up and down the Finger Lakes would invoke mutual aid using fog horns that carried across the lakes for miles.  A particular burst meant that this fire department or that one needed help.  It was a life saver, literally.  For a business, the equivalent might be an arrangement with a customer, a business partner or an investor.  You may not need the aid, but having it available could make a huge difference.
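
On the backup bullets above, the check is mechanical enough to script.  A minimal sketch, with hypothetical system names, timestamps and thresholds:

```python
# Minimal sketch: confirm every system you care about has a recent
# backup and a recorded restore test. Names, dates and the freshness
# threshold are hypothetical examples.
from datetime import datetime, timedelta, timezone

MAX_BACKUP_AGE = timedelta(days=1)

# System name -> (time of newest backup, time of last successful restore test)
inventory = {
    "finance-db":  (datetime.now(timezone.utc) - timedelta(hours=6),
                    datetime(2018, 10, 1, tzinfo=timezone.utc)),
    "file-server": (datetime.now(timezone.utc) - timedelta(days=3),
                    None),          # never restore-tested
    "email":       (None, None),    # no backup at all
}

now = datetime.now(timezone.utc)
for system, (newest_backup, restore_test) in inventory.items():
    if newest_backup is None:
        print(f"{system}: NO BACKUP")
    elif now - newest_backup > MAX_BACKUP_AGE:
        print(f"{system}: backup is stale ({now - newest_backup} old)")
    if restore_test is None:
        print(f"{system}: restore has never been tested")
```

The point is not the script itself; it is that “do we have backups?” becomes a question you can answer on demand, including how stale they are and whether a restore has ever actually been tested.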

Ultimately, having a plan and testing that plan is hugely important.  Don’t hope it won’t happen to you.  That might be the case, but then again, it might not be the case.  Will you be ready if it happens to you?

Information for this post came from the Dispatch and Greensboro.com.


What if Your Payment Processor Shuts Down?

What would happen to your business if your credit card processor shut down?  If you do online bill pay, what would happen if it shut down?

Millions of people and businesses got to figure that one out this month when Paypal’s TIO Networks unit suddenly shut down.  TIO does payment processing, both for merchants and for consumers who use it to pay bills at kiosks in malls, at grocery stores and other locations.

Paypal paid over $230 million for the company earlier this year.

Why?  A data breach.  Whether Paypal was aware of it at the time it bought the company is not clear.

In fact, all that is clear is that over a million and a half users had their information compromised.

Paypal’s decision, on November 10th, was to shut the unit down until the problems could be fixed.

The impact of this shutdown varied from group to group.

If you are using the bill pay service at the grocery store, you are likely to just go to another location.  Unfortunately for TIO Networks, many of those customers won’t come back.  While this may be annoying for customers, the annoyance was likely manageable.

For merchants who use the vendor as their payment processing service and who suddenly, with no notice, find the service shut down, that could be a big problem.

This is especially a problem for organizations that depend on credit cards, such as retail, healthcare and many other consumer services.

We often talk about business continuity and disaster recovery plans, but if you operate a business and credit cards are important to you, then your plan needs to deal with how you would handle an outage of your credit card processing service.
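
One way to build that into the plan is to keep card processing behind a single interface in your own systems, so that a backup processor – or the manual fallback from your business continuity plan – can be swapped in if the primary disappears.  This is only a sketch; both processor classes are hypothetical stand-ins, not real payment APIs.

```python
# Minimal sketch: keep card processing behind one interface so a backup
# processor or a manual fallback can be swapped in during an outage.
# Both processor classes here are hypothetical stand-ins, not real APIs.
class PaymentError(Exception):
    pass

class PrimaryProcessor:
    def charge(self, amount_cents: int, card_token: str) -> str:
        raise PaymentError("primary processor is down")  # simulate an outage

class BackupProcessor:
    def charge(self, amount_cents: int, card_token: str) -> str:
        return f"backup-receipt-{card_token}-{amount_cents}"

def take_payment(amount_cents: int, card_token: str) -> str:
    for processor in (PrimaryProcessor(), BackupProcessor()):
        try:
            return processor.charge(amount_cents, card_token)
        except PaymentError:
            continue
    # Last resort from the business continuity plan: record the order
    # and invoice the customer later rather than turning them away.
    return "recorded-for-manual-invoice"

print(take_payment(2500, "tok_demo"))
```

The hard part is not the code; it is having the second merchant account, the contract review and the manual procedure arranged before the outage, which is exactly what the business continuity plan is for.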

In the case of TIO, after about a week they started bringing the service back online for a few people who were most dependent on it.

Things get a bit complicated here.  Most of the time merchant payment processors require businesses to sign a contract for some number of years.  Since the contract was written by lawyers who work for the credit card processor, it likely says that they aren’t responsible if they shut down for a week or two without notice.  It probably even says that they aren’t liable for your losses and you are still required to pay on your contract.

If you switch to a new processor, you may end up with two contracts.  Now what do you do?

To make things more complicated, if your payment processor is integrated with other office systems or point of sale systems, switching to a new provider is even more difficult.

I don’t have a magic answer for you – unfortunately – but the problem is solvable.  It just requires some work.  Don’t wait until you have an outage – figure it out NOW!

This is why you need to have a written and tested business continuity and disaster recovery program.

Information for this post came from USAToday.


The Fallout From a Ransomware Attack

We have heard from two big-name firms that succumbed to the recent Petya/NotPetya ransomware attack, and they provide interesting insights into dealing with such an attack.

First, a quick bit of background.  A week ago the world was coming to grips with a new ransomware attack.  It was initially called Petya because it looked like a strain of the Petya ransomware, but it was then dubbed NotPetya when it became clear that it was designed to look like Petya but really was not the same malware.

One major difference is that it appears that this malware was just designed to inflict as much pain as possible.  And it did.

While we have no idea of all the pain it inflicted, we do have a couple of very high profile pain points.

The first case study is DLA Piper.  DLA Piper is a global law firm with offices in 40 countries and over 4,000 lawyers.

However, last week, what employees saw on their screens was the attackers’ ransom note.

When employees came to work in the London office, they were greeted by a sign in the lobby telling them not to turn on their computers.

Suffice it to say, this is not what attorneys in the firm needed when they had trials to attend to, motions to file and clients to talk to.

To further their embarrassment, DLA Piper had jumped on the WannaCry bandwagon, telling everyone how wonderful their cyber security practice was and that people should hire them.  Now they were on the other side of the problem.

In today’s world of social media, that sign in the lobby of DLA Piper’s London office went viral instantly and DLA Piper was not really ready to respond.  Their response said that client data was not hacked.  No one said that it was.

As of last Thursday, 3+ days into the attack, DLA Piper was not back online. Email was still out, for example.

If client documents were DESTROYED in the attack because they were sitting on staff workstations which were attacked, then they would need to go back to clients and tell them that their data wasn’t as safe as the client might have thought and would they please send them another copy.

If there were court pleadings due, they would have to beg the mercy of the court – and their adversaries – and ask for extensions.  The court likely would grant them, but it certainly wouldn’t help their case.

The second very public case is the Danish mega-shipping company A.P. Moller-Maersk.

They were also taken out by the NotPetya malware, but in their case they had two problems.

Number one: the computer systems that controlled their huge container ships were down, making it impossible to load or unload those ships.

The second problem was that another division of the company runs many of the big ports around the world, and those port operations were down as well.  That means that even container ships of competing shipping companies could not unload at those ports.  Ports affected were located in the United States, India, Spain and The Netherlands.  The South Florida Container Terminal, for example, said that it could not deliver dry cargo and would not receive any containers.  At the JNPT port near Mumbai, India, officials said they did not know when the terminal would be running smoothly.

Well, now we do have more information.  As of Monday (yesterday), Maersk said it had restored its major applications.  Maersk said on Friday that it expected client-facing systems to return to normal by Monday and that it was resuming deliveries at its major ports.

You may ask why I am spilling so much virtual ink on this story (I already wrote about it once).  The answer is that if these mega-companies were not prepared for a major outage, then smaller companies are likely not prepared either.

While we have not seen financial numbers from either of these firms as to the cost of recovering from these attacks, it is likely in the multiple millions of dollars, if not more, for each of them.

And, they were effectively out of business for a week or more.  Notice that Maersk said that major customer facing applications were back online after a week.  What about the rest of their application suite?

Since ransomware – or in this case destructoware since there was no way to reverse the encryption even if you paid the ransom – is a huge problem around the world, the likelihood of your firm being hit is much higher than anyone would like.

Now is the time to create your INCIDENT RESPONSE PLAN, your DISASTER RECOVERY PLAN and your BUSINESS CONTINUITY PLAN.

If you get hit with an attack and you don’t have these plans in place, trained and tested, it is not going to be a fun couple of weeks.  Assuming you are still in business.  When Sony got attacked it took them three months to get basic systems back online.  Sony had a plan – it just had not been updated in six years.

Will you be able to survive the effects of this kind of attack?

Information for this post came from Fortune, Reuters and another Reuters article.
