Tag Archives: Business Continuity

What if Your Payment Processor Shuts Down?

What would happen to your business if your credit card processor shut down?  If you do online bill pay, what would happen if it shut down?

Millions of people and businesses got to figure that one out this month when Paypal’s TIO Networks unit suddenly shut down.  TIO does payment processing, both for merchants and for consumers who use it to pay bills at kiosks in malls, at grocery stores and other locations.

Paypal paid over $230 million for the company earlier this year.

Whether Paypal was aware of TIO’s data breach at the time it bought the company is not clear.

In fact, all that is clear is that over a million and a half users had their information compromised.

On November 10th, Paypal decided to shut the unit down until it could fix the problems.

The impact of this shutdown varied from group to group.

If you use the bill pay service at the grocery store, you can probably go to another location. Unfortunately for TIO Networks, many of those customers won’t come back. While this may be annoying for customers, the annoyance is likely manageable.

For merchants who use the vendor as their payment processing service and suddenly, with no notice, find that the service has shut down, that could be a big problem.

This is especially a problem for organizations that depend on credit cards, such as retail, healthcare and many other consumer services.

We often talk about business continuity and disaster recovery plans, but if you operate a business and credit cards are important to you, then your plan needs to deal with how you would handle an outage of your credit card processing service.
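
To make that concrete, here is a minimal sketch of what that part of the plan can look like in software: route charges through a small abstraction that can fail over to a second, pre-contracted processor. The Processor class and the names below are hypothetical stand-ins for illustration, not any real provider’s API.

```python
# Hypothetical failover sketch - the Processor class is a stand-in, not a real
# payment gateway SDK.  The point: if the primary is down, the sale still happens.

class ProcessorDown(Exception):
    """Raised when a processor is unreachable or hard-down."""

class Processor:
    """Stand-in for a real payment-processor client."""
    def __init__(self, name, available=True):
        self.name = name
        self.available = available

    def charge(self, order_id, amount_cents):
        if not self.available:
            raise ProcessorDown(f"{self.name} is not responding")
        return f"{self.name}-receipt-{order_id}"

def charge_with_failover(order_id, amount_cents, processors):
    """Try each configured processor in order until one accepts the charge."""
    failures = []
    for proc in processors:
        try:
            return proc.name, proc.charge(order_id, amount_cents)
        except ProcessorDown as exc:
            failures.append((proc.name, str(exc)))   # note the failure, move on
    raise RuntimeError(f"All payment processors failed: {failures}")

# The primary is down; the backup keeps sales flowing.
primary = Processor("primary-gateway", available=False)
backup = Processor("backup-gateway")
print(charge_with_failover("order-123", 4_999, [primary, backup]))
```

The code is the easy part – the plan also has to cover the second merchant account, the contract and the integration work, all arranged before the primary goes dark.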

In the case of TIO, after about a week they started bringing the service back online for a few people who were most dependent on it.

Things get a bit complicated here.  Most of the time merchant payment processors require businesses to sign a contract for some number of years.  Since the contract was written by lawyers who work for the credit card processor, it likely says that they aren’t responsible if they shut down for a week or two without notice.  It probably even says that they aren’t liable for your losses and you are still required to pay on your contract.

If you switch to a new processor, you may wind up with two contracts. Now what do you do?

To make things more complicated, if your payment processor is integrated with other office systems or point of sale systems, switching to a new provider is even more difficult.

I don’t have a magic answer for you – unfortunately – but the problem is solvable.  It just requires some work.  Don’t wait until you have an outage – figure it out NOW!

This is why you need to have a written and tested business continuity and disaster recovery program.

Information for this post came from USA Today.


The Fallout From a Ransomware Attack

We have heard from two big-name firms that succumbed to the recent Petya/NotPetya ransomware attack, and their experiences provide interesting insights into dealing with an attack like this.

First, a quick background. A week ago the world was coming to grips with a new ransomware attack. It was initially called Petya because it looked like a strain of the Petya ransomware, but was later dubbed NotPetya when it became clear that it was built to look like Petya but really was not the same malware.

One major difference is that it appears that this malware was just designed to inflict as much pain as possible.  And it did.

While we have no idea of all the pain it inflicted, we do have a couple of very high profile pain points.

The first case study is DLA Piper.  DLA Piper is a global law firm with offices in 40 countries and over 4,000 lawyers.

However, last week employees’ screens showed the attackers’ ransom demand instead.

When employees came to work in the London office, they were greeted with a sign in the lobby warning them not to turn on their computers.

Suffice it to say, this is not what attorneys in the firm needed when they had trials to attend to, motions to file and clients to talk to.

To further their embarrassment, DLA Piper had jumped on the WannaCry bandwagon, telling everyone how wonderful their cyber security practice was and that people should hire them. Now they were on the other side of the problem.

In today’s world of social media, that sign in the lobby of DLA Piper’s London office went viral instantly and DLA Piper was not really ready to respond.  Their response said that client data was not hacked.  No one said that it was.

As of last Thursday, 3+ days into the attack, DLA Piper was not back online. Email was still out, for example.

If client documents were DESTROYED in the attack because they were sitting on staff workstations that were hit, then the firm would need to go back to clients, tell them that their data wasn’t as safe as they might have thought, and ask them to please send another copy.

If there were court pleadings due, they would have to beg the mercy of the court – and their adversaries – and ask for extensions.  The court likely would grant them, but it certainly wouldn’t help their case.

The second very public case is the Danish mega-shipping company A.P. Moller-Maersk.

They also were taken out by the NotPetya malware but in their case they had two problems.

Number one was that the computer systems that controlled their huge container ships were down, making it impossible to load or unload ships.

The second problem was that another division of the company runs many of the big ports around the world, and those port operations were down as well. That meant that even container ships of competing shipping companies could not unload at those ports. Ports affected were located in the United States, India, Spain and The Netherlands. The South Florida Container Terminal, for example, said that it could not deliver dry cargo and would not receive containers. At the JNPT port near Mumbai, India, officials said that they did not know when the terminal would be running smoothly.

Well, now we do have more information. As of Monday (yesterday), Maersk said it had restored its major applications. Maersk had said on Friday that it expected client-facing systems to return to normal by Monday and that it was resuming deliveries at its major ports.

You may ask why I am spilling so much virtual ink on this story (I already wrote about it once). The answer is that if these mega companies were not prepared for a major outage, then smaller companies are likely not prepared either.

While we have not seen financial numbers from either of these firms as to the cost of recovering from these attacks, it is likely in the multiple millions of dollars, if not more, for each of them.

And, they were effectively out of business for a week or more.  Notice that Maersk said that major customer facing applications were back online after a week.  What about the rest of their application suite?

Since ransomware – or in this case destructoware since there was no way to reverse the encryption even if you paid the ransom – is a huge problem around the world, the likelihood of your firm being hit is much higher than anyone would like.

Now is the time to create your INCIDENT RESPONSE PLAN, your DISASTER RECOVERY PLAN and your BUSINESS CONTINUITY PLAN.

If you get hit with an attack and you don’t have these plans in place, trained and tested, it is not going to be a fun couple of weeks.  Assuming you are still in business.  When Sony got attacked it took them three months to get basic systems back online.  Sony had a plan – it just had not been updated in six years.
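
What does “tested” actually mean? One small but concrete piece of it is regularly restoring a backup to a scratch location and proving the files came back intact – before you need them. Here is a rough sketch of such a drill; the manifest format and paths are assumptions for illustration only.

```python
# Restore-verification drill (sketch).  Assumes the backup job wrote a JSON
# manifest of relative paths -> SHA-256 hashes at backup time.

import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large files don't blow up memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(restore_dir, manifest_file):
    """Compare restored files against the checksums recorded when the backup ran."""
    manifest = json.loads(Path(manifest_file).read_text())
    problems = []
    for rel_path, expected in manifest.items():
        restored = Path(restore_dir) / rel_path
        if not restored.exists():
            problems.append(f"missing: {rel_path}")
        elif sha256_of(restored) != expected:
            problems.append(f"corrupt: {rel_path}")
    return problems   # an empty list means the drill passed

# Example drill run (paths are illustrative):
# problems = verify_restore("/tmp/restore-test", "/backups/latest/manifest.json")
```

If nobody has ever run a drill like this, you don’t have backups – you have hope.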

Will you be able to survive the effects of this kind of attack?

Information for this post came from Fortune and two Reuters articles.


How to Spend $100 Million Without Even Trying

UPDATE: The Sun, not always the most reliable information source, is saying the outage and its trickle-down effects hit 300,000 passengers and may cost the airline $300+ million. The CEO, Alex Cruz, allegedly said, when warned earlier about the new system installed last fall, that it was the staff’s fault, not the system’s, that things were not working as desired. Cruz, trying to rein in the damage, told staff in an email to stop talking about what happened. Others have said that the people at Tata did not have the skills to start up and run the backup system – certainly not the first time things get bumpy when on-shore resources are replaced with much lower paid off-shore resources who have zero history in the care and feeding of that particular, very complex system. Even if the folks at Tata were experienced at operating some other complex computer system, no two systems are the same, and there is so much chewing gum and baling wire holding the airline industry’s systems together that, without legacy knowledge of that particular system, likely no one could make it work right.

Of all of the weekends for an airline to have a computer systems meltdown, Memorial Day weekend is probably not the one that you would pick.

Unfortunately for British Airways, they didn’t get to “pick” when the event happened.

 

Early Saturday, British Airways had a systems meltdown. This really was a meltdown: the web site and mobile apps stopped working, passengers could not check in, and employees could not manage flights, among other things.

Passengers at London’s two largest airports – Heathrow and Gatwick – were not getting any information from the staff.  Likely this was due to the fact that the systems that the staff normally used to get information were not working.

Initially, BA cancelled all flights out of London until 6 PM on Saturday, but later cancelled all flights out of London all day.

Estimates are that 1,000 flights were cancelled.

Given that this was a holiday weekend, likely every flight was full. If you conservatively assume 100 passengers per flight, cancelling 1,000 flights affected 100,000 passengers. With flights that full, even if BA wanted to rebook people, there probably weren’t seats available during the next couple of days. That means a lot of these passengers would have to cancel their trips. And since the airline couldn’t blame the weather or another natural disaster, it will likely have to refund passengers their money. That doesn’t mean giving people credit towards a future trip, but rather writing them a check.

In Britain, airlines are required to pay penalties of up to 600 Euros per passenger, depending on the length of the delay and the length of the flight.

In addition, they are required to pay for food and drinks, and for accommodations if the delay is overnight – potentially for multiple nights.
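
A quick back-of-the-envelope calculation shows how fast those numbers add up. Every figure below is an assumption for illustration – not BA’s actual exposure – yet even conservative guesses land in the tens of millions before you count lost future bookings, baggage handling and brand damage.

```python
# Back-of-the-envelope exposure estimate.  Every figure is an assumption.

cancelled_flights = 1_000
passengers_per_flight = 100          # conservative assumption from above
affected_passengers = cancelled_flights * passengers_per_flight   # 100,000

avg_refund_eur = 200                 # assumed average ticket refund
avg_penalty_eur = 300                # compensation runs up to 600 euros
share_owed_penalty = 0.5             # assume half of passengers qualify
avg_care_costs_eur = 75              # assumed meals / hotels per passenger

exposure_eur = affected_passengers * (
    avg_refund_eur
    + avg_penalty_eur * share_owed_penalty
    + avg_care_costs_eur
)
print(f"Rough exposure: {exposure_eur:,.0f} euros")   # about 42,500,000 euros
```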

Of course there are IT people working around the clock trying to apply enough Band-Aids to get traffic moving again.

Estimates are, so far, that this could cost the airline $100 million or more.  Another estimate says close to $200 million.  Hopefully they have insurance for this, but carrying $200 million in business interruption insurance is unlikely and many BI policies have a waiting period – say 12 hours – before the policy kicks in.

But besides this being an interesting story – assuming you were not travelling in, out of or through London this weekend – there is another side to it.

First, one of the unions blamed BA’s decision to outsource IT to a firm in India (Tata). BA said that was not the problem. It is true that BA has been trying to reduce costs in order to compete with low cost carriers, so who knows. In any case, when you outsource, you really do need to make sure that you understand the risks, and that is true whether the outsourcer is local or across the globe. We may hear in the future what happened but, due to lawsuits, we may only hear about it inside of a courtroom.

Apparently, the disaster recovery systems didn’t come online after the failure as they should have. Whether that was due to cost reduction and its associated secondary effects, we may never know.

More importantly, it is certainly clear that British Airways’ disaster recovery and business continuity plan was not ready for an event like this.
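
Part of being ready is continuously proving that the standby environment could actually take over. A simple scheduled readiness probe – sketched below with hypothetical endpoints and thresholds – at least tells you, before the bad day, whether the disaster recovery site would even answer.

```python
# DR standby readiness probe (sketch).  The URLs, the /replication endpoint and
# the lag threshold are hypothetical - substitute whatever your standby exposes.

import json
import urllib.request

STANDBY_HEALTH_URL = "https://dr.example.internal/health"        # assumed
REPLICATION_LAG_URL = "https://dr.example.internal/replication"  # assumed
MAX_LAG_SECONDS = 300

def check_standby():
    """Return a list of problems; an empty list means the standby looks ready."""
    findings = []
    try:
        with urllib.request.urlopen(STANDBY_HEALTH_URL, timeout=10) as resp:
            if resp.status != 200:
                findings.append(f"standby health returned {resp.status}")
    except OSError as exc:
        findings.append(f"standby unreachable: {exc}")

    try:
        with urllib.request.urlopen(REPLICATION_LAG_URL, timeout=10) as resp:
            lag = json.load(resp).get("lag_seconds", float("inf"))
            if lag > MAX_LAG_SECONDS:
                findings.append(f"replication lag {lag}s exceeds {MAX_LAG_SECONDS}s")
    except OSError as exc:
        findings.append(f"replication status unavailable: {exc}")

    return findings

if __name__ == "__main__":
    for finding in check_standby():
        print("DR readiness problem:", finding)
```

A probe like this doesn’t replace a full failover exercise, but it catches the most embarrassing failure mode: a standby that quietly stopped working months ago.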

At one point the CEO of BA was forced to say, in the public media, that people should stay away from the airport. Don’t come. Stay home. From a branding standpoint, it doesn’t get much worse than that. Fly BA – please stay home.

As part of the disaster recovery plan, you need to consider contingencies. In the case of an airline, that includes how you get bags back to your customers when you cancel flights. Today, two days later, people are saying that they still don’t have their luggage and that they can’t get BA to answer the phones. BA is now saying that it could be “quite a while” before people get their luggage back, and if they don’t, that is more cost for BA to cover.

One has to assume that the outcome of all of this will be a lot of lawsuits.

From a branding standpoint this has got to be pretty ugly. You know that there has been a lot of social media chatter about the horror stories. In one article that I read, a passenger on a trip from London to New York talked about all the money they were going to lose on things they had planned to do once they got to New York. Whether BA will have to pay for all of that is unclear, but likely at least some of it.

You also have to assume that at least some passengers will book their next flight on “any airline, as long as it is not BA”.

To be fair to BA, there have been other large airline IT system failures in the last year, but this one is a biggie. Likely these failures are, at least in part, due to the complex web of automation that the airlines have cobbled together after years of cost cutting and mergers. Many of these systems are so old that the people who wrote them are long dead and the computer languages – notably COBOL – are considered dead languages.

The fact that there were no plans (at least none that worked) for how to deal with this – how to manage tens of thousands of tired, hungry, grumpy passengers – is an indication of how much work they have to do.

But bringing this home: what would happen to your company if the computers stopped working and it took you a couple of days to recover? I know that in retail, where all the cash registers are computerized and nothing has a price tag on it any more, businesses are forced to close the store. We saw a bigger version of that at the Colorado Mills Mall in Golden earlier this month. In that case it is likely that a number of businesses will fail and people will lose their jobs and their livelihoods.

My suggestion is to get people together, think about likely and not so likely events and see how well prepared your company is to deal with each of them.  Food for thought.

Information for this post came from the Guardian (two articles), The Next Web and Reuters.


Was Sony The First (Hint: No)?

While the Sony hack/attack continues to capture the media’s attention with new data releases that create drama – who got caught saying what, when – Bloomberg is reporting that something very similar happened to the Sands casino empire in February of this year.

Some of you are familiar with the testimony that Admiral Rogers (head of the NSA) gave before Congress last month about hackers taking down critical US infrastructure in the future – not if, but when. Guess what. The NSA knew all about the Sands attack from the beginning. What Rogers didn’t say was that it had already happened.

Bloomberg reported: “But early on the chilly morning of Feb. 10, just above the casino floor, the offices of the world’s largest gaming company were gripped by chaos. Computers were flatlining, e-mail was down, most phones didn’t work, and several of the technology systems that help run the $14 billion operation had sputtered to a halt.”

The engineers at the Sands figured out what was going on within an hour – that they were under attack and that computer hard drives were getting wiped.

Hundreds of people were calling IT to report that their computers were dead.

Like a scene out of a movie (sorry Sony – this is not your script), Sands engineers ran across the casino floors of the Sands Vegas properties unplugging network cables of as many working computers as they could.  It didn’t matter if the computer controlled slot machines or was used by pit bosses – it got unplugged.

Unlike the Sony attack – at least as reported by Bloomberg – the attackers didn’t steal data and we certainly have not seen any data publicly released.  The attackers were angry at Sheldon Adelson, CEO of Sands, for pro-Israel, anti-Iran comments he made at a panel discussion at Yeshiva University in New York late last year.

While the Sands organization understood physical security – both for the casinos and for Adelson’s family – very well, they really didn’t take cyber security to the same level.

Even though the Sands organization was able to keep the details quiet for 10 months, they are starting to come out now.  The attackers started their attack at a smaller Sands casino in Pennsylvania, got in and used that as a path toward Vegas.

Early in the morning of February 10, 2014, the attackers launched their attack, wiping thousands of computers and servers.  By early afternoon, security engineers at the Sands saw from logs that the attackers were compressing large batches of sensitive files — likely in preparation for uploading them.

The President of Sands, Michael Leven, made the decision to pull the plug – like Sony did – and disconnect the hospitality chain from the internet.

Luckily for Sands, they used an IBM mainframe for certain functions.  The door key cards still worked, the elevators worked.  The company’s web sites, hosted by a third party, were still working, although the attackers did attempt to take those servers down the following day and did compromise them.

Because the Sands was working to do damage control, it said only that its web site had been vandalized and that some other systems were not working.

The hackers, upset that they were not getting the effect they wanted, posted a video on YouTube explaining what they had done. While the video was removed after a few hours, the attack was no longer a secret.

So what does a company do? One thought is to hack back. The challenge is to figure out where. More than likely, the attacks are coming from compromised computers all over the world (the initial attacks on Sony came from a hotel in Thailand – are we going to blow up Thailand?). What if the attacks are coming – or seem to be coming – from a farmhouse in Iowa? Are we going to send S.W.A.T. in after Ma and Pa? You might speculate. You might eventually have evidence. But in the U.S., if you get caught hacking into other people’s computers (unless you are the CIA or NSA), you will go to jail. That is the law.

There are no easy answers unfortunately.  BUT, what is clear is that companies need to start making contingency plans because this problem is not going away.

And, as news of the Sony and Sands attacks goes mainstream – maybe with others following – attackers will only amp it up and go after more people.

To paraphrase the Boy Scouts – BE PREPARED!

 

Mitch