Category Archives: Business Continuity

Hackers Attack France’s TV5, Almost Destroying It

All 12 channels of France’s TV5 Monde were taken off the air one night in April 2015.  The company had just launched a new channel that day, and staff were out celebrating when a flood of text messages told the director-general that all 12 stations had gone dark.

The attackers claimed to be from the Cyber Caliphate. Since this occurred only a few months after the Charlie Hebdo attack, it could plausibly have been a follow-on attack by Daesh (aka ISIS).

However, as investigations continued, another possible attacker appeared.

In this particular case, as we saw in the Sony attack, the Sands Casino attack, the Saudi Aramco attack and others, the purpose was destruction, not theft of information.  They did a pretty good job of it.

What was not clear was why TV5 Monde was selected for this special treatment.  The attackers never indicated what, if anything, the broadcaster had done to provoke them.

The good news was that since the company had just brought a new channel online that day, technicians were still in the offices.  They were able to figure out which server was orchestrating the attack and unplug it.

While unplugging this server stopped the attack, it didn’t bring the TV feeds back online.  The attackers’ goal was destruction, pursued without subtlety: they destroyed software and damaged equipment.

From 8:40 PM that evening until 5:25 AM the next day, those 12 channels were dark.  At 5:25 AM they were able to get one channel back on the air.

The director-general of TV5 Monde said that had they not gotten those feeds back online, the satellite distribution customers – who provide most of the company’s revenue – might have cancelled their contracts, putting the existence of the company in jeopardy.  The rest of the channels did not come back until later that day.

Much later, French investigators linked the attack to the Russian hacker group APT28.

To this day, no one knows why TV5 Monde was targeted.

One theory is that it was a test run to see how much damage they could do to an organization and TV5 Monde just happened to be the crash test dummy.

The attackers had been inside TV5 Monde’s network for more than 90 days doing reconnaissance.

Once they had collected enough information, they were able to construct a bespoke (custom) attack to do as much damage as possible.

Certainly we have seen destructive attacks before, such as the ones mentioned above, but we have also seen more cyber-physical attacks, such as the power blackout in Ukraine last year, the German steel mill that sustained millions of dollars of damage, and the recent incursions into nuclear plants in the United States.

The company survived, even though it had to spend $5 million on repairs and take on roughly $3 million a year in ongoing costs for the new security measures put in place.

The attack route, not surprisingly, was the Internet.  As more and more equipment gets connected – the remote-control TV cameras were operated out of the Netherlands, for example – attacking it becomes more and more of a known art.  As hackers conduct test runs, such as the attack on TV5 Monde is thought to have been, they become more confident of their ability to do damage going forward.

The real question, as your company becomes more and more intertwined with the Internet, is whether your organization is vulnerable to an attack – even if all you are is a distraction or collateral damage.  And if you are vulnerable, will you be able to recover and survive?  While the Sony attack was an act of revenge, we are seeing other attacks that are simply targets of opportunity.

The good news is that TV5 Monde survived, but it was completely disconnected from the Internet for months.  Could your company survive for months without being connected to the Internet?  In TV5 Monde’s case, once they were reconnected, that conversation many companies have – security versus convenience – became much clearer.  Now it was convenience versus survival, and survival won.  Every employee has had to permanently change the way they operate.

Information for this post came from BBC.


Internet of Things – The New Hacker Attack Vector

Recently, Brian Krebs was hit with a massive denial of service attack.  His site went down – hard – and was down for days.  The provider protecting his site kicked him off, permanently.  The attack threw over 600 gigabits per second of traffic at the site.  There are very few web sites that could withstand such an attack.

The week after that, there was another denial of service attack – this time against French web hosting provider OVH – that was over 1 terabit per second.  Apparently, OVH was able to deal with it, but these two attacks should be a warning to everyone.

These attacks were both executed using the Mirai botnet.  Mirai used hundreds of thousands to millions of Internet of Things devices to launch these attacks.  The originator released the source code for the malware because, he says, he wants to get out of the business.

While Mirai used to control around 380,000 devices every day, some ISPs have started to take action and the number is now down to about 300,000 a day.

There are a couple of reasons why the Internet of Things presents a new problem.

The first problem is patching.  When was the last time you patched your refrigerator?  Or your TV?  I thought so!  After 10 years of berating users, desktops and laptops are being patched regularly.  Phones are patched less regularly.  Internet of Things devices are patched almost never.

The second problem is numbers.  Depending on whom you believe, billions of new IoT devices will be brought online over the next few years, ranging from light bulbs to baby monitors to refrigerators.  Manufacturers are in such a hurry to get products to market, and face almost no liability for poor security, that they are not motivated to worry about it.

Brian Krebs, in a recent post, examined the Mirai malware and identified 68 usernames and passwords hardcoded into this “first generation” IoT malware.  For about 30 of them, he has tied the credentials to specific manufacturers.

This means that with a handful of hardcoded userids and passwords, Mirai was able to control at least hundreds of thousands of IoT devices.
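A defensive corollary: the same hardcoded credential list can be used to audit your own devices.  A minimal sketch, assuming you can read each device’s configured credentials (the pairs below are a small illustrative sample of publicly reported Mirai defaults, not the full list of 68):

```python
# Illustrative audit: flag devices still using factory-default credentials.
# The pairs below are a small sample of the kind of username/password
# combinations hardcoded into Mirai; this is NOT the complete list.
KNOWN_DEFAULTS = {
    ("root", "xc3511"),
    ("root", "vizxv"),
    ("root", "admin"),
    ("admin", "admin"),
    ("support", "support"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if this username/password pair is a known factory default."""
    return (username, password) in KNOWN_DEFAULTS

# Hypothetical usage: check the credentials configured on an IP camera.
print(uses_default_credentials("admin", "admin"))      # True - change it!
print(uses_default_credentials("admin", "Tr0ub4dor"))  # False
```

Any device that matches should have its password changed immediately – or, better, be taken off the public Internet entirely.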

How many IoT devices could a second- or third-generation version of that malware control?

The third problem is the magnitude of these attacks.  DDoS-protection services like Cloudflare and Akamai have been able to handle attacks in the 500 gigabit per second range, but if the growth of DDoS attacks continues and we start talking about multi-terabit attacks, how much bandwidth will these providers need to purchase to keep up with the DDoS arms race?  While the cost of bandwidth is coming down, the size of attacks may be going up faster.

Lastly, ISPs – the providers that deliver the Internet connection to your home or office – are not stepping up to the plate quickly enough to stamp out these attacks.

The ISPs may become more motivated as soon as these rogue IoT devices that are sending out DDoS traffic force the ISPs to buy more bandwidth to keep their customers happy.

Of course, like Brian Krebs, if your company winds up being the target of one of these attacks, your ISP is likely to drop you like a hot potato.  And equally likely, they will not let you back on after the attack is over.

If being connected to the Internet is important to your business – and it is for most companies – you should have a disaster plan.

The good news is that if your servers are running out of a data center, that data center probably has a number of Internet Service Providers available, and you should be able to buy services from a different provider in the same data center within a few days to a week.  Of course, your servers will be dark – down, offline – in the meantime.  Think about what that means to your business.

For your office, things are a lot more dicey.  Many office buildings only have a single service provider – often the local phone company.  Some also have cable TV providers in the building and some of those offer Internet services, but my experience says that switching to a new Internet provider in your office could take several weeks and that may be optimistic.

Having a good, tested, disaster recovery plan in place sounds like a really good idea just about now.


Information for this post came from PC World.

The Brian Krebs post can be found here.


Learning About Ransomware – The Hard Way

A small New England retailer learned about ransomware the hard way.  After an employee clicked on a link, that system was infected with Cryptowall.

The malware encrypted, among other files, the company’s accounting software.

The accounting software did not live on that user’s computer; it lived on the network.  But because that user had access to the network drive, the malware was able to encrypt the accounting files.  This is a very common situation with ransomware: it will attempt to encrypt any files it can get write access to.

The attackers asked for $500 in bitcoin, which is pretty typical.  It is a number which is low enough that many people will decide it is easier to pay up than to deal with it.

The best protection against ransomware is good backups – more than one copy, and not directly accessible from the system under attack; otherwise the ransomware can encrypt the backups as well.

Unfortunately for this company, their backup software had not worked for over two years – and they did not know it.

Believe it or not, we see this a lot.  Either backups don’t work, they do not back up all of the critical data, or they are out of date.  In many cases, no one has EVER tried to restore from the backup, so the way they find out that the backups don’t work is when they try to restore from them.  If systems are backed up individually, then each and every backup needs to be tested.

So in this case, the business owner paid the ransom.

Unfortunately, ransomware, like most software, has bugs, so when the attackers attempted to decrypt the files after the ransom was paid, the decryption did not work.

The hackers – concerned that their business model would fail if victims paid the ransom and did not get their data back – even offered to try to decrypt the files if the business owner sent them over.  The owner declined.

At this point the business owner doesn’t think he can trust his systems, but he doesn’t want to spend $10,000 to rebuild them.

And all because an employee clicked on the wrong link.

Information for this post came from True Viral News.


The Internet of (Scary) Things

UPDATE:  Brian’s web site is now back online – not with Akamai, but with Google’s Project Shield.  Project Shield is an effort by Google to support free speech by protecting journalists’ sites around the world.  If they accept your web site, there is no cost.  And Google probably has a fair amount of both bandwidth and brainpower to stop cyber attacks.  No doubt they get attacked from time to time.

Brian Krebs is a former WaPo writer who focused on cyber security until the Post decided that cyber security was not their thing.  When he and the Post parted ways, Brian started a blog called Krebs on Security (a great blog, if you don’t already read it) and wrote a book on the innards of the Russian spam mafia.

Very recently he exposed a group of Israeli “business people” who ran a large DDoS-for-hire service called vDOS.  A DDoS is an attack designed to flood a target web site with traffic and effectively shut it down.  His attention to vDOS got the owners arrested.

About four days ago his web site was taken offline by a very large, sustained DDoS attack.  His site was protected by Akamai (for free), and they told him they were going to have to shut down their support because they could not handle the attack – it was too much for them.

The attack measured a sustained attack rate of over 600 gigabits per second.  This, Akamai said, was double the next largest attack that they had ever had against any customer.

What was going on behind the scenes is not clear, but the tech community came down on Akamai like a ton of bricks.  Akamai competitor Cloudflare offered to host the site.

72 hours later, the site was back online, apparently with Akamai.  During those 72 hours, I suspect, Akamai engineers analyzed the attack and figured out a way to mitigate it.

Many of these large attacks use a technique called amplification.  With amplification attacks, the attacker sends out a relatively small stream of data and the attack is amplified many times by the time it hits the target.  One example is a DNS amplification attack, where the attacker sends a particular DNS request to a DNS server with the “sender” of the request spoofed to be the target.  Because of the way the request is structured, a 40-byte request might generate a 4,000-byte response to the target – in this hypothetical case, an amplification of 100x.  This means that an attacker with 1 gigabit of bandwidth could generate 100 gigabits of attack traffic against the target.  Very few sites can survive such an attack without the support of a firm like Akamai or Cloudflare; their site would stay down until the attacker got tired.  That could be minutes, hours or days.
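The arithmetic is worth making concrete; a quick back-of-the-envelope calculation of the hypothetical 100x case above:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """How many times larger the response is than the spoofed request."""
    return response_bytes / request_bytes

# The hypothetical DNS example: 40-byte request, 4,000-byte response.
factor = amplification_factor(40, 4000)
attacker_gbps = 1  # bandwidth the attacker actually controls

print(f"Amplification: {factor:.0f}x")                      # 100x
print(f"Traffic at target: {attacker_gbps * factor:.0f} Gbps")  # 100 Gbps
```

The same arithmetic explains why reflection attacks are so popular: the attacker pays for 1 unit of bandwidth and the victim absorbs 100.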

What is different about this attack is that rather than using a few drone computers and an amplification style attack – which is relatively easy to mitigate – this attack used hundreds of thousands of devices, which made it very difficult to block.

What is unclear right now is whether Akamai’s engineers mitigated the attack or the attackers made their point and moved on.

Now for the scary part promised in the subject.

Brian is saying on his blog that it appears that these hundreds of thousands of devices may be infected Internet of Things (IoT) devices such as web cameras, digital video recorders and routers.

As I have written before, many of these devices have horrible security, making the process of turning them into zombies relatively easy.

The next scary part is what this means for businesses.  It is certainly possible that this could be the new norm for DDoS attacks.  We are dealing with a client now who has been DDoSed a number of times, and every time that happens, their ISP just shuts down their Internet connection.  Sometimes for a few hours, sometimes for a day.  In the meantime this client’s users have to resort to some other form of Internet access – maybe their cell phone data plan, with its ridiculously slow speed and data caps – to get online.  This has a dramatic effect on their business.

My question for you today is: “Is your business prepared to deal with a DDoS attack?”  All it takes is for someone to be upset with you over some perceived slight and you could be under siege.  There are many other DDoS-for-hire services like vDOS, and their prices are insanely cheap.  They are hosted in places like Russia and Ukraine, so our ability to shut them down using the courts is pretty much nil.  When this happens, your ISP’s first strategy is going to be to turn off your Internet connection.  Now it is your problem.

You might say that you have a Service Level Agreement (SLA) with your provider, and if they shut you off they have to pay a penalty.  I would say two things about that.  Let’s say that you pay $2,000 a month for your Internet connection (I know, most of you pay a lot less, but I want to make a point here).  In that case, your SLA probably says they have to pay you about $66 for each day that you are down – and typically only if you are down for, say, over 12 or 24 hours.  So they write you a check for $66 and your business is in the stone age for a day.  If you are down for a week, that would cost them about $466.
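The arithmetic for your own contract is easy to sketch; the figures below match the hypothetical $2,000-a-month example above:

```python
def sla_credit(days_down: float, monthly_fee: float = 2000.00) -> float:
    """Rough SLA credit: one day's prorated fee per day of outage.

    The $2,000/month default is the hypothetical figure from the
    example above; substitute your own contract numbers.
    """
    return round(days_down * monthly_fee / 30, 2)

print(sla_credit(1))  # 66.67  - one day down
print(sla_credit(7))  # 466.67 - a week down
```

Compare those credits to a day or a week of lost revenue and the SLA stops looking like meaningful protection.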

How much would it cost you to be down for a day or a week?

If you have cyber insurance with coverage for this kind of attack, the business interruption coverage might kick in.  But we have seen a lot of those policies with a 24-hour waiting period before coverage starts, and if you are down for 18 hours at a time, several times over a month, that waiting period typically applies to each event separately.

AND, even more important, your ISP might say that the DDoS attack violates your terms of service or contract and that they are not liable for anything.  If they say that, you are left to sue them in court.  That is not a very positive scenario.

The moral of the story is that you need to have both an incident response plan and disaster recovery/business continuity plan.

For more information on the attack on Brian’s web site, read his blog, here.



Disgruntled Citibank Employee Shuts Down The Bank

In 2013, a disgruntled Citibank employee decided to get even.  Lennon Ray Brown, 38, who worked for Citi in the Dallas area during 2012 and 2013, set out to teach the bank a lesson.

On December 23, 2013, Brown sent a set of commands to 10 of Citi’s global core routers.  Those commands erased the running configurations on 9 of them.  It is not clear what happened with the 10th.

The result was to take 90 percent of Citi’s network down at 6 PM, two days before Christmas – right in the prime shopping and dinner hour.

At the time, he sent a text to a coworker that read:

“They was firing me,  I just beat them to the punch.  Nothing personal, the upper management need to see what they guys on the floor is capable of doing when they keep getting mistreated.  I took one for the team.  Sorry if I made my peers look bad, but sometimes it take something like what I did to wake the upper management up.” 

Clearly, this guy was not a happy employee.  Equally clearly, he didn’t show any remorse and didn’t care if he got caught.

And, likely, at most companies, an unhappy IT guy could do this amount of damage or more.

Ricky Joe Mitchell – a security architect at Home Depot at the time of the breach there – pleaded guilty to sabotaging the network of his former employer, EnerVest, and causing a million dollars in damage.  EnerVest spent 30 days recovering from the sabotage.

In the grand scheme of things, the most likely cyber risk that any company has to deal with is the insider threat.  Most of the time it is not as dramatic as shutting down a bank’s network or sabotaging a former employer, but little attacks hurt as well.

I do not mean to single out IT employees;  it is just that they can make a pretty flashy entrance.  It really does not matter what department the employee works in.

When the Chase banker took data on 76 million customers, HE had no plans to post that data on the Internet.  But someone else did.  On top of that, Chase was fined a million dollars for not having the right controls in place to stop him.

Brown was sentenced to 21 months in prison and ordered to pay $77,000 in restitution, but I suspect that for Citi, taken down two days before Christmas, that penalty, three years later, doesn’t mean much.

So, sometimes, working on the easy stuff is what we should do first.  Monitoring.  Dual controls.  Alerting.  Keeping an ear to the ground.
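As a sketch of what “monitoring and alerting” can mean in practice, here is a minimal configuration-change detector: fingerprint each device’s running configuration and flag any change for human review.  The device name is hypothetical, and a real deployment would fetch each config over the network on a schedule:

```python
import hashlib

def config_fingerprint(config_text: str) -> str:
    """SHA-256 fingerprint of a device's running configuration."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def check_for_changes(device: str, current_config: str, baselines: dict) -> bool:
    """Return True if the device's config differs from the recorded baseline.

    Updates the baseline so the next run compares against this one.
    A True result should trigger an alert for human review.
    """
    fingerprint = config_fingerprint(current_config)
    previous = baselines.get(device)
    baselines[device] = fingerprint
    return previous is not None and previous != fingerprint

# Hypothetical usage: the first run records a baseline, later
# runs alert on any change - including a wiped configuration.
baselines = {}
check_for_changes("core-router-1", "hostname core-router-1", baselines)
```

A change alert would not have prevented the sabotage, but it shortens the gap between “something happened” and “we know what happened,” which is most of the battle in incident response.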

Nothing is perfect when it comes to security.  We just want to continuously make things incrementally better.

Information for this post came from SC Magazine.


Health and Human Services Issues New Guidance on Ransomware

The U.S. Department of Health and Human Services Office of Civil Rights, the government entity that manages the privacy of health care information that you share with doctors and others, has issued new guidance on ransomware.

While technically it only applies to the organizations that HHS regulates, in reality almost everything they said applies equally to all businesses.

The U.S. Government says that, on average, there have been 4,000 daily ransomware attacks, a 300% INCREASE over last year. 

They say that businesses should:

(a) Conduct a risk analysis to identify threats and vulnerabilities.  In the case of HHS OCR, they are only worried about protecting health information, but in reality every business should conduct a risk analysis at least annually.

(b) Once you have conducted a risk analysis, create a plan to mitigate or remediate those risks, and then execute that plan.

(c) Implement procedures to safeguard against malicious software (like ransomware).

(d) Train ALL users on detecting malicious software and what to do if they detect it or accidentally click on something.

(e) Limit access to information to only those people with a need for it and, where possible, grant them read-only access.  Ransomware can’t encrypt files that it doesn’t have write access to.

At least one ransomware attack that I am familiar with became a full blown crisis because a user had write access to a whole bunch of network shares and they ALL got encrypted.  Not a good day at that non-profit.

(f) Create and maintain an overall incident response plan that includes disaster recovery, business continuity, frequent backups and periodic full drill exercises.

There is a lot of language that ties the specifics of what they recommend to the HIPAA/HITECH regulations, which is important if you are a covered entity or business associate, but even if you have no HIPAA information, these recommendations are right on.

If you are not doing all of these things today, you should consider making it a priority.  Ransomware is messy stuff, even if you have backups of everything.  Assuming you have not implemented a full disaster recovery/business continuity solution (and if you have not, you have a lot of company), recovering from your backups is a very time-consuming and labor-intensive process, and in the meantime you are working off of pencil and paper.

Information for this post came from the Health and Human Services web site.
