Tag Archives: DDoS

DDoS Attack Turns Off The Heat. In Finland. In the Winter.

The most recent distributed denial of service (DDoS) attack meant that most people could not get to Twitter.  While that was awful and may have forced a few people to actually work instead of tweeting, for the most part it was not a big deal.  In fairness to the Dyn attack, there were actually hundreds of web sites that were effectively offline, but still, in the grand scheme of things, a small problem.

The Metropolitan, an English language newspaper in Finland, is reporting a much more serious issue: DDoS attacks combined with the Internet of Things (IoT).

In this case, two apartment buildings in the city of Lappeenranta lost heat and hot water due to a DDoS attack on the computer that controls the heating system.  The CEO of the company that manages these buildings said the heat and warm water were “temporarily disabled”.

By “temporarily,” he means from late October until November 3rd, a period of over a week.  Remember, Finland is pretty chilly at this time of year, so having no heat or hot water for a week or more is, kind of, “a problem”.

The attack deluged the computers that control the heating system with traffic.  The system’s response to this is to reboot, but rebooting doesn’t make the traffic go away, so it is sort of “rinse and repeat”.  Since the computers were continuously rebooting, they could never turn the heat or hot water back on.

Since the building maintenance engineers are not cyber security experts, they had no clue what was happening.  Even if they had replaced the “faulty” computers, the replacements would have behaved exactly the same way, because the computers were not faulty – they were just doing what they were programmed to do.

This is reminiscent of last year’s attack on the Ukrainian power grid, though with different results.  In Ukraine, the power grid is old and creaky.  What computers there are have been bolted on to the existing infrastructure.  If the computers fail, you have to drive to the substation and throw the switch by hand.  Which is why that attack, while it literally destroyed a lot of the power distribution infrastructure, only turned off the lights for less than a day.

Finland, however, is not a third world country.  They have a lot of modern technology.  I suspect, in this case, that there was no switch to throw in the apartment building to turn on the heat.

Like we see a lot in modern IoT devices, security is an afterthought.  Probably no one considered that someone might want to attack their controller so they didn’t harden it nor did they set up protocols to deal with an attack.

SCADA, the industrial version of IoT (I know that is an oversimplification, but it will work for this piece), was also never designed with security in mind.  I used to work for one of the largest SCADA manufacturers in the world.  There was no security in those devices.  Not even a user ID and password, never mind something more sophisticated.  SCADA devices were never designed to be on the Internet at all, but people figured out that they could save money by putting them there.

Unfortunately, water plants, sewage plants, power plants, chemical plants and a lot of other infrastructure are not a good place to experiment, but the money to be saved is too large to ignore.  So we are being used as guinea pigs.

The attack on DYN, I think, was an experiment.  How did people deal with it?  How did the experts respond?  Did the police do anything?

Now they have some data points and they will continue to experiment.

At some point they will decide it is time to take down the power grid.  While throwing the entire United States into the dark is probably more effort than even a nation state would want to expend (although far from impossible), throwing Washington, DC or New York City into the dark might produce some interesting results.  If you could damage the infrastructure at the same time to make it harder, slower and more expensive to repair, that would be a “side benefit”.

You can believe me or not, but this will happen.  It is just a matter of when because the steps that need to be taken now are not being taken.  It is too expensive and too inconvenient.  Remember my mantra.  Security.  Convenience.  Pick one.  You could probably modify that to Security, convenience, cost, pick at most two.

Tell the utilities that all of their little controllers that connect by way of Wi-Fi have to be secured, or that all of their controllers in the field that live in a “secure” metal box by the side of the road have to be replaced by something that actually is secure, and they will tell you that it is too expensive.  Right now, secure means that there is a padlock on the box.  An attacker could cut the padlock, and if that was too hard, they could smash the box to bits with a sledgehammer.

After 9-11, the Feds paid local utilities to put fences around water treatment plants and such.  Some even have fence shakers – cool little gizmos that detect if someone is shaking the fence by trying to climb over it.  And, maybe, that will improve the security of central infrastructure, but there is so much distributed infrastructure that is not effectively protected.

For example, is there a power substation near your house?  How about a gas main line?  How strongly are they protected?  Maybe – and only maybe – there is a fence around it.  For me, there is a fence around the substation but not around the gas main.  Of course, even with the fence, there is no one there to physically disable the attacker and by the time the police or utility got there, the damage would be done.

Maybe the attack in Finland is a warning. But are enough people and the right people listening?  I don’t know.

 

Information for this post came from the Metropolitan.


Yet Another Denial Of Service Attack

Denial of Service attacks are a big deal now.  Last week the attack against Dyn stopped people from accessing Twitter and hundreds of other busy web sites for hours.

These attacks, called denial of service or distributed denial of service (DDoS) attacks, have many computers send so much data at a web server that it rolls over, sticks its little computer legs in the air and plays dead.

A critical part of these attacks is something called amplification.  If I have a 1 megabit internet connection and can amplify that attack by a factor of 20, that 1 megabit connection can hit the target web site with 20 megabits (per second) of traffic.  Multiply that by, say, 500,000 computers doing the attack and you can destroy a web site.  If I have a 100 megabit Internet connection, the problem is 100 times bigger.

So the hackers keep trying to come up with more powerful amplification attacks.  They have a new one.  It uses CLDAP, a protocol computers use to authenticate users.  Or, as it turns out, to destroy web servers.

The amplification factor for this attack was between 46 and 55, meaning that, on average, for every byte the attacker sent, the attack generated 46 to 55 bytes of traffic back at the site being attacked.

1 megabit of traffic from the attacker means at least 46 megabits of traffic that the site being attacked sees.  And with these attackers controlling hundreds of thousands to millions of devices, including Internet of Things devices, that adds up to a lot of traffic.
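To put rough numbers on that, here is a back-of-the-envelope sketch in Python.  The bot count and per-bot bandwidth are assumptions for illustration; real botnets are constrained by many other factors.

```python
# Back-of-the-envelope math using the post's figures; the bot count and per-bot
# uplink are illustrative assumptions, not measurements.
amplification = 46          # low end of the reported CLDAP amplification factor
uplink_mbps = 1             # assumed outbound bandwidth per infected device
bots = 500_000              # hypothetical botnet size

attack_gbps = bots * uplink_mbps * amplification / 1_000
print(f"~{attack_gbps:,.0f} Gbps aimed at the target")   # roughly 23,000 Gbps
```

No real attack comes close to that theoretical ceiling, but even a few percent of it is far more than a typical site, or its provider’s link, can absorb.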

Even if the server doesn’t crash, the Internet service provider probably doesn’t have enough bandwidth, so they will take the server down by “blackholing” it, meaning that, at the very edge of the provider’s network, they will discard ALL traffic directed at the site being attacked.  The attacker wins.  They don’t have to kill the site; the Internet provider does that for them.

Many of – if not most of – these devices that the attackers are using to attack other sites are not configured correctly or do not have the current patches.  It is critical that you change default passwords and update devices regularly.

As a result of this most recent attack, the feds are trying to figure out what ISPs can do, but you can likely be much more effective if you take the security of all of your devices seriously: webcams, DVRs, web based doorbells, smart TVs, smart refrigerators, all of it.

We need your help!

Information for this post came from Softpedia.


IoT Maker Says It Will Recall; China Says It Will Sue Journalists

Maybe a little good will come from the day the Internet died last week.  And maybe, also, a little bad.

To very briefly recap, attackers using the now free and open source malware Mirai attacked Dyn’s servers.  Dyn provides DNS services to the likes of Twitter, Amazon and hundreds of other companies.  The attack against Dyn didn’t directly affect those companies, but it stopped users from being able to get to those companies’ servers – effectively producing a complete outage.

Akamai and Flashpoint have said that infected IoT devices were a large part of the attack – because people don’t patch their refrigerators and don’t change the refrigerator’s default password.

In this case, the Chinese company XiongMai Technologies, or XM, makes circuit boards for DVRs and IP cameras sold by lots of other companies.  The default password, in some cases hard coded into the device and impossible for the user to change, is static and well known.  Hence the attack.

XM released a statement which, in part, read “XM have to admit that our products also suffered from hacker’s break-in and illegal use”.

XM said it would be issuing a recall on millions of devices, but XM doesn’t know who owns the devices that their circuit boards were put into.  In fact, in many cases, the company that sold the finished product has no clue who owns those products.

The result of this is that most of these products will never be replaced or fixed.

XM did say that they made two important changes late last year.  One was to turn off Telnet, the service that this particular malware used to attack the devices, and the other was to make users change the default password when they initially power up the devices.

99+% of the users who buy these devices have no clue what Telnet is, no clue how to figure out whether it is on or off for a particular device and no clue how to fix it – if that is even possible.  Nor do they know how to patch their DVRs or cameras.
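For what it is worth, the check itself is not hard; the hard part is that almost no buyer will ever do it.  Here is a minimal sketch in Python, using a placeholder address you would replace with your own camera’s or DVR’s IP (and only probe devices you own):

```python
import socket

def telnet_open(host, port=23, timeout=2.0):
    """Return True if the device accepts a TCP connection on the Telnet port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 192.168.1.50 is a placeholder; substitute the address of your own device.
print(telnet_open("192.168.1.50"))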

Which means that this problem isn’t going away any time soon.

Also remember that this attack used these devices and this technique.  Since there are billions of IoT devices, next month it will be a different device and a different technique.  This is kind of like a game of whack-a-mole.

In the meantime, the Chinese Ministry of Justice threatened journalists who reported on the story for issuing “false statements”.

Google translate, which apparently doesn’t deal with grammar well, reported their statement, in part, as “Organizations or individuals false statements, defame our goodwill behavior … through legal channels to pursue full legal responsibility for all violations of people, to pursue our legal rights are reserved.”

The good news, besides getting attention for the problem and getting at least one company to do a recall and issue patches, is that this apparently scared the poop out of the Department of Homeland Security.  While last week’s attack was on Twitter (and others), the next attack could be against the power grid, the DoD or maybe even something important.

The Department of Homeland Security has issued some contracts in the past year to companies working to thwart DDoS attacks and this event is likely to spur more contracts.

What we need to do is find a way to identify these tens of millions of infected systems and get them cleaned up or turned off.  THAT is not a simple task.

Then we need to get vendors to stop implementing the least possible security.  If product liability laws were extended to cover these types of events, or if the Consumer Product Safety Commission could issue mandatory recalls in cases like this, the cost of poor security would move back to the vendors, motivating them to do better.  Unfortunately, I don’t think either of these will happen any time soon.

Information for this post came from Krebs on Security.


The Day The Internet Died

Well, not exactly, but close.  And it was not due to pictures of Kim Kardashian.

Here is what happened.

When you type in the name of a website to visit, say Facebook.com, the Internet needs to translate that name into an address.  That address might look like 157.240.2.35.

The system that translates those names to numbers is called DNS, the Domain Name System.  DNS services are provided by many different companies, but, typically, any given web site uses one of these providers.  The big providers work hard to provide a robust and speedy service because loading a single web page may require many DNS lookups.
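If you want to watch that translation happen, here is a minimal sketch using Python’s standard library resolver; the address you get back will vary over time and by location.

```python
import socket

# Ask the resolver (and, behind it, DNS) for Facebook's current address.
address = socket.gethostbyname("facebook.com")
print(address)   # something like 157.240.2.35
```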

One provider that a lot of big websites use is called Dyn (pronounced dine).  Today Dyn was attacked by hackers.  The attack technique is called a Distributed Denial of Service Attack or DDoS.  DDoS is a fancy term for drowning a web site in far more traffic than it can handle until it cannot perform the tasks that customers expect it to do.

In this case, customers included sites like Amazon, PayPal, Twitter, Spotify and many others.  These sites were not down; it was just that customers could not get to them.

The attacks started on the east coast, but spread to the west coast later.  Here is a map that shows where the worst of the attack was.  In this picture from Downdetector.com, red is bad.

[Downdetector outage map: red indicates the heaviest concentration of reported outages]

There were multiple attacks, both yesterday and today.  The attackers would attack the site for a few hours, the attack would let up and then start over again.  For the moment, the attack seems to be over, but that doesn’t mean that it won’t start back up again tomorrow, Monday or in two weeks.

You may remember I wrote about the DDoS attack against Brian Krebs’ web site and the hosting company OVH.  Those two attacks were massive – 600 gigabits per second in the Krebs attack and over 1 terabit per second in the OVH attack.  The attackers used zombie security cameras and DVRs and the Mirai attack software to launch these two attacks.

After these attacks, the attacker posted the Mirai software online for free and other attackers have downloaded it and modified it, but it still uses cameras and other Internet of Things devices that have the default factory passwords in place.

As of now, we don’t know how big this attack was, but we do know that at least part of it was based on the Mirai software.  And that it was large.  No, HUGE.

It is estimated that the network of compromised Internet of Things devices, just in the Mirai network, includes at least a half million devices.  Earlier reports said that the number of devices participating in this attack was only a fraction of the total 500,000 – which means that the attack could get much bigger and badder.

The problem with “fixing” this problem is that it means one of two things: fixing the likely millions of compromised Internet of Things devices that are part of some attack network, or shutting these devices down – disconnecting them from the Internet.

The first option is almost impossible.  It would require a massive effort to find the owners of all these devices, contact them, remove the malware and install patches if required.  ISPs don’t want to do this because it would be very expensive and they don’t have the margin to do that.

The second option has potential legal problems – can the ISP disconnect those users?  Some people would say that the actions of the infected devices, intentional or not, likely violate the ISP’s terms of service, so they could shut them down.  However, remember that for most users, if the camera is at their home or business, shutting down the camera would likely mean kicking everyone at that home or business off the Internet.  ISPs don’t want to do that because it will tick off customers, who might leave.

Since there is no requirement for users to change the default password in order to get their cameras to work, many users don’t change them.  Vendors COULD force the users to create a unique strong password when they install their IoT devices, but users forget them and that causes tech support calls, the cost of which comes out of profit.
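What such a vendor-side check could look like is simple enough.  This is a hypothetical sketch, not any vendor’s actual firmware; the list of default passwords is only illustrative (though several of these appear in published lists of IoT defaults).

```python
# Hypothetical first-boot logic: refuse to enable remote access while the
# device still has a factory-default or trivially weak password.
DEFAULT_PASSWORDS = {"admin", "password", "12345", "888888", "xc3511"}

def remote_access_allowed(password):
    if password.lower() in DEFAULT_PASSWORDS:
        return False      # still using a factory default
    if len(password) < 12:
        return False      # too short to count as strong
    return True
```

A check like this costs the vendor a few lines of code and some support calls; skipping it costs the rest of us a botnet.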

As a result of all these unpalatable choices, the problem is likely to continue into the future for quite a while.

Next time, instead of Twitter going down, maybe they will attack the banking infrastructure or the power grid.  The good news is that most election systems are stuck way back in the stone age and they are more likely to suffer from hanging chads than hackers.

Until IoT manufacturers and owners decide to take security seriously – and I am not counting on that happening any time soon – these attacks will only get worse.

So, get ready for more attacks.

One thing to consider.  If your firm is attacked, how does that impact your business and do you have a plan to deal with it?

The thousands of web sites that were down yesterday and today were, for the most part, irrelevant collateral damage to the attacks.  Next time your site could be part of the collateral damage.  Are you ready?

Information for this post came from Motherboard and Wired.

 


The Internet of (Scary) Things

UPDATE:  Brian’s web site is back online, but not with Akamai; it is now behind Google’s Project Shield.  Project Shield is an effort by Google to support free speech by protecting journalists’ web sites around the world.  If they accept your web site, there is no cost.  And Google probably has a fair amount of both bandwidth and brainpower to stop cyber attacks.  No doubt they get attacked from time to time.

Brian Krebs is a former WaPo writer who focused on cyber security until the Post decided that cyber security was not their thing.  When he and the Post parted ways, Brian started a blog called Krebs on Security (which is a great blog if you don’t already read it) and wrote a book on the innards of the Russian spam mafia.

Very recently he exposed a group of Israeli “business people” who run a large DDoS for hire service called vDOS.  A DDoS is an attack against a target web site designed to flood the site with traffic and effectively shut it down.  His attention to vDOS got the owners arrested.

About four days ago his web site was taken offline by a very large, sustained DDoS attack.  His site is hosted by Akamai (for free) and they told him that they were going to have to shut down their support because they could not handle the attack – it was too much for them.

The attack measured a sustained attack rate of over 600 gigabits per second.  This, Akamai said, was double the next largest attack that they had ever had against any customer.

What was going on behind the scenes is not clear, but the tech community came down on Akamai like a ton of bricks.  Akamai competitor Cloudflare offered to host the site.

72 hours later KrebsOnSecurity.com is back online, apparently with Akamai.  During those 72 hours, I think, Akamai engineers analyzed the attack and figured out a way to mitigate it.

Many of these large attacks use an attack technique called amplification.  With amplification attacks, the attacker sends out a relatively small stream of data and the attack gets amplified many times as it hits the target.  One example of an amplification attack is a DNS attack where the attacker sends a particular DNS request to a DNS server to resolve, with the “sender” of the request spoofed to be the target.  Because of the way the request is structured, a 40 byte request might generate a 4,000 byte response to the target, so, in this hypothetical case, we have an amplification of 100x.  This means that if the attacker has 1 gigabit of bandwidth, he would generate 100 gigabits of attack traffic on the target.  Very few sites can survive an attack like this without the support of a firm like Akamai or Cloudflare, and their site would stay down until the attacker got tired.  That could be minutes, hours or days.
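You can see the size asymmetry that makes DNS attractive for amplification just by comparing the bytes you send with the bytes you get back.  Here is a minimal sketch that sends one ordinary A-record lookup to a public resolver and reports both sizes (real amplification attacks use query types with much larger answers and spoofed source addresses, which this deliberately does not do):

```python
import socket
import struct

def dns_query_sizes(name, server="8.8.8.8"):
    """Send one UDP A-record query and return (request_bytes, response_bytes)."""
    # 12-byte header: ID, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, zero terminator, QTYPE=A, QCLASS=IN
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    query = header + qname + struct.pack(">HH", 1, 1)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(query, (server, 53))
        response, _ = s.recvfrom(4096)
    return len(query), len(response)

sent, received = dns_query_sizes("example.com")
print(f"sent {sent} bytes, got {received} bytes back ({received / sent:.1f}x)")
```

The spoofed sender address is what turns that asymmetry into a weapon: the resolver obligingly delivers the larger response to the victim instead of back to the attacker.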

What is different about this attack is that rather than using a few drone computers and an amplification style attack – which is relatively easy to mitigate – this attack used hundreds of thousands of devices, which made it very difficult to block.

What is unclear right now is whether Akamai’s engineers mitigated the attack or the attackers made their point and moved on.

Now for the scary part promised in the subject line.

Brian is saying on his blog that it appears that these hundreds of thousands of devices may be infected Internet of Things (IoT) devices such as web cameras, digital video recorders and routers.

As I have written before, many of these devices have horrible security, making the process of turning them into zombies relatively easy.

The next scary part is what this means for businesses.  It is certainly possible that this could be the new norm for DDoS attacks.  We are dealing with a client now who has been DDoSed a number of times, and every time that happens, their ISP just shuts down their Internet connection.  Sometimes for a few hours, sometimes for a day.  In the meantime this client’s users have to resort to using some other form of Internet access – maybe their cell phone data plan with its ridiculously slow speed and data caps – to get online.  This has a dramatic effect on their business.

My question for you today is “Is your business prepared to deal with a DDoS attack?”  All it takes is for someone to be upset with you over some perceived slight and you could be under siege.  There are many other DDoS for hire services like vDOS and their prices are insanely cheap.  They are hosted in places like Russia and Ukraine, so our ability to shut them down using the courts is pretty much nil.  When this happens, your ISP’s first strategy is going to be to turn off your Internet connection.  Now it is your problem.

You might say that you have a Service Level Agreement (SLA) with your provider and if they shut you off they have to pay a penalty.  I would say two things about that.  Let’s say that you pay $2,000 a month for your Internet connection (I know, most of you pay a lot less, but I want to make a point here).  In that case, your SLA probably says that they have to pay you $66 a day that you are down, but typically only if you are down for say, over 12 or 24 hours.  So they write you a check for $66 and your business is in the stone age for a day.  If you are down for a week, that would cost them $466.

How much would it cost you to be down for a day or a week?
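A rough sketch of that comparison, using the post’s numbers and an assumed figure for your own daily cost of downtime (replace it with a real one):

```python
monthly_fee = 2_000                     # the post's example Internet bill
sla_credit_per_day = monthly_fee / 30   # roughly $66
cost_of_downtime_per_day = 25_000       # assumed for illustration; use your own estimate

for days in (1, 7):
    credit = sla_credit_per_day * days
    loss = cost_of_downtime_per_day * days
    print(f"{days} day(s) down: SLA credit ${credit:,.0f} vs. business cost ${loss:,.0f}")
```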

IF you have cyber insurance and you have coverage for this kind of attack, the business interruption coverage might kick in.  We have seen a lot of policies with a 24 hour waiting period before coverage kicks in, and if you are down for 18 hours at a time, several times over a month, that 24 hour waiting period typically applies to each event.

AND, even more important, your ISP might say that the DDoS attack violates your terms of service or contract and that they are not liable for anything.  If they say that, you are left to sue them in court.  That is not a very positive scenario.

The moral of the story is that you need to have both an incident response plan and disaster recovery/business continuity plan.

For more information on the attack on Brian’s web site, read his blog at KrebsOnSecurity.com.

 


Why The GitHub DDoS Attack Should Concern Everyone

UPDATE:  (Note: this is a bit geeky.)  Again according to Steve Gibson, the malware behind the attacks on GitHub and GreatFire worked by modifying the local hosts file, using vulnerabilities that had already been fixed but that users had not yet patched.  It created entries for connect.facebook.net and google-analytics.com and pointed them to the attacker’s server, so that when your browser asked Google or Facebook for the code it needed, it got malicious code instead.  Another reason to keep your patches up to date!  For systems that were up to date on their patches, this attack would not work.
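If you want to check your own machine for that particular symptom, here is a quick sketch.  The path shown is for Linux and macOS; on Windows the file lives at C:\Windows\System32\drivers\etc\hosts.

```python
from pathlib import Path

# Domains the attack reportedly redirected via the hosts file.
WATCHED = {"connect.facebook.net", "google-analytics.com"}

for line in Path("/etc/hosts").read_text().splitlines():
    fields = line.split("#", 1)[0].split()      # drop comments, split into fields
    if any(domain in fields for domain in WATCHED):
        print("Suspicious hosts entry:", line.strip())
```

A clean machine should print nothing; those domains normally have no business being in a hosts file.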

Steve Gibson in his Security Now podcast talked about the details of the attack against GreatFire and GitHub.

In both of these attacks, presumed to be orchestrated by China, hackers flooded these web sites with millions of requests per hour, overwhelming the servers and denying legitimate users access.  There are two other far scarier things about the attack to concern you.

While that is a problem, bigger problem number one is this.  GreatFire runs on the Amazon cloud.  As such, they pay as they go for compute resources.  Millions of businesses do that.  The problem is that when they see 2,500 times their normal load and Amazon scales up to support it, GreatFire gets the bill.

In GreatFire’s case, that bill is $30,000 A DAY.  Probably more than they would normally spend in a year.  What this means is that if your target rents its infrastructure from a pay-as-you-go cloud service provider, one attack method would be to slowly ramp up their traffic – not enough to shut them down, but enough, over time, to affect them in the pocketbook.  Even if they are not shut down, if their Amazon bill goes up by a factor of 10, you deliver an interesting financial hit.  Even at $3,000 a day, never mind $30,000 a day, that is a million dollar a year compute bill.  Pick a number between $3,000 and $30,000 a day, depending on the size of your target.  Either way, you have just caused your target to spend a lot more money to deliver its service.
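The annualized arithmetic, using the post’s own daily figures:

```python
# The post's daily figures, annualized.
for daily_bill in (3_000, 30_000):
    print(f"${daily_bill:,}/day is roughly ${daily_bill * 365:,}/year")
# $3,000/day is roughly $1,095,000/year; $30,000/day is roughly $10,950,000/year
```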

Bigger problem number two is a security problem rather than a financial problem.  Apparently, the way this attack worked is that someone, presumed to be the Chinese government or one of its agents, slipped into the middle of unencrypted traffic between the Chinese web services company Baidu (think Chinese, Google-like web services – maps, cloud, news, search, etc.) and its users.  Sometimes, but not always, when a client went to a Baidu hosted service, instead of getting the javascript it was supposed to get, it got a malicious script which just banged on GreatFire and later GitHub.  The user’s machine was not technically infected, because when they closed the browser the script went away, and Baidu was not infected either.  In fact, Baidu never even saw the request for the script.

Be evil and logically extend this.  You could compromise any non-SSL site and either have it serve up occasional malicious code that could do anything, or create a man in the middle attack that returns malicious code before the web site can.  The injected script could attack any web site, and when the browser closes, the evidence is gone (minor detail: it might be in a local cache, but you can tell the browser not to cache it or wipe the cache before you leave).  The user’s anti virus software won’t detect the malware because either it doesn’t persist or the software does not check scripts in the browser cache.

You may have to tune the attack, but still, pretty interesting.
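One partial defense worth mentioning, which this post does not get into, is Subresource Integrity (SRI): a page can pin the cryptographic hash of a third-party script so the browser refuses to run a swapped copy.  The integrity value is just a base64-encoded SHA-384 digest, which you can compute yourself; the file name below is a placeholder.

```python
import base64
import hashlib
from pathlib import Path

# Compute the SRI "integrity" value for a script your pages reference.
digest = hashlib.sha384(Path("analytics.js").read_bytes()).digest()
print('integrity="sha384-' + base64.b64encode(digest).decode() + '"')
```

SRI only helps on pages you control that are themselves served over HTTPS, but it turns “the browser got a different script than expected” from an invisible event into a hard failure.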
