Should We Compromise Security to Prevent Terrorism?

After the Paris attacks, politicians have been falling all over themselves trying to be more anti-terrorist than the next.  Prior to the attacks, the odds of the CISA bill passing Congress were dicey.  Now the odds are pretty high, even though that bill will do almost nothing to prevent terrorism.

One of the big issues is encryption.  Web site encryption (HTTPS, i.e., SSL/TLS) is really not an issue because the government cracked that years ago.  It takes them a little effort, but it doesn’t really stop them.

A bigger problem is encrypted phones – iPhones and Android phones – that Apple and Google do not have the keys to decrypt.  This means that the government has to get a judge to issue a subpoena and then go to the owner – assuming the owner hasn’t been killed, say by a drone strike – and get them to comply.  If the owner is dead or not in the U.S., that is hard to do.  Hence, the government would like to have a secure back door.

However, secure and back door cannot exist in the same sentence.  You can have either one – just not both.  Many noted cryptographers and computer scientists signed a letter to Congress recently stating this, so it is not just me who thinks this is not possible.

Assuming the government or many private companies had a skeleton key to get in (and there would need to be tens of thousands of these keys, given the number of software vendors out there) – and given the number of breaches of both government systems and private company systems – do you really think that we could keep a skeleton key private for many years?  I don’t think so.  And wherever those tens of thousands of keys are stored would be a super hot target for hackers.

Then you have the applications to deal with.  There are thousands, if not hundreds of thousands, of applications, many written by one-person companies in some country like Ukraine or China.

Assuming the government required a back door, do you really think a developer in China would really care?  I didn’t think so.  Do you really think that you could stop a terrorist from getting that software from China or some other country?  No again.

So let’s look at the real world.

According to police reports and the Wired article, police have found cell phones next to dead terrorists – like the ones who blew themselves up in Paris – and in trash cans.  Are these phones encrypted with impenetrable encryption?  No, they are not encrypted at all.

Sure, some terrorists are using software like Telegram that is encrypted.  What we have to be VERY careful about is which software is really secure and which software only pretends to be secure.  The article gives some examples.  If you believe the FBI or NSA is going to tell you which software fits in which category, then I have a bridge for sale, just for you, in Brooklyn.

Once the feds find a phone, they can go to the carrier and get the call log from the carrier side.  That gives you text messages, phone numbers, web sites visited, etc.  Is this perfect?  No, it is not.  They used these facts in Paris to launch the second raid – the one in Saint-Denis – where they killed the mastermind of the first attack.  And, while they have not said this publicly, this is likely how they captured the terrorists in Belgium.

All that being said, would the feds love all the traffic to be unencrypted?  Sure.  Does that mean they are going blind, like they have claimed?  Nope.  Not even close.

In talking with a friend who used to be high up in one of the three letter agencies, he said that he has been warning them for 10 years that this is going to be a problem and they better plan for it.  How much planning they have done is classified – and needs to remain that way.

Creating the smoke screen that they are going blind is a great way to lull terrorists into a false sense of security – right up until the moment the drone strike happens.  If you don’t think that they are doing this on purpose, I recommend you rethink your position.

In talking with another very high ranking former DHS executive about whether we should weaken the crypto, he is very emphatic that the answer is no.

This is basically a repeat of the crypto wars of the 1990s when the FBI tried to force everyone to use a compromised crypto chip (called Clipper).  The concept didn’t work then.  Now, there is software being developed in every country in the world and if the NSA or FBI thinks that they can put the genie back in the bottle, they are fooling themselves.

I recommend reading the Wired article – it will provide a different perspective on the situation.

Information for this article came from Wired.


Small Business Owners Don’t Plan For Cyber Attacks

While cyber breaches are in the news all the time, many businesses still have not prepared for one – even if they have experienced an attack of one form or another.  Nationwide Insurance did a survey of 500 small businesses and here are some of the results:

  • 8 out of 10 small businesses do not have a cyber attack response plan even though they have experienced some form of cyber attack.
  • 46% feel their current software is secure enough.
  • 40% don’t think they would be affected.
  • 73% say they are concerned and
  • 63% say they have been the victim of some form of attack such as viruses, phishing, hacking and data breaches.

So, if 73% say they are concerned but 80% say they don’t have a plan, doesn’t that sound odd?

If you had a business in South Florida on the beach, wouldn’t you have a hurricane plan?

If you had a business in tornado alley, wouldn’t you have a tornado plan?

I think the difference is that people have an idea of what to do in case of a tornado or hurricane and really are at somewhat of a loss on what to do in case of a cyber attack.

And the comment about “who would be interested in me?”  That is, I am sure, what Fazio Mechanical thought before the FBI swarmed their offices.  Fazio was ground zero for the Target hack.  So even if you are not interesting yourself, someone that you connect with might be interesting.

Also, to prepare for a hurricane, you can go to Google and get some sound advice.  Not so much for cyber protection.

In my opinion, the insurance carriers need to step up to the plate and help small businesses a lot more than they do today.  Creating a tri-fold brochure and calling it good just won’t cut it.

Kroll, a firm that does a lot of work in breach investigation and remediation, says that 31% of breaches were due to mistakes, not state-sponsored terrorism.  Even employees at small businesses make mistakes.

For those businesses that do not have a plan, when they have a breach, they are likely going to be scrambling.  At that point they will likely make mistakes and spend a lot more money.  The large consulting firms that deal with breaches typically charge 50% to 100% more per hour when dealing with emergencies rather than planned activities.  You might say they are taking advantage of the situation.  They might say they are covering your tush due to your lack of planning.  Half full or half empty?


Information for this post came from Security Info Watch and Kroll.


Pentagon Blocks Links In Email

The Pentagon has a better way to stop users from clicking on phishing email – neuter the emails.

Below is an example of what an email you send to someone in the DoD might look like before it enters the DoD email system and after, when the user sees it.

[Image: the same email shown with links enabled vs. with links disabled]

Needless to say, from the user’s standpoint, the resultant email is basically trash.

In part, how bad things are will depend on how much of the HTML in email they disable.  If they disable all of it, email for DoD users goes back to the way it was in 1980.  If you send anything other than a text email with no linked graphics and no formatting, the user will not be able to read it.

If you send an email that links to content out on the ‘net, which a lot of corporate email does (like the example above), the user will likely just delete it.
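To make the idea concrete, here is a minimal sketch (in Python, purely illustrative – not the DoD’s actual scrubbing software, whose details are not public) of what “neutering” a link might look like: each anchor tag is rewritten so the user can still see the link text and destination but cannot click it.

```python
import re

def neuter_links(html: str) -> str:
    """Rewrite each <a href="..."> anchor as inert text: the visible
    link text survives, but the clickable tag is stripped and the
    destination is shown so the user can judge it (or ignore it)."""
    def replace(match):
        url, text = match.group(1), match.group(2)
        return f"{text} [link disabled: {url}]"
    return re.sub(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                  replace, html, flags=re.IGNORECASE | re.DOTALL)

print(neuter_links('Please <a href="http://example.com/verify">verify your account</a> today.'))
# → Please verify your account [link disabled: http://example.com/verify] today.
```

A regex is fragile on real-world HTML; a production filter would use a proper HTML parser, and would likely also strip scripts and remote images – which is exactly why the resulting email ends up looking like 1980.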

If the graphics are embedded in the email (which is the way it was done in the early 2000s, until that resulted in emails so large that email servers could not deal with them), then the DoD mail scrubbing software will be able to analyze the embedded graphics for harmful content, and your email will probably emerge mostly unscathed.

What this means for people who send email to DoD mailboxes is that they are going to need to be conscious of how that email is constructed and what their DoD user is going to see.

Certainly for any form of advertising, product, or blog email, businesses are probably going to need to rethink their strategies and come up with a different format of email for those millions of DoD users.

Of course, there is another option that DoD users have been using for years and that is GMail.  I have lost count of the number of DoD people who have told me over the years to send my emails to them at their GMail accounts because DoD emails are unreadable.

Of course, all that does is move the entry point for the malware from Outlook to the browser.  That’s sure a lot safer – NOT!

*IF* DoD blocks GMail and other webmail solutions, that would make things very difficult for DoD users – but that likely is going to be required.  If the DoD user can’t click on a phishing link in their Outlook mail but can click on that link in their GMail, how have we helped things?

IF corporations start neutering emails, that will make marketers very unhappy.  They have spent a lot of time and money attempting to make email pretty, and if you force them to go back to 1980 email in order to get something that a corporate user can even read – that will be a problem.  The good news is that this is completely unlikely to happen except at the very most security-sensitive companies – maybe a fraction of one percent or less.

Still, it could get interesting.  And at least for the millions of DoD users, it is going to happen.


Information for this post came from Federal Computer Week (FCW).


Your Air Safety Is Dependent on Windows 3.1 – And Vacuum Tubes

As if Paris didn’t have enough problems, Paris’ Orly Airport had to close briefly last week because a Windows 3.1 system that sends Runway Visual Range information to pilots failed.  Windows 3.1 dates back to 1992.  The French air traffic control union said that Paris airports use systems running 4 operating systems, including Windows 3.1 and XP, all of which are between 10 and 20 years old.  The systems should be upgraded sometime between 2017 and 2021, depending on who you talk to.

But don’t beat up the French too much.  Until the late 1990s or early 2000s, the FAA was still using systems running with VACUUM TUBES.  Seriously.  For a while, the U.S. Government was the largest user of vacuum tubes, which had to be specially made for them.

And many of you probably remember last year when a mentally ill technician attempted suicide after setting fire to an Air Route Traffic Control Center outside Chicago.  Air traffic around the country was screwed up for weeks.

Fundamentally, there is a lot of critical infrastructure in the U.S. and around the world that is older than most of the readers of this blog.  Software that is 20, 30 or even 40 years old is not likely to be as secure, reliable or robust as software built today.  However, whether it is inside power plants, trains, or air traffic control systems, it is what we’ve got.

From a hacker standpoint, that is a dream.  Much of the software was designed and built pre-Internet, but much of it is connected to the Internet anyway.  Which is why Admiral Rogers, head of the NSA, told Congress recently that he is convinced that there are several countries that have the ability to take out pieces of our critical infrastructure.  Several today.  Probably more soon.

Unfortunately, there is so much of it and the critical points are almost all under private ownership.  Nationwide, we are talking hundreds of thousands of pieces of infrastructure – drinking water, gas, electric, waste water, etc.

Unless we get serious about upgrading it, some hacker is going to get there first.  That is not a very exciting thought.

Information for this post came from ARS Technica, Baseline and Wired.


Millions Of Records Exposed By Poorly Written Apps

App developers, like all software developers, like to integrate existing code to reduce their workload and their time to market.

Unfortunately, that does not mean that developers will follow the best practices in using that existing code.

According to a presentation at Black Hat Europe this week, developers who use BaaS or Backends as a Service, sometimes hard code the credentials to access those services – such as Amazon Web Services or Parse (Facebook) – into their code.

Reverse engineering mobile code to find those credentials, even if they are obfuscated, is not that hard.
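As a rough illustration of how easy this kind of credential hunting can be (this is my own sketch, not the researchers’ actual tool), a scanner can simply run pattern matches over the strings pulled from a decompiled app.  The AWS access-key-ID format below is real; the sample input uses AWS’s own documentation example key.

```python
import re

# Illustrative patterns, not an exhaustive list.
CRED_PATTERNS = {
    # AWS access key IDs are "AKIA" followed by 16 uppercase alphanumerics.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic "api_key = '...'" style assignments.
    "generic_api_key": re.compile(r"(?i)api[_-]?key\W{0,3}[A-Za-z0-9]{20,}"),
}

def scan_for_credentials(blob: str):
    """Return (pattern_name, matched_text) pairs found in decompiled strings."""
    hits = []
    for name, pattern in CRED_PATTERNS.items():
        for match in pattern.finditer(blob):
            hits.append((name, match.group(0)))
    return hits

sample = 'AmazonS3Client("AKIAIOSFODNN7EXAMPLE", secret)'
print(scan_for_credentials(sample))
# → [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

Real scanners add entropy checks and de-obfuscation passes, but the point stands: if the key ships inside the app, someone will find it.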

As a test, the researchers looked at two million apps and extracted backend server credentials from 1,000 of them.

Even though statistically that is only 0.05% of the apps, that does not mean it is only 0.05% of the data.  The 2 millionth and first app could disclose more data than the first two million apps collectively did.

The researchers, using these credentials, were able to look at 18 million records and 53 million data elements.

What is even worse is the fact that the researchers talked to Amazon and other services months ago and the data elements are still accessible.

The message here is not that using cloud services is a bad idea but rather that HOW you use cloud services is important.  As always, security must be designed in, not added on.

According to these researchers, doing it right is much harder than doing it fast in this case, so some developers choose to take the shortcut.

And, unfortunately, it is completely invisible to you and me.

For those that manage developers or that are doing cyber due diligence, this is a heads up to ask some possibly uncomfortable questions.

For those of us that just want to download an app and use it, it should give us a little pause to consider WHAT we are giving to those apps and what due diligence we or someone else did on the apps.

Just food for thought.


Information for this post came from Computerworld.


Malvertising

Malvertising is the term for taking advertising that appears on legitimate web sites and turning it into a weapon.

In almost all cases, the web sites involved have no knowledge of what is going on.  Web sites buy ads from large ad networks such as Google and AOL.  Those ad networks sign up advertisers so that when you open a web site, based on the information that they have collected about you, they will show you an ad for cooking shows or rock climbing.  It used to be that these ads were just pictures, but since people were ignoring those, ads now have amazing amounts of animation, timers and all kinds of things.  That means that these ads are really programs that are dynamically downloaded to your computer and executed without your permission – or your even requesting them.

Either the ads entice you to click on them and when you do, they install the malware or they figure out how to use a vulnerability in your system to get loaded just by displaying the ad.  That second type is the really scary one because all you need to do is go to Fox News, for example, to get infected.  And, it is not Fox’s fault because they don’t even know what ad you are seeing.

As software vendors patch vulnerabilities, the malvertisers get less automatic infections, so they have to get you to innocently click on the ad.  This is why they have resorted to video malvertising.  Maybe the video advertises something in the news (for example, today, it could be the Paris attacks).  Or maybe it uses humor.  In any case, the objective is to get you to click on it so that they can infect your computer or phone.

One popular way to deliver these infected ads is through Adobe Flash, which is why I have disabled it by default.  Yes, that means that a few web sites don’t work, but mostly it means that my web pages actually load faster.  This is not a cure-all, but it helps.  The other thing that users can do is use an ad blocker to, again, reduce the pool of potential ads that could possibly infect you.

Neither of these methods or even the two together are 100% effective, but they definitely improve your odds.

Of course, the last thing in the world that advertisers want is for you to block their ads, so they are working, very hard, at discovering the malicious ads before you do.  But, like all malware, it is a cat and mouse game.

One example I just saw was malware that ONLY attacks if it thinks you are a government employee (since ads are now programs, they can check things before they run – which makes them harder to detect).  In this case, the malware only fires if you are running Windows XP and an old version of Microsoft Internet Explorer (unfortunately, and sadly, there is a strong correlation between running old, unsupported software and working for the government).  So, if the ad network tests the ad on Windows 10 and Chrome, or a Mac with Safari, the malware doesn’t attack and they don’t find it.  Which means that the ad networks have to be very clever in attempting to detect the malware.
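As a toy illustration of that kind of environment check (hypothetical – real malvertising payloads run as JavaScript inside the ad, but the logic is the same), the “fire or stay hidden” decision can be as simple as a user-agent test:

```python
def should_attack(user_agent: str) -> bool:
    """Fire the payload only for Windows XP ("Windows NT 5.1" in the
    user-agent string) running an old Internet Explorer - and stay
    dormant everywhere else, which is what lets the ad slip past the
    ad network's screening on modern browsers."""
    on_xp = "Windows NT 5.1" in user_agent
    old_ie = any(f"MSIE {v}." in user_agent for v in ("6", "7", "8"))
    return on_xp and old_ie

# The ad network's modern test browser never sees the malicious behavior:
print(should_attack("Mozilla/5.0 (Windows NT 10.0; Win64) Chrome/46.0"))    # False
# The targeted, out-of-date government desktop does:
print(should_attack("Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"))  # True
```

This is why testing an ad only on current browsers tells the ad network almost nothing about what it will do on the machines the attacker actually cares about.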

But sometimes the bad guys win.  On October 29th, for about 12 hours, for example, about 3,000 web sites served up a video ad that told visitors that they needed to update Safari.  If the visitor got sucked in and clicked on the ad, they were infected with a backdoor trojan that gives the attacker control of their computer.  Forever.

Unfortunately, this is not going to end any time soon and ads seem to be pretty effective at delivering malware.  And, there is no easy answer to the problem.  But, being aware and doing some of what I suggested above helps.


Information for this post came from ITWorld and Wired.
