Businesses Get More Time To Upgrade Buggy Encryption Software

The PCI Council, the standards body that sets the rules for payment card merchants and service providers (think Mastercard and Visa), last year released a directive that everyone had to upgrade their software and eliminate SSL 3.0 and TLS 1.0 in favor of newer versions, TLS 1.1 and 1.2.  The reason is that those versions of the protocol have known security holes that are UNFIXABLE – they cannot be patched.  The Council set a deadline more than a year out from the directive for people to fix the problem.

Large organizations apparently complained that they have implemented a bit of a rat’s nest (no big surprise) and with everything else on their plate, they were not going to be able to get to fixing the broken SSL implementations.

As a result, the PCI Council changed the deadline from June 2016 to June 2018 – three years from the original directive.

One thing that is important to understand: the PCI Council has only changed the date by which, if you have not upgraded, you will be in violation of your merchant agreement with your bank.  It has not relieved you of liability in case of a breach.

In fact, I assume that plaintiff’s counsel will ask whether a breached merchant was still running a known-vulnerable version of encryption at the time the breach occurred.  One would assume that would not work to the merchant’s advantage at trial or in settlement negotiations.

Given this announcement, I expect that hacking enterprises (in China, Russia and Ukraine, for example) will look for businesses that have not upgraded their encryption software and specifically target them.  Given that there are known attacks against these protocol versions, those businesses are easy targets.

What I am suggesting here is that even though the PCI Council has granted an extension, businesses should not delay their encryption upgrade projects.
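The client side of the upgrade is usually a small configuration change, and it is easy to enforce in code.  A minimal sketch using Python's standard ssl module, with TLS 1.2 as the floor (stricter than the council's 1.1 minimum, but the safer default today):

```python
import ssl

def make_strict_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses SSL 3.0 and TLS 1.0/1.1.

    Python's defaults already disable SSL 3.0; setting a minimum
    protocol version also rules out the older TLS releases.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

# Connecting with this context will fail the handshake against a server
# that only speaks the broken protocol versions -- a quick self-audit.
```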

The payment card industry, as a whole, loses tens of BILLIONS of dollars a year to payment card fraud.  A 2011 Forbes article puts the industry’s losses at $190 billion a year.  Even if Forbes’ number is two or three times too high, it is still huge.  That cost is reflected in the higher prices and fees that customers (consumers and businesses alike) pay.  By delaying the encryption fix by two more years, the PCI Council is guaranteeing that fraud costs will rise over that period, possibly significantly.

Information for this post came from Slashdot and the PCI Council.

Why Crypto Backdoors Don’t Work – Arris Modems

Apparently, in 2009 the developers at Arris, a manufacturer of cable modems for many cable providers, added a backdoor.  They managed to keep it secret for a few years, but some details were leaked a couple of years ago.  Now it is out in the open.

At least several models of Arris modems have a backdoor login.  The backdoor relies on a publicly known algorithm: given a seed key and the date, the firmware generates a unique password every day.  The default seed key is MPSJKMDHAI, and most, but not all, cable companies do not change it.  Even for those that do, all an attacker needs is a sample of their modem to look at the code and see what they changed it to.
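The general shape of a seed-plus-date "password of the day" scheme is easy to illustrate.  The hashing step below is my own stand-in, NOT the actual Arris algorithm; only the default seed value comes from the reporting:

```python
import hashlib
from datetime import date

DEFAULT_SEED = "MPSJKMDHAI"  # widely shipped default, per the reporting

def password_of_the_day(day: date, seed: str = DEFAULT_SEED) -> str:
    """Derive a daily password from a shared seed and the date.

    Illustrative stand-in for the real algorithm: anyone who knows the
    seed and today's date can reproduce today's password, which is
    exactly why this kind of backdoor does not stay secret for long.
    """
    material = f"{seed}:{day.isoformat()}".encode()
    return hashlib.sha256(material).hexdigest()[:10].upper()
```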

In theory, the access granted by this login is limited, but when you log in (via SSH or Telnet, which you can turn on remotely with this account), it asks you for a second password.  That password is the last 5 digits of the serial number of the box.  At that point, you are an admin on the modem.  A backdoor inside a backdoor!

This is in addition to other security vulnerabilities not related to the backdoor.

The point of this (besides the fact that at least half a million modems are relatively easy to attack, with no fix other than unplugging them) is that the secret backdoor the FBI and Congress critters keep asking for works only as long as you keep the secret “secret”.  Which is impossible, especially when an attacker can look at the code and see the backdoor.  It may take a while, but the secret will come out.

We see this time and time again.  If you can insert a backdoor, hackers can find it.  That is a fact.  And, of course, facts are, well, inconvenient.

So as this discussion continues, people should consider that time and time again backdoors don’t work.

To quote Peter, Paul and Mary – When will we ever learn, when will we ever learn?

Oh yeah, and if you have an Arris cable modem you probably want to replace it – now.

Information for this post came from Threatpost.

Mackeeper Database Breach Bigger Than Mackeeper – Much Bigger

When I read about the Mackeeper breach last week I didn’t quite grasp the implication of it.  Now I do and it is much bigger than I understood.

For those who have not seen the news, Mackeeper, an Apple Mac anti-malware/clean-up-your-machine kind of product that some people like and others hate, exposed its entire customer database, 13 million customers, to the Internet.  One reason I wasn’t too worried about this 21 GB data dump is that the company behind Mackeeper said it outsources credit card transactions (as a lot of companies do), so there was no financial data in the database.  What was in there was names, userids, passwords (hashed), product information and the like.

The article I read first also said that the company patched it within hours of being notified (good for them!) and that THEY claimed that there was only one access from the Internet and that was the researcher.

Here is the bigger problem that I didn’t quite grasp.

Let’s say that everything above is no big deal.  Now let’s do the rinse-and-repeat trick and do what the researcher did: use the Shodan search engine to look for other servers running MongoDB, a popular open source database, listening on the Internet.

Most people who understand this issue would say that a database server should NEVER be publicly exposed to the Internet and I agree.

Only problem is that a quick Shodan search by the founder of Shodan came up with 35,000 database servers representing more than 680 terabytes of data (that is the same as 680 million megabytes).  That is kind of a large number.

Apparently, the Mongo database at Mackeeper did not require a userid or password to access it (bad boys and girls!).  What is unclear is how many of the 35,000 databases that John Matherly, the founder of Shodan, found also do not require a userid and password.  Let’s say it is only 10%.  Well, then, no problem: only 68 terabytes of data exposed.  Of course, we don’t know if the data is football scores or financial transactions, but you have to assume it is some of each.  And we don’t know if it is 10%, 50% or 90% that don’t require a userid and password.

Now let’s take this one step further.  How about using the same tool to look for Microsoft SQL Server databases, Oracle databases or those of a dozen other vendors?  SOME of those databases either don’t require a password for access or use the default password.

So this is a much bigger problem than either Mackeeper or Mongo.  Operations that expose database servers to the Internet, beware.  Some of this can be fixed with a simple firewall rule, as was the case with Mackeeper.  Other people will need to re-architect their software, which is a much bigger project.
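For MongoDB specifically, the exposure usually comes down to two settings.  A minimal sketch of a locked-down /etc/mongod.conf (exact option names and defaults vary by MongoDB version, so treat this as illustrative):

```yaml
# Illustrative mongod.conf fragment: listen only on loopback and
# require authentication, so the database is never directly reachable
# from the Internet.
net:
  bindIp: 127.0.0.1   # no public interface
  port: 27017
security:
  authorization: enabled
```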

In any case, no one can say that they have not been warned.

Unfortunately, for you and me, we have no idea which companies have their act together and which ones do not.

But you can count on the fact that the hackers are looking.  With just 35,000 Mongo databases to check out, it is going to be a busy weekend for some people.


Information for this post came from eWeek and Betanews.

Holy Cow! Alert For Juniper Netscreen Firewall Users

UPDATE:  According to the Wired article below, the remote access issue was caused by a hard-coded master password.  Of course, now that people know there is one, they can look at the code and find it, which means that if you have not patched your Juniper firewalls, you are at high risk of being owned.

The article also says that the VPN issue may allow an attacker to decrypt any traffic that they have captured in the past.  So if the Chinese, for example (or US or Russian or …) had captured traffic hoping that they might be able to decrypt it some time in the future, now is that time.

This is one of those STOP THE PRESSES! kind of alerts.  Juniper announced yesterday that there are two separate compromises to Juniper Netscreen firewalls that would allow an attacker to gain administrative access to company firewalls and also to decrypt VPN traffic.  Together, this would allow an attacker to completely own your network.

If you are running a Juniper firewall with ScreenOS 6.2.0r15 through 6.2.0r18 or 6.3.0r12 through 6.3.0r20, you need to patch your firewalls immediately.
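If you manage more than a handful of devices, a quick script can triage an inventory list against the affected ranges.  A sketch, where the "major.minor.0rNN" version-string format is my assumption; adjust the pattern to match however your inventory records versions:

```python
import re

# Affected ScreenOS releases: 6.2.0r15 through 6.2.0r18
# and 6.3.0r12 through 6.3.0r20.
AFFECTED = {(6, 2): range(15, 19), (6, 3): range(12, 21)}

def needs_patch(version: str) -> bool:
    """Return True if a ScreenOS version like '6.3.0r17' is affected."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.0r(\d+)", version.strip())
    if not m:
        raise ValueError(f"unrecognized ScreenOS version: {version!r}")
    major, minor, rev = (int(g) for g in m.groups())
    return rev in AFFECTED.get((major, minor), range(0))
```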

Juniper has been amazingly open about this, unlike some vendors.  I suspect they figured this exploit is so bad that customers might run away from their products, so the lesser evil is to be honest about it.  In reality, my guess is that they are no better and no worse than any other vendor.  Some vendors, in the same situation, might have just said “hey, we fixed some bugs, you should patch your firewall”.  The patches are available on Juniper’s web site (see the link in the Network World article).

A couple of notes that Juniper made:

  • There is no workaround other than applying the patches
  • They discovered this via an internal code review.  This MAY be good as hackers may not have found the problem.  HOWEVER, that being said, every attacker in the world knows about it now and since it is an OWN THE COMPANY bug, you need to patch this ASAP.  I was at a meeting yesterday where an FBI Special Agent was speaking about security and he interrupted his presentation to tell us about it.  It is that kind of high priority.
  • Juniper said that the bug is a result of unauthorized code in ScreenOS.  While they did not explain what this unauthorized code is, to me that indicates their development environment was compromised.  If this is true, their entire code base is suspect at this time.  Hopefully they are scurrying around looking at all code in all products for backdoors.  Juniper says they don’t think that Junos devices (their other operating system) are affected.
  • The first bug allows someone to get unauthorized remote administrative access.  From there, you own the device, can wipe the logs, change the configuration or do anything else you might want to do.
  • The second bug – which is separate from the first – would allow an attacker who could monitor your VPN traffic to decrypt it.  Also, not good.  There would be no indication that an attacker was decrypting your traffic.
  • Juniper has not said how long these devices have been infected, but some of the code being patched dates back to 2012.
  • While Juniper has not said how this “unauthorized code” got into the devices, one candidate, based on Snowden documents, is the NSA.  They apparently have an interest in listening to organizations using Juniper hardware.

Whether this is the result of an NSA covert op, some other intelligence agency’s handiwork, or some random hacker, it points to the fact that companies need to proactively monitor changes to their software to make sure unauthorized changes are not being made.  For all organizations, this should be a wake-up call for internal security.

This is a very interesting development.



Information for this post came from Network World.

Another article with more details can be found in Wired.

How Would Congress’ Effort To Install Crypto Backdoors Actually Work?

While the question of how crypto backdoors would work is unknown, since there are no actual proposals on the table at this time, I am concerned that it will turn into a disaster.  Partly this is because Congress does not understand technology.  Out of 500-plus Congress critters, there are 5 who have a computer science degree.  While that is not surprising, it means that mostly lawyers will be writing laws about something they know almost nothing about.

Option 1 – Force Apple and Google to install secret backdoors into their phones.  One option would be a skeleton key: one single key that unlocks all phones past, present and future.  That would be a disaster, since if that key got into the wild, every phone ever made would be compromised.  Hopefully, that is not the option chosen.  Another option would be to have a key per phone.  When you make the phone, you create a key for it, put the key in a mayonnaise jar on Funk & Wagnalls’ back porch (to quote Johnny Carson) and open that mayonnaise jar if asked.  If this were done, we would need to securely store around two billion keys between Apple and Android phones, growing by hundreds of millions a year.  We could ask the government to store them.  I am sure that would be secure.  Maybe the OPM could do it for us?  Alternatively, the manufacturers might keep them.  The third option would be to derive each key algorithmically, so that you would not have to store the keys at all.  That would mean keeping the algorithm secret, otherwise anyone could decrypt any phone, and that is not likely possible.
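The third option can be sketched concretely.  With a keyed hash, each phone's key is a function of its serial number and a master secret, so nothing needs to be stored.  But the sketch also exposes the fatal flaw: whoever holds the master secret can derive EVERY phone's key.  All names and values here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical master secret -- the entire scheme's single point of failure.
MASTER_SECRET = b"hypothetical-master-secret"

def derive_unlock_key(serial_number: str) -> str:
    """Derive a per-device unlock key from its serial number.

    No key database is needed, but anyone who obtains MASTER_SECRET
    (insider, breach, reverse engineering) can unlock every device
    ever made under this scheme.
    """
    mac = hmac.new(MASTER_SECRET, serial_number.encode(), hashlib.sha256)
    return mac.hexdigest()
```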

I don’t think that anyone has actually come up with a way to do this that would work.  I am open to possibilities, but haven’t heard one.  Neither have many, many cryptographers who are a lot smarter than I am.

How do we deal with the close to two billion phones that are already out there?  Here, Apple is a little easier to deal with than Android.  Since Apple users tend to keep their software more current than Android users, you could, possibly, push an update to the close to a billion iPhones, installing the backdoor.  Not to mention the couple hundred million iPads.  NOT!

In the Android world the problem is harder.  There are still hundreds of millions of Android phones running version 2 of the operating system, even though version 6 is the current version.  Do you really expect each phone manufacturer to dust off its software archives and update that antique software?  Not likely.

Then there is the question of who is going to pay for the creation – and more importantly – the ongoing maintenance of this huge intelligence network.  I assume Congress doesn’t want to pay for it, but I certainly don’t want to either.  The cost would likely be in the billions of dollars if not more.

And what about phones that are not made in the US?  Do we really have any leverage to force Chinese manufacturers that sell knock off Android and iPhone clones to do anything that the US wants?  I didn’t think so.  So maybe the objective is to reduce the sales revenue of US phone manufacturers?

But now the real problem.  Encryption is implemented in software in millions of applications.  These applications are written by tens of thousands of developers all over the world.  Many of them are open source meaning the developers don’t have any money to do anything and do not have a company to force to do anything – assuming you can even find these people.

If you don’t remove the encryption from software, cracking the iPhone or Android phone is basically useless.

Maybe Option 2 is to ban all software that does not have an encryption backdoor.  How exactly do you do that?  There are likely thousands of new applications released every week, some in the US but many more outside it.  Maybe we should block all non-US IP addresses so that we can make sure terrorists don’t download software from non-US companies or developers.  Maybe we should rename the Internet to the USNet.  Maybe we should pay someone to check every new application on the Internet to see if it has a backdoor.  That would be good for the economy: the government would have to hire tens of thousands of computer experts.  Nah, that’s not going to happen.

Another issue is cost.  When Congress did this the last time, in the 1990s, it was called CALEA.  It was Congress’ attempt to install a backdoor into all phone switches sold in the United States to commercial phone companies (the Ma Bells in particular).  There were a handful of phone companies and another handful of phone switch manufacturers.  Congress agreed to pay for the insertion of the backdoors.  They allocated a billion dollars in 1990s money and ran out, and had to get another billion to finish the job.  And, I think, it took around 10 years to complete.

Fast forward to 2015.  Instead of 10 phone switch manufacturers you have, say, 100,000 software developers.  Instead of a product that is sold through a sales force, installed in known locations (the phone company central office) and maintained by a paid technical staff, you have products that are given away (open source) by people who have no paid staff, that are not physically delivered at all, and that come from all over the globe.  ASSUMING you could do this, how much would it cost?  Of course, you can’t do it.

And what about software made in other countries that don’t have laws like whatever this Frankenlaw might be?  A few countries – like England for example – might be persuaded to pass a similar law, but other countries – like Germany – are actually moving in the other direction saying that strong encryption is a good thing.

What about software made in Russia?  Ukraine? China? and many other countries that are not friendly to the US?  They are not likely to comply.

And, already ISIS has released their own software.  It is encrypted, of course.  Maybe we can ask Daesh (as they do not like to be called) to insert a backdoor for us and give us the keys.  Let me think about that.  Nope. Not gonna happen.

So, in the end, Congress will be able to thump their collective chests and say how wonderful they are and it will do nothing to help fight terrorism other than to make Bin Laden right even years after his death.  Remember that he said that he wanted to bleed us to death?  Well, he certainly is succeeding.  Even in death he is succeeding.

Stay tuned because no one knows how this play will end – tragedy or comedy?  Not clear.


Information for this post came from Network World.

Tips For Small And Medium Enterprises

As we have seen all too often, the entry point for big cyber attacks is a small or medium enterprise (SME).  SMEs are ripe targets for hackers because those businesses do not have cyber security experts on staff.  Many SME owners, not knowing what to do, don’t do anything.  The Target breach, for example, started at one of its small vendors.

One of the challenges with cyber security is that unless the hackers are stupid, have big egos or are bad technicians, an SME is unlikely to even know that a hacker is inside its network.  Big organizations like OPM didn’t know hackers were inside their network for over a year.  Hackers were inside Nortel Networks at the very highest levels of the company for ten years before they were discovered, and likely contributed to the downfall of the company.

SMEs need to start taking action.  While the actions will not STOP all breaches, you have to start the process somewhere.  These recommendations from Real Business, the British magazine catering to SMEs, seem like a reasonable start.

One thing that is important to understand, even though you won’t like it: cyber security is a never ending battle.  Unless you go out of business, the challenge of protecting your business, and more importantly your information and your customers’ information, will never end.

That said, here is the list –

1. Make one person responsible for reviewing and managing risks within your business.  Great advice.  If no one is responsible, not much will get done.  For the really small business, this person will likely have other tasks, but whoever the person is needs to know that the job is important and that they have your direction to spend a significant part of their week working on protecting your systems and network.

2. Establish ownership for data protection and information security and make that person responsible to you as the business owner.  While this seems redundant with #1 above, it adds two new dimensions.  First, cyber security is about the data.  Unless your office is ransacked by a junkie looking to make his or her next score (and that does happen, so physical security is important), hacking is mostly about stealing your data.  Hackers cannot get very much money for used computer hardware.  I had a great conversation with some business executives about whether or not they should keep some very sensitive personal information.  That is a great conversation to have: the information that you do not keep cannot be hacked.  In fact, that may be the only information you can say with certainty cannot be hacked.  Second, the person in #1 needs to report to the business owner.  For SMEs, a significant data breach will likely put you out of business.  This is not something you delegate.  Additionally, if the person you put in charge of risk (which means not just cyber risk) wants to implement a new policy and that policy affects your employees, that policy needs to come from you, the business owner or CEO.

3. Put in place some simple but effective data access policies and controls for systems and key data.  We have seen time and again that when hackers compromise someone’s userid and password, they have access to everything in the business.  While the recent T-Mobile/Experian attack compromised 15 million records, it could have been 10 times worse given the amount of data Experian stores.  However, Experian had implemented policies to restrict access to data, so the attackers only (if only is the right word here) got access to the T-Mobile data.  The Target attackers were successful because once they got into a vendor management portal, they had access to the point of sale system.  That is crazy.

4. Understand your data. Where is your business data and your client data?  Since hacking is about the data, you need to understand your data.  Where is your data stored?  Is it in the cloud?  Where in the cloud?  Is it a company owned or personal account?  How long do you need to keep the data?  Who makes sure it gets deleted?  Securely deleted?  This should generate a lot of questions since I suggest that most businesses do not know where ALL their data is.

5. Ensure password policies are implemented across the business.  I was reading an article this morning about a hospital that had an Internet connected smart medical device that stored patient information on it with a password of Password123.  If you think that could not happen to you, what is the password to your Internet router?  When was the last time you changed it?

6. Train staff to be aware of potential threats, including bogus emails and suspicious requests for information.  Most studies say that around 80% of the data loss (both accidental and malicious) is due to human error.  People are not cyber security experts.  They click on stuff, lose flash drives and other bad stuff.  Does your company even have a cyber security employee education program?  This cannot be something that you do once when the employee is hired or even once a year.  It needs to be constantly reinforced.

7. Take advice from a specialist and review your IT security position to ensure you have a reasonable level of defences against external attacks and malware.  No, I did not add this one, it really was in their list.  If I added it, it would be number 1.  Just like you hire experts to review your financials, cyber risk is an area where outside expertise is likely needed.  For example, the article mentions penetration testing (and for those of you who accept credit cards, you have a contractual requirement with your bank to do penetration testing at least once a year), that is likely something that you are not going to have the resources or expertise to do internally.  Since getting breached can be a bet the farm problem, use experts.

8. Take an honest view of your capability and consider moving data and applications to a secure hosted environment.  This is the only item on the list I have any heartburn with.  Not because I don’t trust application service providers.  I do.  This blog is on an ASP’s system.  My problem is that your system can be just as insecure in a hosted environment as if it is in your office.  Refer back to recommendation #7.
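Item 5 on the list above is one of the easiest to automate.  A minimal sketch of a password policy check, where the specific rules and thresholds are illustrative rather than any particular standard:

```python
import re

# A real deployment would use a much larger breached-password list.
COMMON_PASSWORDS = {"password", "password123", "admin", "letmein", "welcome1"}

def policy_violations(password: str, min_length: int = 12) -> list[str]:
    """Return a list of policy violations; an empty list means it passes."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("on the common-password blacklist")
    if not (re.search(r"[A-Za-z]", password) and re.search(r"\d", password)):
        problems.append("should mix letters and digits")
    return problems
```

The hospital device's Password123 would fail on two counts; a long passphrase with a digit or two passes.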

For those organizations that do not already have an active information security program, this is a good place to start.  Understand that this is a long journey: it will take time, money and a willingness to change some of the dangerous behaviors that you and your people engage in today.

Since this is the time of year for lists and resolutions, it would be a great time to start a corporate cyber security program.  We can help you with that.

Information for this post came from RealBusiness.