Tag Archives: Backdoor

Security News for the Week Ending September 28, 2018

Cisco Will Eliminate Hard Coded Passwords, One Per Month

It seems like every patch cycle, Cisco admits to another app that has an undocumented hard coded password.  I have lost track of how many of them they have removed so far, but the number is scary large.

What is scarier is that I bet Cisco is far from unique – they are just being more honest about it.  Are all the other hardware vendors pure as the driven snow?  NOT LIKELY!

In this case, very embarrassingly, the hard coded password was in Cisco’s video surveillance manager.  In other words, the bad guys could secretly watch the watchers.

Cisco CLAIMS this was because they forgot to disable this hard coded ID (maybe used for testing) before the production code was released.

Recently Cisco has removed hard coded credentials from their Linux based OS, IOS XE, from their Digital Network Architecture server and from the Cisco Provisioning Server.  That is just recently.

This bug rated a 9.8 out of 10 on the severity Richter scale (CVSS V3).   Source: ZDNet.

Gig Workers Targeted by Malicious Attackers

This one is classically simple.

Gig workers, who have no IT department, are responding to gig requests on sites like Fiverr and Freelancer.

Unfortunately, those requests have documents associated with them that are infected.  When the gig worker opens the file to understand if he or she wants to bid on the gig, his or her computer is infected.  MAYBE the gig worker’s antivirus software will catch it, but if the attachments are crafted just slightly differently for each attack, the AV software will be blind to it.

Freaking genius.  As long as it doesn’t happen to you.  Source: ZDNet.

Your Tax Dollars At Work

Like many public sector (not all!) networks, the security of the Pennsylvania Democratic Caucus was, apparently, not so great.  Equally unsurprisingly, their computers became infected with ransomware.

So they had two choices:

  • Pay the bad guys $30,000, or
  • Pay Microsoft $703,000 plus.

Of course, since this isn’t coming out of their pockets, they opted for the gold plated, diamond encrusted deal from Microsoft.

Surely, some local outfit would have rebuilt their servers for less than three quarters of a million dollars.

According to Homeland Security, over 4,000 ransomware attacks happen every day.  I have NO way to validate that claim, but I am sure the number is big.  Source: The Trib.

Uber Agrees to Pay $148 Million for Breach – Instead of $2 Billion under CCPA

Uber agreed to pay $148 million to settle claims that it covered up a breach in 2016 by PAYING OFF the hackers to keep quiet and supposedly delete the data.

Let’s compare that to what they might have paid under CCPA, the new California law.

57 million records – say 5% in California = 2,850,000 records.

Private right of action up to $750 per user without showing damage.  Let’s reduce that to $500 x 2.85 million = $1.425 billion.

AG right to sue for malicious non-compliance.  $7,500 (treble damages since the cover up was willful) x 2.85 million = $21.375 billion.

WORST CASE = A little over $22 BILLION ($22.8 billion, to be exact).

BEST CASE (Maybe) = 10% of that, maybe $2 billion.
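For the curious, the arithmetic above works out like this (the 5% California share and the per-record dollar figures are this article’s assumptions, not numbers from the statute):

```python
# Back-of-the-envelope CCPA exposure estimate using the figures above.
# All inputs are this article's assumptions, not statutory findings.

total_records = 57_000_000
ca_share = 0.05                            # assume ~5% of users are Californians
ca_records = int(total_records * ca_share) # 2,850,000 records

private_action_per_record = 500            # reduced from the $750 statutory cap
ag_per_record = 7_500                      # trebled for a willful cover-up

private_exposure = ca_records * private_action_per_record  # $1.425 billion
ag_exposure = ca_records * ag_per_record                   # $21.375 billion
worst_case = private_exposure + ag_exposure                # $22.8 billion

print(f"Private right of action: ${private_exposure:,}")
print(f"AG enforcement:          ${ag_exposure:,}")
print(f"Worst case:              ${worst_case:,}")
```

Even shaving the statutory numbers down, the exposure dwarfs the $148 million settlement.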

They got off light.

By the way, THIS is why companies are scared of the new law.

Source: Mitch

Newest iPhone, Newest iOS – Hacked in a Week

We tend to think of iPhones as secure.  Secure is a relative term and relatively, the iPhone is secure.

iOS 12 was released on September 17th, along with the new iPhones, the XS and the XS Max.

Today is the 28th and news articles abound that the pair (new phone plus new software) has been hacked.

To be fair, the Pangu Team, the group that announced the hack, said that they had hacked the beta back in June.

So, as long as you don’t think secure means secure, the iPhone is secure.

Less insecure might be a better term.  Source: Redmond Pie.

Hidden Backdoor Found In Another Chinese Network Gateway

The headline reads Hidden Backdoor Found in Chinese-Made Equipment.  Nothing New! Move Along!

That headline by itself should scare you.

Researchers found a hidden backdoor in a DBL Technology (DblTek) GSM gateway used by telephone companies and VoIP providers.  DblTek is based in Hong Kong.

According to the security firm Trustwave, there is an account called dbladm, not listed in the documentation, that is allowed to telnet into the device with root (admin) access.

Unlike other manufacturer supplied userids which are listed in the documentation, this userid does not use a password which the user can change.  Instead, it uses a challenge phrase from which the user needs to calculate a response in order to log in.

So let’s see where we are right now:

#1 – Hidden userid, not in the documentation.

#2 – The user cannot change the password even if they discover the userid exists.

#3 – The user cannot disable the account.

#4 – The account uses a challenge rather than a password, and the response to the challenge is pretty easy to figure out.

Once an attacker figures out the challenge response, they have full access to the device and can listen to traffic or use the device for other purposes, such as launching a denial of service attack on other web sites.

In the “this would be funny if it wasn’t so scary” category, when the researchers told DblTek about the security hole, the company didn’t remove it; they merely changed the algorithm to make the response a little harder to calculate.  Still easily hackable.
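Trustwave published the real details separately; purely to illustrate the design flaw, here is a hypothetical challenge-response scheme of the same shape.  The constant, hash and derivation below are invented, NOT DblTek’s actual algorithm, but the failure mode is the same: the “secret” is a fixed function baked into every unit’s firmware.

```python
import hashlib

# HYPOTHETICAL challenge-response login, illustrating the DblTek-style flaw.
# The response is a fixed function of the challenge, shipped identically in
# every device, so anyone who extracts the firmware can compute valid
# responses forever. The user can neither change nor disable this check.

FIRMWARE_CONSTANT = b"example-constant"  # fixed, identical in every unit

def expected_response(challenge: str) -> str:
    """What the device expects back for a given login challenge."""
    return hashlib.md5(FIRMWARE_CONSTANT + challenge.encode()).hexdigest()[:8]

def device_login(challenge: str, response: str) -> bool:
    # Hard coded check: no password to rotate, no account to turn off.
    return response == expected_response(challenge)

# An attacker who has dumped the firmware simply replicates
# expected_response() and passes the check for any challenge issued.
challenge = "1234567890"
print(device_login(challenge, expected_response(challenge)))  # True
```

The point: a challenge-response only adds security if the response depends on a per-device or user-changeable secret, which this one does not.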

So why does the headline say NOTHING NEW?

Researchers have already found similar back doors in MVPower DVRs, RaySharp DVRs, Dahua DVRs, AVer DVRs and Foxconn firmware used in some (cheap) Android phones.

And remember, just because the equipment has a name brand on the face plate does not mean that there isn’t some nosy Chinese software in it under the covers.

In 2012 a former Pentagon analyst told the media that China had backdoors in the equipment of 80% of the world’s telecoms.

Think about that for a minute.  The Pentagon says that the Chinese can listen to traffic from 80% of the world’s telecoms.

So why would you buy Chinese equipment for your network?

One word.  Price.

Just consider that you are getting a little extra value with your purchase.

A Free (no extra charge) backdoor.


So when you are considering buying network and computer equipment, dig a little deeper, ask more questions, do some research.  It might just help you keep the Chinese out of your stuff.

Information for this post came from Bleeping Computer.

When Will They Ever Learn?

As the folk music group Peter, Paul and Mary sang in 1962 – about a completely different subject – when will they ever learn?  It appears that, for software companies, the answer is a big question mark.

First Juniper got caught with a hard coded back door of unknown origins in their routers and firewalls.  Then Cisco got in trouble for hard coded credentials.  Now it is Fortinet.

The interesting thing is that these three companies are all security vendors.  If they can’t figure it out, is it likely that the rest of the software community has it figured out?

In Fortinet’s case, it wasn’t a back door in the sense of something designed to allow unauthorized people to log in to their firewalls, switches and other devices.  But the effect is the same.  Fortinet makes a central management application that allows a company to manage their Fortinet Security appliances and switches remotely.  That management console needs to exchange information with the devices in order to allow a network administrator to manage all those devices remotely.

Fortinet, of course, wants to make this easy for administrators.  What better way to do that than to hard code a set of credentials (userid and password) between the management console and the devices to be managed?

What could go wrong with that?

Vulnerable products are FortiAnalyzer releases 5.0 and 5.2, FortiSwitch 3.3, FortiCache 3.0, and FortiOS 4.1, 4.2, 4.3 and 5.0.

Obviously this is a problem for Fortinet customers, but there is a bigger issue here.

If security product vendors are not smart enough to figure out that hard coding credentials, no matter how well intentioned, is a problem, what are millions of other vendors doing?  Likely the same thing.  Or, MUCH WORSE!

And do I think hackers are smart enough to look for those hard coded credentials? Probably.  No, definitely.
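One reason they find them so reliably is that hard coded credentials often sit in the firmware image as plain printable text.  Here is a minimal sketch of that kind of scan, in the spirit of the Unix strings tool; the firmware blob and the embedded credential are invented for illustration:

```python
import re
import string

# Minimal sketch: pull printable ASCII runs out of a binary blob, the way
# the Unix `strings` tool does, and flag anything that looks credential-like.
# The "firmware" and credential below are made up for illustration.

def printable_strings(blob: bytes, min_len: int = 6):
    # string.printable[:95] = digits, letters, punctuation and the space char
    pattern = b"[%s]{%d,}" % (re.escape(string.printable[:95]).encode(), min_len)
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

def suspicious(strings_found):
    keywords = ("passwd", "password", "admin", "login")
    return [s for s in strings_found if any(k in s.lower() for k in keywords)]

# Fake firmware: header bytes and padding with a hard coded credential inside.
firmware = (b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 16
            + b"admin_password=s3cret!" + b"\xff" * 16)

print(suspicious(printable_strings(firmware)))  # ['admin_password=s3cret!']
```

If a ten-line script can surface the credential, a motivated attacker with a firmware dump certainly can.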

The systems probably at the biggest risk are those that are remotely managed and/or managed by a third party.  Examples of both are many point of sale cash register systems, such as some of those that have been hacked in the last few years.  For systems to be managed remotely, especially by third parties, it is a whole lot easier if every system can be accessed using a single userid and password.

If you have one or more systems (such as a POS or Alarm system), you should ask the vendor about how credentials work and how you can periodically change the password to comply with your company’s security policy.  If the answer is that you can’t change the password, then what you have is a backdoor.  Maybe an authorized one, but still a backdoor.

If you do have a back door, then you need to figure out how to mitigate the risk.  I used to have, many years ago, a high end phone system that could be remotely managed, via modem, by the vendor.  I had a simple answer to hackers: I unplugged the modem unless I was talking to the vendor and they said they needed remote access.  Simple.  But effective.

For more information on the Fortinet problem, read their blog post here.

Why Crypto Backdoors Don’t Work – Arris Modems

Apparently, in 2009 the developers at Arris, a manufacturer of cable modems for many cable providers, added a backdoor.  They managed to keep it secret for a few years, but some details were leaked a couple of years ago.  Now it is out in the open.

At least for several models of the Arris modems, there is a backdoor login.  The backdoor relies on a publicly known algorithm.  The attacker would need to be familiar with the algorithm, a seed key and the date.  The firmware, based on this data, generates a unique password every day.  The default seed key is MPSJKMDHAI and most, but not all, cable companies do not change it. But even for those that do, all you need is a sample of their modem to look at the code and see what they changed it to.

In theory, the access granted by this login is limited, but when you log in (via SSH or Telnet, which you can turn on remotely with this account), it asks you for a second password.  That password is the last 5 digits of the serial number of the box.  At that point, you are an admin for the modem.  A backdoor inside a backdoor!
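A “password of the day” scheme is easy to sketch.  The code below is a simplified illustration, not Arris’s actual algorithm; the hash and derivation are invented, but the failure mode is identical: anyone with the (public) algorithm, the seed and a calendar gets today’s password for every modem shipped with that seed.

```python
import hashlib
from datetime import date

# Simplified "password of the day" illustration, NOT the real Arris
# algorithm. The failure mode it demonstrates is the real one:
# password = f(public_algorithm, shared_seed, today's_date), so the
# vendor, the cable company, and any attacker with the seed all derive
# the same password for the same date, on every modem with that seed.

DEFAULT_SEED = "MPSJKMDHAI"  # the widely unchanged default seed key

def password_of_the_day(seed: str, day: date) -> str:
    digest = hashlib.sha256(f"{seed}:{day.isoformat()}".encode()).hexdigest()
    return digest[:10].upper()

print(password_of_the_day(DEFAULT_SEED, date(2015, 11, 20)))
```

Changing the seed per cable company barely helps, since the changed seed still ships in the firmware of every modem they deploy.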

This is in addition to other security vulnerabilities not related to the backdoor.

The point of this – besides the fact that at least half a million of these modems are relatively easy to attack, with no fix other than unplugging them – is that the secret backdoor the FBI and Congress critters keep asking for only works as long as the secret stays “secret”.  Which is impossible, especially when an attacker can look at the code and see the backdoor.  It may take a while, but the secret will come out.

We see this time and time again.  If you can insert a backdoor, hackers can find it.  That is a fact.  And, of course, facts are, well, inconvenient.

So as this discussion continues, people should consider that time and time again backdoors don’t work.

To quote Peter, Paul and Mary – When will we ever learn, when will we ever learn?

Oh yeah, and if you have an Arris cable modem you probably want to replace it – now.

Information for this post came from Threatpost.

Brits Considering Banning Crypto Without Back Doors

UPDATE:  Interesting question:  Right now, the government has the benefit of secrecy when it spies on our internet traffic under laws like the U.S. Patriot Act.  If it wants to decrypt your messages and the app doesn’t have a back door, it has to come to you and ask for your key, which you may or may not give up, and the authorities are no longer able to conduct their spying clandestinely.  I assume their first tactic would be to threaten the software vendor.  If, for example, the vendor is based in a country like Russia, China or Iran, my guess is that this tactic would not work out very well – nor would they want to reveal to those countries who they are spying on.

I further assume that their second tactic would be to hack into the vendor’s development or production environments and insert their own back doors with who knows what ramifications to the vendor.

According to Infosecisland, British Prime Minister David Cameron said that the Brits would pursue banning encrypted messaging apps if the providers did not give them a back door to get around the crypto.

Two apps that they want to ban are Snapchat and WhatsApp.

The U.S. Department of Defense testified before the House Armed Services Committee recently and said that they too were concerned.

The FBI has been asking for several years for laws that require encrypted apps to have a passkey for them, but up until now, Congress has not been in the mood to give them that.

In the 1990s, there was a strong move towards something called the Clipper chip, which would have given them that exact back door.  That would work if encryption was performed in hardware, but today, it is usually performed in software.

Many European politicians are demanding that companies like Google and Facebook spy on their users even more than they do now, but, so far, they have refused as best we know.

Besides the obvious problem of getting a jihadist who is committed to blowing up your entire country to follow a law that says he or she should only use software that has a back door, there are more than a few other problems with this plan.

If Cameron is reelected in May, he said that he would:

  • Ban encrypted online communications without back doors (that is a lot more than Snapchat and WhatsApp)
  • Require ISPs and telecom companies to archive huge quantities of customer data for long periods of time

The ISPs have long complained that the government wants them to keep all this stuff, but the government does not want to pay for all that storage (WhatsApp alone generates 25 billion messages a day, for example).

Exactly how, for example, they plan to force an app developer in say, Hungary or better yet, China, to give them the keys to his crypto is not clear.

This would probably be a good time to remember that Ben Franklin quote:

Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.