Tag Archives: SSDL

Is Your Mobile Phone App Secure? Probably Not!

More than three-fourths of mobile banking vulnerabilities can be exploited without physical access to the phone.

A new report from Positive Technologies has a number of sobering facts:

  • 100 percent of mobile banking apps contain code vulnerabilities due to a lack of code obfuscation.
  • NONE of the mobile banking apps tested had an acceptable level of protection.
  • Attackers can access user data on almost all tested apps.
  • In 13 out of 14 apps, hackers can access data from the client side.
  • Half of the banking apps studied were vulnerable to fraud and funds theft.
  • Hackers were able to steal user credentials from five out of seven banks tested.

And the list goes on.

From the perspective of an app user, this is a bit disconcerting.

From the perspective of a company that may be developing apps, this is a bit of a wake-up call.

Big banks have enormous developer support and they are still not producing secure apps. What does that mean for small and medium-sized companies that do not have that infrastructure?

As a user, you are largely dependent on the developers to do it right, and it does not appear that they are doing such a good job. You can look at reviews, but those are of limited value.

If you are using apps in your company, you can and should test each application's security. If the app contains sensitive data or acts as an interface to sensitive data, that testing is probably not optional.

If you are writing apps or, just as importantly, paying others to write apps on your behalf, there are at least two things to do.

First, make sure the development team has a well-implemented secure software development lifecycle (SSDL) program. Don't just take the developers' word for it when they say, "Sure, we do." Verify that. If you need help either developing or testing a secure software development lifecycle, give us a call.

Second, if you are not already conducting application penetration tests for every major release of applications that you develop or have developed for you, you need to start. Yes, that costs money. But so does having a breach. If your app accesses data of California residents, remember that they can now sue you for up to $750 per record compromised without showing that they were damaged.

A 1,000-record breach equals a $750,000 liability, not counting attorneys' fees and reputation damage. You can do a lot of testing for that amount. And 1,000 records is a tiny breach; you are not Capital One, but their breach exposed 105 million records. You do the math.
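Since I just told you to do the math, here it is, as a quick sketch. It assumes the statutory maximum of $750 per record; the CCPA actually allows $100 to $750 per consumer per incident, so treat this as the worst case:

```python
# Back-of-the-envelope CCPA statutory liability. Assumes the maximum
# $750 per record; the statute allows $100-$750 per consumer per incident.
PER_RECORD = 750

for records in (1_000, 100_000, 105_000_000):   # last figure: Capital One
    print(f"{records:>12,} records -> ${records * PER_RECORD:>18,}")
```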

The maturity level of developing apps today is similar to the maturity level of developing web software around the year 2000. That alone should scare you.

Some questions you can ask your development team:

  • Do you have a dedicated software testing staff?
  • Are they trained to test software for SECURITY FLAWS or only for functionality?
  • Are you using automated security testing tools? (A minimal sketch follows this list.)
  • Are your developers trained to develop software securely?
  • Does the development team have a security development manual? Something that is written down and part of their business process?
  • Who signs off on the security of apps before release? What is their security expertise?
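
To make the automated-testing question concrete, here is a minimal sketch of what an automated security test can look like: two pytest-style checks that an API endpoint sends basic security headers and does not serve plain HTTP. The endpoint URL and header list are placeholders, and a real security test suite goes far beyond headers:

```python
# Minimal security smoke tests (run with pytest). A sketch only:
# the endpoint and required headers below are illustrative.
import requests

ENDPOINT = "https://api.example.com/health"   # hypothetical endpoint

REQUIRED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

def test_security_headers():
    resp = requests.get(ENDPOINT, timeout=10)
    missing = [h for h in REQUIRED_HEADERS if h not in resp.headers]
    assert not missing, f"missing security headers: {missing}"

def test_https_enforced():
    # The plain-HTTP variant should redirect to HTTPS; requests follows
    # redirects by default, so the final URL must be https.
    resp = requests.get(ENDPOINT.replace("https://", "http://", 1), timeout=10)
    assert resp.url.startswith("https://"), resp.url
```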

The evidence is that app security is not so great. What are you doing to improve it? Credit: SC Magazine

Security News for the Week Ending June 19, 2020

Akamai Sees Largest DDoS Attack Ever

Akamai says that one of its customers was hit with a 1.44 terabit per second denial of service attack; a second attack that week peaked at 500 million packets per second. The attacks used a variety of amplification techniques that required some custom coding on Akamai's part to control, but the client was able to weather the attack. Credit: Dark Reading

Vulnerability in Trump Campaign App Revealed Secret Keys

Trump's mobile campaign app exposed Twitter application keys, Google apps and maps keys, and Branch.io keys. While the vulnerability did not expose user accounts, it would have allowed an attacker to impersonate the app and cause significant embarrassment to the campaign. This could be due to sloppy coding practices or the lack of a secure development lifecycle. Credit: SC Magazine
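
One cheap defense is to scan your source tree for anything that looks like an embedded key before every release. A minimal sketch; the patterns are illustrative rather than exhaustive, and a maintained scanner such as gitleaks or truffleHog is the better choice in practice:

```python
# Rough scan of a source tree for strings that look like embedded keys.
import re
from pathlib import Path

PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}
EXTENSIONS = {".java", ".kt", ".swift", ".m", ".xml", ".plist", ".js", ".py"}

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in EXTENSIONS:
            continue
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line}: possible {name}")

if __name__ == "__main__":
    scan("./app/src")   # placeholder path to your app's source
```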

FBI and Homeland Use Military-Style Drones to Surveil Protesters

Homeland Security has been using a variety of techniques, all likely completely legal, to keep track of what is going on during the recent protests.

Customs and Border Protection (part of DHS) has Predator drones, for example. Predator drones have been used in Iraq and other places, and some versions carry large weapons such as missiles. The DHS drones likely carry only high-resolution spy cameras (which can, reportedly, read a license plate from 20,000 feet up) and cell phone interception equipment such as Stingrays and Crossbows. Different folks have different opinions as to whether the same type of equipment we use to hunt down terrorists is appropriate on U.S. soil, but that is a conversation for some other place. Credit: The Register

Hint: If You Plan to Commit Arson, Wear a Plain T-Shirt

A TV news chopper captured video of a masked protester setting a police car on fire. Two weeks later, federal agents knocked on her door and arrested her for arson.

How? She was wearing a distinctive T-shirt, sold on Etsy, which led investigators to her LinkedIn page and from there to her profile on Poshmark. While some are saying that is an invasion of privacy, I would say that the Feds were conducting open source intelligence (OSINT). The simple solution is to wear a plain T-shirt. If you are committing a felony, don't call attention to yourself. Credit: The Philly Inquirer

Ad-Tech Firm BlueKai has a bit of a Problem

BlueKai, owned by Oracle, had billions of records exposed on the Internet due to an unprotected database. The data is collected from an amazing array of sources, from tracking beacons on web pages and emails to data that BlueKai buys from a variety of vendors. Apparently the source of the breach is not Oracle itself but rather two companies Oracle does business with. Oracle has not said whether those companies were customers, partners or suppliers, and it has not publicly announced the breach. If there were California or EU residents in the mix, it could get expensive. The California AG has refused to say whether Oracle has notified them, but this will not go away quietly or quickly. Credit: Tech Crunch

Open Source – The New Attack Vector

There are people who think open source is the holy grail of software. I am not one of them. Apparently hackers agree with me. So does the Department of Defense; they have even coined a term for the problem – SCRM, or Supply Chain Risk Management.

Bottom line: developers need to understand that there is a war out there and they are the target. Sonatype, the open source tools and governance company, says that the use of vulnerable open source components is up 120% over the last 12 months.

Sonatype estimates that there are 1.3 million – yes, million – vulnerabilities in open source software components that are not recorded in the National Vulnerability Database managed by NIST.

Sonatype estimates that the average enterprise downloads 170,000 open source components a year, of which roughly 1 in 8 contains some form of vulnerability. Sometimes those vulnerabilities get exploited in as little as 3 days.

Developers are still downloading vulnerable versions of Apache Struts (as in the Equifax breach) about 80,000 times every month.

Downloads of vulnerable versions of the Spring Framework ran around 85,000 a month last year; this year they are still running about 72,000 a month.

To add insult to injury, hackers are starting to inject vulnerabilities directly into some open source packages.  Done cleverly, such a logic bomb might never be discovered.

Point is, still a HUGE problem.

So what do you need to do?

#1 – Admit that open source software is far from bug free – even hugely popular packages like Apache Struts.

#2 – Create an SCRM program. The larger the open source software package, the more difficult it is to make sure that it is safe.

#3 – Consider using automated tools to detect vulnerabilities (a sketch follows below). Some of the tools are free and others are very expensive, and all of them change the development process. Some are built into the tools that developers are already using.
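
As an illustration of what such a tool does under the hood, here is a sketch that asks the public OSV vulnerability database (osv.dev) about pinned dependencies. The dependency list is made up for the example; real tooling such as pip-audit or OWASP Dependency-Check parses your lockfiles and walks transitive dependencies for you:

```python
# Query osv.dev for known vulnerabilities in pinned dependencies.
import requests

OSV_URL = "https://api.osv.dev/v1/query"

# (ecosystem, package, version) -- example entries only.
DEPENDENCIES = [
    ("Maven", "org.apache.struts:struts2-core", "2.3.31"),
    ("PyPI", "requests", "2.19.0"),
]

for ecosystem, name, version in DEPENDENCIES:
    resp = requests.post(OSV_URL, timeout=10, json={
        "package": {"ecosystem": ecosystem, "name": name},
        "version": version,
    })
    resp.raise_for_status()
    vulns = resp.json().get("vulns", [])
    if vulns:
        print(f"{name} {version}: VULNERABLE ({', '.join(v['id'] for v in vulns)})")
    else:
        print(f"{name} {version}: nothing known to OSV")
```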

#4 – Create a process for finding out about patch availability. Unfortunately, except for the most popular open source packages, patches may never come, so you are pretty much on your own.

#5 – Treat open source packages just like code you develop when it comes to code reviews and testing. The only difference is that you can't influence the development process.

Information for this post came from The Register.

Secure Software Development Lifecycle Process Still Lacking

In late 2015 Juniper announced that it had found two backdoors in the router and firewall appliances that it sells. Backdoors are unauthorized ways to get into these systems that bypass security. Kind of like going around to the back of the house and finding the kitchen door unlocked when no one is home. Researchers said that there were telltale signs that this was the work of the NSA, although no one would ever confirm that, of course. If these backdoors were the work of the intelligence community, let's at least hope it was OUR intelligence community and not the CHINESE. Whether these backdoors were intentionally installed in the software with the approval of Juniper management at the request (and possibly payment) of the NSA is something we will never know (see the article in Wired here).

At the time, Cisco, Juniper’s biggest competitor, said that they were going to look through their code for backdoors too.  They claimed that they did and that they didn’t find any.

Fast forward two years and now the shoe is on the other foot.

In May, Cisco announced the FOURTH SET of backdoors in the last FOUR months. Possibly their code audit from 2015 is still going on, but if so, it has been running for more than 30 months, which seems like a long time.

The most recent SET of bugs includes three that are rated 10 out of 10 on the government's CVSSv3 severity scale.

The first of the three is a hardcoded userid and password with administrative permissions.  What could a hacker possibly do with that?

The second provides a way to bypass authentication (AKA “we don’t need no stinkin passwords”) in a component of some Cisco software (DNA Center).

The third is another way to bypass authentication, this time in some of Cisco's APIs that programmers use.

In fairness to Cisco, they do have a lot of software.

But to beat Cisco up – WHAT THE HELL WERE THEY THINKING TO ALLOW HARD CODED PASSWORDS IN THE SOFTWARE IN THE FIRST PLACE?
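
For contrast, here is a minimal sketch of the non-negligent alternative: pull credentials from the runtime environment (or, better, a secret manager) and fail closed if they are missing. The variable name is illustrative:

```python
# Never ship a default credential: read it at runtime and fail closed.
import os
import sys

def required_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        sys.exit(f"fatal: required secret {name} is not set")
    return value

ADMIN_PASSWORD = required_secret("ADMIN_PASSWORD")   # illustrative name
```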

Source: Bleeping Computer

Okay, now that I am done beating up Cisco (actually, not quite, I have one more), what lessons should you learn from this?

First (the last time today that I am going to beat Cisco up): in order for a Cisco customer, who paid a lot of money for the equipment in the first place, to get these security patches (patches that plug holes that should never have been there in the first place), that customer has to PAY for software maintenance. If you let the maintenance lapse, you can re-up, but Cisco charges you a penalty for letting it lapse. For this policy alone, I refuse to recommend Cisco to anyone.

Second, if you are a Cisco user, because of this very user-unfriendly policy you must buy software maintenance and not let it expire. If you do let it expire, you will not be able to get any Cisco security patches. Remember that, as one of the biggest players in the network equipment space, Cisco is constantly under attack, so the odds of more bugs turning up are essentially 100%.

Third, no matter whose network equipment you use, you must stay current on patches. These flaws were being exploited within days, and since hackers know that many Cisco customers do not pay for maintenance, those holes, which are now publicly known, will be open forever.

Only half in jest, my next recommendation would be to replace the Cisco equipment. There are many alternatives, some even free if you have the hardware to run them on.

Okay, that handles the end user.

But there is an even bigger lesson for software developers here.

How did these FOUR sets of backdoors get into the software in the first place?

Only one possible answer exists.

A poor or non-existent secure software development lifecycle program (known as an SSDL) inside the company.

AS AN END USER CUSTOMER, WHEN IT COMES TO SECURITY SOFTWARE ESPECIALLY, YOU SHOULD BE ASKING ABOUT THE VENDOR’S SECURE SOFTWARE DEVELOPMENT LIFECYCLE PROGRAM.  

IF YOU GET AN EVASIVE ANSWER, FIND A DIFFERENT VENDOR.  VOTE WITH YOUR CREDIT CARD.

As a developer or developer manager, it is your responsibility to make sure that customers don’t vote with their credit cards.

IMPLEMENT a secure software development lifecycle program.

CREATE and MONITOR security standards.

TEST for conformance with those standards.

EDUCATE the entire development team – from analysts to testers – about the CRITICALITY of the SSDL process.

Advertisement: we can help you with this.

While Cisco is big enough to weather a storm like this, smaller companies will not be so lucky.  The brand damage could be fatal to the company and all of its employees.

Software Supply Chain Attacks are Real

For those of you who have been reading my blog for some time, you know that I have written about the software supply chain security problem before. In a nutshell, the problem is that programmers rarely write code from scratch anymore. Instead, teams write pieces of code and integrate them. Then there is limited testing due to time and budget. Finally, everyone crosses their fingers and the code is released.

The folks at CCleaner discovered the hard way that it doesn’t always work out the way you expected.  Or hoped.

About six months ago researchers at Talos (a part of Cisco) and Morphisec discovered that the absurdly popular disk cleaner CCleaner had been compromised and that infected copies had been downloaded from the official web site for a month.

Worse yet, the code was cryptographically signed, which meant two things: most users would trust it, and the attack happened from within CCleaner's four walls.
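
As an aside for end users: verifying a download against a vendor-published hash is a cheap habit. It would NOT have caught this particular attack, because the infected build was the officially signed and published one, but it does defeat tampering in transit or on mirrors. A sketch, with a placeholder file name and hash:

```python
# Check a downloaded installer against the vendor's published SHA-256.
import hashlib

INSTALLER = "ccsetup.exe"            # the downloaded file
PUBLISHED_SHA256 = "0123abcd..."     # placeholder; copy from the vendor site

digest = hashlib.sha256()
with open(INSTALLER, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() != PUBLISHED_SHA256:
    raise SystemExit("hash mismatch -- do not run this file")
print("hash matches the published value")
```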

Finally, more details of the story are coming out, and they are useful for anyone else who writes software, for free or for money, and distributes it to outside parties. This could be YOU!

2.27 million infected downloads (in just a month) later, Avast, the owner of CCleaner, is spilling the beans.

Not only is this a software supply chain lesson, it is also a merger and acquisition lesson, because the compromise was discovered right after Avast bought Piriform, the maker of CCleaner.

The attackers had stolen credentials and used them to log into Piriform's London network via TeamViewer, the remote desktop software that Piriform used. From there they infected other computers, working only at night when the machines were likely idle, to avoid detection.

They then installed malware called ShadowPad, which allowed them, among other things, to log every single keystroke on the infected machines.

Then they waited. Two months after the acquisition closed, they infected the software inside the fence and waited for the infected build to be signed and uploaded to the web.

The attackers were very smart on top of this. While 2.27 million infected copies were downloaded and 1.65 million copies phoned home to the control server for instructions, only 40 payloads, spread across 11 highly targeted companies, were activated with a second stage. That is very patient: letting over two million copies be downloaded in order to hit only 40 very precise targets. Those targets were, in particular, tech companies like Cisco.

Information for this post came from Wired.

So what does this mean for you?

First, if you are acquiring a company – or selling one – this could happen to you. If you are the seller, you could be sued for millions. If you are the buyer, you could be on the hook for millions. It all hinges on the words in the contract. CONDUCTING SOFTWARE SECURITY DUE DILIGENCE DURING AN ACQUISITION IS VERY IMPORTANT. This is an example of why.

While this is not an example of downloading an infected library, the library did get infected. How did the bad guys infect the code and get it checked in to the official build? Why did no review detect added code that no one had officially added? A SECURE SOFTWARE DEVELOPMENT LIFECYCLE process might have caught this.

Could this have been caught during testing? Probably. You would have needed to watch where on the Internet CCleaner was talking, and to notice destinations it shouldn't have been contacting. In fact, since it was trying to talk to servers in Russia and Korea, that could have been an alarm bell, since the test network should never have tried to do that. But you have to be looking for it.
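
As a rough illustration of that kind of watching, here is a sketch using the psutil library to flag outbound connections to anything outside an allowlist while your tests run. The allowed addresses are made up, and a real test rig would enforce this at the network layer with an egress firewall and logging:

```python
# Flag unexpected outbound connections on the test machine.
import psutil

ALLOWED = {"127.0.0.1", "10.0.0.5"}   # hypothetical test-lab destinations

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip not in ALLOWED:
        print(f"UNEXPECTED: pid {conn.pid} -> "
              f"{conn.raddr.ip}:{conn.raddr.port} ({conn.status})")
```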

How come the attackers were able to compromise the TeamViewer access in the first place? My bet is that Piriform was not using two-factor authentication. Bad boys and girls. I know two-factor is not friendly. Neither is having two million infected copies of your software downloaded by your customers.
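
For what it's worth, two-factor is not hard to add. Here is a minimal TOTP sketch using the pyotp library (pip install pyotp); in real life the per-user secret is generated once at enrollment, delivered via a QR code, and stored server-side:

```python
# Minimal TOTP (RFC 6238) two-factor check with pyotp.
import pyotp

secret = pyotp.random_base32()        # generated and stored per user
totp = pyotp.TOTP(secret)

# The URI below is what you encode in the enrollment QR code.
print("provisioning URI:", totp.provisioning_uri(
    name="alice@example.com", issuer_name="ExampleCorp"))

code = input("6-digit code from the authenticator app: ")
print("accepted" if totp.verify(code) else "rejected")
```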

In the end you need to look at the entire software development process and think like a hacker to decide where he or she could compromise the process.

Obviously, these guys did.

How many other companies are already infected and don’t even know it?  THAT IS WHAT IS SCARY!