Category Archives: Software Development

Application Security – The Neglected Threat

When companies like Microsoft or Oracle develop software, they have massive teams whose only job is to find bugs in that software.  They have also made significant investments in automated tools to help with software quality assurance.  Still, Microsoft typically patches 10-20 new bugs month after month, and Oracle often patches 100 bugs a quarter.

Given these companies' results in spite of major investments in technology and people, what does that mean for the average software development shop that doesn't have the tools, personnel or budget that these major software shops have?

Security Compass, a Canadian company that assists Fortune 500 companies with software security issues, conducted a study of financial institutions' application security practices.

Here are some of the findings of the report:

  • While most financial institutions have created security development lifecycle practices, very few of them can actually validate how well they are doing at following them.
  • Three out of four rate application security as a critical or high priority.
  • 89% use the BSIMM (Building Security In Maturity Model) while almost all of the others use some form of framework or standard.
  • When it comes to metrics to measure how effective these frameworks are, most do not have a robust KPI measurement process.  Many measure raw vulnerability counts (77%),  which is a very basic measurement.
  • Less than half measure how long it takes to fix bugs (see the time-to-fix sketch after this list).
  • Only a little more than a third track whether developers actually use the security tools called for in the policies.
  • The study showed that 58% of the banks use some third party software, but less than half of the financial institutions require their vendors to have a security development lifecycle process or even an application security policy.
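
To make the KPI point concrete, here is a minimal sketch of the time-to-fix metric that most of the surveyed institutions do not track.  The finding records and field names are made up for illustration; real numbers would come from a bug tracker or scanner export.

    # Sketch of a basic time-to-fix KPI. The findings below are hypothetical;
    # real data would come from a bug tracker or vulnerability scanner export.
    from datetime import date
    from statistics import mean

    findings = [
        {"id": "APP-101", "severity": "high",   "opened": date(2016, 1, 4),  "closed": date(2016, 2, 1)},
        {"id": "APP-102", "severity": "high",   "opened": date(2016, 1, 11), "closed": date(2016, 3, 2)},
        {"id": "APP-103", "severity": "medium", "opened": date(2016, 2, 2),  "closed": None},  # still open
    ]

    def mean_days_to_fix(findings, severity=None):
        """Average days from open to close for resolved findings, optionally filtered by severity."""
        durations = [
            (f["closed"] - f["opened"]).days
            for f in findings
            if f["closed"] is not None and (severity is None or f["severity"] == severity)
        ]
        return mean(durations) if durations else None

    print("Mean days to fix (all):", mean_days_to_fix(findings))
    print("Mean days to fix (high):", mean_days_to_fix(findings, "high"))

Even a metric this simple says more about a program's health than a raw vulnerability count does.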

These results come from financial institutions, where security and process are usually front and center.  If this is the reality for organizations with a high security awareness profile, how does the average organization rank on security process and practices?  Likely, not very well.

Smaller development shops – say those with fewer than 50 developers – likely don't have a security development lifecycle (SDLC) process at all.  They likely don't have automated tools to detect bad coding practices, and they likely have a small (or no) quality assurance department.  If they do have a software QA department, it is likely looking for functionality problems rather than security issues and is not trained to find security problems.

If the software is developed under a development contract, that contract likely does not specify the requirements for an SDLC process or for any of the other security processes that large software development shops have.

In addition, they likely do not conduct independent, third-party penetration tests to find those security issues.

As a result, it is likely that those custom applications are a hacker’s dream gateway into your organization and you likely will never know.

Companies that develop their own custom software – including web sites – or contract for its development need to up their game if they want to keep hackers out.  If they don't, the hackers will continue to quietly thank them.

Information for this post came from Dark Reading.


Symantec Anti Virus Security Problems Exposed

Anti-virus software has long been a concern of the security community.  While it endeavors to protect the user's workstation, in order to do its job it requires a lot of system-level permissions.  This week, at least for Symantec, that came home to roost.

Tavis Ormandy, a researcher at Google, announced that he had found numerous critical security vulnerabilities in Symantec's suite of anti-virus software.  That suite covers 17 enterprise software products and 8 consumer and small business products.

While some of the bugs are simple, others are far more serious and would allow an attacker to remotely control the user's computer.

One bug would allow an attacker to take over an entire enterprise just by sending an infected file or a malicious link – without the user ever doing anything.  This is because the anti-virus software has to open files and follow links as they arrive to check whether they are malicious, and that parsing code is where the flaws live.

Ormandy says these flaws are "as bad as it gets."  He has made a career out of finding security holes in security software; his previous finds include FireEye, Kaspersky, McAfee, Sophos and Trend Micro – pretty much everyone in the anti-virus business and then some.

While we do not know how actively hackers and foreign governments are exploiting these vulnerabilities, they probably will now if they have not been doing so in the past.

What is not clear is why these vulnerabilities exist in the first place.  After all, security companies, more than anyone else, should understand the problem of vulnerable software.  Yet, apparently, they do not.

Chris Wysopal of software testing vendor Veracode had a number of comments to make about the situation.  He thinks that at least some of these vulnerabilities would have been detected by the software testing products his company makes.

Symantec has now patched these vulnerabilities, but that doesn’t mean that customers have applied these patches.  It also doesn’t mean that there aren’t other vulnerabilities not yet detected.

And since most of this code from Symantec and other vendors like them runs with very high privileges, this software is more likely to put your system at risk than, say, a word processor.

At a minimum, everyone needs to make sure that their anti-virus software is patched as soon as the patches are released.  When they are released to you, they will be released to the hackers as well.

Ormandy says that maybe the anti-virus vendors did not understand that they had a problem, but I have a hard time believing that.  More likely, they figured they could get away with not spending much effort on testing their software.  Mr. Ormandy is on a mission to prove that theory wrong, and I think he is doing pretty well at it.

Information for this post came from Wired.


Newly Discovered Windows BadTunnel Attack Has Been Around For 20 Years

A Chinese researcher has “discovered” a Windows flaw which affects all versions of Windows released in the last 20 years.  It does not require installing malware and it can be executed silently with near perfect success.

While no one seems to be saying this, I wonder if the Chinese have known about this attack for years or decades and just now, for some reason, are making it public.

The researcher, Yang Yu of Tencent's Xuanwu Lab, says BadTunnel is basically a technique for NetBIOS spoofing across networks: the attacker can get access to network traffic without being on the victim's network, and can also bypass firewall and Network Address Translation (NAT) devices.

It can be exploited via Office, Edge, Internet Explorer and some third party apps.

Without going into a lot of details, here is how it works.  The researcher is going to present a paper on the attack at Black Hat.

BadTunnel exploits a series of security weaknesses: how Windows resolves network names and accepts responses; how the IE and Edge browsers support web pages with embedded content; how Windows handles network paths specified by an IP address; how NetBIOS Name Service (NBNS) NB and NBSTAT queries handle transactions; and how Windows handles queries on UDP port 137.  Lumped together, these weaknesses make the network vulnerable to a BadTunnel attack.

Since it affects all versions of Windows released in the last 20 years, including desktops and servers, installing the patch ranks as “pretty important”.

If for some reason you cannot install the patch, make sure you block all NetBIOS traffic (UDP port 137) at your firewall.
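
For those who want to check whether NetBIOS name service traffic is still floating around their network, here is a minimal sketch of a passive listener on UDP port 137 (the port called out above), using only Python's standard library.  It is an audit aid, not a fix – the fix is the patch and/or the firewall rule – and it assumes it runs on a monitoring host where nothing else (such as Windows itself) already owns port 137.

    # Passively watch for NetBIOS Name Service (NBNS) traffic on UDP port 137.
    # Assumption: run on a monitoring host where port 137 is free; binding to a
    # low port generally requires root/administrator privileges.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 137))
    print("Listening for NBNS traffic on UDP/137 ... Ctrl+C to stop")

    while True:
        data, (src_ip, src_port) = sock.recvfrom(2048)
        # Any packet seen here means NetBIOS name service is still active somewhere.
        print(f"NBNS packet, {len(data)} bytes, from {src_ip}:{src_port}")

Broadcast NBNS queries on the local subnet will show up in the output, which makes it easy to confirm whether blocking the port actually took effect.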

The interesting thing is that this bug has been around for 20 years, which means the affected code, including the code in Windows 10, is 20 years old.  This goes back to my soapbox topic of software supply chain security.  It is just another example of how the software libraries you integrate into your new code (like the old NetBIOS code in Windows 10) can come back to haunt you in a serious way.

Information for this post came from Dark Reading.


More Data is Better – Or Is It?

Talk to Google or Facebook and they will tell you that they never met a piece of information that they did not want to add to their databases.  More information means better profiles;  better profiles mean that they can charge more for ads.

But some Silicon Valley firms are rethinking that idea.

Silicon Valley startup Envoy, for example, has made a decision to keep as little customer information as possible.  That way, if the government asks for the data, Envoy can say it does not have it.

Some large tech firms are beginning to offer services that rely far less on collecting user data.

Even early stage startups are beginning to realize that between government demands for data and hackers, that holding more data is a liability rather than an asset.

Startups are beginning to invest scarce resources to reduce the amount of data that they collect, even if it slows short-term growth.

Even Marc Andreessen, the prominent venture capitalist and cofounder of Netscape, said “Engineers are not inherently anti-government, but they are becoming radicalized, because they believe that the FBI, in  particular, and the U.S. government, more broadly, wants to outlaw encryption”.

Andreessen says that startups are “particularly wary” of Burr-Feinstein, the proposed legislation that would force vendors to add back doors to their encryption software.

For some tech vendors, it is not possible to follow this data minimization strategy because they depend on selling that data to make money.  Other vendors need access to the data in order to deliver their service – web-based email is an example of this.

Other vendors – Apple's iMessage, WhatsApp, Signal and others – have added end-to-end encryption where the vendor does not hold the keys.  If the FBI comes to them, they can truthfully say that they do not have access to the data.
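
As a toy illustration of that design point – and emphatically not how iMessage, WhatsApp or Signal actually work (they use ratcheting key-exchange protocols) – here is a sketch, assuming the third-party Python 'cryptography' package, of encrypting on the client with a key the vendor never sees.

    # Toy illustration only: the key is generated and kept on the client, so the
    # vendor's server only ever stores ciphertext and has nothing useful to hand over.
    from cryptography.fernet import Fernet

    client_key = Fernet.generate_key()            # stays on the user's device
    client = Fernet(client_key)

    ciphertext = client.encrypt(b"meet at 6pm")   # what the vendor's server stores
    print("Server sees only:", ciphertext[:40], b"...")

    # Only a holder of client_key can recover the message.
    print("Client decrypts:", client.decrypt(ciphertext))

The design choice is the point: if encryption happens on the device and the key never leaves it, there is simply nothing meaningful for the vendor to disclose.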

Whatever the outcome, the government has certainly changed the conversation in Silicon Valley and that will influence the design of systems for a long time.  We will have to wait and see how this all plays out.

Information for this post came from the Washington Post.


7-Zip Flaws Reveal Soft Underbelly of the Software Supply Chain

Do you use 7-Zip?  Do you even know what it is?  One of the challenges that businesses and consumers have is that, like sausage, they often do not know what is in the software that they use.  As a result, they could be diligent about applying patches and still be exposed to hackers.

In this case, the product is 7-Zip.  7-Zip is a very popular free and open source file compression utility. Unfortunately, researchers at Cisco Talos have discovered two vulnerabilities that would allow malicious actors to execute arbitrary code with the same permissions as the user who runs the code.  If the program running the code is a system utility, it will have greater permissions than the average user, making the problem more concerning.

But here is the real problem.  Let's assume that you are a diligent computer user and you apply all of the patches available for your products.  However, you use some product – maybe open source, maybe commercial – that incorporates a third-party library – in this case 7-Zip's library – and that developer is not as diligent as you are.  They do not realize there is a problem and do not release an updated version of their product.  You, the diligent user, may still be vulnerable, even though you have applied all of the available patches.

In this case, 7-Zip is used by hundreds of products.  A quick Google search finds 7-Zip copyright notices for the backup software Commvault, the storage vendor EMC, the antivirus product Malwarebytes, the security product FireEye and many others.  Exactly how each of these vendors use 7-Zip and in which of their products is unknown, so any given user’s exposure is also unknown.

This is what I call the software supply chain problem.  As a software developer, whether in house for your company or as a commercial or open source developer, you need to understand precisely which versions of which third-party code are incorporated into your software.  Worse yet, developers LOVE to scan public code repositories to find a routine that does some function instead of reinventing the wheel.  In that case, no one is responsible for that piece of the software supply chain other than you, the developer who did the copy and paste.  Did you consider whether that piece of software was bug free?

Back to the 7-Zip problem.  It was caused by failing to check inputs to certain functions to make sure those inputs did not cause unintended consequences.  While the details are rather geeky, lack of validation of untrusted input is possibly the single biggest source of security vulnerabilities there is.  It is such a common problem that an entire class of tools, called fuzzers, exists to find these bugs – and, in the hands of hackers, to exploit them.
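
To illustrate the bug class (this is not 7-Zip's actual code, which is written in C/C++), here is a sketch of validating an untrusted length field read from a file, plus the few lines of random mutation that capture the basic idea behind a fuzzer.

    # Illustration of the bug class, not 7-Zip's actual code.
    # A length field read from an untrusted archive header must be validated
    # before it is used; the loop at the bottom is the idea behind a mutation
    # fuzzer -- throw malformed inputs at the parser and see what breaks.
    import random
    import struct

    MAX_RECORD = 1 << 20  # 1 MiB cap, chosen arbitrarily for this example

    def parse_record(blob: bytes) -> bytes:
        """Parse a toy record: 4-byte little-endian length, then that many payload bytes."""
        if len(blob) < 4:
            raise ValueError("truncated header")
        (length,) = struct.unpack_from("<I", blob, 0)
        # The check missing in vulnerable parsers: make sure 'length' is sane before using it.
        if length > MAX_RECORD or length > len(blob) - 4:
            raise ValueError(f"declared length {length} exceeds available data")
        return blob[4:4 + length]

    # Minimal mutation fuzzing loop: overwrite random bytes in a valid sample and
    # confirm the parser fails cleanly (raises ValueError) instead of misbehaving.
    sample = struct.pack("<I", 5) + b"hello"
    for _ in range(1000):
        mutated = bytearray(sample)
        mutated[random.randrange(len(mutated))] = random.randrange(256)
        try:
            parse_record(bytes(mutated))
        except ValueError:
            pass  # clean rejection is the desired behavior
    print("fuzz loop finished without unexpected errors")

Real fuzzers are far more sophisticated, but the principle is the same: feed the parser input it never expected and make sure it refuses politely rather than running off the end of a buffer.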

OK, so I now understand the problem, what the heck do I do?

There are three answers to this question:

  1. As a user, make sure that you understand every single software application installed on your system and check for and apply patches for each of them when they become available.  Unfortunately, tools like Windows Update DO NOT perform this function for you.  As a result, I generally recommend that users try to only install software that is REQUIRED.  Some users love to install any software that seems interesting.  The more software that you have, the more software that you have to monitor for patches.  As a corollary to this, for systems that process critical data like, say, credit cards or health information, this should be a rule and not a recommendation.
  2. As a developer, you need to document the use of every piece of software that your team integrates into your product.  It does not matter whether that software is commercial, open source or from a code snippet library.  Then, for the rest of eternity (or at least for as long as that code is being used), you need to be diligent about watching for vulnerabilities and, if you find them, creating and releasing patches.  A sketch of the kind of component manifest this requires follows this list.
  3. As a purchaser of software, you should be asking vendors how they manage the software supply chain problem.  For one thing, it will give you a warm fuzzy (or not) that they are on top of this potentially critical problem.  For another, you may learn something that you can use internally in your company.
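
As an illustration of item 2, here is a sketch of the kind of third-party component manifest a development team could keep and check when an advisory like the 7-Zip one appears.  The product name, component versions and advisory data are placeholders, not real advisories.

    # Sketch of a machine-readable third-party component manifest. All names,
    # versions and advisory details below are placeholders for illustration.
    manifest = {
        "product": "ExampleBackupTool 4.2",
        "components": [
            {"name": "7-Zip (7z.dll)", "version": "9.20",  "source": "binary redistributable"},
            {"name": "zlib",           "version": "1.2.8", "source": "statically linked"},
        ],
    }

    # A hypothetical advisory: versions of a component below this are affected.
    advisory = {"name": "7-Zip (7z.dll)", "affected_before": "16.00"}

    def version_tuple(v: str):
        return tuple(int(part) for part in v.split("."))

    for component in manifest["components"]:
        if (component["name"] == advisory["name"]
                and version_tuple(component["version"]) < version_tuple(advisory["affected_before"])):
            print(f"{manifest['product']}: ships {component['name']} "
                  f"{component['version']} -- update and re-release required")

The exact format matters far less than the discipline of keeping the list current; without it, you cannot even answer the question "are we affected?" when an advisory lands.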

And the last thing to remember: if you are using an old version of a product (like the millions of PCs still running Windows XP, just as an example), it is likely that NO ONE is checking it for security vulnerabilities.  You are fully on your own guarding your machine.  Unfortunately, you are coming to a knife fight with a spoon, which is why, even though it may cost you money, you should only run current versions of the applications on your systems.

Information for this post came from Network World.


GCHQ Pulls Kill Switch On Smart Meter Rollout

GCHQ is the British equivalent of the NSA.  Usually, they are out chasing bad guys in foreign countries; this week, they are protecting British citizens.  With all of the news of intelligence agencies eavesdropping on citizens, it is nice to hear a story where they are decidedly doing the right thing.

This all started with a plan to roll out smart meters to manage electricity and gas to every building in England.

This amounted to 53 million meters.

These smart meters don't just read the amount of electricity or gas that you use; they can shut off your utilities completely and do other things as well.

Imagine what would happen if a hacker – or an unfriendly government – were to gain control of all of these meters and shut down power to every building in the country.  What if they not only did that, but also overwrote the firmware in the meters so that the utilities could no longer turn the electricity back on and had to replace all 53 million meters?  This is not far-fetched.  It is basically what happened in Ukraine last December when the Russian government decided to mess with Ukraine's infrastructure.

Well, how could that happen?  It appears that the utilities and meter manufacturers, according to sources, understand a lot more about how to make a meter than about how to write software.  In reality, this is not a big surprise.

So what did they do?  They created a system where all 53 million meters were protected with the same encryption key.

If that one key was compromised – say by reverse engineering a meter – the attacker might then be able to control every other meter in the country.

What could possibly go wrong?

In this case, GCHQ, which apparently does not have a vested interest in reading your electric meter, put the kibosh on the whole thing.  Good for them!

The program to replace all the meters is already forecast to cost about $18 billion.  Customers are supposed to save about $39 a year, but they will have to buy a $45 device to read their usage – so it takes a bit over a year of savings just to pay for the device.

How much more it will cost to rewrite the software – both in the meters and at the utilities – depends on how bad the software from these "metal bashers" (as the meter companies are not so fondly called) turns out to be.  That software will need to manage 53 million encryption keys instead of just one, which could be simple or could be very complex.
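
To show what per-meter key management might look like in principle – this is a generic sketch, not the actual British smart meter design – here is a standard-library example of deriving a unique key for each meter from a protected master secret and the meter's serial number, so that reverse engineering one meter no longer exposes every other meter.

    # Generic sketch, not the actual smart meter design: derive a unique key per
    # meter from a master secret plus the meter's serial number. The master
    # secret value and serial format below are placeholders.
    import hashlib
    import hmac

    MASTER_SECRET = b"keep-this-in-an-HSM-not-in-source-code"   # placeholder value

    def meter_key(serial_number: str) -> bytes:
        """Per-meter key via HMAC-SHA256(master_secret, serial). A key extracted
        from one meter reveals nothing about the master secret or other meters' keys."""
        return hmac.new(MASTER_SECRET, serial_number.encode(), hashlib.sha256).digest()

    print(meter_key("GB-METER-00000001").hex())
    print(meter_key("GB-METER-00000002").hex())   # different serial, unrelated key

The operational cost is real – the head-end systems have to look up or re-derive the right key for each meter – but it removes the single point of failure that one shared key creates.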

In this case, hopefully, no one is going to complain about the spy agency watching because if the utilities had their way, it would only be a matter of when, not if, Britain went dark.

As I always say – security or convenience.  Pick one.

