Why The Software Supply Chain is The Rhinoceros Head in the Corner

As if Yahoo didn’t have enough trouble, it apparently was using a third party software library called ImageMagick which had a serious security bug in it.

The library, which is used to manipulate images, is very widely deployed.  Or at least, it was; some people say that it has not aged well.

Security researcher Chris Evans dubbed the bug YahooBleed #1 after all the “bleed” bugs identified over the last few years.

The bug is now fixed, but every developer who has integrated the package into their software has to recompile and re-release their software.

And even if the developers do re-release their software, users need to know about it and download and install the updated version.  Web server managers need to upgrade their web servers.

Yahoo apparently had enough of this and “retired” the library.

For businesses that develop software or pay people to develop it, this third party library issue is a huge problem.

Developers often use third party libraries because it doesn’t make sense to reinvent the wheel.  Whether the library is licensed for a fee or open source, the problem is similar – although for licensed software, if you don’t pay the maintenance fee the developer charges, you may not even be able to get the new version of the software.

So the question for managers and executives to ask is whether your in-house or contracted development team has a software supply chain management policy and, if so, how it works.  Someone in your company – whether that is the CIO, CTO, CISO or VP of IT – needs to be convinced that the process works.  That is probably one of the biggest security issues in the software world today.
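In practice, a supply chain check can start as simply as comparing a project’s pinned dependency versions against an advisory feed. A minimal sketch – the package names and advisory entries below are made up for illustration; a real program would pull them from a vulnerability database:

```python
# Minimal dependency audit sketch: flag any pinned dependency whose
# version appears in a known-vulnerable advisory list.

def find_vulnerable(pinned, advisories):
    """Return {package: (version, advisory)} for every pinned
    dependency listed as vulnerable at that exact version."""
    findings = {}
    for package, version in pinned.items():
        bad_versions = advisories.get(package, {})
        if version in bad_versions:
            findings[package] = (version, bad_versions[version])
    return findings

# Hypothetical requirements file, already parsed into name -> version.
pinned = {"imagelib": "6.8.9", "webframework": "2.1.0"}

# Hypothetical advisory feed: package -> {bad version: advisory id}.
advisories = {"imagelib": {"6.8.9": "CVE-XXXX-YYYY"}}

print(find_vulnerable(pinned, advisories))
# prints {'imagelib': ('6.8.9', 'CVE-XXXX-YYYY')}
```

The point is not the twenty lines of code; it is that someone owns the advisory feed, runs the check on every build, and acts on the findings.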

We just saw the WannaCry worm spread like wildfire because even though Microsoft released patches to stop it in March, many organizations had not installed those patches two months later.

Compare that with all of the internally developed and externally contracted software that likely has bugs and security holes.  Is that software being patched regularly, including the third party libraries it uses?  In many cases, the answer is no.  Some of it likely has not been patched in years and is likely full of security holes.

Kind of like patching potholes in the road, fixing security bugs in old software is not glamorous but it is critical.

If your organization is not dealing with this, that is a high priority problem to fix.

Information for this post came from The Register.


One Login Cloud Identity Manager Has Critical Breach

Onelogin, a cloud-based identity and access manager, reported being hacked on May 30th.  This is the challenge with cloud-based identity-as-a-service (IDaaS) managers.

WARNING: Normally I try to make my posts non-techie.  I failed at this one.  Sorry!  If the post stops making sense, then just stop reading.  I promise that tomorrow’s post, whatever it is, will be much less techie.

Onelogin’s blog post on the subject of the breach said that an attacker obtained a set of Amazon authentication keys and created some new instances inside of their Amazon environment.  From there the attackers did reconnaissance.  This started around 2 PM.  By 9 PM the attackers were done with their reconnaissance and started accessing databases.

The information the attackers accessed included user information, applications and various kinds of keys.

Onelogin says that while it encrypts certain sensitive data at rest, it cannot at this time rule out the possibility that the hacker also obtained the ability to decrypt the data.  Translating this into English: since Onelogin can decrypt the data, it is possible or even likely that the hacker can also decrypt it.

That is all that Onelogin is saying at this time.

Motherboard says that it obtained a copy of a message that Onelogin sent to its customers.  Onelogin counts around 2,000 companies in 44 countries as customers.  The message gave instructions on how to revoke cloud keys and OAuth tokens.  For Onelogin customers, this is about as bad as it can get.  Of course, Onelogin is erring on the side of caution.  It is possible – but no one knows – that all the attackers got was encrypted data before they were shut down.  It is also possible that they did not have time to send the data home.  But if they did get the data home, they have the luxury of time to decrypt it, hence the reason that Onelogin is telling customers to expire anything and everything, from keys to certificates to secret phrases.

The way Onelogin works, once the customer logs into Onelogin’s cloud, Onelogin has all the passwords needed to be able to manage (aka log in to) all of a company’s cloud instances and user accounts.  In fact, one of the reasons that you use a system like Onelogin is that it can keep track of tens or hundreds of thousands of user passwords, but to do that, it needs to be able to decrypt them.  Needless to say, if they are hacked, it spells trouble.

One important distinction: consumer password managers like LastPass also store your passwords in the cloud to synchronize them between devices, but those applications NEVER have the decryption keys.  If the encryption algorithm is good and the passphrase protecting them is well chosen, even an attacker with a lot of resources will take a long time to decrypt them.

For those people (like me) who are extra paranoid, the simple answer to that problem is to not let the password manager sync into the cloud.  It still works as a local password manager, it just won’t synchronize between devices.
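The split described above – the key derived from the user’s passphrase on the device, the server storing only ciphertext and a random salt – can be sketched in a few lines. This is a toy illustration, not production cryptography: real clients use a memory-hard KDF and an authenticated cipher such as AES-GCM, not the HMAC keystream improvised here.

```python
# Toy sketch of client-side encryption: the server never sees the
# passphrase or the derived key, only (salt, ciphertext).
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2 illustrates key derivation; real clients prefer a
    # memory-hard KDF (scrypt, Argon2).
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def xor_keystream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: HMAC(key, block counter) XORed with the data.
    # Applying it twice with the same key decrypts.
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hmac.new(key, offset.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
ciphertext = xor_keystream(key, b"my-banking-password")
# The cloud stores only (salt, ciphertext); without the passphrase,
# neither the server nor a thief of its database can recover the plaintext.
```

That is exactly the property Onelogin could not offer: to log you in to thousands of downstream systems on your behalf, it had to hold keys it could use itself.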

Gartner vice president and distinguished analyst Avivah Litan says that she has discouraged the practice of using services like that for years because it is like putting all of your eggs in one basket.  I certainly agree with that.  However, it also is convenient.  A lesser risk scenario would be to have the system that manages those passwords completely under your control.  You get to control the security and if an instance is attacked, only one customer is affected, instead of thousands.

This does not spell the end of cloud computing as we know it.

It is, however, a reminder that you are ultimately responsible for your security and if you choose to put something in the cloud (or even in your own data center), you need to first understand the risks and accept them, and then put together and test an incident response plan so that when the worst-case scenario happens, you can respond.

For a customer of Onelogin with, say, just a thousand users, and say those users only have ten passwords each, that means that 10,000 passwords across hundreds of systems likely have to be changed.  Many of their customers are ten or fifty times that size.  And those changes have to be communicated to the users.

Incident response.  Critical when you need it. Unimportant the rest of the time.

Information for this post came from Onelogin’s blog and Krebs on Security.


Booz | Allen | Hamilton Can’t Catch A Break

In 2013 Booz employee and NSA contractor Edward Snowden flew to Hong Kong after leaking huge quantities of highly classified NSA documents, proving that even the NSA is challenged to keep secrets under wraps.  Those documents are still being dribbled out today.

Earlier this year, when the FBI was trying to track down the Shadow Brokers NSA tools leak, they came across Harold Martin III.  Another Booz employee, Martin was found to have 50 gigabytes of highly classified information in his house, car and backyard sheds.  50 gig is the equivalent of a half billion typed pages.  While the FBI has not said that he was selling stuff to the Ruskies or Chinese, they are certainly not happy with him stealing all that stuff from the N.S. of A.

Last week security analyst Chris Vickery was scanning Amazon S3 storage “buckets”, as they are called, and came across an unprotected one in the public area of Amazon’s East Coast Data Center US-East-1.

As he was rummaging around in this bucket he found the public and private SSH keys of a Booz engineer.  This engineer is located in Alexandria, Virginia, near Fort Belvoir, home to many sensitive projects including the National Geospatial-Intelligence Agency, or NGA.

Among the other things in this bucket were the master credentials to a datacenter operating system (what exactly that is is not clear, but it is certainly not good).  Also there were access credentials to the GEOAxIS authentication portal, a highly sensitive Pentagon system.

Also in the bucket were access credentials to another S3 bucket, but since this one had a password on it, Chris didn’t want to press his luck – and likely break the law – by using those credentials to log in there.  If a hacker had come across this bucket before Chris, the hacker’s ethics probably would not have prevented him from exploring that other password protected bucket.  I am sure that everyone is trying to figure out who else – like the Russians – knew about this unprotected bucket.

Chris thinks this lines up with another Amazon bucket he found in April that had in it, among other things, an application security risk assessment listing 3,000+ security issues with a program’s source code.

One would think, password or no password, this stuff probably belongs in Amazon’s walled garden (the one with the snipers on the roof) called GovCloud.  GovCloud is designed to be more difficult for snoopers like Chris to get into because you and I can’t even get through the front door, never mind wander around aimlessly.  But this stuff was not there.

Finding this stuff and thinking this is not good, Chris emailed Booz’s Chief Information Security Officer.  For 24 hours he did not receive a response.

So, Chris went nuclear.  He reached out to the National Geospatial-Intelligence Agency directly.  NINE MINUTES LATER, the bucket was secured.  That’s got to be a record.  This was on a Thursday.

On Friday, a government agency that asked not to be named (but, of course, is likely one of the three letter agencies) reached out to Chris’s employer, Upguard, and asked them to preserve all evidence, which I am sure they will do.

After the article was published, Booz issued a statement saying that no classified data was stored in this unprotected, not-approved-to-store-anything-classified Amazon cloud (that’s encouraging).  They said that they took action to secure it as soon as they learned about it and that may be true, even though Booz did not seem to do anything with Chris’s email until he contacted the NGA.

Likely, especially compared to Ed Snowden and Hal Martin, this is small change, but still, it is embarrassing.

If Booz had privately discovered it and told the gov, it probably would have been chalked up to the mistake that it was, but because they were publicly called out – both Booz and NGA – the investigation will likely go deeper and take longer.

The government does reserve the right to spank contractors who breach security, but that spanking, if it does occur, will likely occur in private.

But besides it being embarrassing to Booz and their customer, it should be a wake-up call for all companies.

Here’s why.  Can you say with any certainty exactly what data of yours lives somewhere in the cloud – maybe on an employee’s personal cloud account like personal Dropbox?  Possibly without a password.  Possibly without any permission or approval.

If you are a company larger than a dozen people and you answer that you do have certainty, I suggest that you are fooling yourself.  For those with fewer than a dozen employees – it is still not clear.

If you don’t have a process for managing your company owned cloud services, you, too, could be in the same boat that Booz was – publicly publishing stuff that should not be public, but not knowing it.
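Part of such a process is mechanically checking who can read your buckets. A world-readable S3 bucket shows up in its ACL as a grant to the special “AllUsers” or “AuthenticatedUsers” group URIs. The sketch below checks grant records for exactly that; the grant dictionaries mirror the shape AWS’s get-bucket-acl API returns, but fetching them (e.g. with boto3) is left out so the logic stands alone:

```python
# Flag S3 ACL grants that expose a bucket to the world.
# These two group URIs are what make a bucket public or
# readable by any AWS account holder.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the grants that expose the bucket beyond its owner."""
    return [g for g in grants
            if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES]

# Example ACL: one private owner grant, one world-readable grant.
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
print(len(public_grants(grants)))  # prints 1
```

Run something like this against every bucket, on a schedule, and an open bucket becomes an alert you see before a researcher – or the Russians – does.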

That is a task that is not easy to deal with, but if you manage sensitive data – whether that data belongs to you or a client of yours – it is important to know the answer.  We see WAY too many incidents of companies accidentally exposing data that they did not mean to expose.

Information for this post came from Gizmodo.


Do You Know If Your Data Is Safe?

For years we have been worrying about whether the apps (or applications) that we use are secure.  Now we have to worry about whether the back end servers that our apps talk to are secure.

You may remember that recently hackers discovered thousands of Mongo database servers that had no admin password and created a form of ransomware: they either encrypted the database in place, or uploaded the database and deleted all the tables.  If you didn’t have good backups (and people who put databases on the Internet with no admin password probably don’t have good backups either), you had to pay the ransom.

Well, researchers, never content to leave bad enough alone, decided that if there were thousands of Mongo database servers out there with no password, they should find out what else might be out there.

Security vendor Appthority found over 1,000 iPhone and Android apps whose backend databases were not properly secured.  And, I am pretty sure, the search was not exhaustive.

The research focused on two open source products – MySQL and ElasticSearch.  Open source is not really the issue here; poorly configured software is the issue.

Their analysis found 43 terabytes of unprotected data in 21,000 wide open databases.

They call what they found HospitalGown, and it is not a bug.  It is merely hackers looking for databases that operations people did not bother to secure.  The Mongo database fiasco last December was caused by Mongo’s default configuration not having any security; enabling it required users to change the default install.
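For Mongo specifically, closing the hole comes down to a couple of lines in the server’s configuration file. A sketch of the relevant mongod.conf fragment – the values are illustrative defaults, not taken from any particular deployment:

```yaml
# mongod.conf – the two settings that would have prevented the fiasco
net:
  bindIp: 127.0.0.1        # listen on loopback only, not the public Internet
security:
  authorization: enabled   # reject clients that have not authenticated
```

Neither setting is exotic; the point is that someone has to change the default install, and at tens of thousands of sites, nobody did.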

Whether that is the case here or not, what is likely just a sample of the whole Internet found 43 terabytes of wide open data.

Appthority did notify both Apple and Google about at least some of the insecure apps, and also notified Amazon about databases that were hosted there.

Still, there are probably tens of thousands – or more – databases out there that are still not protected.

One component of vendor risk management is to look at where your data is hosted, whether your vendors have conducted third party risk assessments and how they ensure your data is protected.  I suspect that none of these app developers have done a vendor risk assessment.  Have you?

Information for this post came from eWeek.
