The Safety Of Using Your Facebook ID To Sign On To Other Websites

UPDATE:  Apparently PayPal was one of the companies affected by some of these OAuth security holes; they just released a fix (Dec 1, 2016) for a bug that would allow hackers to steal OAuth tokens from the payment apps of third party developers.

Many web sites encourage you to sign on with your social media userid and password.  Different sites allow you to use different social media accounts such as Facebook or LinkedIn.

But no matter which social media account you use, the same technology, called OAuth 2.0, is used behind the scenes to make this happen.

I have never been a fan of doing that, but not for the reason I am about to talk about.

For me, the issue is that, by definition, when you share your credentials with another site, they connect your visits and, of course, sell your data.  As an example, if you sign on to sites A, B and C using your Facebook userid and password, then Facebook knows that you are visiting sites A, B and C and it may get other information from those sites as well.

In addition, if you use your Facebook credentials at those sites and any one of the sites where you are using that userid and password has a breach, then all of those sites are compromised.  So even if you think that Facebook has good security (and it likely does have better than average security), the weakest link in that chain will compromise ALL of those sites.

Now we have another reason not to “share” userids.

Back in January, researchers at the University of Trier found two security glitches in the OAuth protocol and made recommendations on how to fix the bugs.  Whether any given site has, in fact, fixed those bugs is unknown and impossible for you as a user to tell.

Now researchers have identified 4 bugs in OAuth that compromise the security in the system and, of course, since that paper is available in the SANS library, hackers know about it also.

OAuth was designed to allow users to log in to web sites, but now it is being used for mobile apps.  In addition, it turns out that the OAuth specification is complex and convoluted, so, apparently, many developers have not implemented it correctly in the mobile space.

Researchers looked at 600 Android apps.  They used Android apps not because they are more or less secure, but because the Android architecture made it easier to examine the code.

Of those 600 mobile apps, 182 allowed the user to log on using their social media accounts.  Of the apps that allowed social media logins, 41% had security issues with their implementation of OAuth.  For example, some developers did not check the validity of the information sent back from the ID provider.  Others only looked at the returned user ID and didn't bother to check whether the provider said that the credentials were valid.  There were a number of different security issues.
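
To make that failure concrete, here is a minimal sketch (in Python, with invented field names – this is not any real provider's API) of the server-side check that many of those apps skipped: verifying with the identity provider that the token is valid, unexpired, and was issued to your app, rather than just trusting the user ID that comes back.

```python
import time

# The `introspection` dict stands in for the response from a provider's
# token-introspection endpoint (Facebook's /debug_token is one real
# example); the field names used here are illustrative assumptions.

def is_token_acceptable(introspection: dict, expected_app_id: str) -> bool:
    """Accept a token only if the provider vouches for it AND it was
    issued to our application AND it has not expired."""
    if not introspection.get("is_valid", False):
        return False                   # provider rejected the token
    if introspection.get("app_id") != expected_app_id:
        return False                   # token was issued to a different app
    if introspection.get("expires_at", 0) <= time.time():
        return False                   # token has expired
    return True

# The vulnerable pattern the researchers describe would instead do:
#   user_id = introspection["user_id"]   # trust the ID, verify nothing
```

Any token that fails even one of these checks could belong to an attacker or to a different application, which is exactly the class of bug the researchers found.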

While that 41% amounted to only 75 apps, scale that up to the millions of apps out there and you can see that this could be a big problem.

Unlike SSL, where sites like SSL Labs let you test any web site's implementation of SSL – at least to a degree – there is no equivalent way to test any particular web site's OR MOBILE APP's implementation of OAuth.

As we said, while these tests were done with Android apps, there is no reason to believe that developers coded their OAuth implementations any better on Apple devices than on Google devices.

So if you weren’t squeamish about logging on to some random website using your social media userid and password before, you may be now.    Of course, if you follow best practice and do not share passwords across web sites, then using social media IDs and passwords at different sites violates that rule.

Just food for thought.

Information for this post came from SC Magazine, the SANS reading room and Forbes.


SF Muni Hit By Ransomware

UPDATE:  While the ticket kiosks are back online, the hacker says that if the Muni doesn't fix its security problems and pay the ransom by Friday, they are going to release the data that they have taken.

Passengers entering the San Francisco Muni rail system were greeted by the message “You Hacked” on Friday when they attempted to purchase a ticket.

Later, handwritten signs on the ticket machines said FREE MUNI.

While the rail operator has been very quiet on what is going on, the hacker is not.  Some of the messages from the hacker include:

“You Hacked, ALL Data Encrypted.”  The bad English could easily be an attempt to disguise where the attackers came from.

The attackers are supposedly asking for 100 Bitcoin or roughly $75,000.

The agency is “using very old system’s !” the person behind the email address said.  “We Hacked 2000 server/pc in SFMTA including all payment kiosk and internal Automation and Email and …!”

“We Gain Access Completely Random and Our Virus Working Automatically !” he continued. “We Don’t Have Targeted Attack to them ! It’s wonderful !”

“We Don’t live in USA,” he said. “Sorry For My English anyway ;)”

The attackers claim to have taken 30 gigabytes of data, which may seem like a lot, but in today’s world, it is pretty small.

While shoppers on Black Friday had a free ride, by Cyber Monday, the ticket machines, at least, were working again.

While Muni officials are saying that they are investigating and that it would be inappropriate (or embarrassing) to comment, others are talking.  Hoodline, a Bay Area news blog, said that payroll, email, Quickbooks, Nextbus operations, MySQL databases and other data had been taken.

If that is true, this could be a big deal.  While some federal agencies (HHS, for example) and I have said that you need to ASSUME that if hackers encrypt your data, they could easily have a copy of it, we now have more evidence of this actually happening.

If the comments from the hackers are true, they have control of over 2,000 computers at the agency, roughly a quarter of all of the agency’s computers.  They will need to assume that the other three quarters of their computers may be infected even though they are not showing symptoms.  YET!

Assuming that they even have backups for 2,000+ computers, which is HIGHLY unlikely, rebuilding and restoring 2,000 computers could take weeks – or more – depending on the resources available.
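
As a rough sanity check on that "weeks – or more" estimate, here is some back-of-the-envelope arithmetic.  The staffing and throughput numbers are my assumptions for illustration, not SFMTA figures.

```python
# All numbers below are illustrative assumptions, not SFMTA figures.
machines = 2000            # computers the hackers claim to control
techs = 10                 # staff available to rebuild machines
per_tech_per_day = 5       # wipe, reimage, restore, verify -- optimistic
days = machines / (techs * per_tech_per_day)
print(days)                # 40.0 working days -- roughly two months
```

Even doubling the staff only cuts that to a month, and that assumes the backups exist and restore cleanly.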

Apparently, the attack is a “Spray and Pray” style attack, meaning the SFMTA was not targeted.  Typically these attacks work by sending out millions of emails and whoever opens them or clicks on a link in the infected emails becomes the next victim.

If the hackers do have the data, then the SFMTA has a significant breach to deal with.

For businesses and now government agencies, this is something I have been saying would happen for months – not only do they have to worry about rebuilding their machines, potentially losing data if they don’t have backups and maybe paying a ransom, but now they have to add to that list, having their data compromised and possibly being publicly released.

In this case, the hackers merely encrypted the computers that run the ticketing and other business systems.  What if they had compromised the systems that actually run the trains – similar to the attack in Ukraine last year that blacked out part of the country?  Depending on what they did, the Muni could be down for weeks.  Or more.  In the case of the Ukraine attack, the hackers DAMAGED the automation equipment, making it difficult or even impossible to repair, so that replacing a lot of very expensive hardware was the only option.  That equipment is not the kind of stuff that you can buy at Home Depot or Best Buy.

Being prepared for these types of attacks takes time and money and requires people to stop doing risky things.  For many businesses, dealing with this is just not a priority.  I predict it is now a priority for the SFMTA.  This will likely cost them 10 to 100 times what it would have cost them to be prepared.  The good news is that if they fall under the umbrella of governmental immunity, it will be very hard to sue them and there is not an alternative railroad for customers to use instead.

Information for this post came from the San Francisco Examiner and Fortune.

[TAG:BREACH]


Free is Not Always Free

We don’t seem to remember history very well, so, I guess, we are doomed to repeat it.

Trojan Horse from Flickr under Creative Commons License by Playinto

A Chinese company, ADUPS, makes a technology that a number of phone manufacturers buy and use.  It allows the manufacturer to update the firmware in a phone or IoT device over the air (meaning, I assume, over the cellular, WiFi or Bluetooth network, not literally over the air).  This gives the manufacturer a lot of control over the device.  This is really no different from what Sprint, AT&T and Verizon do, but HOPEFULLY, they have more self-control.

In ADUPS case, their technology was integrated into inexpensive phones made by, at least, ZTE, Blu and Huawei and sold by Amazon and Best Buy, among others.

The phones sell for between $50 and $100 and, apparently, are quite nice.  The stated reason that they sell for so little money is that the user agrees to accept onscreen advertising.  But how, exactly, do they target that advertising?

Kryptowire bought and tore apart a Blu phone from Amazon and guess what they found?

The phone transmits full text messages, contact lists, call history with phone numbers and the phone ID (IMSI and IMEI, depending on the device).  It can target specific users using remotely defined keywords.  It also collects information on the applications installed, bypasses the Android permissions model and executes remote commands with system privileges.  Finally, it has the ability to reprogram the device.

Being security conscious, ADUPS encrypted the data – wait – before it transmitted it to several servers in Shanghai, China every 72 hours.

Kind of sounds like a Trojan horse, doesn’t it?

ADUPS claims to have over 700 million active users;  they have offices in Shanghai, Shenzhen, Beijing, Tokyo, New Delhi and Miami.

Kryptowire has a graphic in their article, captured below, that compares this to CarrierIQ – the spying software that US Carriers used a couple of years ago that raised such an uproar.  While neither one was cheered by privacy advocates, this new one seems to be worse.

(Kryptowire comparison graphic: http://www.kryptowire.com/adups_security_analysis.html)

As you can see, there are a lot of similarities but a few “improvements” such as remote firmware update.

ADUPS, on their web site, said they do this to screen out junk calls and texts.  First, at least for me, those don’t seem to be a huge problem and second, if they were honestly doing this wouldn’t they tell the owner of the device and give them a way to see what they are doing?  That excuse doesn’t hold much water.

ADUPS claims that after they were outed, they disabled (but did not remove) the feature.

They also say that they take privacy seriously and didn't disclose the text messages, contacts and phone logs to anyone before they were caught.  That doesn't mean that they didn't disclose other information; they just (maybe) didn't have time to disclose this particular information before they were caught.

This is why security researchers are so critical.  You or I don’t have the time or skill to tear apart a phone and figure out what people are doing.  If some folks in Congress have their way, this type of research will be completely illegal.

So just remember, if someone offers you a free (or nearly free) Trojan horse OR phone, you do get what you pay for.  And likely, something extra – also for free.

It will be interesting to see if this software shows up elsewhere in the U.S.  Based on where their offices are located, their target market seems to be China, Japan, India and Latin America, where the loss of privacy may be outweighed by the benefit of getting a full-featured phone at a very low price.

Information for this post came from Brian Krebs,  Kryptowire and from a statement by ADUPS.


Securing Your Web Conferencing

We have all used web conferencing tools at some time.  Some of us use them a lot, but does anyone other than me worry about the security and privacy of these solutions?  Examples of these services are Webex and GoToMeeting, but there are dozens of these tools, at least.

Brian Krebs wrote a piece a while back that was sufficiently worrying to Webex that they sent out an all client alert.

In that case, the problem was not a bug, but rather poor security practices.

Krebs searched for Webex meetings that were not password protected.  I guess if you are using a Webex-like tool to advertise or promote your product, you are probably OK with everyone and anyone – especially your competitors and the Chinese – lurking in your meeting and taking whatever information they find useful, but if your meetings are more sensitive than that…..

For many of these companies, Brian simply went to the company’s Webex event center and found the unprotected recurring meetings sitting there.  Not very hard.  The event center very conveniently shows you the time, subject, host and duration.

A few of the companies that Brian found had recurring, non-protected meetings included Charles Schwab, CBS, The Department of Energy, Fannie Mae, Jones Day, Orbitz and many others.  When Brian reached out to Webex, they contacted their customers, so, hopefully, at least these Webex customers have tightened things down a bit.

Here are some general tips to making your web conference meetings more secure.  Since every product has different features, some methods may not work on every product.

  1. Make your meeting UNLISTED in the portal.  Unless you need the meeting to be publicly known, make it unlisted.  At least then only people who have been told about the meeting will know about it.  Of course, this is only a very light touch on security because security by obscurity is not very strong. Still, no sense advertising.
  2. Require a complex password.  This won’t improve security if someone has the email with the link in it, but it will help protect you from people barging in to your meeting.
  3. Disable JOIN BEFORE HOST.  That way the host can control who joins the meeting. More work for the host but more secure.
  4. Webex has the concept of a meeting lobby.  You can lock your meeting and leave everyone in the lobby, granting access to just those people that you want in the meeting.  This is kind of like the difference between leaving your front door open and locking it.  When the door is locked, you greet each person who knocks on the door and can choose whether you want to let them in.
  5. Exclude the meeting password from the invite email.  This requires you to get the password to them via a different method, but this may be useful if the meeting is sufficiently sensitive.  Assume that email is already compromised unless you have some type of special, secure email.  Generally, I would say that normal corporate email is only somewhat secure and any given person’s email (and their phone or computer) may definitely be compromised.
  6. Make sure that the system generates a tone every time someone enters or exits the meeting and requires them to provide a name. Of course, the name could be a fake, but still it is another piece of data that you have.
  7. Request that attendees do not forward their invitations to others.  Of course you are counting on people to do what you ask, but sometimes people don’t realize that their coworkers or friends are not invited.
  8. Lock down the meeting once everyone you expect to be in the meeting is already there.  This stops people from joining after the meeting starts while your attention is on the meeting and not on who is joining late.
  9. Regarding the phone portion of a meeting, at least Webex, and maybe others, does not require a password.  All you need is the dial-in number and conference ID.  For many web conferencing platforms, you can SEE how many people have dialed in.  Count the number of people who should be dialed in and the number of people who show up in the portal, and if they are not the same, you have a potential problem.
  10. Kick off anyone that you don’t recognize or isn’t authorized.  Skype for business does this well.  Just boot them out.  If it turns out to be someone who should be there they will let you know.  Security wins or at least should win in many cases.
  11. Share ONLY the application that you want to share and not your desktop.  This avoids accidental security breaches.  If you share a Powerpoint, for example, and something confidential pops up in email or messaging, only you will see it.
  12. If you are recording the meeting, put a strong password on the recording.  For attackers, in many cases the recording is better than the original  because their odds of getting caught are lower.
  13. Delete your recordings when they are no longer needed.
  14. Change your PIN periodically.  Just like a password, PINs do not age well.
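
On tip 2, if your conferencing tool lets you set the meeting password yourself, here is a quick sketch of generating a complex one with Python's `secrets` module, which provides cryptographically strong randomness (unlike the plain `random` module).

```python
import secrets
import string

def meeting_password(length: int = 16) -> str:
    """Generate a random meeting password from letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(meeting_password())   # a different 16-character password every run
```

Generate a fresh one per meeting; reusing the same password across recurring meetings defeats much of the point.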

These are just a few ideas and they will not make things bulletproof, but bullet resistant is a good thing.

[TAG:ALERT]


Information for this post came from KrebsOnSecurity, a Cisco customer alert and Webex's best practices for secure meetings.


Madison Square Garden Concessions Hacked for a Year

Madison Square Garden Company (MSG) announced this week that hackers compromised point of sale systems at a number of their properties.

Photo Courtesy of Rich Mitchell (Flickr) under a Creative Commons License

The properties include Madison Square Garden, the Beacon Theatre and Radio City Music Hall, among other venues.

Once again, it appears that the hack happened in the point of sale system that runs the concession stands at those properties.

The breach, they say, started in November 2015 and was shut down in October 2016 – a full year.

The data that was compromised is what we usually see – names, credit card numbers, verification codes and expiration dates.

MSG has not said how the hackers got in or how many cards they took.  We do not know, of course, whether this is because they have no idea or because they are keeping the details quiet in anticipation of one or more lawsuits.  If there are lawsuits, these details will likely become public, but that is an if.

As is usually the case, MSG did not figure out themselves that they were hacked.  In their case, it was not the FBI that came to visit them, but rather the credit card companies – the folks that get to eat the losses.

MSG also owns venues in Boston and Los Angeles but they have not indicated that the problem extended to those locations.

They did say that the data was compromised as it was being routed through the system for authorization.  If this is true, then it seems likely that MSG concession stands did not have chip card readers in place.  Chip card readers encrypt the data before it leaves the card reader, making it difficult to compromise while being routed through the system.  This is speculation on my side, but if it is right, then MSG has way more liability for the costs of the breach.

MSG has not said whether they have cyber insurance or whether they will be writing checks themselves.

They also have not said whether the concessions are outsourced to another company, which is relatively common.

They also have not said if a third party vendor who may have maintained the POS terminals was the source of the compromise.

For some reason, this story was a bit difficult to piece together, requiring a number of sources just to collect these limited facts.  MSG, I am speculating, just hopes to put this issue behind them quickly.

Information for this post came from the NY Daily News, Billboard and NBC TV 4 New York.

[TAG:BREACH]


Michael Page Recruiting Breach Caused By Operations Error – 750,000 People Affected

Michael Page / The Page Group is a family of international recruiters based in the United Kingdom, operating in 35 countries and employing over 5,000 people.

Like many companies, PageGroup outsourced at least part of their IT operations;  in their case to another huge firm, CapGemini.

Earlier this month, Troy Hunt (a Microsoft MVP and regional director) was contacted by someone who shared this screenshot below with him (click to enlarge):

Index of backups of web site

For those of you who are not geeks, this appears to be a list of backups of a number of SQL databases for a range of countries.  The source sent a small file to Troy (about 350 meg compressed), which expanded to almost 5 gig.  Extrapolating to the rest of the files, this represented about 30 gig of data.

One table in that database had fields called first name, last name, current job, location, current salary, telephone and other information.  This got Troy’s attention.  As he communicated with his source, he found out that the server where this data lived belonged to CapGemini, the mega international consulting firm with almost 200,000 employees in 40 countries.  Surely a firm as big and professional as CapGemini would not leave data like this exposed to the Internet.  Except that they did.

Troy, being a Microsoft Regional Director, had contacts at CapGemini, and when Troy called, he told his contact, “you are about to have a very bad day”.  That turned out to be an understatement.

The PageGroup released a FAQ on the breach (available here) talking about the breach and how it really wasn’t that bad.  In the grand scheme of nuclear war and global famine, that is absolutely true.  But to the 750,000 plus people whose information was compromised, that is probably not a great relief.

One more time, this is an example of a company (the PageGroup) being done in by one of its vendors (CapGemini).

The server was a TEST server – used to test new web sites.  The data, however, was not test data.

Lesson #1 – don’t use REAL data as test data.  A 12 billion euro company like CapGemini can invest in technology to create bogus test data so that they don’t have to risk the compromise of a client’s real data.  In the financial sector, where I worked for a long time, you can get fired for testing with real data.
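
Here is a sketch of Lesson #1 in practice: generating synthetic candidate records instead of copying real ones into a test database.  The field names mirror the leaked table described above; every value is invented, including the phone numbers (the +44 7700 900xxx range is reserved by Ofcom for fictional use).

```python
import random

FIRST_NAMES = ["Alice", "Bob", "Carol", "Dave", "Eve"]
LAST_NAMES = ["Smith", "Jones", "Patel", "Garcia", "Chen"]
JOBS = ["Analyst", "Engineer", "Recruiter", "Manager", "Consultant"]

def fake_candidate(rng: random.Random) -> dict:
    """One synthetic row shaped like the leaked table -- no real people."""
    return {
        "first_name": rng.choice(FIRST_NAMES),
        "last_name": rng.choice(LAST_NAMES),
        "current_job": rng.choice(JOBS),
        "current_salary": rng.randrange(30_000, 150_000, 1_000),
        # +44 7700 900000 to 900999 is reserved for fictional use
        "telephone": f"+44 7700 900{rng.randrange(0, 1000):03d}",
    }

rng = random.Random(42)       # seeded so the test data is reproducible
test_rows = [fake_candidate(rng) for _ in range(1000)]
```

A few dozen lines like this, or an off-the-shelf fake-data library, removes any reason to ever put a real person's record on a test server.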

But using real data was not the problem.

Some enthusiastic operations person figured they had better back up this data, and did.  Unfortunately, those backups, for some reason, were stored on a server exposed to the Internet.

Lesson #2 – Be careful where you put backups.  The public Internet may not be the best place.

Lesson #3 – Encrypt backups and do not leave the key “under the doormat”, so to speak.  If these backups had been encrypted with strong encryption and the keys not stored with the data, we would not be having this conversation.  If CapGemini was not encrypting their own backups, were they encrypting the backups of their clients?
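
To make Lesson #3 concrete, here is a sketch of encrypting a backup before it ever lands on a server, with the key kept somewhere else entirely.  The toy XOR keystream below (SHA-256 in counter mode) uses only the Python standard library purely to illustrate the idea; real backups should use a vetted library such as `cryptography`'s AES-GCM or Fernet.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream: SHA-256 of key || counter, repeated."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt              # XOR stream ciphers are their own inverse

key = secrets.token_bytes(32)  # store THIS in a KMS or offline vault,
                               # never on the backup server itself
backup = b"first_name,last_name,current_salary\nAlice,Smith,52000\n"
blob = encrypt(key, backup)    # this ciphertext is what gets stored
```

With only `blob` on the exposed server, the person who found that directory index would have had nothing readable to download.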

Even though this is an operational error, I suspect there are some “interesting” conversations taking place between executives at the PageGroup and executives at CapGemini.

It is unclear what the contract between the two companies says about security, but lesson 4 is:

Lesson #4 – Make sure that you have a vendor risk management program, that you audit your vendor’s security practices and that your contracts hold vendors responsible for their blunders.

While the 750,000 people affected are probably not happy, it could have been a lot worse.  What WE don’t know (CapGemini may know) is how long the data was out there and how many people other than the one who reported it to Troy downloaded it.  Could be no one, could be many people.  The data may have been out there for years – we don’t know.

For everyone else, there is a lesson to learn here.

Information for this post came from Troy Hunt’s blog and The Inquirer.

[TAG:BREACH]
