FTC Settles With Asus Over Security Claims

Asus is an international manufacturer of all kinds of computer and networking equipment.

The FTC, in this case, was not upset with Asus for making hardware that was buggy and insecure, thereby exposing customers' information, but rather for representing that its routers had numerous security features that could protect users from unauthorized access and hackers when, in fact, the routers were buggy and insecure.

In fact, under Section 5 of the FTC Act, as the Wyndham hotel chain discovered, the FTC could probably have brought an action in either case, but it is much clearer that saying a product is secure when it is not is deceptive.

According to the FTC,

ASUS marketed its routers as including numerous security features that the company claimed could “protect computers from any unauthorized access, hacking, and virus attacks” and “protect [the] local network against attacks from hackers.” Despite these claims, the FTC’s complaint alleges that ASUS didn’t take reasonable steps to secure the software on its routers.

The press release goes on to talk about some of the vulnerabilities and the fact that Asus did not address them in a timely or effective manner and did not notify consumers of the vulnerabilities.

Hopefully, this will act as a warning to manufacturers of Internet of Things devices that they had better maintain reasonable security or the FTC will explain to them why they should have.

In the agreement, Asus agreed to create a security program, have that program monitored by the FTC for the next TWENTY years, notify consumers of security flaws and workarounds for those flaws until they are patched, and let the FTC audit it every two years during that period.

For those in the IoT space, doing what is in this agreement without being told will likely keep them out of the crosshairs of the FTC.  The FTC is not expecting IoT devices to be bug-free, but it is expecting manufacturers to be responsible.

Manufacturers should consider themselves warned.


The FTC press release on the Asus settlement can be found here.

Yet Another Major Open Source Program Flaw Discovered – After 8 Years

Some people are big advocates of open source because, they say, since people can look at the source, bugs are found more quickly.

I am not a big supporter of that theory, even though I am a supporter of open source, because just because people CAN look at the source doesn't mean that they will, and just because they DO look at it doesn't mean they will find the bugs.

On a side note, OpenSSL, the super popular open source SSL software package used in many apps and on many web sites will be releasing patches on March 1st for multiple vulnerabilities.

Google announced this week another major open source software package vulnerability.  The package, glibc, provides basic functionality for C language software developers.  While not used by every C developer, it is an extremely popular library – likely used in tens of thousands of applications.

Going back to the open source conversation, this bug was introduced in 2008 – 8 years ago.  And it was only discovered by accident, when a Google developer kept crashing his system.  After some work, the Google team discovered that the crashes were caused by a bug in glibc.
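As a rough illustration of why the 8-year window matters, the affected range can be reasoned about by version number. This is a hypothetical sketch, not part of the original reporting: it assumes the bug entered upstream glibc with version 2.9 (the 2008 release) and was fixed upstream in 2.23. Note that distributions routinely backport security patches, so a version number alone does not prove a system is vulnerable.

```python
# Illustrative only: flags glibc version strings that fall in the
# upstream range affected by the getaddrinfo bug (assumed here to be
# introduced in 2.9 and fixed in 2.23). Distros backport fixes, so a
# version check like this is a starting point, not a verdict.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.19' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def possibly_vulnerable(version: str) -> bool:
    """True if the upstream version falls inside the affected range."""
    return parse_version("2.9") <= parse_version(version) < parse_version("2.23")

print(possibly_vulnerable("2.19"))  # True: shipped with the bug
print(possibly_vulnerable("2.23"))  # False: contains the upstream fix
```

Tuple comparison is used rather than string comparison because "2.10" sorts before "2.9" as a string but after it as a version.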

And the bug is pretty serious.  It allows a hacker to intercept a DNS request and reply with a specially crafted response, allowing the attacker to take over the computer by inserting an arbitrary program of up to 64,000 bytes and then running it.

The problem with these two bugs – and the fact that they are open source doesn’t really impact this issue – is that developers who use this package need to release an update and every single user needs to install that update.

In fact, these two open source packages are ATYPICAL because they both have teams that support them.  Many open source software packages don’t have formal support teams.

For major developers, such as many Linux distributions, there are likely patches already in the works and users will likely install them.

The problem comes with smaller software packages and dedicated hardware devices that use the library – from companies that may no longer support that version of the software or hardware, or that have gone out of business entirely.

Since glibc is a large library, many Internet of Things developers don't use it.  For us, that is a good thing.

But as an end user, we likely have no clue which software packages on our devices use the affected library.  Since the bug has been around for 8 years, any software product that uses the library likely uses the affected version.


The OpenSSL announcement – minus details as is their standard policy  – can be found here.

Information on the Glibc bug can be found in Ars Technica, here.

A Tale Of Chip and PIN

I went into my local grocery store tonight and went to use my credit card and poof – something new.

A couple of details first.

My credit card is actually a Visa logoed debit card meaning that, in theory, you should be able to use it as a debit or credit card.  Debit with a PIN, credit with a signature.

The store, King Soopers, a subsidiary of Kroger, the mega-supermarket chain, just upgraded its point of sale terminals this weekend to accept chip cards.  A little late, but better late than never.

There is now a handwritten sign on the register that says all debit cards now require your PIN.

So why is this of interest?

First, my bank has set up the card to be chip and signature and, according to my bank, there is no way for the store to change that to chip and PIN – the card is actually set up differently.  This means that the store – King Soopers in this case – is completing the transaction as a debit card instead of a credit card.  Why might they be interested in doing this?  Possibly, they think PIN transactions are less likely to be fraudulent.  Also possibly, debit network transactions are dramatically cheaper for the store.  Why do you think, for example, Walmart has made it very difficult for you to use your debit card as a credit card in their stores for years?  Money!

I have written before that chip and PIN is more secure than chip and signature, so why am I whining?

In the interest of full disclosure, I am not sure that I am whining – I am just not sure one way or the other.

My first complaint is that King Soopers is not being transparent with its customers.  For years I have used my Visa logoed debit card as a credit card and now, all of a sudden, with no explanation of what they are doing, they are forcing this to be a debit card transaction.  In terms of my bank account balance, there is no difference, so why do I care?

In part because I don’t trust their security.  Just this past month, Safeway stores discovered skimmers on a number of credit card terminals in their stores, including one near me.  If this happens to King Soopers and the skimmers capture the card information and the PIN, the thieves could, potentially, empty my bank account.  And if someone has my PIN, is the bank going to say that it must have been me that withdrew the money from the ATM?  It could be a fight.

Next, there are very different federal laws regarding recovering from fraudulent transactions on credit versus debit cards.  Radically different.  Even if the bank says that it will treat them the same, the LAW is very different.  The law favors credit cards.
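To make the difference concrete, here is a simplified model of the federal rules as I understand them: Regulation Z caps a consumer's credit card fraud liability at $50, while Regulation E ties debit card liability to how quickly the loss is reported. The thresholds below are a sketch of those rules, not legal advice.

```python
# Simplified model of consumer fraud liability (illustrative only).
# Credit cards (Regulation Z): liability is capped at $50, period.
# Debit cards (Regulation E): liability grows the longer you wait
# to report a lost or stolen card.

CREDIT_CARD_CAP = 50  # Reg Z: flat $50 cap regardless of timing

def debit_liability(days_to_report: int) -> float:
    """Maximum consumer liability for a lost/stolen debit card,
    based on how many days passed before the loss was reported."""
    if days_to_report <= 2:
        return 50            # reported within 2 business days
    elif days_to_report <= 60:
        return 500           # reported within 60 days of the statement
    else:
        return float("inf")  # after 60 days, liability is unlimited

print(debit_liability(1))   # prompt reporting: $50, same as credit
print(debit_liability(90))  # slow reporting: potentially everything
```

The asymmetry is the point: with a credit card the worst case is $50; with a debit card the worst case is your whole account.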

So what I told King Soopers is that, for the moment, I have decided not to shop there anymore.  In part, this is my way of voting against the lack of transparency.

Obviously, if I needed to shop there, I can pay cash.  There is an ATM in every grocery store if I don’t have cash.

I can also use a true credit card – they can’t force that to be a debit card – although I have not tested that, I am pretty sure that is true.

I may change my mind at some point in the future.  Right now, I am writing this to make sure that people ARE educated and understand what the situation is.  Most people are not as paranoid as me and won’t consider this to be a problem.  Who knows – maybe they are right.

California Attorney General Defines Reasonable Security

Some businesses have complained that the FTC has not been clear about what is required in order to be in compliance with Section 5 of the FTC Act and avoid being fined.

California, usually a leader in the privacy arena, has begun to put some detail to those requirements, at least for businesses that have customers in California.  After California implemented SB 1386, the first breach notification law in the U.S., other states followed over the next few years.  This is likely to be the case with this decision as well.

Kamala Harris, the California Attorney General, released a report this month on data breach impact in California between 2012 and 2015.

The report goes into some detail on the types of breaches, types of businesses, number of records breached and related information.  Retail was the leading breached business type, followed by financial and healthcare.

She then goes on to talk about reasonable security and the fact that the California information security statute requires businesses to use “reasonable security procedures and practices”.

She explains her definition of reasonable security as follows:

  1. The 20 controls in the Center for Internet Security’s Critical Security Controls identify a minimum level of information security that all organizations that collect or maintain personal information should meet. The failure to implement all the Controls that apply to an organization’s environment constitutes a lack of reasonable security.
  2. Organizations should make multi-factor authentication available on consumer-facing online accounts that contain sensitive personal information. This stronger procedure would provide greater protection than just the username-and-password combination for personal accounts such as online shopping accounts, health care websites and patient portals, and web-based email accounts.
  3. Organizations should consistently use strong encryption to protect personal information on laptops and other portable devices, and should consider it for desktop computers. This is a particular imperative for health care, which appears to be lagging behind other sectors in this regard.

The CIS 20 is an almost 100 page document, so I am not going to try to summarize it here, but it addresses inventory, configuration, continuous vulnerability assessment, controlling admin users, data recovery, need to know, wireless, account monitoring, incident response and penetration testing, among other things.

And I would agree with her – organizations that seriously take on the CIS 20 are likely to be far more secure than the average company.

On the other hand, doing this is a serious undertaking and likely affects many aspects of your business.

One other thought.  The California Information Security Law (AB 1950) also REQUIRES a company to enter into CONTRACTS with its sub-contractors to also implement these same controls.

What we don’t know yet is what the AG plans to do about this.  For example, the California law does not say that you are required to use reasonable security only in the event that your systems are breached.  This means that the AG could go after businesses for not implementing reasonable security, even if they have not been breached.  While I think this is unlikely, she certainly would get a lot of press if she decided to make an example of someone.

It seems more likely that she would go after a business if, in the event of a breach, her office's investigation discovers that the breached organization was not implementing her definition of reasonable security.

Bottom line is this –

If you are located in California, have customers located in California or do business with a business located in California, you now have some pretty clear guidelines for what you need to do.

The AG’s report is available here.

The CIS 20 controls are available here.

Information on CA AB 1950 can be found here.

Feds to Increase Audits Of Doctors’ Protection Of Your Information

The Health and Human Services Office of Inspector General (OIG) reported that the HHS Office for Civil Rights (OCR) is not effectively auditing HIPAA covered entities.  A covered entity includes doctors and hospitals that have primary ownership of your health records.  As a result, the OCR is establishing a permanent audit program and working to identify potential audit targets.

One place OCR is, apparently, going to be looking, is at business associates or BAs.  In HIPAA speak, BAs are those vendors that a doctor or hospital uses that have access to your information.  Under the rules, your doctor needs to not only have a written agreement with that vendor, but doctors have to use reasonable diligence to make sure that the security of your information is protected.

Also, the rules are changing regarding what is a breach.  It used to be that you only had to report a breach if there was significant risk of financial or reputational harm – as evaluated by the doctor or hospital.  Needless to say, most lost data did not present significant risk.  Now any breach has to be reported.

Unless the data is encrypted in a way that there is no reasonable way for the hacker to be able to read the data.

And, this includes mobile devices (PHONES!) that contain patient data, so just encrypt patient data wherever it lives.

A Massachusetts dermatology clinic discovered this the hard way when they lost a thumb drive.  Their wallet is now $150,000 lighter.

Doctors that use computerized record keeping systems called EHRs now need to provide copies of those records within 30 days of a request, down from the old 90-day window.  That could challenge doctors and hospitals that don’t have a system in place to do that.

And, there are many other rules that both doctors and their service providers need to comply with.

Now that the OCR is finally going to have an active audit program, expect more violations.  It's not that the violations weren’t happening before, it is just that no one was looking.

Those doctors and hospitals that do not have an active program for monitoring their HIPAA compliance may find themselves with a problem.  HIPAA and its cousin HITECH have been around for years.  One of the goals of HITECH was to put teeth in the enforcement of HIPAA.  That goal may have just been accomplished.

If you are a doctor, hospital or service provider to one, don’t say you did not know.

Information for this post came from Family Practice News.

MBA Panel Discusses Third Party Risk Issues

According to HousingWire, a panel at the Mortgage Bankers Association mortgage servicing conference discussed cyber risks, and one risk that seems to have the attention of regulators is that introduced by vendors.  All you have to do is think back to Target, Home Depot and the Office of Personnel Management (collectively around 200 million compromised records).  The entry point for the attackers in all three cases was a vendor.

The panel pointed to guidelines from The New York Department of Financial Services (NYDFS), which are voluntary now, but may not be voluntary for long.  NYDFS is working with many state and federal regulators to make their view of the universe the nation’s standard.

While NYDFS only regulates entities like banks, insurance companies and broker-dealers (among others), there is a food chain to consider.  If you sell to or provide services for one of these covered entities, then that entity is going to require that you measure up to their regulator’s rules.  Otherwise, the regulator will come after them.

The NYDFS wants its rules to be included in the contracts that regulated entities use.  That way there is no question.  If you don’t want to agree to these terms, then don’t do business with them.

Some of the rules include:

  • Requirement to use two-factor (or multi-factor) authentication.
  • Use of encryption at rest and in motion.
  • Notification in case of a breach (yes, believe it or not, some banks were recently found to not require that vendors tell them if the vendor was breached).
  • Indemnification in case the entity that is contracting for services experiences a loss due to the vendor being breached.
  • A requirement that the entity be able to audit the third party vendor (you may recall some issues around Blue Cross and their refusal to let the feds audit them.  With this clause in the contract, no audit, no payments).
  • Finally, reps and warrants regarding the third party’s information security.

This is only a partial list of the requirements, but as you can see, the implications are serious.  If Target’s refrigeration vendor had to indemnify them, the vendor would be out of business.

AND, the NYDFS is working with other regulators to get them to adopt these same rules.

So, while this only affects New York regulated entities and any company that does business with them, expect this to grow.  Look for a future blog post on what California is doing in this area.

One option is to wait until the rules are mandatory and then scramble to react to them.  Alternatively, you could be proactive and create a vendor risk management program under your timeline.  The second way may be less stressful.  It will allow you to grow the program over time as you work out the kinks.


Information for this post came from HousingWire.