Do Employers Have To Protect Employees' Personal Information?

At least in Pennsylvania, a court says the answer is no.  Here are the details.

The University of Pittsburgh Medical Center (UPMC) was hacked and employees' personal information was stolen and used to file fraudulent tax returns.  The information taken included names, Social Security numbers, birth dates, addresses and salaries.

The Superior Court of Pennsylvania recently ruled that the employees had no reasonable expectation that the data would be kept safe.

Really?  You have to be kidding!

In the court's defense, it found that Pennsylvania law does not require employers to protect employees' personal data.

The court reasoned that the workers turned over their data as a condition of employment, not for safekeeping, so they had no expectation that it would be kept safe.

The court went on to say that businesses should not be required to spend the money to protect employees' data since there is no guarantee that they won't be hacked anyway.

That seems sort of like saying that car makers shouldn’t have to spend money on making your car safe since it is not possible to guarantee that nothing will ever go wrong with your car.

The judge claimed that the benefit of storing this information electronically outweighed the downside that the data might be compromised.

This is good news for employers in Pennsylvania since, apparently, they don't have to spend any money protecting employee records.  It is bad news for employees, who apparently have no recourse if employers do not adequately protect their information.

The Superior Court is one of the appeals courts in Pennsylvania;  it is unclear what recourse the employees might have to appeal this further.

It also only applies in Pennsylvania, so maybe the rest of the country is still safe.

The challenge, of course, is that the law moves very slowly compared to the rest of the world.  And for the rest of the world, that is a problem.

I don't pretend to be a lawyer, even on the Internet, so this may be a perfectly reasonable decision legally.  As a non-lawyer, though, it seems like an insane decision.  These people were hurt.  Because the hackers filed false tax returns first, when the employees filed their real returns later, they either won't get their refunds or will have to spend time and money to get them.

In its decision, the court asked why employers should have to spend money to protect employees' information, yet it apparently has no problem forcing employees to spend money to deal with their employer's lack of security.  That doesn't seem right to me.

The courts are basically saying that the Pennsylvania legislature needs to deal with the problem, not the courts, and I can understand that.  In the meantime, though, 60,000 employees are left with a mess not of their making but at their cost to clean up.

Information for this post came from Network World.


Hospital System Fined $5.5 Million For Not Controlling Access

Memorial Healthcare System in Florida was fined $5.5 million for allowing the information of about 115,000 patients to be accessed "impermissibly".

Memorial, which operates 6 hospitals, an urgent care center, a nursing home and other healthcare facilities in South Florida, reported the breach in 2012 – 5 years ago – after it discovered the problem.  Exactly why it should take Health and Human Services 5 years to complete an investigation is a mystery to me.

The information taken includes names, birth dates and social security numbers.

Apparently, two employees who worked in an affiliated physicians’ office accessed the hospital’s systems for a year, stole patient records of over 100,000 patients and used that data to file fraudulent tax returns.

After discovering  that employees had been stealing data for a year, Memorial worked with federal law enforcement which ultimately led to the conviction of the people who filed the false tax returns using that stolen data.

Apparently, even though Memorial had been told for the six years prior to discovering the breach that reviewing employee data access records was a risk, they still did not review those records.

As part of the settlement, Memorial denied any guilt.  It seems to me that, if they had been told for six years that something was a risk and chose not to deal with it, they have some degree of guilt.  Not admitting guilt is fairly typical in these deals so as to avoid giving plaintiffs who might be suing them any additional leverage.

It appears that the credentials used to access these records were legitimate, but it is unclear to me how the physician’s office staff got access to them.

This brings up the bigger issue of logging and auditing – something that affects all businesses.  The thieves were not using credentials assigned to them when they stole the data, so only a review of the access logs could have caught them.

We are seeing more regulators requiring businesses to maintain more comprehensive audit logs and processes.  Besides the HIPAA regulators, DoD and some state regulators have issued new rules or opinions.

But in addition to creating audit logs, you also need to review them and generate alerts based on that review.  For a business like Memorial, that likely means reviewing millions or even tens of millions of audit records.  That requires both software and people, and those require money, which is likely at the root of the issue.  After they discovered the breach, they did implement a review process, but the earlier decision not to review data access records cost them a $5.5 million fine as well as a multi-year corrective action plan with the HIPAA regulator.
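A review process like the one Memorial eventually implemented can start very simply.  This sketch (the field names and the threshold are hypothetical; real EHR audit logs carry timestamps, actions and much more) counts how many distinct patient records each user touched and flags anyone over a threshold:

```python
from collections import defaultdict

def flag_suspicious_access(audit_events, max_patients_per_user=50):
    """Flag users who accessed an unusually large number of distinct
    patient records.  Each event is a (user_id, patient_id) pair."""
    patients_by_user = defaultdict(set)
    for user_id, patient_id in audit_events:
        patients_by_user[user_id].add(patient_id)
    return {user: len(patients)
            for user, patients in patients_by_user.items()
            if len(patients) > max_patients_per_user}

# Example: one account touching far more records than a normal user
events = [("nurse1", f"pt{i}") for i in range(10)]
events += [("clerk9", f"pt{i}") for i in range(500)]
alerts = flag_suspicious_access(events)
print(alerts)  # only clerk9 exceeds the threshold
```

A real review would also look at time-of-day patterns, records outside a user's department, and accounts that should be dormant – but even a crude volume check like this would have surfaced a year-long, 100,000-record theft.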

This represents a great opportunity for businesses in general to review their auditing processes – what audit data are we collecting, does it meet the regulatory requirements, how long do we store it and how do we analyze it – to verify that they are appropriate for both compliance and business requirements.

Information for this post came from the Sun Sentinel and Health and Human Services.


Should You Take Your Phone To The United States?

An article on BBC.com really is asking that question.

Recently, Sidd Bikkannavar, a U.S. citizen and engineer at NASA's Jet Propulsion Laboratory, was stopped at customs in Houston.  He was returning from Chile, where he had been racing solar-powered cars.  Customs demanded that he hand over his phone and its PIN.  He protested that it was a NASA phone and contained sensitive information, but they told him he had to give them the phone.  After he gave them the PIN, they took the phone away and brought it back 30 minutes later.  Likely, they made an image copy of it.

Sidd had even been cleared through Homeland Security's Global Entry program, which does a background check on you in advance to speed you through customs and immigration.

Homeland Security Secretary John Kelly has talked about requiring visa applicants to turn over their social media account passwords.  No passwords, no visa.

Some people are suggesting that downloading the contents of your phone and/or laptop is going to be standard issue to cross the border, both in the United States and other countries.

The BBC author decided to ask some questions, thinking this might be a bit extreme.

The UK Foreign Office said that it didn't have any advice on the subject, but if the author was "trapped in immigration at JFK" with a Customs and Border Protection agent demanding his password, he could call the British embassy and arrange for a lawyer.

The American embassy said that they would need to contact Washington and call him back.  He is still waiting for that call.

If you have a concern, then leaving anything sensitive at home might be wise.  Alternatively, you could encrypt your data and upload it to the cloud, download it once you are across the border, and reverse the process before you go home.  Make sure you scrub the laptop with something like CCleaner after you do that.
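If you go the encrypt-and-upload route, use a vetted tool such as GPG or age for the actual encryption; the flow is what matters here.  This toy sketch (a hand-rolled keystream that is NOT suitable for real secrets) just illustrates the round trip: encrypt locally, carry or upload only the ciphertext, and decrypt once you are across the border:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key+nonce+counter.
    Illustration only -- use GPG, age, or similar for real data."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh nonce per message
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)  # this key stays home, or in your head
blob = encrypt(key, b"travel notes: nothing sensitive left on the laptop")
assert decrypt(key, blob) == b"travel notes: nothing sensitive left on the laptop"
```

The point of the exercise: what crosses the border with you is only ciphertext, and the key travels separately (or not at all).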

I know of at least one company that gives employees burner laptops when they travel.  The only data on the device is data that is (a) necessary for the trip and (b) approved by security.  When the trip is over, the device is sanitized and re-inventoried for the next trip.

Obviously, everyone’s level of paranoia is different, but it seems like if you can reduce the threat level, that is always better.  This is one case where less is more.

Given the amount of storage on all mobile devices these days (a phone with a hundred gigabytes of storage or more is not unusual), it is likely that there will be sensitive data on your device if you don’t do something about it.

And once you are across the border, then you only have to worry about the espionage agents of the host nation you are in.

Countries like France have a long and storied history of going into foreign business persons’ hotel rooms and cloning the disk on their laptop.  They are hardly alone.

The Department of Defense has a detailed briefing for service personnel and contractors crossing the border.

In your case, the data in question could be trade secrets, business plans or just naked selfies of you and your friends.

In the case of Sidd, he contacted NASA security as soon as he could, powered down all of his devices and let them deal with it.  They gave him new devices (really the only safe bet after you have lost physical control of a device) and will handle the old ones.

For business people traveling internationally, it is probably better to plan for the worst and hope for the best than the alternative.

I may be a pessimist, but I don’t think it is going to get better any time soon.

 

Information for this post came from BBC.


Cisco, Juniper Hardware Flaw May “Brick” Firewalls in 18-36 Months

First it was Cisco; now it is Juniper and apparently there are a number of other vendors who will be affected by this flaw.

While no one is saying who the vendor of the flawed hardware inside Cisco and Juniper products is, it is believed that it is Intel’s Atom C2000 chip.  Intel has acknowledged problems with that chip which seem to match the description that Cisco and Juniper are saying exists in their hardware.  Stay tuned.

Cisco has set aside $125 million to pay for repairs for faulty equipment.

So what, exactly, is the problem?

Juniper and Cisco are saying that there is a flaw in a hardware clock component used in their switches, routers and security devices that may cause a device to crash and die starting at about 18 months of service.  The device cannot be rebooted and cannot be recovered.  It is, as we geeks like to say, "bricked".

Cisco says certain models of its 4000 Series Integrated Services Routers, ASA security devices, Nexus 9000 switches and other products are affected.

Juniper said that 13 models of switches, routers and other products are affected.

Juniper says it is not possible to fix the devices in the field.  They also said that they started using this component in January 2016, so the 18 month lifetime is rapidly approaching.  They say they are working with affected customers.

HP has announced that some of their products use the Intel C2000 and may be affected as well.   Expect more manufacturers to make announcements as they analyze their product lines.

For users, it seems like if your product is under warranty or covered by a service contract as of November 16, 2016, Cisco will replace the device proactively.  Cisco says it expects limited failures at 18 months, but a more significant failure rate as devices reach the three-year range.

For customers that are not under warranty or a service contract, well… I think you may be on your own.

If you have products that use this component, you should work with your suppliers to understand the risk and figure out how to mitigate it.
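The arithmetic behind that risk assessment is simple enough to script.  This sketch (the inventory format is hypothetical; the 18-month figure comes from the vendors' statements above) flags devices built on the suspect component as they approach the failure window:

```python
from datetime import date

FAILURE_WINDOW_MONTHS = 18  # failures reportedly begin around this age

def months_in_service(installed: date, today: date) -> int:
    return (today.year - installed.year) * 12 + (today.month - installed.month)

def at_risk(inventory, today):
    """inventory: list of (hostname, uses_suspect_clock, install_date).
    Flag suspect devices within 3 months of the failure window."""
    return [host for host, suspect, installed in inventory
            if suspect and months_in_service(installed, today) >= FAILURE_WINDOW_MONTHS - 3]

fleet = [
    ("edge-fw-1", True,  date(2016, 2, 1)),  # Juniper began shipping the part Jan 2016
    ("core-sw-1", False, date(2016, 2, 1)),
    ("edge-fw-2", True,  date(2017, 1, 1)),
]
print(at_risk(fleet, today=date(2017, 6, 1)))  # edge-fw-1 is approaching the window
```

Run against a real asset database, a report like this tells you which replacements to schedule first and which warranty conversations to have with your supplier now rather than after the device bricks.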

 

Information for this post came from Network World and CIO.



25% of Web Apps Are Vulnerable to 8 of the OWASP Top 10

Let that title sink in for a minute.  A quarter of all web apps fail security miserably.  That does not mean that the other 75% are secure; it means that the other 75% are less insecure.  For the 25%, it means that things are pretty hopeless.

For a quick cheat sheet on the OWASP top 10, click here.

The study continues to dissect the state of insecurity:

  • 69% of web applications have vulnerabilities that could lead to exposing sensitive data
  • 55% of web applications have cross-site request forgery flaws
  • Broken authentication and session management issues affected 41% of the applications
  • 37% of the applications had security misconfiguration issues
  • Function-level access control is missing or ineffective in 33% of the web applications
  • 80% of the applications tested contained at least one vulnerability
  • The average number of vulnerabilities per application is 45

So just a question – does it concern you that 80 percent of the web applications tested had at least one vulnerability and 25 percent had 8 out of the top 10?

The only way to know is to test for it.  The best way is to have an independent third party test your applications for vulnerabilities.  Think of this as a network penetration test, but for your applications.
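Even before hiring a third party, some checks on your own applications are nearly free.  This sketch flags missing HTTP security headers, the kind of gap that shows up in the "security misconfiguration" bucket above (the header list reflects common guidance, not the OWASP test suite itself):

```python
# Response headers whose absence is commonly flagged as a misconfiguration
EXPECTED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
}

def missing_security_headers(response_headers):
    """Return the expected security headers absent from an HTTP response.
    response_headers: dict of header name -> value (case-insensitive)."""
    present = {name.title() for name in response_headers}
    return sorted(h for h in EXPECTED_HEADERS if h not in present)

# Headers as they might come back from one of your own applications
headers = {"content-type": "text/html",
           "x-frame-options": "DENY",
           "strict-transport-security": "max-age=31536000"}
print(missing_security_headers(headers))
# ['Content-Security-Policy', 'X-Content-Type-Options']
```

A check like this is no substitute for a real penetration test, but it catches the lowest-hanging fruit before an attacker (or an auditor) does.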

While you can test the applications that your team writes, you can't test other people's applications on the public Internet – the owner might frown upon it.  As a business, if you have to use a particular web application as part of your business AND you have a business relationship with the application's owner (such as a supplier or a business partner), you can make completing an independent third-party penetration test a requirement for doing business.  This is easier for larger companies, but if you don't ask, you won't get it.

This also means that you should be careful about which applications you use and which applications you enter sensitive data into.  Since there is no "Good Housekeeping Seal" for security – although Underwriters Laboratories is working on one – there is no easy way to know which applications are secure and which are not.

Unfortunately, at the moment, there is no good solution to this problem.  In almost all cases, developers have no liability at all – the user shoulders all of the responsibility.  The best that I can say is be cautious.

Information for this post came from Help Net Security.


The Cloud is not a Miracle – Do Your Homework

As more people and more businesses embrace the cloud, the opportunity for disaster goes up.

For example, we have seen companies move to the Amazon cloud and then be surprised when their web sites go dark (see this example).

There are no silver bullets when it comes to data center availability and the cloud is not one.

The cloud can both help you and hurt you; good design and architecture still “rules”.

Here is a recent example.

MJ Freeway makes marijuana grow and dispensary software that helps businesses comply with the law and manage their businesses.  They claim to have processed $5 billion in transactions for the MJ industry.

Their solution is cloud based, making it easy for businesses to use their software.  Until they have a problem.

MJ Freeway's cloud-based solution was hacked, leaving a thousand dispensaries blind – unable to track sales or manage inventory.  For many of these stores, that meant closing their doors until they could get the problem resolved.

But the attack was interesting.  All the data was encrypted, so the hackers could not use it.  Using the data, however, does not appear to have been their objective: the attackers targeted the live production servers and the backup servers at the same time.

Because it took MJ Freeway several hours to discover the attack, the attackers had a head start and because they attacked the primary and backup sites, clients had an outage.

Some customers maintained their own personal, offline backups of their data.  Those customers were able to restore their data as soon as MJ Freeway had a stable web site.  While it was wonderful that these users did not lose any data, they were still down until their vendor could create a stable operating environment.

For users that depended on their cloud service provider to backup their data, they had a bigger problem.  Since the primary and backup web sites were attacked at the same time, no online copies of the data were usable.
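The customers who fared best were the ones who kept their own offline copies.  A minimal sketch of that habit – export the data and record a checksum so a later restore can be verified (the paths and record format are hypothetical):

```python
import hashlib
import json
import pathlib
import tempfile

def backup(records, dest: pathlib.Path) -> str:
    """Write records to an offline backup file and return its SHA-256,
    so a future restore can be verified against a separately stored checksum."""
    payload = json.dumps(records, sort_keys=True).encode()
    dest.write_bytes(payload)
    return hashlib.sha256(payload).hexdigest()

def verify_and_restore(src: pathlib.Path, expected_sha256: str):
    payload = src.read_bytes()
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        raise ValueError("backup corrupted or tampered with")
    return json.loads(payload)

sales = [{"sku": "A1", "qty": 3}, {"sku": "B2", "qty": 1}]
path = pathlib.Path(tempfile.mkdtemp()) / "sales-backup.json"
digest = backup(sales, path)
assert verify_and_restore(path, digest) == sales
```

The key property is that the copy lives somewhere the attacker who owns your cloud account cannot reach – a disk in a drawer beats a second online replica that gets hit in the same attack.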

The “seed to sale” data was, apparently, corrupted and may not ever be recoverable.  What that means to those dispensaries from a legal standpoint is not clear, but can’t be good.

If the hacker’s objective was to ruin these companies – to bankrupt them – to run them out of business – that may be a great way to do that.

If their objective was just to cause the dispensaries pain – lost sales, customers lost forever (to competitors), lost business for MJ Freeway, fines for regulatory failures and a host of other costs – the hackers may well have succeeded.

However, this is a great lesson for all businesses – whether you are in a semi-legal business like marijuana or a totally mainstream business like retail or services – the cloud is a wonderful tool.  It is not, however, a silver bullet.

Cloud services go down.  They lose data.  Sometimes they go out of business unexpectedly.  Who is liable typically depends on the terms in the contract.  If the contract was written by the online service provider, you can count on the contract saying that the provider is not responsible for anything.

Plan for a disaster.  Plan for a cyber incident.  WHEN something unexpected happens (notice I said when and not if), you will be in a much better position to deal with it.

Two terms from the disaster recovery business should be in the lexicon of every business that uses cloud services (and every other business too):

RTO – Recovery Time Objective – How long are you willing to be down?  If the answer is a day or a week, you prepare for a disaster differently than if the answer is 5 minutes or an hour.

RPO – Recovery Point Objective – How much data are you willing to lose (or, how far back in time are you willing to restart)?  If you can lose (and, presumably, recreate) a day's worth of data, a disaster recovery plan is easier and cheaper to build than if the answer is 15 minutes.
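Those two objectives translate directly into engineering requirements: your backup interval bounds your worst-case data loss, and your restore time bounds your worst-case outage.  A toy sketch (the numbers are illustrative) that checks a plan against both:

```python
def evaluate_plan(backup_interval_min, restore_time_min, rpo_min, rto_min):
    """Check a disaster-recovery plan against stated objectives.
    Simplification: worst-case data loss = one full backup interval,
    worst-case downtime = the restore time alone."""
    findings = []
    if backup_interval_min > rpo_min:
        findings.append(f"RPO miss: could lose {backup_interval_min} min of data, "
                        f"objective is {rpo_min}")
    if restore_time_min > rto_min:
        findings.append(f"RTO miss: restore takes {restore_time_min} min, "
                        f"objective is {rto_min}")
    return findings or ["plan meets stated objectives"]

# Nightly backups and a 4-hour restore, against a 15-min RPO / 1-hour RTO
print(evaluate_plan(backup_interval_min=1440, restore_time_min=240,
                    rpo_min=15, rto_min=60))
```

Real plans also have to account for detection time and failover orchestration, but even this crude check makes the mismatch obvious: a nightly backup can never satisfy a 15-minute RPO, no matter who your cloud provider is.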

So, everyone who signs up for a cloud solution, keep in mind that sometimes, where it is cloudy, it rains.  When it does, if you have an umbrella (aka a disaster recovery plan), you are likely OK; if you don't have that disaster umbrella, you are going to get wet – possibly very wet.

As those dispensaries discovered, your profit can go up in smoke – and not in a good way.

Information for this post came from Network World.
