New Jersey Law Requires Health Data Encryption – Sort Of

New Jersey has enacted a new law that requires data encryption (see bill information), as a response to health care data breaches – like Anthem's, I assume.

The bill is short – only 4 pages – but, at least to me, that does not make things very clear.

The bill covers health insurance carriers, but then defines them this way:

“Health insurance carrier” means an insurance company, health service corporation, hospital service corporation, medical service corporation, or health maintenance organization authorized to issue health benefits plans in this State.

It defines personal information in a pretty normal way:

“Personal information” means an individual’s first name or first initial and last name linked with any one or more of the following data elements: (1) Social Security number; (2) driver’s license  number or State identification card number; (3) address; or (4) identifiable health information. Dissociated data that, if linked, would constitute personal information is personal information if the means to link the dissociated data were accessed in connection with access to the dissociated data.
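The definition is mechanical enough that you could sketch it in code.  Here is a toy Python reading of the statute – a record counts as "personal information" if it links a name with at least one of the enumerated data elements.  The field names are hypothetical and this is an illustration, not legal advice:

```python
# The statute's enumerated data elements (field names are hypothetical).
DATA_ELEMENTS = {"ssn", "drivers_license", "state_id", "address", "health_info"}

def is_personal_information(record: dict) -> bool:
    """A record is 'personal information' if it links a first name (or
    initial) and last name with at least one enumerated data element."""
    has_name = bool(record.get("first_name")) and bool(record.get("last_name"))
    has_element = any(record.get(field) for field in DATA_ELEMENTS)
    return has_name and has_element
```

Note the last sentence of the definition: dissociated data that could be linked back together is still personal information if the linking key travels with it – a detail this toy check does not capture.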

Then it says that a health insurance carrier shall not compile or maintain computerized records that include personal information unless that information is secured by encryption.  So far, so good.

But then it says that the requirement only applies to end-user computer systems and to computerized records transmitted across public networks.

Fines, according to King & Spalding, are $10,000 for the first offense and $20,000 for subsequent offenses.  What is not clear to me is whether, if you have 5 computers in the office, that counts as 5 offenses.

What is also not clear is whether whole-disk encryption like Microsoft's BitLocker (or its Android and iPhone equivalents) counts to make you compliant.  Malware will likely cut through those like a hot knife through butter, because the malware is acting as your agent and you are allowed to see the data unencrypted.

Yet, the data is encrypted, so you likely would not be liable.  Maybe.

Also, this offers ZERO protection against Anthem- and Premera-style attacks, since those went after servers and not end-user computers.

My reading would suggest that it does include mobile devices like phones and pads, so that is probably good.

Unfortunately, I think this is an example of lawmakers, who really don’t understand technology, trying really hard to do something useful, but kind of missing the mark.  It definitely helps because lost laptops, phones and pads really do happen – a lot – but it will have no effect on the big breaches that you see on the news.




White House Hacked By Russians

USAToday is reporting that the hacking of the State Department’s email went way farther than had been reported up until now.

The State Department has been fighting to get the hackers out of their unclassified email system for months now, even enlisting the help of private contractors and the NSA – to no avail.  CNN is reporting that the hackers used their compromise of the State Department to hack the White House.

The White House did report that they had a breach of their unclassified Office Of The President network last year, but did not tie it to the State Department email hack.

In general, if you allow people to use email and surf the web, someone will click on the wrong thing some time and compromise any security that you might have had.

For high security environments like the WH and State, you really need to separate functions – possibly virtually – in order to stop cross-contamination.  The problem is that people want the systems to be interconnected.  For example, if you have an email attachment that you want to store in Sharepoint and something from a web page that you also want to store in Sharepoint (or any other document repository), then by allowing that you have connected two otherwise independent applications (email and browsing).

The White House did not confirm – or deny – the report.  Previously, they said that the computers were not damaged, although some elements of the unclassified system were “affected”.  Unclassified, of course, does not mean insensitive, so who knows what the attackers got.

Ben Rhodes, deputy White House national security adviser, said “We do not believe that our classified systems were compromised.”  That certainly provides me with a high level of confidence.  If I do not believe that the sun will come up tomorrow morning, that probably does not decrease the likelihood that it will rise tomorrow.

I am sure that the White House and State Department are hot targets and that their I.T. organizations try hard to protect them, but that is not an easy task.


News Bites For April 7, 2015

Researchers from the University of Virginia and Perrone Robotics recently completed testing of an anti-hacking sensor for automobiles from startup Mission Secure, Inc.  The sensor was able to detect several attempts to take over the braking, acceleration and collision avoidance systems of cars on a test track.

This article says the tests went well, but challenges remain, such as convincing car makers to use something they did not invent, adapting it for different cars and getting the cost down.  Hopefully, car makers will do something before there is a flashy and possibly bloody demonstration of the problem.


Although people love to beat up Android phones as not very secure, Google’s just-released Android security year in review says that the number of potentially harmful Android application installations was cut nearly in half from Q1 to Q4 of 2014 (see report).

Google found that less than 1% of Android devices had a potentially harmful app installed, and that number dropped to 0.15% for devices that only installed apps from the Google Play Store.


Dark Reading is reporting that 3 out of 4 Global 2000 companies are still vulnerable to the Heartbleed SSL bug, a year after its public disclosure (see article).  Security software provider Venafi found 580,000 hosts (such as web servers) that had not completely fixed the Heartbleed problem.  Gartner called these companies “lazy”, saying they patched the bug but did not replace the old, compromised SSL keys or revoke the old certificates.  The article offers a number of potential reasons, such as lack of knowledge and not knowing where all their keys and certificates reside.

As a reminder, Heartbleed is a bug in the very popular open source SSL encryption package OpenSSL that has a catchy name, a cute logo (a heart dripping blood) and a span of millions of affected computers.  The bug works on both clients and servers running OpenSSL, allowing an attacker to steal a server’s private keys (resulting in the ability to masquerade as the server) or steal a user’s password (resulting in the ability to, for example, empty your bank account).

Part of the problem is that it is not obvious to the user whether a particular system is using OpenSSL, the way a bug in Excel 2013 would be visible.
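One small thing you can check is which OpenSSL build your own software links against.  A minimal Python sketch (the vulnerable range, 1.0.1 through 1.0.1f, is from the public CVE-2014-0160 advisory; this only checks the interpreter itself, not every program on the machine):

```python
import ssl

def heartbleed_exposure():
    """Report the OpenSSL build this Python links against and whether it
    falls in the Heartbleed-vulnerable range (OpenSSL 1.0.1 - 1.0.1f)."""
    version = ssl.OPENSSL_VERSION  # e.g. "OpenSSL 1.0.1f 6 Jan 2014"
    major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
    # 'patch' counts the letter suffix: 0 = plain 1.0.1, 6 = 1.0.1f
    vulnerable = (major, minor, fix) == (1, 0, 1) and patch <= 6
    return version, vulnerable
```

A version match only means "possibly vulnerable" – distributions back-ported the fix without bumping the version string, which is exactly why auditing this stuff is hard.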


Apparently, the U.S. Government has been tracking international phone calls way longer than Snowden told us about.  USAToday is reporting that the program goes as far back as 1992, under President George H.W. Bush, and was approved by, at least, then-Attorney General William Barr.  The data collection continued under Presidents Clinton, Bush II and Obama until it was killed in 2013 after the Snowden leaks.

The DEA was getting so much call data that they had to get the help of the DoD to program computers to analyze the data.  They claim the call traffic has led to finding some big players, but could not name any names.

The DEA used an “expansive interpretation” of administrative subpoenas that said that the data was relevant to federal drug investigations.  A former DEA official said that they knew that they were stretching the definition.

Now the DEA sends subpoenas to the phone companies to get the data.   It is reported that they send as many as a thousand subpoenas a day, however, that likely represents a much smaller percentage of the call traffic than prior to 2013.



The Changing World Of Transaction Payments

If you either use credit cards or are a merchant that accepts credit cards (I think that covers most of us), your world is changing and changing rapidly.

Sorry, this is going to be long, so you might want to get a cup of coffee and possibly some aspirin before you start reading.

First, if you are a merchant that accepts credit cards: effective Oct 1, 2015, if you do not accept chip-based credit cards (the so-called EMV card that has been the standard in Europe for 10 years – we are just a little bit behind) and there is credit card fraud, you, as the merchant, become financially liable for the loss (for gas stations, that does not happen until 2017).

This means that, as a merchant, you have to change your credit card reader equipment, train your employees and, if your credit card processing is tied into your point of sale system, likely change that as well.  All this is at your cost as a merchant.  Here is Visa’s guide for merchants on how to migrate from the old mag stripe credit cards to the new chip based cards.

One thing that is still different between the U.S. and Europe is that Europe requires that you enter a PIN with the chip card and we are going to use the old fashioned signature.  PIN is likely much more secure – retail clerks rarely check whether your signature matches the back of the credit card.  Mastercard and Visa opted not to use a PIN because they thought that people might use their cards less if they were harder to use – and that is like a knife to the heart for credit card processors.  They would rather eat the losses, which they pass on to the merchants in the form of fees, who pass them on to you and me in the form of higher prices.

The second change that will affect merchants is the release, in April 2015, of the PCI 3.1 standard.  The main reason for this change is all of the SSL bugs that I and others have been writing about for months (including Heartbleed, POODLE, FREAK and Bar Mitzvah, among others).  This likely will require a number of software upgrades, as SSL is no longer allowed – only the current version of TLS.
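In practice, "no SSL, only TLS" means configuring every endpoint to refuse the old protocol versions.  A minimal sketch of the client side in Python (server-side configuration is analogous; real PCI compliance obviously involves far more than this):

```python
import ssl

def tls_only_context():
    """Build a client context that refuses SSLv2/SSLv3, per PCI DSS 3.1's
    rule that SSL is no longer an acceptable protocol."""
    ctx = ssl.create_default_context()  # modern defaults already exclude SSLv2/SSLv3
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3  # make the prohibition explicit
    return ctx
```

The harder part for merchants is the inverse: finding every piece of legacy software (terminals, payment gateways, back-office tools) that still *offers* SSL and upgrading or reconfiguring it.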

In addition, as of PCI 3.0, released in January, merchants are now required to conduct penetration tests at least annually, which are much more complicated than the old requirement to run vulnerability scans (see guidance on conducting penetration tests here).  Merchants also have to implement intrusion detection and prevention technology.
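To give a flavor of the difference in scale: the most basic building block of a vulnerability scan is simply checking which TCP ports answer, something a few lines of Python can do (a real PCI scan, let alone a penetration test, goes far beyond this – and you only ever point this at hosts you are authorized to test):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """The smallest building block of a network vulnerability scan:
    does anything answer on a given TCP port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A penetration test starts where this ends: once a scanner finds an open service, a tester actively tries to exploit it the way an attacker would.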

Now the part that affects consumers – which, of course, also affects merchants if they choose.  Apple released Apple Pay earlier this year.  Some merchants embraced it; others are totally fighting it – by turning off the NFC feature on their credit card terminals that is required to make it work, or by not fixing that part of the terminal when it breaks.  This is so much of a problem that some customers have reported completing only ONE Apple Pay transaction successfully since they registered their cards.

But if that wasn’t confusing enough, customers and merchants will have to deal with other competitors to Apple Pay, including:

Samsung Pay – which only works with the Samsung Galaxy S6

Google Wallet – which has been around for a few years, but has not gained much acceptance.

CurrentC – the big merchants’ alternative to Apple Pay.  This is supported by the retailers, and they will give you discounts and freebies if you use it rather than Apple Pay.  This will be hard for Apple to counteract, because the merchants are in control of these discounts and freebies.

Stratos – a small high tech startup with their own solution

Here is a guide to these options.

If you are a consumer, you can choose to use one of these alternatives or not.

If you are a merchant, you will need to make a bunch of decisions – running the risk of offending customers and having them go elsewhere.

And, I am sure, there will be more choices before this all settles out.


Google Declares War. On Ad Injectors!

Ad injectors are usually implemented as browser add-ins that place their own ads on web pages that you visit.  These ads can replace existing ads or insert new, additional ones.  They can also inject malware into your computer.

Google worked with a team of researchers at the University of California at Berkeley and found 200 Chrome extensions that had ad injection (and malware injection) capabilities.

Before you launch a nuclear strike at Google for allowing this to happen, here are some of the results of the research.  Google received over a hundred thousand complaints about ad injectors in the last three months alone.

  • Ad injectors affect all platforms – Windows, Mac and Linux.
  • They affect all browsers – Chrome, Firefox and IE – and probably any other browser that has a big enough market share to matter and allows developers to write extensions.
  • More than 5 percent of the users visiting Google web sites had ad injectors in their browsers.  Of that group, half had at least two and a third had at least four.
  • A third of the Chrome extensions that injected ads were classified as malware.
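A crude sketch of the kind of check this research implies (my illustration, not Google's actual method): compare the script and iframe sources in a page as it arrived in the browser against an allowlist of domains the site actually uses, and flag everything else as a candidate injection.  The domains here are hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical first-party domains for the site being checked.
ALLOWED = {"example.com", "cdn.example.com"}

class ScriptCollector(HTMLParser):
    """Collects the src attribute of every script and iframe tag."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe"):
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def suspicious_sources(html: str) -> list:
    """Return script/iframe sources loaded from non-allowlisted domains."""
    parser = ScriptCollector()
    parser.feed(html)
    return [s for s in parser.sources if urlparse(s).hostname not in ALLOWED]
```

Real injectors are sneakier than this – they can rewrite pages after load and serve from rotating domains – which is why Google compares what its servers sent against what users' browsers actually rendered.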

Now we need to embarrass Microsoft and Mozilla into doing the same thing.

Google has blocked over 200 extensions and is refining the technique they use to detect this crap.

Unfortunately, the only way to stop this is for users to become much more aware of what they are clicking on, downloading and installing.

In the meantime, users should examine what extensions their browsers are running and disable any that they don’t need.  If disabling one causes a problem for you, you can always re-enable it.

How to do this is different for each browser and each operating system.



News Bites 2

TrueCrypt users panicked last year when the developers of the very popular semi-open source encryption program stopped supporting the product and issued a warning to users not to trust it.

An independent group solicited donations and paid for an independent audit of the source code to TrueCrypt version 7.1a.

Well, the audit is in and the results are pretty good.  The auditors found some issues, but no NSA back doors and nothing catastrophic.

A Russian security researcher found a bug in YouTube which would have allowed him to delete any (or all) YouTube videos with a single command.  The researcher said he fought the urge to delete the Bieber channel on YouTube, but instead reported the bug to Google.  Google fixed the bug within a couple of hours (something that Microsoft or Apple really can’t begin to do with their desktop based software products) and paid the researcher a $5,000 bug bounty.

Facebook faced a similar problem about a month ago with a bug that would allow a hacker to delete any Facebook photo from any account (see article).
