Europe’s Spies Are No Different Than U.S. Spies

While there has been a lot of noise over the ECJ ruling invalidating Safe Harbor based on NSA spying among other things, there has not been much talk about what the EU countries are doing.  Basically, it is no different than what we are doing.

Given that most communications live on the Internet, you certainly cannot expect any country’s intelligence service to ignore that fact.

There was a good opinion piece in the Times the other day by Nils Muiznieks, Council of Europe Commissioner for Human Rights.  Among other things, he said:

  • France adopted a surveillance law that permits major data collection without prior judicial authorization.  At least the NSA and FBI have to deal with the FISA Court here.  While it may not be as robust as we would like, it likely requires agents to justify their data collection activities.
  • Germany adopted a new data retention law requiring telecommunications operators and Internet Service Providers to retain connection data for 10 weeks.
  • The British government intends to increase the authorities’ powers to conduct mass surveillance and bulk data collection.
  • Austria is discussing a new law that would create a new security agency and allow it to operate with limited external control while collecting and storing data for up to six years.
  • The Netherlands is considering legislation allowing dragnet surveillance, mass data collection and decryption and intrusion into the computers of non-suspects.
  • And finally, Finland is considering weakening the Constitution to ease the adoption of a bill granting military and intelligence services the power to conduct mass surveillance with little oversight.

Given all this, it does not look like the NSA and other agencies in our intelligence community are doing anything outside the norm.

Now this does not mean that we should not be concerned, or that there should not be as much transparency as possible.

It also means that we should assume that our communications are being monitored and if that is a concern for us, we are responsible for doing something about it.

I also assume that, except for the stupid terrorists, the terrorists are already aware of this and are developing different techniques for dealing with it.  And, to be honest, while catching the stupid ones is good for PR, they are not the ones who can do the most damage.

What Nils does say is that, out of fear, we are willingly giving up any privacy that we might have in this digital age.

As Benjamin Franklin said 250+ years ago (I guess he was a bit ahead of his time):

Those who desire to give up freedom in order to gain security will not have, nor do they deserve, either one.

(Note: Google attributes the quote to both Franklin and Jefferson, but Franklin is the generally accepted attribution.)

You may recall that when General Alexander testified before Congress about the mass data collection, there really were not very many attacks that had been prevented as a result of mass data collection.

In the long term, the intelligence agencies world wide are going to need to think outside the box and start to come up with different ways to track down terrorists.

Part of the challenge with mass data collection is the word mass.  The more data there is, the more compute power you need (think of the new NSA data center in Bluffdale, UT which has over 1 million square feet of office and data center space).  The more data there is, the more leads that you have to track down.  The more leads you have to track down, the more analysts and agents you need.  You get the idea.

One prediction says that fixed Internet data will grow by 40% between 2015 and 2017 (from 47 petabytes per month to 67 petabytes per month), but mobile traffic will grow by almost 250% in that same time (from 3+ petabytes per month to 9 petabytes per month).  Assuming this trend continues and the use of encryption continues to grow, the amount of computer power required, storage space required and people required will become unmanageable.
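Those growth figures are straightforward percentage calculations.  As a quick sanity check, here is a sketch using the post’s numbers (the function name is mine):

```python
def growth_pct(start, end):
    """Percentage growth from start to end."""
    return (end - start) / start * 100

# Fixed Internet traffic, petabytes per month (figures from the post)
fixed = growth_pct(47, 67)      # ~42.6%, roughly the "40%" cited

# Mobile traffic: the post says "3+ PB" growing to 9 PB per month.
# A flat 3 PB baseline gives 200% growth; the "almost 250%" figure
# implies a starting point closer to 2.6 PB per month.
mobile = growth_pct(3, 9)       # 200.0

print(round(fixed, 1), round(mobile, 1))
```

Either way, mobile traffic grows several times faster than fixed traffic over the period, which is the point that matters for the collection problem.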

Clearly, the answer is not more mass surveillance and mass data collection.

We shall see if the intelligence community is up to the challenge and can do that without destroying personal privacy.  It is a tall order.


Information for this post came from a New York Times opinion piece.

Vendor and Supply Chain Risk

Businesses have always outsourced work.  It used to be plumbers and what were referred to as “the trades”.  Now it is programmers and manufacturing.

What is different now is the degree of connectedness that those suppliers have.

A couple of examples:

Target uses HVAC contractors to maintain the refrigeration in their stores.  It used to be that if the cooler broke, or if you were installing a new one, you picked up the phone and told the contractor what you wanted.  Now there is a portal, and the portal is connected to your accounting system and your document management system … and, and, and.

In Target’s case, maybe a little too connected because this contractor was the ignition point of Target’s 2013 breach which exposed information on over a hundred million customers.

In the OPM breach, it was also a contractor that was the ignition point of a breach that released very sensitive information on over 20 million people who hold government security clearances.

The recent T-Mobile breach happened 100% at their credit decisioning vendor, Experian.  T-Mobile’s systems were not even touched.

Given these stories and scores of others, you would think that businesses would be checking out the security of their vendors and making sure their contracts had really tight reps and warranties.

On the other hand, here are some real statistics:

  • 92% of businesses do not have any supply chain risk management abilities in place
  • 70% of enterprises enter into contracts with external vendors without having conducted any security checks
  • 60% of organizations grant vendors remote access to their internal network
  • 63% of data breaches are caused by security vulnerabilities introduced by third parties

As with every study, we could argue about each of those statistics, but I really don’t care a lot about the specific value of any of them.  If 92% is wrong, is 80% right?  Would you agree to 65%?  Those are still very big numbers.  You get the idea.

So, just maybe, putting a supply chain risk management process in place might be a good idea.

The challenge for businesses, of course, is that it takes time and money and slows down the business process.

Of course, so do security breaches.

You can create a supply chain risk management process.  The article linked below gives some ideas and there are many more things that you can do.

The financial industry is the best at this, but even they have room for improvement.

What it takes is a commitment to create a process, staffing and some money.  AND, being willing to deal with the griping on the part of employees and vendors.  Interestingly, bank employees and vendors don’t seem to gripe much about it.  They know it is just part of the deal.

The next time you hear about a breach, see if a vendor was involved.  Likely, it will be.



Information for this post came from Infotech.

Senate Passes Information Sharing Bill

The Senate, on Tuesday, passed their version of CISA, the Cybersecurity Information Sharing Act.  The House passed their own version of it months ago.

The stated purpose of the act is to allow private companies to share “threat” information with the government and have immunity from being sued by their users for doing this.

Because of the poorly defined terms – like what “threat information” is – the broad array of government agencies the information can be shared with – like the FBI and NSA – and the pretty weak protections against using this information against American citizens, many cyber security experts are calling this bill an intelligence gathering bill disguised as a bill to improve security.

In reality, this bill, in whatever form emerges from the House and Senate conference committee, will do almost nothing to improve either the average citizen’s security or the government’s security.   It would, for example, have done nothing to stop the OPM breach, because that was a unique attack – there were no indicators of it in the wild, because the only place it existed was at OPM.  Same for Anthem.  Same for Home Depot.

Beyond that, post-Snowden, tech companies are extremely wary of sharing anything with the government – it is, to be honest, not good for business.  Being seen as voluntarily sharing your and my data with the government is the kiss of death from a reputation standpoint.

In fact, Microsoft and the Justice Department are locked in mortal combat.  The FBI wants Microsoft to bring data from Ireland back to the United States and give it to them.  Microsoft says that doing that, absent an Irish court order, would subject them to criminal charges in Ireland – so if you want the data, get an Irish court to tell us to hand it over.  In Ireland.  They have been fighting over this for almost two years (see article).   Microsoft is fighting this because (a) it is good for PR and (b) they do not want to set a precedent that would likely get them sued in Europe.  And, given the sentiment inside the EU after the Max Schrems/ECJ Safe Harbor decision, I don’t blame Microsoft.

More importantly, this will do little to nothing to improve security.

There has been an FBI-private industry relationship for over 10 years now called InfraGard.  It is a very simple way to share information with the government.  Sharing data is not a problem.

There are dozens of ISACs, or Information Sharing and Analysis Centers, and ISAOs, or Information Sharing and Analysis Organizations (there really isn’t much difference between the two; ISACs were originally focused on critical infrastructure, but many of them allow anyone in their particular vertical, like finance, to join).  Companies that want to share data with their ISAC or ISAO can already do that.

Industry leaders, at least, are already sharing all the data they need – sometimes informally, sometimes formally.  They do not need CISA to do that, because threat indicators rarely require the sharing of personally identifiable information.

So why is Congress pushing so hard for this new law?

Two reasons, in my opinion – other people may not want to be quite as cynical as I am, but they might be.

Voter approval of Congress is in the single digits.  It is worse than the approval rating of used car salespeople or debt collectors.  With Presidential, House and Senate elections coming up next year, incumbents want to be able to pretend that they did something useful to reduce the number of cyber breaches when they go out and campaign.  They are counting on people being too ill-informed to know that this law is next to useless.

More useful would be to provide oversight (which is their job) and provide funding.  Just this week Congress refused to give OPM $38 million to deal with the hundreds of millions of dollars in budget shortfall it faces to improve the security of its computer systems.  This is the agency that is still running at least one core system built in the 1960s.

The people who built that system have likely all retired or died by now, but the system is still running.  Do you think that some threat information shared by, say, Facebook will help OPM protect a mainframe-based COBOL system written in the 1960s?  (Facebook appears to be the only tech company in favor of CISA – even though that is political suicide; unlike Google, Microsoft and others, it refuses to say that it opposes CISA.)  I didn’t think so.

Will sharing threat information solve the problem of tech executives who say they won’t spend $10 million to avoid a possible $1 million loss – “I will accept the risk” (that would be Jason Spaltro, SVP of Information Security at Sony)?  Sony accepted the risk, and look what happened to them.  The problem, of course, is that while the $10 million number may be a reasonable guess, you have no idea whether the $1 million number is correct or is really $100 million, as Sony found out.
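That risk calculus can be made explicit as a simple expected-loss comparison.  Here is a sketch – the breach probability is a hypothetical number of my own, not Sony’s, and the point is only how sensitive the decision is to the impact estimate:

```python
def expected_loss(prob_breach, impact):
    """Expected loss: probability of a breach times its cost if it happens."""
    return prob_breach * impact

control_cost = 10_000_000  # cost of the proposed security investment

# With the executive's impact estimate, skipping the control looks rational...
assumed = expected_loss(prob_breach=0.5, impact=1_000_000)      # $500,000

# ...but if the impact is off by two orders of magnitude, as it was
# at Sony, the same decision becomes a disaster.
actual = expected_loss(prob_breach=0.5, impact=100_000_000)     # $50,000,000

print(assumed < control_cost)  # True: "accept the risk"
print(actual < control_cost)   # False: the control was a bargain
```

The arithmetic is trivial; the hard part, as the Sony case shows, is that the impact number going into it is often wrong by a factor of 100.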

Will sharing some threat information stop 25% of a government agency’s employees from clicking on phishing emails, with almost none of them reporting it to their security team – only 7% reported it?  (That would be the USPS, by the way.)  I don’t think so.

So, as is often the case, Congress is taking the easy way out with CISA, rather than actually dealing with the real problem inside government – which is their responsibility to fix.  Private industry is way ahead of the government, for the most part, even though private industry knows that they have a lot more work to do.

Sorry, I know this is mostly a rant, but it is important for people to understand that CISA will not make a difference no matter what some politician tells you in a sound bite.

Read the article below for more experts’ takes on the issue.


Information for this post came from Net-Security.

Buying A Smart Home – Food For Thought

In the world of a connected home (or any other building), when you sell it or buy it, you need to consider the security and privacy implications.  Does the former owner still have access to the security cameras?  HVAC?  Alarm system?  Are the smart devices not so smart anymore?  Have they EVER been patched?  Are there known security holes big enough to drive a truck through?

It used to be that all you had to worry about was whether there were termites and did the heating system work (among other things).  Now, at least in the case of smart homes, there are many other things to consider.

In fact, the Online Trust Alliance has even created a checklist (see here).

Here are a few thoughts to consider:

  • Do you know what devices in the building are connected to the Internet and if there is a service provider involved?
  • How do you know that the former owner can no longer access each and every one of these smart devices?
  • Are all of these devices still supported by the manufacturers – if you even know who the manufacturers are?
  • Are there known security vulnerabilities in any of the devices that would allow them to be taken over or surreptitiously monitored (for example, there are well known cases of perps hacking into baby monitors and other security cameras and watching)?
  • Are all the devices patched?  Do you know HOW to patch all of them?

The challenge, I think, is that this is likely overwhelming for most homeowners – except maybe for a few geeks.


Manufacturers of these smart devices don’t help either.  A manufacturer’s support line could easily help a hacker break into your system, since the manufacturer really doesn’t know whether you ever owned, or still own, the system in question.  In addition, for consumer devices, manufacturers stop making them pretty quickly and want to stop supporting them soon after that.

Manufacturers also make it difficult for users to install patches.  Do you, for example, have any idea how to patch your smart TV?  This is the current generation’s version of the VCR with the blinking clock. (That is, for those of you old enough to know what a VCR is.  If you are not old enough, it is your parent’s version of a TiVo).

Manufacturers have to step up their game – assuming they want to be anything other than a niche player.  I can also see the prospect of lawsuits against manufacturers who don’t patch their devices in a timely manner.

On my satellite TV, the provider downloads software updates every week – so I don’t try to record any shows on Saturday night at around 2 AM.  That’s when the satellite box takes over, shuts down satellite reception and downloads new firmware.

I am not a cable user, so I don’t know what they do, and each provider is likely different anyway.  Typical cable setups have a cable modem and a set top box, each of which would need to be patched separately.  These are reasonable questions for your provider: who is responsible for patching security holes, how often does that happen, and, if you need to do it yourself, how do you do it?

I only mention TV boxes because they are something most people are familiar with.  While they are smart, they are not likely to be handed over to a new owner.

What is likely to be handed over are things like smart locks, alarm systems, security cameras, garage door openers – all connected to the Internet.  And, if the manufacturers are right, by the year 2020, billions of other devices.

As if you didn’t have enough to be concerned about when buying a new or used home (even if it is new, someone else likely has the codes).

Unless users start pressuring manufacturers by refusing to buy products that do not address this issue, I PROMISE this problem will get worse before it gets better.  Sorry.

Information for this post came from CIO.

Sony Agrees To Pay Employees $5 Mil – Sort Of

Billboard is reporting that Sony and the employees suing them as a result of the breach last year have come to a tentative agreement.  The employees were suing for negligence and privacy violations.

If the settlement is approved, the employees will get $2 million – up to $1,000 each – for preventative measures taken against identity theft.  The lawyers will get $3.5 million.

In addition, Sony is paying for identity theft protection for two years and $1 million in identity theft insurance.

Additionally, Sony will pick up another $2.5 million – up to $10,000 per employee – for unreimbursed losses as a result of the breach.  Note that this is likely not going to be touched, so it doesn’t really count.

Why will this not be touched?  Two reasons.  First, losses on credit cards will be eaten by the banks and credit card associations – your liability, at most, is $50.

Second, and this is pretty novel, Sony is saying that with the Target breach, the Home Depot breach and many others, you need to prove that the unreimbursed loss was as a result of the hackers stealing your information from us and not one of the other breached sites.  That is pretty much impossible to do.

If the judge approves this – and it is not clear that he will – the real winners are the attorneys.

From Sony’s standpoint, spending $5 million plus attorney’s fees is way cheaper than actually protecting the information.  Of course they have lots of other expenses – fixing the breached systems, lost business, film revenue, etc., but a lot of that is covered by insurance.

Sony’s former director of information security said “it’s a valid business decision to accept the risk of a security breach… I will not invest $10 million to avoid a possible $1 million loss.”

What we don’t know, and what will likely never be disclosed, is whether Sony lost some picture deals as a result of the rather caustic comments attributed to its executives in leaked emails.

And there still could be shareholder lawsuits and other non-employee suits.

Information for this post came from Billboard.

Board Involvement In Cybersecurity Still Not What It Should Be

PricewaterhouseCoopers (PwC) surveyed 10,000 CEOs, CFOs, CIOs and other executives, and amazingly, only 45% said their boards participate in cybersecurity strategy.  While that is up from 42%, it should be close to 100%.

The PwC study respondents reported a 38% uptick in cyber-assaults since 2014, with companies spending $77 billion on tools and processes this year.  They expect that to rise to $170 billion by 2020.

Board reviews of security and privacy risks went up to 32% from 25% a year ago.  Again, understanding the risks that the company is facing is the board’s responsibility and depending on the outcome of the class action shareholder suit that was just certified this month against Home Depot, that number might rise again.

On the good news side, 46% of the boards are reviewing the security budget, 45% are reviewing the overall security strategy and 41% are reviewing the company’s security policies.  These numbers are all up, but need to be much higher.

69% of those surveyed said they were using cloud based security services, and 56% were using real time monitoring and analytics.

56% said they were sharing threat intelligence with others in their industry.


Information for this post came from CIO Magazine.