All posts by mitch tanenbaum

HR 4681 and government surveillance

HR 4681, the Intelligence Authorization Act for FY 2015, was signed into law on December 19th, 2014 and funds the intelligence community through next September.  The now-enacted bill contains one section – section 309 – that deals with the collection, retention and sharing of information collected by the intelligence community.  Because Congress wanted to get out of D.C., this bill was not debated; it was voted on under a suspension of the rules, a procedure used to push through non-controversial bills.  Since no one wants to appear soft on terrorism, this bill fit into that category and it passed 325-100.

Section 309 was an effort to curtail some of the practices of mass data collection and retention of the intelligence community, but it seems to have a lot of wiggle room.  The text of the bill can be found here.

Interestingly, most of the data that the intelligence community collects is gathered not under the Patriot Act or the Foreign Intelligence Surveillance Act, but under a very dusty executive order that President Reagan signed in 1981 called EO 12333.  A primer on the EO is available here.  Since EOs are written by the executive branch with no oversight by Congress, they tend to formalize what the executive branch wants to do anyway and are typically one-sided.  EO 12333 covers, among other things, mass data collection and the minimization of data collected on U.S. citizens.  Those rules are currently spelled out in a document called USSID SP0018, which is available here.

The preface of the USSID says that the agency needs to balance the rights under the 4th Amendment to the U.S. Constitution against the government's need to collect intelligence.  In concept that makes sense, but in the case of both the EO and the USSID, the fox is squarely in charge of guarding the hen house.  EFF, a privacy watchdog, created the primer on it linked to above and suggests that there are a lot of loopholes in these documents which allow for over-collection, over-retention and not much oversight.  Section 309 was an attempt to begin to rein in some of those activities.

Since Congress did not take the time to debate this bill, there was not much consideration of what section 309 formally codifies.  For the first time, there is a law that says that the intelligence community can collect, share and retain information on U.S. citizens.

It is a start.  Section 309:

  • It defines a covered communication as any electronic or telephone communication collected without the consent of a party to the communication (the consent of just one party is enough to take it outside the definition).
  • It requires the head of each element of the intelligence community to create policies, approved by the Attorney General, within the next two years describing how they are going to comply with Section 309.  That means that nothing is likely to change for at least two years, and Congress won’t review these procedures.
  • It provides that intelligence collected (including mass collection) can only be kept for 5 years unless the fox guarding the hen house decides – in compliance with those procedures that are going to be written over the next two years – that it is (a) foreign intelligence, (b) reasonably believed to be evidence of a crime, (c) encrypted, (d) reasonably believed to involve only non-U.S. citizens, (e) necessary to retain to protect against an imminent threat to human life (in which case they have to tell Congress about it later), (f) necessary to retain for technical assurance or compliance reasons (in which case they have to write a dusty report every year to the Senate and House Intelligence Committees), or (g) something the head of an intelligence community element decides is necessary to protect national security (in which case they have to report, on some unstated frequency, to the intelligence committees again).

So while section 309 is a reasonable start, there is a lot of wiggle room.  For the first time, the law says that the intelligence community can keep encrypted communications forever, and that if an intercepted communication is reasonably believed to be evidence of a crime, they can share it with unspecified law enforcement agencies – without a warrant and with no guidelines as to what "reasonable" means.  It also creates a process to keep that intelligence forever if someone decides it is important.

There is clearly no room for abuse in section 309.  So, while I think this is a good start, we are definitely nowhere near done yet.





Monday Morning Quarterback – The Sony Breach

I am certain we will see a number of people comment on what Sony shoulda/coulda/oughta have done, and there is likely some truth in all of them.  Here is one, from a blog post by Matthew Schwartz at Data Breach Today, along with my thoughts on it.  He makes 7 points, which I mostly agree with:

1. Failure to spot the breach – IF the hackers really got away with 100 terabytes of data, as some people claim, it is hard to understand how Sony did not catch it.  The devil is in the details (like whether the hackers sent the data to the Amazon cloud or Dropbox or some other seemingly normal place), but companies should be spending some time and effort watching outbound traffic and looking for anomalies.

2. Poor breach response – I think Matthew is right on with this one, but I completely disagree with one conclusion.  I think most companies’ breach response plans are woefully inadequate, and I have said before that Sony’s definitely fell into this category.  Where I disagree is with the recommendation that they should not have pulled the release of “The Interview”.  First, it was not their decision.  When the 4 big movie chains decided to pull it, the release was gone.  Sure, Sony could have gone forward and released it on the few remaining screens, but the effect would have been no different.  If Sony said they were releasing it and the only places it was showing were second-tier theatres or small towns, people would still figure it out.  Where their lack of a plan showed through was their back and forth, on-again, off-again decision making.  That made Sony look even worse than they already did.  And if they had decided to force the issue and release it, and someone completely unrelated to the hackers had decided to bomb a theatre, causing injury or loss of life, the lawsuits would have been staggering.  Until that legal problem was solved, Sony had to kill the release.

3. Shooting the messenger – Hiring a big name law firm to threaten the media was just dumb – and likely a result of #2 above.  All it did was give Sony more negative attention and it did not stop anyone from publishing anything.

4. Contradicting themselves – First they said they were going to release “The Interview”, then they said they would not, then they said they always planned to release it.  Sony hired famous spin doctor Judy Smith (adviser to George H.W. Bush and Monica Lewinsky, among many others), but that seemed to happen late in the game (mid-December, maybe).  This likely goes back to #2 – not having a plan.  Judy should have been on board on day 1 – which means she should have been under contract already.  A company the size of Sony should have a media/PR expert under contract as part of its breach response preparation.  It doesn’t cost very much to have someone like that on retainer compared to what the breach cost them after the fact, both in dollars and reputation.

5. Ceding control of the conversation – After the hackers published the emails of several Sony executives and made the executives look bad, Sony looked like a deer in the headlights.  Going back to #2 and #4, I think they had an “Oh, S**t” moment.  Lack of planning caught them unprepared and, as a result, left the hackers in control of the conversation.  In a vacuum, the media goes with what they have.

6.  Failure to take responsibility – Amy Pascal, head of SPE, told Bloomberg that it was nobody’s fault at the studio.  Sure, it was not her PLAN to do this, but ultimately, it certainly is her responsibility.  Hopefully, the Board of Directors has already corrected that confusion on her part.

7. Hoarding old emails – Actually, I would say hoarding old data.  They had social security numbers (in plain text, in spreadsheets) for 50,000 employees.  They don’t have 50,000 employees.  Bloomberg reported in March 2014 that SPE had 6,500 employees worldwide and was about to make cuts to improve profitability.  How far back does that data go?  A data retention policy is important not only in the case of a breach, but also in case of a lawsuit.  Hackers cannot steal data that does not exist.  If you need to retain it for legal reasons, keep it in a virtual or physical vault.
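The retention point in #7 is easy to operationalize.  Below is a minimal sketch of a retention triage – the field names and the seven-year window are my own illustrative assumptions, not anything Sony actually used – that splits records into keep, legal-hold vault, and purge buckets:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=7 * 365)   # assumed 7-year retention window

def triage(records, now=None):
    """Split records into keep / vault / purge buckets.

    Each record is a dict with a 'created' datetime and a
    'legal_hold' flag (hypothetical field names).
    """
    now = now or datetime.utcnow()
    keep, vault, purge = [], [], []
    for rec in records:
        if rec["legal_hold"]:
            vault.append(rec)          # retained, but in a restricted vault
        elif now - rec["created"] > RETENTION:
            purge.append(rec)          # hackers cannot steal what is deleted
        else:
            keep.append(rec)
    return keep, vault, purge
```

The point is not the code; it is that once a policy like this runs on a schedule, a decade-old spreadsheet of social security numbers simply isn’t sitting around to be stolen.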

My Conclusion – It seems to me that the lack of a plan was probably their number one problem.  Their number two problem was not effectively managing (controlling) the data that they did have.  Given that they have been hacked several times before, the lack of a breach response plan is an epic fail and should be a resume-generating event.  The responsibility lies squarely with the Board of Directors and with Amy Pascal and Michael Lynton, the co-chairpersons of Sony Pictures.  I wonder if there will be some vacancies at SPE in the near future?


Background on the group that took down Sony and Microsoft on Christmas

Unlike the Sony breach in November, the group that took down Sony’s and Microsoft’s game network on Christmas (see article) seems to be very interested in getting attention.  Hopefully enough so that the FBI finds them, but that is another story.

What is more important is that the people who did this, according to Brian Krebs, are not on the high end of the hacking community at all and may have been doing this as a sales pitch for their new business.

Their new business is a DDoS (like they did to Microsoft and Sony, apparently) service for hire.  For $5.99 a month you can knock your favorite site offline for 100 seconds at a time (not sure if you can just keep doing this).  For $129 a month, you get a DDoS attack that lasts for more than 8 hours at a time.  They currently have over 132,000 followers on Twitter, so they are getting some attention.

According to Brian, they lifted (stole?) the entire source code for this service from TitaniumStresser, one of their competitors.  They also accidentally exposed a database with information on all 1,700 of their current users.

One of the Lizards, Vinnie Omari, yapped enough to get picked up by the London cops.  I suspect they have a few questions for him.

The more important point here is that *IF* it turns out that you can really “take out” anyone you want for $129 a month, are more people going to do that?

According to Vinnie, he got drunk celebrating his 22nd birthday the day before Christmas, woke up on Christmas still half drunk and decided to take down Sony’s and Microsoft’s game networks for laughs – and because it would annoy a lot of people (they have around 150-200 million users).

If anyone can take down a major online service for $130, what should we expect to happen in 2015?  I don’t know, but if I had a business that provided online services to customers, I would certainly be concerned and I might want to think about some preparation.  Would a competitor or disgruntled customer decide to take my site down – for laughs?
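Preparation doesn’t have to be exotic.  One basic layer is per-client rate limiting at the edge, so a flood from any one source gets dropped before it reaches the application.  Here is a minimal token-bucket sketch – the rates and the in-memory dictionary are illustrative assumptions; a real deployment would do this at a load balancer or firewall, keyed by more than raw IP:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the limit: drop or tarpit the request

# one bucket per client IP (illustrative)
buckets = {}
def check(ip, rate=10, capacity=20):
    if ip not in buckets:
        buckets[ip] = TokenBucket(rate, capacity)
    return buckets[ip].allow()
```

This won’t stop a large distributed attack by itself – that takes upstream capacity and help from your provider – but it cheaply blunts the single-source floods a $5.99-a-month booter customer is most likely to send.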



Why fingerprints should not be used for access control

A presentation at the Chaos Communication Congress (a large hacker convention in Hamburg, Germany that attracted about 10,000 visitors this year – sort of, kind of, like Defcon here) demonstrated the ability to reproduce the fingerprints of a target subject from just photographs.  Reports in PC Magazine say that the researcher, Jan Krissler, took photographs of Ursula von der Leyen, Germany’s Federal Minister of Defense, while she was speaking in public.  From those photographs, he was able to recreate her fingerprints.

Of course, having the fingerprints is not very useful unless you have a use for them – like a stolen iPhone or perhaps a door system that is controlled by a fingerprint reader.

It has been known for a long time that you could lift fingerprints off a smooth surface like a glass that the target used, but this is the first time that I am aware of that fingerprints have been recreated from a photograph.

Let’s assume that, unlike Apple Pay, you have to use your fingerprint plus a PIN.  In that case, having the fingerprint doesn’t totally compromise the system, but it reduces the security of the system to that of the PIN alone, which is not very good.

Unlike a password, which can be different for different purposes, your fingerprint would be the same everywhere, increasing the damage from a stolen fingerprint.  In theory, you could use all 10 fingers, but do you really think people are going to remember which finger they used for each web site?  Didn’t think so.
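One way to frame the problem: a fingerprint behaves like a username (effectively public and fixed for life), while a PIN behaves like a secret (private and replaceable).  A toy sketch of that split, with made-up data and field choices that are entirely my own, looks like this – once the fingerprint leaks, the only security left is the stretched PIN hash:

```python
import hashlib, hmac, os

# users keyed by a hash of the fingerprint template (acts like a username)
users = {}

def enroll(fingerprint: bytes, pin: str):
    fp_id = hashlib.sha256(fingerprint).hexdigest()
    salt = os.urandom(16)
    # the PIN is the only rotatable secret; stretch it before storing
    pin_hash = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    users[fp_id] = (salt, pin_hash)

def verify(fingerprint: bytes, pin: str) -> bool:
    fp_id = hashlib.sha256(fingerprint).hexdigest()
    if fp_id not in users:
        return False
    salt, stored = users[fp_id]
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    # constant-time comparison to avoid timing leaks
    return hmac.compare_digest(candidate, stored)
```

If the fingerprint is stolen – say, recreated from a photograph – you can still rotate the PIN.  The reverse is not true, which is exactly the problem with using the fingerprint as the secret itself.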

Therefore, the big problem is how do you go about requesting a new fingerprint after your old one is compromised?  Not quite sure about that one.

Apple, to their credit, wanted something that was easy to use.  Unfortunately, most of the time, easy to use means easy to compromise.  And sometimes, it also means, hard to recover from that compromise.


Is your encryption secure? – Sure, just like flying pigs (keep reading)

Der Spiegel wrote an article on efforts by the NSA and GCHQ (their British equivalent) to crack encryption of various sorts.

Take the article for what it is worth; it is based on documents that Snowden released, so it is a little bit old.

I apologize that this post is pretty long, but there is a lot of information in the article and I think it is useful to understand what the state of the art is.  If you think the NSA is, in any way, trying to accomplish different goals than, say, the Russian FSB, you are wrong.  They are likely ahead of the hacker community only because they have a $10 billion annual budget.

For most people, keeping the NSA out is not the goal, but if the NSA figures out a sneaky way to break something, it is likely that, at some point, a hacker will figure it out too.  If the NSA has to spend a million dollars to crack something, that is probably out of reach for most hackers – until next year, when it costs a quarter of that.  Unless, of course, the hacker works for an unfriendly government.

The Cliff Notes version goes like this.  If you want a longer version, read the article :).  When I refer to the NSA below, I really mean all the NSA like agencies in every country, friendly or not.

  • Sustained (meaning, I assume, ongoing) Skype data collection began in February 2011, according to an NSA training document.  In the fall of 2011, the code crackers declared their mission accomplished.
  • Since that same time (February 2011), Skype has been under order from the secret U.S. FISA court to not only supply information to the NSA, but also to make itself accessible as a source of data for the agency.  Whatever that exactly means is unclear, but it is likely not good for your privacy.
  • The NSA considers all use of encryption (except by them, I assume) a threat to their mission – and it likely is.  If they cannot snoop, what use are they?  If people start using high quality encryption, they will make the snoops’ jobs that much harder.  But not impossible.
  • If you look in the dictionary for the word “packrat”, it will say “see U.S. NSA”.  They hoard data like you would not believe.  In fact, the rules that govern how long the NSA can keep data exclude encrypted data.  That, they can keep forever.  So, if they ever figure out how to decrypt something, they can go back, look at the stuff they have in inventory and figure out how much of it they can now decrypt and analyze.
  • In the leaked Snowden documents was a presentation from 2012 talking about NSA successes and failures regarding crypto.  Apparently, they categorize crypto into 5 levels from trivial to catastrophic.
  • Monitoring a document’s path through the Internet is considered trivial.
  • Recording Facebook chats is considered minor.
  • Decrypting mail sent via the Russian mail service is considered moderate.
  • The mail service Zoho and TOR are considered major problems (level 4).
  • TrueCrypt also causes them major problems, as does OTR, the encrypted IM protocol.  The TrueCrypt project mysteriously shut down last year with no explanation.  Was it because the NSA was pressuring them?  No one knows – or if they do, they are not talking.
  • It seems clear that open source software, while it probably contains as many weaknesses and bugs as closed source software, is much harder for organizations like the NSA to compromise because people CAN look at the source code.  Most people don’t have the skills, but there are enough geeks out there that obvious back doors in the code will likely be outed.  With Microsoft or Apple, that check and balance does not exist.
  • Things become catastrophic for the NSA at level 5.  The IM system CSpace and the VoIP protocol ZRTP (the Z stands for Phil Zimmermann, for those of you who know of him) are or were level 5.  ZRTP is used by RedPhone, an open source, encrypted VoIP solution.
  • Apparently PGP, although it is 20 years old, also lands in the NSA’s category 5.
  • Cracking VPNs is also high on the NSA’s list.  The Der Spiegel article doesn’t go into a lot of detail here other than to say that the NSA has a lot of people working on it.  They were processing 1,000 VPN decrypt requests an hour in 2009 and expected to process 100,000 per hour by the end of 2011.  Their plan, according to Der Spiegel, was to be able to successfully decrypt 20% of those – i.e., 20,000 VPN connections per hour.  That was in 2011.  It is almost 2015.  You do the math.
  • The older VPN protocol PPTP is reported to be easy for them to crack while IPSEC seems to be harder.
  • SSL – or its web nickname, HTTPS – is apparently no problem for them at all.  According to an NSA document, they planned to crack 10 million SSL connections a day by 2012.
  • Britain’s GCHQ has a database called FLYING PIG that catalogs SSL and TLS activity and produces weekly trend reports.  The number of cataloged SSL connections in FLYING PIG for just one week, for just the top 40 sites, was in the billions.  This is a big database, apparently.
  • The NSA claims that it can sometimes decrypt SSH sessions (I assume this is due to users’ choice of bad cryptographic keys).  SSH is often used by admins to remotely access servers.
  • NSA participates in the standards processes to actively weaken cryptographic standards – even though this ultimately hurts U.S. businesses;  it also furthers the NSA’s mission.
  • The NSA steals cryptographic keys whenever possible.  Why do things the hard way when the simple way is an option?
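A practical takeaway from the list above: the easy wins for an attacker are old protocols (PPTP, plain SSL) and weak configurations, while modern, well-configured crypto lands in the “major problem” categories.  On the defender’s side, current TLS stacks let you refuse the legacy protocols outright.  A minimal sketch using Python’s standard `ssl` module (this assumes a reasonably recent Python; it is an illustration of the idea, not a complete hardening guide):

```python
import ssl

# Start from the library's hardened defaults: certificate verification
# is on and known-bad protocol versions are already disabled.
ctx = ssl.create_default_context()

# Explicitly refuse anything older than TLS 1.2, so a downgrade to the
# SSLv3 / early-TLS protocols the snoops find easy is off the table.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# ctx.wrap_socket(sock, server_hostname=...) would now only ever
# negotiate TLS 1.2 or newer for outbound connections.
```

None of this keeps out an agency that steals your keys, but it does raise the cost of passive collection, which is the whole point of the article.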

While most hackers are not as smart or as well funded as the NSA or the British GCHQ, sometimes luck is on their side.  Other, less friendly governments (think Iran, for example) might be willing to spend hundreds of millions of dollars to mess with the U.S., and since they don’t have to pay their scientists very much (the alternative to working for those governments might be being dead), their money likely goes further.

Would Iran or someone like them enjoy taking down the northeast power grid and darkening the U.S. from Boston to Virginia?  To quote a former vice presidential candidate – you betcha.  If they could damage the grid so that it took longer to get the lights back on (see the item from the other day on the attack on the German steel plant), would that be an extra benefit?  You betcha.

So while I am using the NSA as an example, you could just as easily replace that with Iran, or Russia or China.

Being prepared is probably a good plan.



Hackers break in to German steel mill and cause “serious damage”

BBC and others are reporting that a German steel mill was hacked.  The report came not from the news media or the mill, but rather from the German Federal Office for Information Security (BSI).

As a result, not a lot of details are known, but the reports are new, so perhaps more information will come out in time.

Apparently, the hackers started out the usual way – spear phishing attacks on the business network.  Once in, they used that access to get access to the factory floor network.

Using that access, they were apparently able to take over a blast furnace used for melting steel and stop the plant from shutting the furnace down in a normal fashion, causing “massive” damage.  Exactly what that means is unclear, but it was apparently significant enough for the BSI to report on it.

What are the takeaways from this little bit of information that we have?

1. There apparently was not enough separation between the factory floor network and the business network.

2. There apparently were not enough safeguards in the factory control system to retake control of the physical factory after hackers got into the network.

3. Possibly, there was not an adequate incident response plan to deal with a situation like this.

4. Cyber attacks can cause “massive” physical damage.
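Takeaway #1 – separation between the business and factory-floor networks – is normally enforced at the firewall, but the policy can also be sanity-checked in software.  Here is a toy sketch using Python’s standard `ipaddress` module; the subnets and the jump-host address are made-up examples, not anything from the actual mill:

```python
import ipaddress

# hypothetical subnets: office machines vs. industrial control systems
BUSINESS_NET = ipaddress.ip_network("10.10.0.0/16")
FACTORY_NET = ipaddress.ip_network("10.99.0.0/16")

# only these business hosts may cross into the factory network,
# e.g. a single hardened, monitored jump host
ALLOWED_BRIDGES = {ipaddress.ip_address("10.10.0.5")}

def connection_allowed(src: str, dst: str) -> bool:
    """Deny business-to-factory traffic unless the source is an approved bridge."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if s in BUSINESS_NET and d in FACTORY_NET:
        return s in ALLOWED_BRIDGES
    return True  # everything else falls through to other rules
```

Had a rule like this actually been enforced at the mill, a phished workstation on the business network could not have reached the furnace controls directly – the attackers would have had to compromise the one monitored chokepoint as well.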

2015 looks to be an interesting year.