The Spy Among Us

Multiple sources are reporting a feature of iPhone apps that is a major privacy concern.  This is not new, and it is also an issue on Android phones, but for some reason everyone seems to be highlighting the problem on iPhones.  PERHAPS that is because it is being exploited in the wild on iPhones – I don’t know.

The short version goes like this –

IF you EVER allow an app to access your phone’s cameras, you have lost control of them.  That app can access the camera – both front facing and rear facing – whenever it wants to.  It does not have to ask you again to access the camera.

You are trusting that app not to abuse that trust.

Actually, it kind of depends on whether YOU installed the app or someone else installed it – with or without your knowledge.  There are, for example, well-known spying apps that people intentionally install on someone else’s phone.  Sometimes parents want to track what their kids are doing.  Sometimes a spouse wants to spy on their significant other.  Either way, it is likely not you who installed the app.

The app could upload the photos to the net and/or it could process the images – say, to analyze your face as you look at the screen.

One part of the problem is that there is no indication that the camera, front or back, is on.  As a side note, while many PCs have a light indicating the camera is running, that light is controlled by software, so the camera COULD be turned on without the light coming on.

Apple (and Google) could change the camera rules and require the user to approve camera access every single time the camera wants to turn on – but that would be inconvenient.

One of my contacts at the FBI forwarded an alert about this today, so I suspect that this is being actively exploited.

The FBI gave a couple of suggestions –

  1. Only install apps from the official app store, not anyplace else.
  2. Don’t click on links in emails.

In reality, the only recommendation that the FBI made that will actually work is this next one:

3. Place a piece of tape over the front and rear camera.

Ponder this thought –

The camera sits on your table in front of you;  it is in your bedroom, potentially capturing whatever you do there; it is in your bathroom. You get the idea.

Just in case you were not paranoid enough before.

Information for this post came from The Hacker News and The Register.

The Dangers of Removable Storage

Does your organization allow thumb drives?  Do you use them at home?  What do you store on them?  What follows could be called nightmare on Main Street.

[Photo: Queen Elizabeth, via Airport Technology]

A private citizen found a thumb drive (AKA flash drive or memory stick) and did what any smart, cyber-security-aware person would do.  He took it to the library and plugged it into their computer.  After all, if it was infected, you wouldn’t want to infect your own computer, would you?

What he found was an interesting surprise.

About 70 folders with about 175 files, including:

  • The exact route the Queen uses to get to Heathrow and the security measures to protect her.
  • Specific types of IDs needed to get into restricted areas at the airport, including those used by undercover police.
  • The location of closed circuit TV cameras at the airport.
  • Routes and safeguards for cabinet ministers.
  • Details of the security system used to protect the perimeter of the airport.
  • And other security-related information.

Of course, the flash drive was unencrypted, not password protected, and not secured in any way.

What happened next is what you would hope the average, security-conscious citizen would do – he shared it with a national newspaper.  After all, if he took it to the police, it might get swept under the rug.

The next step is also pretty obvious – the excrement hit the rotating air movement device (AKA the Sh*t hit the fan).

Police are worried that this data was copied and shared on the dark web.  Certainly possible.  If it has been, the cat is out of the bag, and there is no way to rebag the cat.

The drive was found by an unemployed man in Queens Park, West London.  Not exactly the source material for the next Mission Impossible movie.

Insiders said that the DISCLOSURE of finding the flash drive started a “very, very urgent” investigation.  I am not sure whether that is because Heathrow security was publicly embarrassed or something else.

A spokesman for the airport said “Heathrow’s top priority is the safety and security of our passengers and colleagues”.  Also a priority – stop placing extremely sensitive documents on unencrypted memory sticks and losing them outside.  Oh, wait, they didn’t say that last part.

They also said that they “have reviewed all of our security plans and are confident that Heathrow remains secure …”.  I am not sure what else they might say.

Hopefully, behind the scenes they are making changes – like changes to the Queen’s route to the airport.  And training people.  And locking down computers.  Among other things.

For your company, could an employee plug a flash drive into a computer and download sensitive information to it?  And then lose it on a street corner?  To be found by a homeless person?  PROBABLY!

Information for this post came from The Mirror and The Standard.


Who Owns Your Financial Data Anyway?

Consumers have been wrestling for years over access to their personal data.  There are many non-bank financial products, such as Mint and WalletGyde, that help consumers manage their money, but access has always been a fight between the banks and these companies (of which there are at least hundreds, maybe more).  As a group, these companies are called FinTechs.

In Europe, the government said that consumers own their data and even forced a standard on banks for sharing data with the FinTechs that consumers choose to share it with.

In the U.S. there is no standard and, up until now, no requirement that banks allow you to grant access to your own data.  This has led to FinTech companies having to ask you to trust them with your banking userid and password, and to those same companies having to scrape your data right off the screen.  About a year ago I got a message from Chase warning me that if I shared my password with a FinTech company (or anyone else), the bank was disavowing any responsibility for what happened.
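
The difference between the old screen-scraping model and a token-based model can be sketched in a few lines of Python.  This is a toy illustration – the class names, scopes and methods are all made up, not any real bank’s or FinTech’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Bank:
    """Toy bank holding one user's password and transaction history."""
    password: str
    transactions: list = field(default_factory=list)

    # Screen-scraping model: the FinTech logs in with the user's REAL
    # password and gets everything the user can do -- including payments.
    def login(self, password):
        if password != self.password:
            raise PermissionError("bad credentials")
        return {"transactions": self.transactions, "can_move_money": True}

    # Token model: the bank issues a token limited to specific scopes,
    # which can be revoked without changing the password.
    def issue_token(self, scopes):
        return {"scopes": set(scopes), "revoked": False}

    def read_with_token(self, token):
        if token["revoked"] or "read_transactions" not in token["scopes"]:
            raise PermissionError("token not valid for this access")
        return {"transactions": self.transactions}

bank = Bank(password="hunter2", transactions=["-$4.50 coffee"])

# Old model: sharing the password grants full access, payments included.
full = bank.login("hunter2")

# Token model: read-only scope; revoking kills access cleanly.
token = bank.issue_token(["read_transactions"])
data = bank.read_with_token(token)
token["revoked"] = True   # access ends here, password untouched
```

The point of the sketch: with a password, the only way to cut the FinTech off is to change the password everywhere; with a scoped token, revocation is one flag.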

This week that all changed.

The Consumer Financial Protection Bureau issued a long-awaited ruling on the subject.  Their answer: consumers have a right to access their own financial data and to share it with third parties of their choosing.

This is a win for consumers, who will now have a more timely and secure method of sharing their data with third parties, and it is a win for the FinTechs who have been fighting for this.  For the banks, it is not good news, but it was probably expected.  Banks are fighting for their survival.  Until, say, ten years ago, they were the kings of the financial hill.  Now they are just one player among many, and when it comes to data aggregation, the banks aren’t really much of a player at all.  This is one more nail in that coffin.

Up until now, data sharing between banks and FinTechs has been via one-off agreements between two parties, such as:

  • Chase and Intuit have created a data interchange agreement
  • Wells and Xero have an agreement
  • Capital One and Xero have an agreement
  • And likely others that we have not heard about

The principles that the CFPB created include –

  1. Access – users can obtain information from a service provider and grant access to a third party
  2. Data Scope and Usability – The available data should include transaction and fee information and any other aspect of a consumer’s usage.
  3. Control and informed consent – Consumers can control their data sharing and revoke it whenever they want to
  4. Authorizing payments – Accessing data is different from authorizing payments to be made, but consumers may grant third parties both of these permissions.
  5. Security – The data has to be secure.  This seems to give the CFPB a camel’s nose under the tent to make sure that the FinTechs protect consumers’ data.
  6. Access Transparency –  Consumers need to be able to easily understand what permissions they have granted to whom with relevant parameters (like how often the third party can access their data).
  7. Accuracy –  Consumers can expect the shared data to be accurate and have reasonable means to dispute and resolve inaccuracies.
  8. Ability to dispute and resolve unauthorized access – Consumers have reasonable and practical ways to dispute and resolve issues related to unauthorized access and payments.
  9. Efficient and accurate accountability mechanisms –  Commercial participants (i.e. the FinTechs) are accountable for the risks, harms and costs they introduce to consumers.
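
Principles 1, 3 and 6 above – access, control/revocation and transparency – boil down to something like a consent registry that the consumer controls.  Here is a minimal, purely hypothetical Python sketch (no real bank or FinTech works exactly this way):

```python
class ConsentRegistry:
    """Toy registry of which third parties may access which data, with
    what parameters -- and which the consumer can inspect and revoke."""

    def __init__(self):
        self._grants = {}   # third-party name -> grant details

    def grant(self, third_party, data_scope, max_accesses_per_day):
        # Principle 1 (Access): the consumer grants access to a third party.
        self._grants[third_party] = {
            "scope": set(data_scope),
            "max_accesses_per_day": max_accesses_per_day,
        }

    def list_grants(self):
        # Principle 6 (Transparency): see who has what, with parameters.
        return dict(self._grants)

    def revoke(self, third_party):
        # Principle 3 (Control): revocable whenever the consumer wants.
        self._grants.pop(third_party, None)

    def is_allowed(self, third_party, data_item):
        g = self._grants.get(third_party)
        return g is not None and data_item in g["scope"]

registry = ConsentRegistry()
registry.grant("ExampleBudgetApp", {"transactions", "fees"},
               max_accesses_per_day=4)
```

“ExampleBudgetApp” and the scope names are placeholders; the design point is that the grant, the visibility into it, and the revocation all live with the consumer, not buried in a bilateral bank/FinTech agreement.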

So this swings both ways and the CFPB has already whacked FinTechs from time to time (Search for CFPB Dwolla consent decree, for example).  All in all, though, I would say that this is great news for consumers, good news for FinTechs and not so good news for banks.

Now it is up to the banks and the FinTechs to work out the details.  It is likely to get a bit messy before it gets cleaned up.  MAYBE, the banks will agree to a data interchange standard, which would be great, but I haven’t seen anything public on that subject.

Information for this post came from American Banker and the CFPB.

Another International Law Firm Hacked

You might think that after the Panama Papers breach in which the law firm of Mossack Fonseca was hacked and 11 million documents exposed – including ones that forced the prime minister of Iceland to resign and the prime minister of Pakistan to be removed from office – that law firms around the world would have stepped up their cyber security efforts.

I am sure that some have improved their security while others have made minor efforts to improve it, but it is not working.  Until clients of these same law firms start conducting frequent cyber security audits of those firms, it is unlikely that significant changes will be made in the industry.

Remember that security and convenience oppose each other and security costs money.  If their clients are not demanding that they spend money on security, they likely will spend that money elsewhere.

So what is this week’s news?

The Bermuda-based law firm Appleby, with 10 offices around the world and around 470 staffers, admitted this week that it had been hacked.  The hack, they said, occurred last year.  It was not disclosed at the time, and legally they were probably not required to do so.  The only reason they are talking about it now is that the international investigative journalism group ICIJ was given at least some of the documents and has been poring over them and asking embarrassing questions.

Apparently, clients of the firm include the rich and the famous, especially in Britain, possibly including some Royals.  While the firm says that they try to do things lawfully, “no one is perfect”.  Whether or not the two prime ministers who were exposed in the Panama Papers breach were acting legally, the court of public opinion didn’t think what they were doing was appropriate.

When members of the rich and the famous get exposed doing things that may be legal or may be shady or may be perceived as illegal by the masses, that is not good for their public image.

The apparent threat that these documents are now going to be published probably scared the poop out of some of the firm’s clients, which forced them to admit the breach.

This brings us to an important point.  In the United States (and the firm has no offices in the U.S.; their offices are mostly in tax havens), companies that are hacked are required to disclose that fact ONLY UNDER SOME LIMITED CIRCUMSTANCES.  If personally identifiable health care information is breached, if payment card information is breached, or if non-public personal information as defined in the various states’ laws is breached, for example – then, assuming the data wasn’t encrypted, etc., the companies have to fess up to the breach.

If, however, the breach did not expose that kind of information – say it exposed your company’s not-yet-filed patent applications, or information regarding a merger, or information regarding an off-shore business transaction – then maybe that information does not have to be disclosed, either publicly or even to the client.

For U.S.-based law firms, the American Bar Association has created model ethics clauses for states to adopt – some have been adopted, others not – that say that attorneys should try to protect client information, but the wording is a bit loose.

As a client of a law firm, your CONTRACT with that firm can certainly be as tight as the two parties agree for it to be (assuming the terms are legal, of course).  You, as a client, can say: if you want me as a customer, then if you suffer a breach and my information is exposed, you must notify me within, say, 72 hours.  That would put the onus on the law firm.  For small clients that is a difficult term to force; for larger clients, it is less difficult.  That doesn’t mean that lawyers, as good negotiators, won’t try to make the terms more favorable to them, and you can’t blame them for wanting to do that.  Still, you have a say in the matter, and you can always choose to find another firm.  There are lots of law firms in the country.

While there are probably thousands of clients of the Appleby law firm that are currently holding their breath, this, along with the multiple other law firms that have been hacked, should act as a wake-up call to clients to push their law firms to improve security.

I would think that most reputable law firms REALLY don’t want to have their clients’ information compromised, independent of ethics rules or client contracts, but security is both inconvenient and expensive.

However, so is being hacked, as is having your name dragged through the mud and losing clients.

Since many of the largest breaches in the U.S. are the result of vendors being hacked (think Target or Office of Personnel Management, for example), we work with clients to create a vendor cyber risk management program to tighten up the parameters of their vendor contracts and cyber security programs.

Stay tuned; there is likely to be more fallout from this breach.

Information for this post came from The Register.

The NSA-Kaspersky Story Gets Even Stranger

In case you didn’t know whom or what to believe in the battle between Gene Kaspersky and the U.S. Government, it just got a little weirder.

You probably remember that the DoD told its people to remove Kaspersky’s software from its machines.  They didn’t say why.  But, no matter how this story plays out, that decision was the right decision.

Later it came out that an NSA employee was developing NSA malware to replace malware that Snowden exposed; he removed that classified software from NSA facilities and took it home.  It was then thought that the software was leaked to the Ruskies because that employee had Kaspersky software on his computer and Kaspersky was supposedly working for the FSB.

Fast forward the story and Gene Kaspersky is fighting for his company’s very existence.  Never mind the fact that if the employee had followed both policy and the law, we would not be having this conversation.

Kaspersky has now revealed some more information about the situation.  Whether you believe him or not is up to you.  Our gov is being totally radio-silent on the situation, which likely means that it is at least, mostly accurate.  Probably.  No guarantee.

  1.  The NSA employee was running the Kaspersky software on his home computer.
  2. The employee had intentionally turned on the feature called Kaspersky Security Network, which, by design, forwards suspicious malicious software to Kaspersky’s labs for analysis.
  3. The employee disabled the Kaspersky software.  BECAUSE:
  4. The employee downloaded pirated software
  5. After the employee’s computer was infected, the employee turned the anti-virus software back on.
  6. When turned back on, the Kaspersky software scanned his computer and detected the new NSA malware as a variant of the Equation Group software that Snowden disclosed.  Since it was unknown and he had intentionally turned on the security network feature of Kaspersky’s software, it sent the malware (the software that he was developing) to Kaspersky’s labs for analysis.
  7. This LIKELY ties back to a 2015 breach of Kaspersky’s network (probably by the FSB) which has been well covered in the media.
  8. ALTERNATIVELY, the pirated software that he downloaded allegedly had a back door in it and if that is true, the Russian FSB could have stolen anything on his computer.

There are probably a bunch of potential variants here, but it seems reasonable that all of this could have easily happened if the alleged scenario happened.



Information for this post came from Ars Technica.

CrySiS Ransomware Targets Open RDP Servers

The FBI released an alert this week about malware called CrySiS that attacks public facing servers that have RDP enabled.

RDP, or Remote Desktop Protocol, is an old Microsoft protocol that was designed to allow IT people to remotely control a Windows machine (server or desktop) to perform maintenance.  The protocol is old – it dates back to the Windows NT era of the 1990s – and has been upgraded many times.  There are also many non-Microsoft versions of the client, such as Unix and Mac versions.

However, RDP was designed in pre-Internet days and while Microsoft continues to button up the security of RDP, hackers continue to attack it.

The CrySiS ransomware finds Internet-facing servers that have RDP enabled and attacks them.  Businesses that have been infected with CrySiS include small businesses, churches, medical facilities, law firms and local governments.

Assuming that the attackers are successful, CrySiS operates like many ransomware attacks – it encrypts your files and demands money, in cryptocurrency, to decrypt them.

They breach RDP using dictionary attacks, brute force, or credentials stolen in other ways.

Our recommendation is that businesses NEVER expose the RDP protocol to the public Internet.  If you need to remotely manage a server where the only access is via the Internet, we recommend that you connect to that remote network via a VPN.  This will put you on a private network that is not visible to the Internet.  From this private network it is safe to RDP into the server to remotely manage it.
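
If you want to verify whether RDP’s port (3389) is reachable on one of your machines, a few lines of Python will do it.  Run the check from OUTSIDE your network against your own public address – the IP below is a placeholder, and this is a sketch, not a substitute for a proper scan:

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    the timeout, False on refusal, timeout, or any other socket error."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (replace with your own public IP or hostname):
# if is_port_open("203.0.113.10", 3389):
#     print("RDP is exposed to the Internet -- put it behind a VPN")
```

If this returns True for 3389 from the outside, the server is exactly the kind of target CrySiS is scanning for.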

Information for this post came from a private FBI alert.  This alert can be provided to clients on request.