Multiple sources are reporting a feature of iPhone apps that is a major privacy concern. This is not new, and it is also an issue on Android phones, but, for some reason, everyone seems to be highlighting the problem with iPhones. PERHAPS that is because it is being exploited in the wild on iPhones – I don’t know.
The short version goes like this –
IF you EVER allow an app to access your phone’s cameras, you have lost control of it. That app can access your camera – both front facing and rear facing – whenever it wants to. It does not have to ask you to access the camera.
You are trusting that app not to abuse that trust.
Actually, it kind of depends on whether YOU installed the app or someone else installed it – with or without your knowledge. For example, here are 5 spying apps that people intentionally install. Sometimes parents want to track what their kids are doing; sometimes a spouse wants to spy on their significant other. Either way, it is likely not you who installed the app.
The app could upload the photos to the net and/or it could process the images – say to examine your facial images as you look at the screen.
One part of the problem is that there is no indication that the camera, front or back, is on. As a side note, while there is a light on many PCs indicating the camera is running, that is a bit of software and the camera COULD be turned on without the light being on.
Apple (and Google) could change the camera rules and require the user to approve camera access every single time the camera wants to turn on – but that would be inconvenient.
One of my contacts at the FBI forwarded an alert about this today, so I suspect that this is being actively exploited.
The FBI gave a couple of suggestions –
1. Only install apps from the official app store, not anyplace else.
2. Don’t click on links in emails.
In reality, the only recommendation that the FBI made that will actually work is this next one:
3. Place a piece of tape over the front and rear camera.
Ponder this thought –
Your phone sits on the table in front of you; it is in your bedroom, potentially capturing whatever you do there; it is in your bathroom. You get the idea.
Just in case you were not paranoid enough before.
General Keith Alexander, former director of the National Security Agency, said that cyber espionage is the greatest transfer of wealth in history. In 2012, when he made that statement, the value of cyber industrial espionage was $338 billion per year. Five years later, I am sure that number is greater.
Of course, industrial espionage is not new. In the early 18th century, John Lombe, a British silk spinner, went to Italy to steal the technology of an Italian silk company. He managed to get a job at the company and, at night, by candlelight, sketched drawings of its machines. He returned to England with the stolen technology and built a better machine to compete with the Italians.
What is new is the ease with which this can be done. With everything connected, you can now steal secrets from halfway around the world. And with cyber security practices at many businesses being a bit lax (there are a few industries for which this is not the case, but they are the exception), it is pretty easy to do. Even defense, which you would think would be secure, is not. Lockheed lost the technology for the F-35, and now the Chinese make a knockoff and sell it at a fraction of the price.
Unlike credit card or personal information theft, which for the most part must be disclosed, stolen intellectual property is usually kept quiet. It is embarrassing and would likely make stockholders upset. What they don’t know won’t hurt them.
As the manufacturing process becomes more computerized, it is a huge leak opportunity. Traditional IT security solutions sometimes don’t work on the factory floor. Crooks know that and attack at that weak spot. In the absence of controls, detection and good processes, the crime will go undetected.
Fast forward a couple of centuries.
Six men in Houston were arrested for stealing technology for creating marine foam. China wanted to grow its marine business, and this foam is used in building boats because of its special buoyancy.
The Chinese, like John Lombe above, spent years weaseling their way into the Houston company that makes this foam. The crooks sent the information back to China, which then had the gall to try to sell it back to the company they stole it from, saying they could make it for less.
In the process of stealing the information, they kept coming back to the insiders in the U.S. for more information when their efforts at cloning the process were not working.
Now, except for one guy who is in China, they are all under arrest. BUT, the technology has already been stolen, so it is not clear how this company can get the genie back in the bottle. Not clear at all.
Supposedly, the stolen information was known to only about a half dozen employees in the company – it was the company’s crown jewels, and now the cat is out of the bag.
The company considered buying the product from the Chinese knockoff maker IF the Chinese would give them an exclusive. SO, rather than go public and be outed, they proposed making a deal with the devil.
When the Chinese started offering this U.S. company’s technology to other companies in the U.S., the company called in the FBI. That started an investigation and, eventually, the arrest of these 6 engineers. FOUR years later.
Unfortunately, this is one of, likely, thousands of incidents. Stopping one will NOT stop the hackers. They just consider that an acceptable loss or collateral damage to the bigger game.
And American companies continue to ignore the warning signs (because, in many cases, there are no warning signs because the companies who got hacked keep the attack quiet).
Think about what happens to your company if you lose control of your intellectual property, whatever that is.
There are some folks who say that open source software is much better than proprietary (commercial) software because you can look at the source code. Ignoring whether you or I would know what we are looking at, it isn’t that simple, as this story will show.
On the other hand, proprietary (commercial) software isn’t a silver bullet either. If it were, Microsoft would not have patched 48 bugs this month.
Here is the story.
OpenVPN is an extremely popular virtual private network software that runs on all major operating systems and is used by millions.
Given that it is used by so many people, if the theory that open source software is safe because people can look at it were true, then it should not have any bugs.
Recently there have been two independent code audits. One, by respected Johns Hopkins cryptographer Matthew Green, found several bugs but nothing super major. At the same time, another code audit was being done in Europe by Quarkslab. They found two more bugs that Green didn’t find.
Okay, so now we have lots of people using the software and two separate, independent code reviews. Surely there are no more bugs.
Guido Vranken decided to run his own test and attempted to hack the software using a technique called fuzzing. How fuzzing works is not important except to hackers and security researchers, but, suffice it to say, Guido found yet more bugs.
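Since fuzzing came up, here is a toy illustration of the idea for the curious. Everything here is made up for the sketch – the parser, its deliberately planted bug, and the harness – and real fuzzers like the ones used against OpenVPN are vastly more sophisticated, but the core loop is the same: corrupt a valid input and watch for crashes.

```python
import random

def parse_record(data: bytes):
    """Toy parser standing in for the software under test. It expects a
    1-byte length prefix followed by that many payload bytes -- and it
    contains a deliberately planted bug: it trusts the length field."""
    length = data[0]
    payload = data[1:1 + length]
    data[length]  # planted bug: IndexError if the length field lies
    if len(payload) != length:
        raise ValueError("truncated record")
    return payload

def fuzz(seed: bytes, rounds: int = 1000):
    """Mutation fuzzing in miniature: randomly corrupt one byte of a
    known-good input and record any exception the parser did not expect."""
    random.seed(0)  # fixed seed so the run is reproducible
    crashes = []
    for _ in range(rounds):
        mutated = bytearray(seed)
        pos = random.randrange(len(mutated))
        mutated[pos] = random.randrange(256)
        try:
            parse_record(bytes(mutated))
        except ValueError:
            pass  # an error the parser anticipates -- not a finding
        except Exception as exc:  # anything else is a crash worth reporting
            crashes.append((bytes(mutated), repr(exc)))
    return crashes

crashes = fuzz(b"\x05hello")
print(f"{len(crashes)} crashing inputs found")
```

Even this crude loop trips the planted bug within a thousand tries, which is why fuzzing keeps finding problems that careful human audits miss.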
So what does this mean?
Is OpenVPN bad software?
No, in fact, it is pretty good software.
What it means is that it is very difficult to create bug free software even if you are very committed to doing so.
So when people get into a rivalry about open source vs. closed source software, here is what you say.
Neither is good. Neither is bad. But, if you think that because open source software is open it will be bug free, I am telling you that you are fooling yourself. OpenVPN is a perfect example of that.
Bug free software is hard. Maybe impossible. Every piece of software has them. It is just a fact of life.
That doesn’t mean that people shouldn’t like open source software. Or closed source software. There are lots of reasons to like both. And not like both.
As I have been writing about lately, the browser makers – Google (Chrome) and Mozilla (Firefox), and to a much lesser extent Microsoft – are pushing the envelope to get web site operators to switch to always-on SSL (AKA HTTPS). Well, that is a good start, but certainly not the end game.
Why do they care? Because it is much harder to eavesdrop on HTTPS traffic than it is to eavesdrop on HTTP traffic. Remember, eavesdropping is a bit of a loose term. Not only can someone listen in on what you are sending, but they can also fake the RESPONSE back from a web site. In that case, you THINK that what you are getting back in your browser is coming from a web site you trust, but in reality, you are seeing what the hacker wants you to see. Sometimes that means changing something here or there, but other times it could mean a wholesale replacement of the web page.
While it is not impossible to do this under HTTPS (also called SSL or TLS – while there are subtle differences between these, for the purpose of this conversation they are the same), it is way more difficult for a hacker to do.
But there are a lot of subtleties when it comes to how a web site implements HTTPS. Most of the time web site operators choose the easiest options; on rare occasion, they choose the best ones. I will briefly talk about some of the options, but for the most part, it is geek-speak. There is one option that will be the focus of the rest of this post that is important for an end user to understand.
First the geeky part –
There are a number of things that the web site operator should do to enhance security; here are just a couple. These are out of the user’s control, but we can help the web site operator get closer to the best option instead of the easiest option.
The web site should enable HSTS or HTTP strict transport security. This ensures that even if YOU don’t enter the S in HTTPS, the browser will do it for you.
HTTP Public Key Pinning – when this is enabled it ensures that an attacker cannot use an SSL certificate obtained illegally from another certificate authority to pretend that they are the site you intend to visit.
Secure cookies – setting the Secure flag on a cookie ensures it is only ever sent over encrypted connections, so it cannot be sniffed off the wire.
Now here is the part that the user can easily see.
There are two kinds of SSL certificates; one is called domain validation (DV) and the other is called extended validation (EV).
While we talk about HTTPS encrypting the traffic so that no one can eavesdrop on it, an SSL certificate has another function: to assure you, to a much higher level of confidence, that the owner of the web site is who you think it is.
DV certificates ONLY encrypt the traffic to prevent eavesdropping. Extended validation certificates provide a level of assurance that you are talking to who you think you are talking to.
First, an example.
Here is a screen shot of Vectra Bank’s home page:
Notice in the address bar, on the left side, you have the padlock and the word secure.
Here is a screen shot of J.P. Morgan Chase Bank’s home page:
Notice to the right of the padlock and the left of the address it says JP Morgan Chase and Co. [US] .
Vectra is using a domain validation certificate and Chase is using an extended validation certificate.
What this means is that you have a higher level of assurance that, when you visit the Chase web site, it is really owned by JP Morgan Chase and Co. That is the value of EV certificates.
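For the technically curious, the difference shows up in the certificate itself: EV certificates carry vetted organization details in their subject, which is what the browser displays next to the padlock. Here is a heuristic sketch of that check using flattened, made-up subject data (Python’s ssl module actually returns the subject from getpeercert() as nested tuples, so a real check would flatten it first):

```python
def looks_like_ev(subject: dict) -> bool:
    """Heuristic sketch: EV certificates carry verified organization
    details (including a businessCategory field) in the subject;
    bare DV certificates identify only the domain name."""
    return "organizationName" in subject and "businessCategory" in subject

# Made-up subjects for illustration -- not real certificate data
dv_subject = {"commonName": "www.examplebank.com"}
ev_subject = {
    "commonName": "www.examplebank.com",
    "organizationName": "Example Bank N.A.",
    "businessCategory": "Private Organization",
    "countryName": "US",
}
print(looks_like_ev(dv_subject))  # False
print(looks_like_ev(ev_subject))  # True
```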
OK, so that is a theoretical conversation, let’s bring it down to the practical.
Guess how many web sites have an SSL certificate that includes the name PAYPAL? When you visit Paypal and enter your credit card number, you would like to know that it really is the real Paypal.
Maybe 10? How about 20? Some of you probably said 1.
The real answer is 15,270. And that is just from one certificate authority, Let’s Encrypt. Of those 15,000 plus domains, over 14,000 were for phishing web sites.
If you are a user and you see the SSL padlock (which is all you get with a DV certificate), you have no way of knowing whether you are visiting a legitimate web site.
Unfortunately, many of the biggies haven’t figured out this is a problem. Facebook. LinkedIn. Amazon. They all use DV certificates. That could be OK as long as you know what the domain address is and you type it in manually, but many of these sites have related domain names that contain the company’s name but are different from the company’s main web site.
If we use that Paypal example, probably some of those 15,000 plus domains are actually owned by Paypal, but which ones?
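One thing a cautious user (or a mail filter) can do mechanically is flag hostnames that merely contain a brand name without belonging to the brand. A naive sketch – the domain list and function name here are mine, and a production check would use the Public Suffix List rather than this two-label shortcut:

```python
def suspicious_for_brand(hostname: str, brand: str, official: set) -> bool:
    """Flag hostnames that contain a brand name but whose registrable
    domain is not on the brand's own list -- the pattern behind those
    15,000+ 'paypal' certificates."""
    host = hostname.lower().rstrip(".")
    if brand not in host:
        return False
    # Reduce to the registrable domain (naive two-label heuristic;
    # a real check would consult the Public Suffix List)
    registrable = ".".join(host.split(".")[-2:])
    return registrable not in official

official_paypal = {"paypal.com", "paypal.me"}  # illustrative, not exhaustive
print(suspicious_for_brand("www.paypal.com", "paypal", official_paypal))  # False
print(suspicious_for_brand("paypal.com.secure-login.example", "paypal",
                           official_paypal))  # True
```

The second hostname is exactly the kind phishers register: the brand appears on the left, while the actual domain is something else entirely.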
The moral of the story is, as a consumer, look for the extended validation and ask questions if you don’t find it.
When companies like Microsoft or Oracle develop software, they have massive teams whose only job is to try to find bugs in the software. They have also made significant investments in automated tools to help with software quality assurance. Still, Microsoft usually patches 10-20 new bugs month after month, and Oracle often patches 100 bugs a quarter.
Given these companies’ results in spite of major investments in technology and people, what does that mean for the average software development shop that doesn’t have the tools, personnel or budget that these major software shops have?
Security Compass, a Canadian company that assists Fortune 500 companies with software security issues, conducted a study of financial institutions’ application security practices.
Here are some of the findings of the report:
While most financial institutions have created security development lifecycle practices, very few of them can actually validate how well they are doing at following them.
Three out of four rate application security as a critical or high priority
89% use the BSIMM (Building Security In Maturity Model) while almost all of the others use some form of framework or standard.
When it comes to metrics to measure how effective these frameworks are, most do not have a robust KPI measurement process. Many measure raw vulnerability counts (77%), which is a very basic measurement.
Less than half measure how long it takes to fix bugs.
Only a little more than a third track whether developers actually use the security tools called for in the policies.
The study showed that 58% of the banks use some third party software, but less than half of the financial institutions require their vendors to have a security development lifecycle process or even an application security policy.
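To make the metrics point concrete: moving from raw vulnerability counts to something like mean time to remediate takes only a few lines once find and fix dates are recorded. The records below are fabricated for illustration:

```python
from datetime import date

# Illustrative vulnerability records: (found, fixed) dates -- fabricated data
findings = [
    (date(2017, 1, 3), date(2017, 1, 20)),
    (date(2017, 2, 1), date(2017, 4, 15)),
    (date(2017, 3, 10), date(2017, 3, 12)),
]

# Raw count -- the basic measurement most shops stop at
print("total findings:", len(findings))

# Mean time to remediate -- the metric fewer than half actually track
days_to_fix = [(fixed - found).days for found, fixed in findings]
mttr = sum(days_to_fix) / len(days_to_fix)
print(f"mean time to fix: {mttr:.1f} days")
```

The raw count tells you almost nothing; the remediation time tells you whether your process is actually working.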
These results are from financial institutions, where security and process are usually front and center. If this is the reality for organizations with a high security awareness profile, how does the average organization rank on security process and practices? Likely, not very well.
Smaller development shops – say, with fewer than 50 developers – likely don’t have a security development lifecycle (SDLC) process at all. They likely don’t have automated tools to detect bad coding practices, and they likely have a small (to no) quality assurance department. If they do have a software QA department, it is likely looking for functionality problems rather than security issues and is not trained to find security problems.
If the software is developed under a development contract, that contract likely does not specify the requirements for an SDLC process or for any of the other security processes that large software development shops have.
In addition, they likely do not conduct third party, independent, penetration tests to attempt to find those security issues.
As a result, it is likely that those custom applications are a hacker’s dream gateway into your organization and you likely will never know.
Companies that develop their own, or contract for the development of, custom software – including web sites – need to up their game if they want to keep hackers out. If they don’t, the hackers will continue to quietly thank them.
An article on BBC.com really is asking that question.
Recently, NASA engineer Sidd Bikkannavar, a U.S. citizen working at NASA’s Jet Propulsion Laboratory, was stopped at Houston customs. He was returning from Chile, where he had been racing solar-powered cars. Customs demanded that he hand over his phone and the phone’s PIN. He protested that it was a NASA phone and contained sensitive information, but they told him he needed to give them the phone. After he gave them the PIN, they took the phone away and brought it back 30 minutes later. Likely, they made an image copy of the phone.
Sidd had even been cleared by Homeland Security’s GLOBAL ENTRY program, where they do a background check on you in advance to speed you through customs and immigration.
Homeland Security Secretary John Kelly has talked about requiring visa applicants to turn over their social media account passwords. No passwords, no visa.
Some people are suggesting that downloading the contents of your phone and/or laptop is going to be standard issue to cross the border, both in the United States and other countries.
The BBC author decided to ask some questions, thinking this might be a bit extreme.
The UK Foreign Office said that they didn’t have any advice on the subject, but if the author was “trapped in immigration at JFK” with a Customs and Border Protection agent demanding his password, he could call the British embassy and arrange a lawyer.
The American embassy said that they would need to contact Washington and call him back. He is still waiting for that call.
If you have a concern, then leaving anything sensitive at home might be wise. Alternatively, you could encrypt your data and upload it to the cloud, download it once you are across the border, and reverse the process before you go home. Make sure you scrub the laptop afterward with something like CCLEANER.
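That encrypt-upload-download routine has a quiet failure mode: how do you know everything came back intact? A small integrity-manifest sketch – the actual encryption would be done with a proper tool; this only verifies the round trip, and the demo directory and file are stand-ins:

```python
import hashlib
import os
import tempfile

def manifest(root: str) -> dict:
    """Map each file under `root` to its SHA-256 digest, so the data
    can be verified after the upload/download round trip."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digests[os.path.relpath(path, root)] = hashlib.sha256(
                    fh.read()).hexdigest()
    return digests

# Demo on a throwaway directory standing in for your real data folder
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "plans.txt"), "wb") as fh:
    fh.write(b"sensitive business plans")

before = manifest(demo)
# ... encrypt, upload, wipe, cross the border, download, decrypt ...
after = manifest(demo)
print("all files intact:", before == after)  # prints: all files intact: True
```

Compare the manifests before you wipe and after you restore; any mismatch means a file was corrupted (or tampered with) along the way.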
I know of at least one company that gives employees burner laptops when they travel. The only data on the device is data that is (a) necessary for the trip and (b) approved by security. When the trip is over, the device is sanitized and re-inventoried for the next trip.
Obviously, everyone’s level of paranoia is different, but it seems like if you can reduce the threat level, that is always better. This is one case where less is more.
Given the amount of storage on all mobile devices these days (a phone with a hundred gigabytes of storage or more is not unusual), it is likely that there will be sensitive data on your device if you don’t do something about it.
And once you are across the border, then you only have to worry about the espionage agents of the host nation you are in.
Countries like France have a long and storied history of going into foreign business persons’ hotel rooms and cloning the disk on their laptop. They are hardly alone.
The Department of Defense has a detailed briefing for service personnel and contractors crossing the border.
In your case, the data in question could be trade secrets, business plans or just naked selfies of you and your friends.
In the case of Sidd, he contacted NASA security as soon as he could, powered down all of his devices and let them deal with it. They gave him new devices (which is really the only safe bet after you have lost physical control of the devices) and they will do whatever with the old devices.
For business people traveling internationally, it is probably better to plan for the worst and hope for the best than the alternative.
I may be a pessimist, but I don’t think it is going to get better any time soon.