SEC Issues Risk Alert To Advisors and Brokers

Last week the SEC released what it calls a Risk Alert to Investment Advisers and Broker-Dealers, saying that it is concerned about the protection of client information in light of recent attacks and attempted attacks against the financial community.

In the alert, they laid out their particular concerns in 6 specific areas and said that they are launching a Cybersecurity Examination Initiative to improve compliance.

They are not saying who is going to get one of these special surprises, how many firms will, or when.

That being said, the focus of these examinations is applicable to almost every company.

The 6 areas are:

  1. Governance – how are you managing the cyber risk process?  Are the board and C-Suite actively involved?  How often are you doing risk assessments – things like that.
  2. Access rights and controls – are you controlling who has access to what systems and what data and how are you managing that process.
  3. Data loss prevention – monitoring information that goes out of the organization electronically to make sure that it is not going to places that it should not – like China or an employee’s personal storage.
  4. Vendor management – making sure that you are not the next Target or Home Depot – both of which were done in by vendors who did not manage cyber security appropriately.
  5. Training – while training will not stop all attacks, poorly trained employees may make inappropriate security decisions because they do not understand the risks of their actions.
  6. Incident response –  we have seen that in some breach situations (Sony and OPM come to mind), the companies were not prepared to deal with a breach.  This can turn into a PR disaster and usually increases the cost of recovering from the breach.
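Item 3 above, data loss prevention, boils down to inspecting outbound content before it leaves the building. Here is a deliberately minimal sketch in Python of that idea; the patterns, the hostname, and the `egress_check` function are all illustrative inventions, not a real DLP product:

```python
import re

# Illustrative patterns for sensitive data (real DLP products use far richer rules)
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # U.S. Social Security numbers
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")      # very rough credit-card shape

BLOCKED_DESTINATIONS = {"personal-storage.example.com"}  # hypothetical block list

def egress_check(destination: str, payload: str) -> list[str]:
    """Return a list of reasons to block this outbound transfer (empty = allow)."""
    reasons = []
    if destination in BLOCKED_DESTINATIONS:
        reasons.append(f"destination {destination} is on the block list")
    if SSN_RE.search(payload):
        reasons.append("payload appears to contain an SSN")
    if CARD_RE.search(payload):
        reasons.append("payload appears to contain a card number")
    return reasons

print(egress_check("partner.example.com", "quarterly report, nothing sensitive"))
print(egress_check("personal-storage.example.com", "client SSN 123-45-6789"))
```

The point is not the specific patterns – it is that every outbound transfer passes through a checkpoint that can say no.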

So, whether or not your firm is regulated by the SEC, these 6 areas are definitely a good place to start your cyber risk assessment.  After these areas are handled you can move on to others.


Adobe Patches 23 Flash Flaws – Enough Is Enough

Adobe announced patches yesterday for 23 additional Flash vulnerabilities.  18 of these bugs can be used to run malicious code on the underlying computer.

To see what version you are running, go to:

That web page will give you the version that you are running, the current version that you should be running and a link to the download page.

On my computer, I run Firefox and Chrome.

On Chrome I have Flash disabled completely.  To do that, open Chrome and type

chrome://plugins into the address bar – you will get a screen that looks like this (click to enlarge).  You should look at what plugins you are running and decide which ones you want to run and which ones you want to disable.

[Screenshot: Chrome Flash plugin setting]

Chrome IMMEDIATELY disables Flash if you do this – if you have browser windows open with Flash objects in them, those objects will go away.  On the other hand, if you enable it, you have to refresh the page to make the Flash objects reappear.

In Firefox, you have to go to


You will see a page that looks like this.  Find the SHOCKWAVE FLASH addon.  I set it to Ask to activate, but you can select Never activate.  If you set it to ask, Firefox displays a box, with a link, where the Flash object would be.  The link asks whether you want to activate it one time or always – should you want to display a Flash object, I recommend selecting One time.

[Screenshot: Firefox Shockwave Flash addon setting]

Curiously, the Flash installer requires Flash to be activated in order to run.

It is surprising how many sites still use Flash, but the number is decreasing every day because a lot of businesses are blocking Flash as a security enhancement.

The biggest benefit is the number of ads that won’t run – reducing page load times.

Still, it is a personal decision – kind of like paper, plastic or your own grocery bag.  Some web sites will not work without Flash, so you have to decide.


Apple’s Turn – Major Cyber Attack

Apple is dealing today with something that Microsoft and Google are used to dealing with.  Hackers attacked a weak link in Apple’s universe – the developer community.  Apparently, the performance of Apple’s web site is poor in China, so developers often download software from alternative web sites.  These hackers convinced enough developers to download a malware-laced version of Xcode, a tool used by developers of mobile apps.

The surprising thing is that some of the developers who got fooled make major apps in China, such as WeChat, a popular chat app, and Didi Kuaidi, an Uber-like app – over 300 apps in all.  Apparently, most of the affected apps were used in China.

Users need to be alert to email phishing scams that tell people they have been infected and need to click on a link to get a safe version of some app.

Apple has declined to say how many infected apps they have found – leading me to think that it is bigger than the 344 number that is currently being bandied about publicly.

While some of the vendors whose apps were affected said that no data was compromised, I am less than convinced of that.  Hackers would not go to all this trouble to do nothing.

There is also currently no way for Apple users to find out if any apps installed on their devices are infected.

What this says is that hackers will always look for the weak link.  If you lock the front door but leave the back door open, thieves will figure that out.  And if you lock both doors but leave a window open, they will find that too.

Unfortunately for businesses, this requires that they take a holistic view of security and look at things from all sides at once.  THAT is what the hackers do.

If you are an Apple user, I would pause installing any apps until the dust settles a bit.

Information for this post came from Reuters.



Follow Up To TSA Master Key Fail

In a classic TSA response, the TSA says that this is no big deal.

First, here is what they said in 2003 when they introduced them:

TSA official Ken Lauterstein described them as part of the agency’s efforts to develop “practical solutions that contribute toward our goal of providing world-class security and world-class customer service.”

Now, however, the TSA says that the ability to create your own TSA master key does not threaten aviation security.  That statement is probably true.

Then they say that these products are “peace of mind”, not part of security.  Well they are half right.  Those devices are not part of THEIR security.  They should not be a part of anyone’s peace of mind, however.

Here is the real kicker, however:

In addition, the reported availability of keys to unauthorized persons causes no loss of physical security to bags while they are under TSA control.

So the fact that copies of the TSA master key are out in the wild does not reduce security?  Do ya want to explain that?  The TSA does not bother to explain.

That being said, researchers being researchers, they asked whether the TSA keys had been posted before, and the answer is YES.  Back in 2008, high-res photos of 7 TSA master keys were published.  That photo is still out there (see photo).

My suggestion – just use regular Master padlocks (the little ones are available on Amazon in a 4-pack for $8 and change).  If the TSA decides they need to break in, at least you will know it and you will only be out $2.

Information for this post came from the Intercept.


Why Are Software Development Process Audits Important?

D-Link makes a variety of network equipment, both for home and business users.  They release the software for this equipment, for the most part, as open source software.  This allows techie users to review the code to see if it does anything bad and since the software is useless unless you bought the hardware, there is no revenue impact to releasing it as open source.

There is, of course, a downside to showing off your software if you screw up.  For closed source vendors like Microsoft, if they screw up and researchers or hackers don’t find the bug before the vendor does, they can issue a patch, mutter something about fixing bugs and move on.

In the open source world you are exposing yourself to anyone on the planet ‘outing’ you and that is what happened to D-Link.

In one version of the software that they made available for the world to review, they included the private code-signing key and the pass phrases needed to use it.

They did this in February and the key recently expired so it is no longer useful – hence talking about it is not a problem.

However, if a hacker found that key while the key was still valid and signed the software at that time, that malicious software will still show as valid.
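To see why a leaked signing key is so dangerous, here is a deliberately simplified sketch.  Real code signing uses asymmetric certificates (RSA or ECDSA), but an HMAC from Python’s standard library illustrates the trust logic: anyone holding the key can produce signatures that verify, and the verifier cannot tell the vendor from the hacker.  The key value and firmware strings are made up for illustration:

```python
import hashlib
import hmac

def sign(software: bytes, signing_key: bytes) -> str:
    """Produce a signature over the software with the (supposedly secret) key."""
    return hmac.new(signing_key, software, hashlib.sha256).hexdigest()

def verify(software: bytes, signature: str, signing_key: bytes) -> bool:
    """What the user's machine does before trusting an installer."""
    return hmac.compare_digest(sign(software, signing_key), signature)

leaked_key = b"accidentally-published-in-the-source-tarball"  # the exposed secret

# The vendor signs legitimate firmware...
firmware = b"legitimate firmware image"
good_sig = sign(firmware, leaked_key)
print(verify(firmware, good_sig, leaked_key))   # True

# ...but a hacker who found the published key signs malware, and it verifies too
malware = b"malicious firmware image"
evil_sig = sign(malware, leaked_key)
print(verify(malware, evil_sig, leaked_key))    # True: indistinguishable from the vendor's
```

That second True is the whole problem: a signature only proves possession of the key, not the good intentions of whoever held it.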

Hackers want their software to be signed so that it looks legitimate.  Here is the dialog box that a Windows user would get if the software is signed.  Notice it says verified publisher:


On the other hand, if the software has not been signed, then the Windows user will get a dialog that looks like this.  Notice this one says unknown publisher and is a different color than the one above.


Technically there is something called certificate revocation which allows a software publisher to invalidate a signing certificate.  There are a couple of problems with that, however.  The first is that the publisher has to know that the certificate has been compromised.  The second is that the system that is using the certificate has to proactively check to see if that certificate has been invalidated.  I assume that for the 6 months that D-Link’s certificates were available online, they were not aware that they were exposing the family jewels.

Hackers sometimes break into a company to steal its certificates, but if a certificate is available in plain sight, that is so much easier.

A software development process audit should have detected the fact that D-Link was not securing these keys correctly.  Like the combination to the bank vault, these keys should not have been lying around somewhere for a developer to accidentally include in an upload of the source code.
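One concrete check such a process audit (or a pre-commit hook) can run is a scan of the source tree for private-key material before anything is uploaded.  A minimal sketch, assuming you only want to flag the standard PEM key headers:

```python
import os

# Markers that appear in PEM-encoded private keys; extend with token patterns etc.
SECRET_MARKERS = (
    b"BEGIN RSA PRIVATE KEY",
    b"BEGIN EC PRIVATE KEY",
    b"BEGIN PRIVATE KEY",
)

def scan_tree(root: str) -> list[str]:
    """Return paths of files that appear to contain private-key material."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # unreadable file; skip rather than crash the scan
            if any(marker in data for marker in SECRET_MARKERS):
                flagged.append(path)
    return flagged
```

Run against the release directory before packaging; any non-empty result should fail the build.  Had something like this been in D-Link’s release process, the key would never have left the building.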

Process is certainly a “first level of defense” against hackers.  Not having good security practices is like leaving a loaded gun around.  No guarantee that someone will get shot with it, but it does reduce accidental shootings if you store it unloaded and locked up.  I would compare what D-Link did to accidentally shooting THEMSELVES.

Just like you don’t do a virus check on your PC one time and call it good, you need to do process audits periodically.  This won’t solve all the world’s problems, but it will cut down on self-inflicted wounds.

My theory is that we should not make things any easier for hackers than we have to.

In the meantime, I would be very careful installing any D-Link software.


Information for this post came from Ars Technica.


The Government Wants Us To Believe They Can Keep A Crypto Back Door Secret …

FBI director James Comey has been telling everyone that the world will end unless every company around the world provides the FBI and only the FBI a back door to allow them to decrypt your communications.  This includes countries we like and ones we don’t like.

[Photo: FBI Director Comey]

So far the world isn’t listening to him, but the Justice Department is not giving up the quest.

In their defense, they will have to use other tactics if they cannot browse through your digital life at will.  The evidence shows that they already have done that when they needed to.  It is just more complicated and time consuming.

There are two problems with their fantasy of crypto back doors.

The first is the belief that they really can abolish software that does not provide a back door.  There is an article in Boing Boing, linked below, that talks about the challenge of policing billions of app downloads, some from well known app stores and some from app stores that don’t even have a web site name – only an IP address.  Do you think terrorists will voluntarily use software that they know the U.S. government can tap?  Maybe the FBI is that foolish, but I am not.  The Boing Boing article goes into great detail explaining why this is a pipe dream.

That of course doesn’t stop the FBI from asking for a back door.  They are apparently pretty smooth about it, according to Nico Sell of Wickr; she talks about it in the PC Magazine article linked below.  While Wickr told them to pound sand, AT&T was apparently more than cooperative with the NSA, going back ten years before 9-11 (see the second Ars link below).  The deal with AT&T was so cozy that the NSA apparently told its agents to be very polite when visiting AT&T facilities.

The second is the fallacy that the government, any government, can keep a secret for any extended period of time.

This past week, the government gave us proof that their goal of keeping secrets secret is unlikely to be successful for very long.

Since 9-11, the TSA has required that passengers who lock their checked luggage use only padlocks that have a TSA bypass mechanism, so that if the TSA suspects there is a bomb in a suitcase, they can open it and look.  This is a back door into “physical cyber”, but it is a perfect example of the problems with back doors.

There have been numerous complaints, lawsuits and payments by the TSA as a result of TSA employees stealing things out of passengers’ luggage using these physical back doors.

This past week, the TSA, in an amazing act of stupidity, allowed the news media to photograph these same master keys.  The media, doing what the media does, published the pictures on the web.  Within a few days, hobbyists created a CAD file that allows anyone with access to a $1,000 3D printer to print one of these master keys.

Compare this to the FBI accidentally or maliciously exposing the crypto back door keys.  The cost to use these accidentally exposed crypto keys is zero.

But there is a MUCH bigger problem with the crypto back door.  With the luggage locks, everyone now knows that these locks are no longer secure and can stop using them.  You can’t use those keys to open the suitcases that were in airports last month or last year.

HOWEVER, those purloined or accidentally exposed crypto keys could be used to decrypt files sent years ago.  Even if the government were to somehow discover that the keys had been exposed and magically snap its fingers and get every software manufacturer new keys the next day (think about the logistics of that), every communication ever sent that can be opened with the compromised back door key is compromised.  AND there is no way to undo that, because those files and communications are out of your control.
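That retroactive risk can be shown with a toy cipher – a SHA-256-derived keystream XORed over the data.  This is NOT real cryptography, just an illustration; the escrow key and message are invented.  Traffic recorded off the wire years ago stays decryptable forever by anyone who later obtains the escrowed back-door key:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream (NOT real crypto).
    Applying it twice with the same key returns the original data."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

escrow_key = b"government-escrowed-master-key"

# A message encrypted (and recorded off the wire) years ago
archived_ciphertext = keystream_xor(escrow_key, b"merger closes Friday, wire the funds")

# Years later the escrow key leaks; the old recording decrypts instantly
print(keystream_xor(escrow_key, archived_ciphertext))
```

Rotating to new keys tomorrow does nothing for the ciphertext already sitting in someone’s archive – which is exactly the point of the paragraph above.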

Which is why the idea of a crypto back door is insanity.

All this ignores the fact that the bad guys will continue to use crypto that doesn’t have a back door.  But ignore that minor detail.

Luckily, Congress seems, for the moment, to understand this problem.

You decide.   Would you trust a government that can’t even keep a padlock key secret to keep a crypto key that opens billions of communications secret?


Information for this post came from Ars Technica (two articles), Boing Boing, PC Magazine, and Wired.
