Over 90 Percent of IoT Data Transactions Are Not Encrypted

According to a report released by  cloud security vendor Zscaler, 91% of the traffic that they saw coming through their network security devices from IoT “things” was NOT encrypted.

This is on enterprise networks, where one might think security is more important, so maybe the number is even higher on home networks – although it would be hard to beat 91% by very much.

The data covered 56 million IoT device transactions from 1,051 enterprise networks, so it seems like a reasonable sample.

These devices include cameras, watches, printers, TVs, set-top boxes, digital assistants, DVRs, media players, IP phones and a host of other stuff.

Given that, what should you do?

First of all, you should be scanning your corporate network to look for these IoT devices, since according to the survey many of the IoT devices found on corporate networks are, not surprisingly, consumer grade.
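If you don't already have a discovery tool, even a simple sweep will surface surprises. Below is a minimal sketch, assuming Python and the nmap binary are available on the scanning host; the subnet is a placeholder you would replace with your own ranges.

```python
# Minimal sketch: sweep a subnet and list hosts that answer, as a starting
# point for finding unmanaged IoT gear. Assumes the nmap binary is installed
# and that 192.168.10.0/24 is a placeholder for your own network range.
import subprocess

SUBNET = "192.168.10.0/24"  # placeholder - replace with your network

def ping_sweep(subnet: str) -> list[str]:
    """Return nmap's greppable output lines for hosts that are up."""
    result = subprocess.run(
        ["nmap", "-sn", "-oG", "-", subnet],  # -sn = host discovery only
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if "Status: Up" in line]

if __name__ == "__main__":
    for host in ping_sweep(SUBNET):
        print(host)
```

Anything that answers and isn't in your asset inventory deserves a closer look.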

Next you need to create a policy regarding what devices you are going to allow.  There is no right or wrong answer, but it should be a conscious decision.

Finally, you should isolate all of those devices onto the anything-but network – meaning anything but your trusted internal company networks.  You probably want to group these into multiple anything-but networks: for example, one network for phones, another for printers, another for smart devices (TVs, coffee pots, water coolers), and so on.
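One way to think about those anything-but segments is as a default-deny rule set between each IoT network and the trusted network. The sketch below is purely illustrative – it assumes a Linux box is doing the inter-VLAN routing, and the interface names (vlan10, vlan1, eth0) are made up; a real deployment would use your own firewall or switch policy language.

```python
# Rough sketch: print iptables rules that let IoT VLANs reach the internet
# but not the trusted internal network. Interface names are hypothetical;
# a real deployment would use your switch/firewall's own policy language.
IOT_VLANS = ["vlan10", "vlan20", "vlan30"]   # phones, printers, smart devices
TRUSTED = "vlan1"                            # trusted internal network
WAN = "eth0"                                 # internet-facing interface

def rules() -> list[str]:
    out = []
    for vlan in IOT_VLANS:
        out.append(f"iptables -A FORWARD -i {vlan} -o {TRUSTED} -j DROP")
        out.append(f"iptables -A FORWARD -i {vlan} -o {WAN} -j ACCEPT")
    return out

if __name__ == "__main__":
    print("\n".join(rules()))
```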

While you are in the middle of this, it is probably a good idea to figure out which of these devices patch themselves and which ones vendors even offer patches for.  Then you have to figure out how the heck you can patch them.

And, if you CAN turn on encryption, you should probably do so.

Doesn’t this sound like fun?  Source: Zscaler.

 

 


Baltimore Ransomware Recovery Continues

May 7, 2019 is the day things changed in the City and County of Baltimore.  That is the day that hackers encrypted computers used by 10,000 people in the offices of Baltimore City and County.

While 911 services continued to work, unfortunately the same could not be said for the city's phones and email.

The hackers want about $100,000 in Bitcoin to decrypt all the computers, but the mayor says the city is not going to pay.  The hackers also said that if the city didn't pay the ransom within 10 days, they would destroy the key.  That deadline has passed.

In the meantime the city can't generate utility bills, residents can't pay their bills, people can't buy or sell houses because liens can't be checked or recorded, and employee time can't be entered so that employees can be paid.

Consider that this is YOUR company and not some city 2,000 miles away (from Denver, at least).

We are now more than two weeks into this and city and county systems are, for the most part, still down.

The attack came just days after Mayor Jack Young took over from former Mayor Catherine Pugh, who resigned while facing an ever-expanding corruption investigation.

Baltimore has no insurance to help pay for the costs, which are likely very substantial.  The city says they and outside consultants are working 24×7 to repair the damage.  This will cost millions.

And the Mayor says that they really don’t know when things will be back to normal – saying it will likely take months.

Baltimore knew this was a problem – it was attacked last year as well – and Baltimore's information security manager warned of big problems during budget hearings last year.  But the budget did not include any money for strategic investments in IT, and it didn't include money for security training of employees.

The City has had five Chief Information Officers in five years – not great for making progress.

The library, which is not part of the affected systems, is opening early and closing late so that city supervisors can enter employees' time and employees will get paid.

This week the city came up with a plan to restart home sales.  The title companies are going to go down to the city and the city will print out a piece of paper with whatever lien information they have.  Buyers/sellers will have to sign a piece of paper that says that they will pay back any liens that they didn’t find.  Title companies will probably spend months (and lots of money) to clean up the mess after the systems come back online.

And if history is any indication, the city will discover that they don’t have backups of everything, so some data will be lost forever.  In other city attacks, the police lost electronic evidence of crimes and had to dismiss criminal cases.

Does any of this remind you of your organization?

Most of the City's systems were hosted internally. The City's website was almost a goner – not because it was infected (it is hosted at Amazon), but because it is managed by a contractor, the contract had expired, and the city was delinquent in its payments.

Bottom line: companies should not hope that it won't happen on their watch.  You don't know.  Security is not optional.  Companies usually spend ten times or more responding to a crisis than they would have spent planning for security.

Are you prepared?  Have you done everything you can to avoid being the next organization in the news?  Are you ready to recover if the worst happens?  One thing going in favor of the city of Baltimore?  There is no competition.  Unless you just plan to leave the city, you don’t have an option for an alternate provider.  That is likely not true for your customers.

Information for this post came from Vox and Ars Technica.

 


Information Ops Kill Chain

Way back in 2011 Lockheed Martin released a white paper defining the concept of the “Cyber Kill Chain” (see below).

[Figure: the Lockheed Martin Cyber Kill Chain]

The cyber kill chain defined the steps in a hacking attack and the ways a defender can use that model to "kill" the attack.  It is a very effective tool, and here is a link to that original paper.

Given how information-driven our society has become, now might be the right time to create an information operations version of the cyber kill chain.  This kill chain is based on the way Russia did business back in the 1980s – and they are still doing it that way.

Step 1 – Find the cracks in the fabric of society – the social, demographic, economic and ethnic divisions.

Step 2 – Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths – a "firehose of falsehood" – that distorts the political debate.

Step 3 – Wrap those narratives around kernels of truth. A core of fact helps the falsities spread.

Step 4 – (This step is new.) Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives.

Step 5 – Conceal your hand; make it seem as if the stories came from somewhere else.

Step 6 – Cultivate "useful idiots" who believe and amplify the narratives. Encourage them to take positions even more extreme than they would otherwise.

Step 7 – Deny involvement, even if the truth is obvious.

Step 8 – Play the long game. Strive for long-term impact over immediate impact.

This is the playbook the Russians used in 2016 and continue to use today.  It was a new game to most Americans, so they didn't know how it worked.

Here is Bruce Schneier's version of the Information Operations Kill Chain, circa 2019.  Note this is directly from one of Bruce's blog posts.

Step 1: Find the cracks. There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. We need to strengthen that shared knowledge, thereby making it harder to exploit the inevitable cracks. We need to make it unacceptable – or at least costly – for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. We need to become reflexively suspicious of information that makes us angry at our fellow citizens. We cannot entirely fix the cracks, as they emerge from the diversity that makes democracies strong; but we can make them harder to exploit.

Step 2: Seed distortion. We need to teach better digital literacy. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 3: Wrap the narratives around kernels of truth. Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. We need to ensure that the true narrative is legitimized and promoted.

Step 4: Build audiences. This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Here, the defenses center around making disinformation efforts less effective. Social media companies need to detect and delete accounts belonging to propagandists and bots and groups run by those propagandists.

Step 5: Conceal your hand. Here the answer is attribution, attribution, attribution. The quicker we can publicly attribute information operations, the more effectively we can defend against them. This will require efforts by both the social media platforms and the intelligence community, not just to detect information operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate useful idiots. We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth – in the words of one report, we must "make the truth louder." Of course, there will always be true believers for whom no amount of fact-checking or counter speech will convince; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny everything. When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA’s Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Counterattacks can disrupt the attacker's ability to maintain information operations, as U.S. Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, U.S. Cyber Command Commander Gen. Paul Nakasone here) is a strategy to achieve this. Defenders can play the long game, too. We need to better encourage people to think for the long term: beyond the next election cycle or quarterly earnings report.

This is not a silver bullet as Bruce explains in his essay, but it is a framework for starting to address the information operations attack.

Information operations attacks are not going away, and they are not limited to the Russians.  American politicians are using them too.

The Information Operations Kill Chain is part of an essay by Bruce Schneier.

 

 


Security News for the Week Ending May 17, 2019

Be Thankful That You Are Not Equifax – Costs Reach $1.4 Billion So Far

Two years after the big breach, Equifax reported financials for the first quarter.   They reported a loss of $555.9 million compared to a net income of $90 million for the same period in 2018 on basically flat revenue.

Equifax had $125 million in cyber risk insurance with a $7.5 million retained liability.  The insurance has paid out the full amount.

So far, the company has accrued $1.35 billion in data breach costs, and this game is far from over.  They say it is not possible to estimate the full costs.  For more information, read the Bank Info Security article.

Boost Mobile Announces Breach – Two Months Ago

Boost Mobile apparently got some customer data boosted.  Two months ago.  An undated letter to the California AG and an undated web page on Boost's website say that the breach happened on March 14, 2019.  We don't know what the bad guys took, how many customers were affected, or even when people were notified.  The only thing we can guess is that since it hit the media today, the notifications were very recent.

If any of the people affected were in Colorado, the notifications came 15-30 days late.  There are probably other states for which the notification was late as well.  Stay tuned – we may see some AGs getting upset.  Source: Techcrunch.

Supply Chain Attacks Get Bigger and Badder

Last week it was WebPrism and 200 college bookstores.  This week it is Picreel, the analytics firm, Alpaca Forms (open source – so much for "open source is more secure") and over 4,600 hacked websites.

The attack is still going on; the sites are still infected and the problem is only getting worse.  If you are loading third-party code on your website, you need to rethink your security.  Source: ZDNet.

Intel Announces New Family of Speculative Execution Attacks

Intel seems to be challenged to catch a breach.  Err, a break.    After last year’s Spectre and Meltdown attacks comes this year’s ZombieLoad and Fallout attacks.  This is not a surprise – experts predicted more speculative execution attacks would be found.

Other than some new Intel 8th and 9th generation chips, all Intel chips made in the last decade are vulnerable, but ARM and AMD chips are not.  Some older chips will be patched while others, which are likely out of patch space on the chip, will never be fixed.

Apple, Intel, Microsoft and others have all released patches to mitigate these attacks on the chips for which there are fixes.  The attacks can be made either by planting malware on the device or remotely over the Internet.

The good news FOR THE MOMENT is that the attack seems to be complex, so it will likely be used only in targeted situations.  But if it is used, everything on the device can be compromised, including passwords and encryption keys.

Disabling Simultaneous Multi-Threading will significantly reduce the impact of this attack.
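On recent Linux kernels (4.19 and later) SMT can be toggled through sysfs, so the change does not even require a reboot. Here is a minimal sketch, assuming a Linux host and root privileges; Windows and hypervisors have their own mechanisms.

```python
# Minimal sketch: turn off Simultaneous Multi-Threading via sysfs on a Linux
# host (kernel 4.19+ exposes this control). Requires root; other operating
# systems have their own mechanisms.
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

def disable_smt() -> None:
    current = SMT_CONTROL.read_text().strip()
    if current != "off":
        SMT_CONTROL.write_text("off")   # sibling threads go offline immediately
    print("SMT is now:", SMT_CONTROL.read_text().strip())

if __name__ == "__main__":
    disable_smt()
```

Keep in mind that turning SMT off has a real performance cost, so test before rolling it out broadly.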

Source: Security Week.

For $600 A Hacker Could Confuse Any Commercial Plane’s Instrument Landing System

From a Cessna to a jumbo jet, every commercial plane built in the last 50 years uses a radio-based system to guide it to land when it can't see the runway – such as in rain or fog.

These radios were not designed to be secure from hacking.

There is no encryption.  There is no authentication.  The system in the plane assumes that any radio signals that come from the ground are legit.

Unfortunately, for $600 a hacker can purchase a software defined radio that can tell the plane that it is off course.  A little high.  A little to the side.

In theory, if the pilot can see the runway, he or she will execute a “missed approach” and go around.  Given how busy the US airspace is, that decision may be at 50 feet off the ground – not a lot of time to react.

Probably, right now, this is an  unlikely attack.  Right now.  But remember, attacks never get less probable, only more probable as attackers figure out how to manipulate things.  Source: Ars Technica.


Microsoft Has a Recommendation and You’re Not Gonna Like It

System, network and application administrators can do the most damage in the case of a malware attack.  The permissions they have allow them to do many things that the average user can't, and those things, in the hands of a hacker, can mean a lot of damage inside any company.

So here is what Microsoft is recommending.

Per Microsoft’s Security Team, employees with administrative access should be using a separate device, dedicated only for administrative operations.

See, I told you that you weren’t going to like it.  But wait, there is more.

This device should always be kept up to date with all the most recent software and operating system patches.

That, of course, seems like common sense.

“Provide zero rights by default to administration accounts,” the Microsoft Security Team also recommended. “Require that they request just-in-time (JIT) privileges that gives them access for a finite amount of time and logs it in a system.”

JIT permissions are a relatively new concept, but fundamentally a great one.  Instead of having the administrator be all-powerful all the time, have them ask for a specific permission in real time, and only for the very short period that they need it.
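Commercial privileged-access tools implement this, but the core idea is simple enough to sketch. The example below is purely illustrative – the class and method names are mine, not from any Microsoft product – and just shows a grant that is scoped to one permission, expires after a short window, and is logged.

```python
# Illustrative sketch of just-in-time (JIT) privileges: a grant is scoped to
# one permission, expires after a short window, and every request is logged.
# Class and method names are hypothetical, not from any Microsoft product.
import logging
import time
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("jit")

@dataclass
class Grant:
    user: str
    permission: str
    expires_at: float

class JitAuthority:
    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def request(self, user: str, permission: str, minutes: int = 15) -> Grant:
        grant = Grant(user, permission, time.time() + minutes * 60)
        self._grants.append(grant)
        log.info("granted %s to %s for %d minutes", permission, user, minutes)
        return grant

    def is_allowed(self, user: str, permission: str) -> bool:
        now = time.time()
        return any(g.user == user and g.permission == permission
                   and g.expires_at > now for g in self._grants)

if __name__ == "__main__":
    jit = JitAuthority()
    jit.request("alice", "restart-web-service", minutes=15)
    print(jit.is_allowed("alice", "restart-web-service"))  # True, until it expires
    print(jit.is_allowed("alice", "drop-database"))        # False - never granted
```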

Furthermore, administrator accounts should be created on a separate user namespace/forest that cannot access the internet, and should be different from the employee’s normal work identity.

In addition, that account should not have access to the administrator’s regular email (this is my addition).

Finally, companies should also prevent administrative tasks from being executed remotely, Microsoft said.

Microsoft also explored multifactor authentication and found that, although it was very secure, it was somewhat cumbersome.  Instead they are using biometrics.  With Windows 10 and a computer that has a crypto chip (TPM), Windows Hello is very secure and also easy to use.  Partly this is because there is a ONE TIME enrollment process that ties that user's identity and biometrics to that specific physical device.  If you need to log in from more than one device, you need to enroll on each of them, but after the enrollment is done, you can literally look at the computer and enter a short PIN to log in.

Check out the rest of their recommendations at ZDNet.

These are recommendations that I think will definitely improve security.  But it will be less convenient.  So make a choice.

SECURITY

CONVENIENCE

Pick Just One.

 


Could You Detect This?

Military prosecutors who are prosecuting a Navy SEAL for killing an Islamic State prisoner now stand accused of bugging emails and documents that they sent to defense lawyers.

The bugs, known in the trade as beacons, tell the person who installed them who has opened the document, based on the reader's IP address, and also provide other information that the beacon returns.

In the case of attorney-client communications, these beacons could represent prosecutorial misconduct when installed by the government and may also violate attorney-client protections.

The government claims that they bugged the documents because they are investigating leaks, but the defense says that it must be the government doing the leaking because the media is reporting on the documents before the defense even receives them.

Without regard to this particular case, bugging documents is relatively normal in business – to see if documents shared in confidence are being distributed further than the creator intended.  There are even commercial products that facilitate doing this.  One such product is Thinkst Canary.

Would you be able to detect this kind of surveillance if someone were to bug documents sent to you?  Do you think that if someone were to bug documents sent to you, that would be a violation of trust or privacy?

One simple way to temporarily defeat this kind of beaconing is to disconnect the system that the document is on from any network connection of any kind  before opening the document and leave it disconnected while the document is open.  While not impossible, normal commercial beacons do not persist once the document is closed or deleted.
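Another low-tech check, before the document is ever opened, is to look for external references baked into the file. Here is a minimal sketch, assuming the document is a .docx (which is just a ZIP archive of XML parts); the filename is a placeholder.

```python
# Minimal sketch: list remote URLs embedded in a .docx, since beacons are
# typically implemented as remote images or remote templates that the document
# fetches when opened. The filename is a placeholder.
import re
import zipfile

URL_RE = re.compile(rb"https?://[^\s\"'<>]+")

def external_urls(docx_path: str) -> set[bytes]:
    urls = set()
    with zipfile.ZipFile(docx_path) as zf:
        for name in zf.namelist():          # XML parts, relationships, etc.
            urls.update(URL_RE.findall(zf.read(name)))
    return urls

if __name__ == "__main__":
    for url in sorted(external_urls("suspect.docx")):
        print(url.decode(errors="replace"))
```

Expect some harmless schema URLs (schemas.openxmlformats.org and the like); anything pointing at an unfamiliar host is the interesting part.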

It is likely that installing this sort of beacon may violate state privacy laws because of the data that comes back to the company that installed it.

While there is zero case law on the subject that I am aware of, as the use of beacons becomes more common – both legally and illegally – that will likely change.  This particular case is going on behind closed doors – for now – but that doesn't mean that the next case will be.

Right now, the question is, would you even detect such a beacon if someone sent you an infected (I use that word intentionally because if they can send a beacon, they can send malware) document?

Source: Navy Times.

 

 

