Category Archives: Security Practices

And You Think YOU Have a Problem Finding Cybersecurity Talent

If you have tried to hire any cybersecurity talent recently, you know that experienced folks are hard to find, hard to keep and expensive.  That is why we offer the virtual Chief Information Security Officer program.

But if you are the federal government and you have hundreds of agencies and millions of employees – not to mention adversaries that are working overtime to hack you – you need “a few good people”.  Actually quite a few.

The federal government doesn’t have a great pay scale either, so in order to motivate people, they have to be aligned with the mission.

But the federal government doesn’t seem to have much of a mission when it comes to cybersecurity.  We can’t even seem to agree on whether the Russians interfered with the last presidential election.

So what does that mean for the feds?

It means that senior cybersecurity people are leaving.  Key people.

Jeanette Manfra, who is currently the Assistant Director for Cybersecurity for the Office of Cybersecurity and Communications at DHS’ Cybersecurity and Infrastructure Security Agency (how’s that for a title?) is leaving CISA to join Google.  At Google, she is going to head up the Office of the CISO to help customers improve their security.

She is not alone.

Kate Charlet, who served as acting Deputy Assistant Secretary of Defense for Cyber Policy at the Department of Defense, left that post and is now Director of Data Governance at Google.

Daniel Pietro, who was Director for Cybersecurity Policy on the staff of the National Security Council, joined Google as an executive for Public Sector Cloud at Google.

Rob Joyce, was forced out of his role at the White House as Cybersecurity Coordinator at the National Security Council by former National Security Advisor John Bolton.  Rob, at least, went back to the NSA where he is appreciated.  Now the White House has no one in that role and some people are saying that we may be back in the same situation as we were in 2014 when the Russians hacked the White House.  Cyber is not a priority for this administration.

Joe Schatz resigned as White House CISO to join technology consulting firm TechCentrics.

In October 2019, Dimitrios Vastakis, Branch Chief of the White House Computer Network Defense and staff member of Office of the Chief Information Security Officer (OCISO) at the White House released a scathing resignation memo saying that OCISO staff are “systematically being targeted for removal from the Office of the Administration (OA) through various means.”

One of the key issues with all of these senior folks leaving is that all of the tribal knowledge is going with them.  Even if you can replace these folks – and the evidence seems to indicate that either this administration doesn’t want to or can’t – there is no way to replace their knowledge of the workings of all of these federal systems.

Back in 2016, then-acting director of OPM Beth Cobert said, “…federal agencies’ lack of cybersecurity and IT talent is a major resource constraint that impacts their ability to protect information and assets.”

Another person who left, Michael Daniel, former special assistant to the president and cybersecurity coordinator at the White House, said, “Hiring and retaining cybersecurity professionals is difficult for the federal government under normal circumstances, because supply remains low and demand high across our entire economy.”

President Trump did sign an executive order last May to try to address the cybersecurity staffing gap, estimated at 300,000.

I don’t know where that number came from.  Maybe this is in the federal government alone.  I have seen estimates of a nationwide shortage of over 3 million by next year.  If the feds want 10% of that, they are going to have to work very hard and create an environment that is agile and receptive – something no government agency is good at doing in the best of times.

I hope the government is successful at turning this around, but I am a bit skeptical of their ability to do that.   I guess we shall see.  Source: MSSP Alerts



Are Smart Cars Safe Cars?

Here is the punch line.

Automotive cybersecurity incidents more than doubled in 2019 and are up 605% since 2016.  That doesn’t seem very safe to me.

Here are some statistics from Upstream’s 2019 automotive cybersecurity report:

  • 330 million vehicles are already connected, and top brands in the US say that they will only sell connected vehicles this year.  If true, one attack vector might be to design a hack to disable all smart vehicles in a specific area.
  • Smart vehicles will benefit from 5G cellular, if and when it becomes widely available in the US because 4G speeds in the US tend to be very variable and often horribly slow.
  • Since 2016, the number of annual incidents has increased by 605%
  • Incidents more than doubled in 2019 compared to 2018.
  • 57% of incidents were criminal in nature – disruption, theft and ransoms.  The rest were researchers trying to stay ahead of the bad guys.
  • The three most common attacks are keyless entry, backend systems and mobile apps.  Remember, if you choose not to install your car maker’s mobile app and register your vehicle, you are leaving your car open to attack if a bad actor registers your car instead.
  • One third of all incidents resulted in the theft of a vehicle or a break-in.
  • One third of the attacks included taking over some of the car’s functions.
  • 82% of the attacks in 2019 did not require physical access to the car.

Car makers understand these security issues and are working to improve their security, but the basis of all smart cars is software and we know that software always works perfectly.

Users like the features, so they will continue to ask for them but they might also want to ask their insurance agent if their insurance covers these new types of attacks.

Also recommended: talk to your legislator to make sure that laws take into consideration the risks of smart cars.  For example, if you are in an accident and you say that you lost the ability to control your vehicle, as we saw on 60 Minutes a couple of years ago, will the police believe you?  Or hold you responsible?  What if someone else is hurt as a result?  At today’s level of sophistication, it is going to be hard to prove that it wasn’t your fault.

Source: HelpNet Security



What Do You Think About a National ID Number?

No, I am not kidding.  Currently, your Social Security Number is effectively a national identifier. Except when it is not allowed to be used.

In many healthcare situations, they use first and last name plus birth date.  Apparently, however, that is more than a bit error prone.  This has led to treatment errors and medication errors.

When HIPAA was enacted, it mandated the creation of a Universal Patient Identifier (UPI).  That mandate has been stymied by language inserted into the annual funding bills every year banning the government from spending any money to implement it.

So, instead, we use the Social Security Number as a de facto universal identifier.

Rep. Ron Paul initially, and now Sen. Rand Paul, have said that a national identifier is a threat to personal privacy.  In a sense that is hard to argue with.  On the other hand, using the Social Security Number as a universal identifier for healthcare compromises not only medical information when there is a breach, but also a person’s financial information.

Some people say that stricter penalties for breaches, identity theft and other related crimes would reduce the abuse, but I am skeptical.  After all, the war on drugs tried exactly that, and it has certainly stopped drug sales and use.

This year the House removed the ban from the funding bill but the Senate left it in.

Some places are using biometrics to help identify patients, but the use of biometrics represents a whole other raft of problems.

There is not a simple solution, but continuing to use your Social Security Number as a universal identifier is NOT the answer.

For more details, see the article in Health IT Security.



From Unsecure to Less Unsecure

Text messages, as many people know, are not very secure.  If you are asking where we are meeting for lunch, you probably don’t care.  But many banks use text messages (technically known as SMS, or Short Message Service) as a second factor to enhance login security.  While that does help some, it would be a lot better if SMS messages were secure.

Add to that SMS’s limited message length (160 characters, only a bit longer than the original Twitter, though phone makers’ messaging apps sometimes mask the limit), the fact that photos sent by SMS are compressed until they are barely identifiable, and the fact that messages can be hijacked, and it is clear we have been needing a replacement.

Enter RCS, or Rich Communication Services.  RCS eliminates a lot of these shortcomings.  Supposedly the big four (soon to be three) US carriers say it is coming in 2020, even though the standard has been around for 10 years.

But the way the carriers are implementing it is not very secure as researchers are starting to point out.

While you can pick a different text messaging app such as iMessage, WhatsApp or Signal for talking to your friends, and get enhanced privacy with them, you don’t have any control over which text messaging service your bank uses.  That leaves you more vulnerable than alternative solutions such as Google Authenticator or Authy, generically known as Time-based One-Time Passwords, or TOTP.

So what are the carriers doing wrong?

SRLabs researchers are going to present the holes that they have found at Black Hat Europe in December.  Hopefully the carriers get embarrassed and fix some of these bugs before the systems go live next year.

The issue SRLabs has found is with the way the RCS standard is being implemented, rather than with the standard itself.  This is actually good news, because it means that a software patch can improve security without requiring changes to the standard.  Even with these fixes, though, RCS is NOT encrypted end to end like iMessage or WhatsApp.

One issue is how RCS configuration files, which contain the userid and password for your text messages, are secured.  The answer is that they are not secured at all, meaning any app can request the configuration file and gain access to your text messages.

Another flaw sends a six-digit code to verify you are who you say you are, but allows unlimited guesses.  Trying all the possible codes takes about five minutes.
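The arithmetic behind that five-minute figure is easy to check.  Assuming an attacker can submit a few thousand guesses per second (my assumption; the researchers did not publish their exact rate), exhausting a six-digit code space looks like this:

```python
code_space = 10**6          # a six-digit code has 1,000,000 possibilities
guesses_per_second = 3500   # assumed attack rate; not from the research

worst_case_minutes = code_space / guesses_per_second / 60
print(f"Worst case: {worst_case_minutes:.1f} minutes")
# On average, the attacker finds the right code in half that time.
```

Rate limiting (lock out after a handful of wrong guesses) would make this attack useless, which is why its absence is such a glaring bug.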

The carriers, of course, are completely defensive, but I suspect after Black Hat makes their sloppiness public, many of the carriers will clean up their acts.

Which is good for users.

Bottom line though, if you want more private text messages, use something like iMessage or Signal – RCS is not going to solve that problem.  Even if the carriers fix their implementation bugs in RCS, it will just be less unsecure.  Source:  Vice



“Smart Cities” Need to be Secure Cities Too

For hundreds of years, government has been the domain of the quill pen and parchment or whatever followed on from that.

But now, cities want to join the digital revolution to make life easier for their citizens and save money.

However, as we have seen, that has not always worked out so well.

Atlanta was recently hit by a ransomware attack – just one example out of hundreds.  It appears the attack was facilitated by the city’s choice not to spend money on IT and IT security.  Now they are planning to spend about $18 million to fix the mess.  Atlanta can afford that; smaller towns cannot.

We are hearing of hundreds of towns and cities getting hit by hackers – encrypting data, shutting down services and causing mayhem.  In Atlanta, for example, the buying and selling of homes and businesses was shut down for weeks because the recorder could not reliably tell lenders how much was owed on a property being sold or record liens on property being purchased.

But what if, instead of not being able to pay your water bill, not having any telephones working in city hall or not being able to do things on the city’s web site – what if instead, the city owned water delivery system stopped working because the control system was hacked and the water was contaminated?  Or, what if, all of the traffic lights went green in all directions?  Or red?  What if the police lost access to all of the digital evidence for crimes and all of the people being charged had to be set free?  You get the general idea.

As cities and towns, big and small, go digital, they will need to upgrade their security capabilities or run the risk of being attacked.  Asking a vendor to fill out a form about their security and then checking the box that says it’s secure does not cut it.  Neither does failing to test software for security bugs, both before the city buys it and periodically afterwards.  We are already seeing that problem with city web sites that collect credit cards being hacked, costing customers (residents) millions.  Not understanding how to configure systems for security and privacy doesn’t cut it either.

Of course the vendors don’t care, because cities are not requiring them to warranty that their systems are secure or to provide service level agreements for downtime.  I promise that if a vendor is required to sign a contract saying that if its software is hacked and it costs the city $X million to deal with it, the vendor pays, vendors will change their tune.  Or buy a lot of insurance.  In either case, the city’s taxpayers aren’t left to foot the bill, although the other issues are still a problem.  We have already seen information permanently lost.  Depending on what that information is, that could get expensive for the city.

In most states governments have some level of immunity, but that immunity isn’t complete and even if you can’t sue the government, you can vote them out of office – something politicians are not fond of.

As hackers become more experienced at hacking cities, they will likely do more damage, escalating the spiral.

For cities, the answer is simple but not free.  The price of entering the digital age includes the cost of ensuring the security AND PRIVACY of the data that their citizens entrust to them as well as the security and safety of those same citizens.

When people die because a city did not do appropriate security testing, lawsuits will happen, people will get fired and politicians will lose their jobs.  Hopefully it won’t take that to get a city’s attention.

Source: Helpnet Security


Coworking and Shared Work Spaces Are A Security and Privacy Nightmare

Coworking and shared office spaces are the new normal.  WeWork, one of the coworking space brands, is now, apparently, the largest office space tenant in the United States.

Who is in these coworking spaces?  Startups and small branches (often one or two people) of larger companies, among others.

Most of these folks have a strong need for Internet access and these coworking spaces offer WiFi.  Probably good WiFi, but WiFi.  And WiFi is basically a party line, at least for now.

Look for WiFi 6 with WPA 3 over the next couple of years – assuming the place that you are getting your WiFi from upgrades all of their hardware and software.  And YOU do also.

A couple of years ago a guy moved into a WeWork office in Manhattan and was concerned about security given his business, so he did a scan.  What did he find but hundreds of unprotected devices and many sensitive documents.
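A scan like that requires no special tools; any device on the shared WiFi can probe its neighbors.  A hypothetical sketch in Python (the address and port below are placeholders, not details from the article):

```python
import socket


def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP service is reachable at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# On a shared network, looping this over neighboring addresses and common
# ports (445 for Windows file sharing, 5432 for Postgres, and so on) is
# roughly how exposed devices and open file shares turn up.
if probe("192.0.2.10", 445):   # placeholder TEST-NET address, not a real target
    print("file sharing port is reachable")
```

The fix on the user side is equally unglamorous: a host firewall that drops unsolicited inbound connections, and file sharing turned off on any machine that lives on a shared network.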

When he asked WeWork if they knew about it, the answer was yes.

Four years later, nothing has changed.

Fundamentally, it is a matter of money.  And convenience.

But, if you are concerned about security, you need to think about whether you are OK with living in a bit of a glass house.

For WeWork in particular, this comes at a bad time, because they are trying to do – off and on – an initial public offering, and the bad press from publications like Fast Company on this security and privacy issue doesn’t exactly inspire investor confidence.

Fundamentally, using the Internet at a WeWork office or one of their competitors is about as safe as using the WiFi at a coffee shop that is owned by the mob and is in a bad part of town.  Except that you are running your business there.

In their defense, WeWork does offer some more secure options (although you might be able to do it yourself for less).  A VLAN costs an extra $95 a month plus a setup fee and a private office network costs $195 a month.  That might double the cost of a one person shared space (a dedicated desk costs between $275 and $600 a month, depending on the location).

And clearly they do not promote the fact that you are operating in a bit of a sewer if you do not choose one of the more expensive options.  The upsell here is not part of their business model.

For users of shared office spaces like WeWork (but likely anywhere else too, so this is not a WeWork bug), the question is whether they are dealing with anything private and whether they care that their computer is open to hackers.  If not, proceed as usual.

If so, then you need to consider your options, make some choices and spend some money.  Sorry.  Source: CNet.
