Category Archives: Security Practices

Pipeline Operators Are In the Crosshairs – From Both Regulators and Hackers

The Colonial Pipeline attack exposed what a lot of us have been saying for years – that when it comes to U.S. critical infrastructure, the emperor has no clothes.

In the immediate aftermath of the Colonial attack, TSA quickly issued a directive that was pretty superficial. It required, among a couple of other things, that operators identify a cybersecurity coordinator who is available 24×7 and assess whether their security practices are aligned with the VOLUNTARY 2018 pipeline security directive.

In fairness, there was not a lot of time to prepare, and TSA – those same folks who do such a wonderful job of stopping guns from getting through airport security (in 2016, the TSA director was fired after it became public that the agency failed to detect guns 95% of the time) – said that more would be coming.

The electric grid, overseen by NERC and FERC, has done a somewhat better job of protecting that infrastructure, but even it has a lot of holes. No one seems to be watching the water supply.

Now we are learning that TSA has issued another directive on pipeline security. Given all of the recent supply chain attacks, this is decades past due, and nothing will change immediately – meaning that the Chinese, Russians, North Koreans and others will still have years to attack us. The directive requires the pipeline industry to implement specific mitigations (not made public, likely for security reasons) against ransomware and other known threats, to develop and implement a cybersecurity contingency plan, to implement a disaster recovery plan, and to review the security of their cyber architecture.

The TSA is still not acting like a regulator. There do not appear to be any penalties for not doing these things and there doesn’t even seem to be much oversight. The TSA calls the companies that it regulates its partners. I cannot recall, for example, ever hearing banking regulators calling the banks that they regulate their partners. The TSA is not the partner of the companies that it regulates (unless maybe, they are getting kickbacks, in which case, okay).

Sorry, but that is completely the wrong model and is doomed to fail. It may require Congress to do something although I am pessimistic that they will. You can never tell.

This directive comes on the heels of another report from the FBI and CISA that the Chinese targeted 23 pipeline operators between 2011 and 2013. Why they didn’t think it important to tell us about this for 10 years is not explained. Maybe the facts were about to be leaked? Don’t know.

Are there more attacks that they are not telling us about still?

Of the 23 pipeline operators in this report, 13 were confirmed to have been breached. Three more were what the feds call near misses, whatever that means, and for the remaining seven it is unknown how badly they were compromised.

Well, that certainly gives me a warm fuzzy feeling.

At the same time, CISA has been reporting an insane number of IoT vulnerabilities on every brand of industrial IoT equipment. While it is good that CISA is “outing” these vendors’ decades-old sloppy security practices, there is still a long way to go. For every bug they announce, who knows how many remain – and, more importantly, will the operators of the vulnerable equipment even bother to deploy the patches? In fairness, in many cases the cost of downtime is high and the operators’ confidence that their equipment will still work after being patched is low.

For many operators, the equipment that is vulnerable has been in place for 10, 15, even 20 years and the people who installed it or designed it are retired and possibly even deceased. To reverse engineer something like that is an insanely complex task.

The alternative is to ignore the problem and hope that the Chinese, Russians and others decide to play nice and not attack us. Fat chance.

We should also consider that independent hackers – who may have even fewer scruples than the North Koreans (is that possible?) – may have discovered these bugs, which of course are now being made public on a daily basis, and choose to use them to attack us for their own motives. Even if we do arrest them after, say, they blow up a refinery, that is a tad unsatisfying to me.

If you get the sense that I am disgusted that the government is decades behind in protecting us, I am. You should be too. By the way, this is not a Democratic vs. Republican thing. Administrations on both sides of the aisle have put this in the “too hard to do pile” and pretended that it does not exist.

IoT Bug Could Lay Waste to Factories ….

When people talk about IoT – Internet of Things – these days, they are thinking of Amazon Alexa or Philips Hue lightbulbs, but where IoT started was in factories and warehouses, decades ago.

Industrial automation, or IIoT, is still where the biggest IoT attack risk lies.

Today we learned about a critical remote code execution bug in Schneider Electric’s programmable logic controllers or PLCs.

The bug would allow an attacker to get ROOT level access to these controllers and have full control over the devices.

These PLCs are used in manufacturing, building automation, healthcare and many other places.

If exploited, the hackers could shut down production lines, elevators, heating and air conditioning systems and other automation.

The good news, if there is any, is that the attacker would need to gain access to the network first. That could mean an insider attack, a physical infiltration or something as simple as the really bad remote access security we saw at that water plant in Florida. Either way, you probably should not count on this extra hurdle to protect the millions of systems that use Modicon controllers.

Schneider Electric has released some “mitigations” but has not released a patch yet.

The bug is rated 9.8 out of 10 for badness on the CVSS severity scale.

What is really concerning is that Schneider released patches for dozens of bugs today.

Given that IIoT users almost never install patches, this “patch release” doesn’t make me feel much better.

But it appears that the velocity of IIoT bug disclosures and patches is dramatically increasing. Given that, factory and other IIoT owners have to choose between two uncomfortable choices – don’t patch and risk getting hacked or patch and deal with the downtime. They are not going to like either choice, but they are going to have to choose.

My guess is that they are going to choose not to patch, and we are going to see a meltdown somewhere that is going to be somewhat uncomfortable for the owner. An example of a past similar event is the Russians blowing up a Ukrainian oil pipeline a few years ago. In the middle of winter. When the temperature was below zero.

Credit: Threatpost

How Fast Can You Detect a Supply-Chain Ransomware Attack?

In light of the recent series of supply chain attacks – SolarWinds, Microsoft Exchange, Kaseya and others (actually going back to at least 2011) – speed is crucial.

This weekend’s attack against MSP software provider Kaseya is a perfect example of why speed is so important.

Many small and medium sized companies are dependent on managed service providers (MSPs) to run their IT systems. In order for MSPs to do that, they need access to their clients’ systems. The software that Kaseya makes helps MSPs do just that.

Which means that MSPs are a great attack point. Finding out what software they use and compromising it gives the hackers a force multiplier. One MSP equals, say, 100 customers, equals, say, 2500 workstations. Or more!

It appears that Kaseya got their arms around this quickly.

How did they do that? We don’t know, but here is my speculation.

Given the business that they are in, they likely have a well-trained, well-staffed and well-armed (with software) 24×7 Security Operations Center, or SOC. Even a small SOC can easily cost a company a quarter million dollars a year or more when you consider payroll, benefits, training and software. This is NOT something that you should try with one person, no training and limited software.

There is an alternative, and that is a SOC as a service, or SOCaaS. With a SOCaaS, you only pay for however much you use. The SOCaaS provider deals with the staffing, training and software, and does it at scale. A provider might need three analysts to cover a 25-person company, but those same three can probably handle a hundred users, and five might cover five hundred. It scales well due to automation. Providers also have the benefit that once they have seen an attack on one customer, they know what to look for at all customers. And if they need to buy a database of attack indicators, the cost is likely licensed based on the number of SOC personnel they have, not the number of users they are monitoring. Again, scale is your friend.

What is clear is that time is your enemy, and a SOC or SOCaaS reduces the time to detect a breach – so it is your friend.

While SOCs are very expensive, SOCaaS may be more cost effective than you might think. Nothing is free, but neither is getting attacked.

If you would like to investigate a SOCaaS, please contact us – we have a great solution.

Security News for the Week Ending June 25, 2021

Paying Ransom is Tax Deductible

Under current IRS regulations, paying cyber ransom after a hack is deductible, just like losses from a robbery, but the IRS is “looking into it”. One way the government could discourage ransom payments is if the cost is borne fully by the company’s owners. They still might choose to do it, but at least the taxpayers would not be subsidizing it. Of course, if your insurance pays for or reimburses you for the ransom, then that ransom is not deductible. Credit: AP

How Much Does YOUR Board Know About Cybersecurity Issues?

As I reported last week, the SEC fined First American Financial a half million dollars for the data leak they had. The fine was based on the fact that an internal security team discovered the problem several months before it was reported to the SEC, but no one bothered to tell FirstAm executives about the issue. The moral of the story is that the SEC is “suggesting” that you keep your business leaders informed about cybersecurity issues. If the SEC does that, assume that your insurance provider will follow suit soon and deny coverage if your executives are not kept in the loop. Credit: Reuters

How Long Does It Take to Fix Critical Vulnerabilities?

According to WhiteHat Security, the average time to fix a CRITICAL vulnerability in May 2021 was 205 days, up from 201 days in April. The water utility sector was the least prepared: 66% of all applications used by the sector had at least one exploitable vulnerability open throughout the year. Even in finance, 40% of the applications had a window of exposure (WoE) of 365 days, though 30% had a WoE of fewer than 30 days. Given stats like these, it is not surprising that the hackers are winning. Credit: ZDNet

Cyber Breach Insurance Market Set for a Reckoning

Cyber insurance claims spiked this year. Standalone claim payouts jumped from $145,000 in 2019 to $358,000 in 2020. A key metric the industry uses is something called direct loss plus defense and cost containment ratio. It skyrocketed last year to 73% from 42% the previous five years. At 73%, when you add in other costs, that means the industry is probably losing money. This means that premiums will go up, coverage will go down and limits and sublimits will be changing. If you have cyber risk insurance, prepare for changes. Credit: The Record
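That metric is simple arithmetic: losses paid plus defense and cost containment (DCC) expense, divided by earned premium. Here is a minimal sketch in Python; the dollar figures are hypothetical, chosen only so the result lands on the 73% the industry reported:

```python
def loss_plus_dcc_ratio(paid_losses, dcc_expense, earned_premium):
    """Direct loss plus defense & cost containment (DCC) ratio,
    expressed as a fraction of earned premium."""
    return (paid_losses + dcc_expense) / earned_premium

# Hypothetical book of business: $60M in paid losses plus $13M in defense
# and cost-containment expense against $100M of earned premium.
ratio = loss_plus_dcc_ratio(60_000_000, 13_000_000, 100_000_000)  # 0.73
```

Once that ratio passes roughly 70%, overhead and other expenses eat whatever margin is left, which is why premiums and terms are about to move.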

How Long Does it Take a Misconfigured Container to be Attacked?

Containers are great, but they are not bulletproof. Aqua Security says that, based on data collected over six months, 50% of misconfigured Docker APIs are attacked by botnets within 56 minutes of being set up.

It takes five hours on average for a new honeypot container to get scanned. The fastest happened in a few minutes. The longest was 24 hours. None of these numbers are very long.

What this means is that you need to up your game when it comes to securing your cloud based systems. If you can, set them up in a contained environment (one that is not publicly accessible) and harden them before exposing them. Credit: SC Magazine
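One small, concrete check in that direction: confirm that the Docker remote API port (2375 by default when it is enabled without TLS) does not answer on any interface you did not intend to expose. The sketch below is a generic TCP reachability probe; it spins up a throwaway local listener so the example is self-contained, but in real use you would probe your host's public address from outside your network:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: stand up a throwaway listener, probe it, tear it down.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))            # port 0 -> OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

exposed = tcp_port_open("127.0.0.1", port)   # listener running -> reachable
listener.close()
closed = tcp_port_open("127.0.0.1", port)    # listener gone -> refused
```

If a probe of port 2375 (or 2376) from the outside comes back reachable, that Docker API is part of the 56-minute statistic above.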

NIST Prepares Post-Quantum Encryption Standards

Long before quantum computing becomes “mainstream”, state actors will have access to it – in part because they command large budgets, and in part because it is important to them.

Why do they care? Because, it will allow them to decrypt both communications that they intercept going forward and communications that they have intercepted in the past and stored. That is a game changer.

While we can make things more difficult with perfect forward secrecy (PFS) – which uses fresh, ephemeral keys for each session, so an attacker has to break every session separately – there are plenty of places where PFS is not being used.
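To make the PFS idea concrete, here is a toy sketch of ephemeral Diffie-Hellman key agreement in pure Python. The modulus is demo-sized and nowhere near secure; real PFS uses vetted groups or elliptic curves. The point is that each session derives its secret from keys that are generated fresh and then thrown away, so recorded traffic from session 1 stays safe even if a long-term key is stolen later:

```python
import secrets

# Toy finite-field Diffie-Hellman parameters (illustration only).
P = 4294967291          # a small prime (2**32 - 5); far too small for real use
G = 5                   # generator

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for one session."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_secret(my_priv, their_pub):
    """Both sides compute G**(a*b) mod P from what they hold."""
    return pow(their_pub, my_priv, P)

# Session 1: both sides derive the same secret from ephemeral keys.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
secret_1 = shared_secret(a_priv, b_pub)

# Session 2 starts from scratch: new keys, an unrelated secret.
a2_priv, a2_pub = ephemeral_keypair()
b2_priv, b2_pub = ephemeral_keypair()
secret_2 = shared_secret(a2_priv, b2_pub)
```

Delete the session-1 private keys and there is nothing left that decrypts session 1 – that is the property the stored-intercept attack described above defeats when PFS is absent.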

NIST, part of the Department of Commerce, is responsible for creating the encryption standards used by most of the government (except for the spies) and much of the commercial sector, and has been working on this problem since 2016. They are not there yet, but this week they made an important announcement.

They plan to announce finalists for new standards roughly by the end of the year.

Then they have to document them as standards and put out the documents for public comment. Possibly, rinse and repeat.

They expect approved standards by 2024 – an 8-year process.

THEN COMPANIES NEED TO IMPLEMENT THEM AND INTEGRATE THEM INTO SOFTWARE AND HARDWARE PRODUCTS.

They have selected 8 algorithms as candidate standards.

And just to make sure that things don’t get away from them, they are also looking at 7 backup algorithms.

These standards use different strategies, not just different implementations of solving the same problem. (RSA encryption, for example, relies on the hard problem of factoring a large number into its prime factors. That is not quantum proof, but it is an example of one strategy.) So we potentially have 15 different problems which NIST thinks will be hard for even quantum computers to break. If they are wrong about one, they have 14 more. Backups with backups to the backups.
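As a deliberately tiny illustration of that strategy, here is textbook RSA in pure Python. Breaking it amounts to recovering p and q from n, which is trivial at this size but infeasible for real 2048-bit moduli on classical hardware – and exactly what Shor's algorithm on a quantum computer would make easy again:

```python
# Textbook RSA with toy numbers, for illustration only;
# real keys use 2048-bit-plus moduli.
p, q = 61, 53              # two primes (kept secret)
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n; computing it requires p, q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)      # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)    # decrypt with the private key (d, n)
```

An attacker who can factor n gets phi, and from phi gets d; the post-quantum candidates replace this problem with ones (lattices, hash trees, codes and so on) that quantum computers are not known to solve efficiently.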

Look for NIST to release draft proposals in a few months. Then we have more waiting. But at least this seems like light at the end of the tunnel.

For software developers, that means work, documentation and testing. Plan to be doing that around 2024.

Credit: SC Magazine and NIST

Executive Order on Cybersecurity Part 2

As I said yesterday, some EOs are a couple of paragraphs long. This one goes on for pages. Today’s post is going to cover the section of the EO that addresses supply chain risk. Supply chain risk, as we saw in both the SolarWinds and Microsoft Exchange attacks, is a huge problem. So what does the EO do?

  • The Commerce Department, through NIST, has only 30 days to solicit input from government, academia and the public to identify existing or develop new standards, tools and practices for complying with other requirements in this EO.
  • NIST must publish preliminary guidelines for complying with this EO within 180 days.
  • Within 360 days NIST will publish guidelines for reviewing and updating the guidelines above.
  • Within 90 days NIST must issue guidance identifying practices that enhance the security of the software supply chain. This must include standards, procedures or criteria described below.
    – Secure software development environments
    – Creating and delivering documentation proving the use of secure software development lifecycle (SSDL) practices
  • Within 60 days, Commerce and NTIA must publish minimum elements for a software bill of materials (SBOM). NTIA has been working on this since 2018 and I have been involved in this effort. This is critical.
  • Within 45 days NIST, consulting with the Secretary of Defense (SecDef), shall publish an official definition of what software is considered critical. Likely this includes anything that runs with more than normal user permissions. Then, within another 30 days, CISA will release a list of categories of software and software products that fit into that definition.
  • Within 60 days, NIST and CISA will release guidance for required security measures for critical software.
  • Within 30 days, OMB will take appropriate steps to make sure agencies comply with this guidance and specifically with respect to software that they obtain after this EO was issued.
  • While agencies may ask for an extension in complying with a specific requirement, OMB will review those requests on a case by case basis.
  • Within a year, OMB will provide recommended FAR language changes to the FAR Council.
  • Within 60 days, NIST, consulting with NSA and SecDef shall publish software security testing guidelines.
  • Within 270 days NIST must identify IoT cybersecurity criteria for a consumer (security) labeling program. This shall reflect increasingly comprehensive levels of security testing.
  • Within 270 days NIST must identify security software development practices or criteria for a consumer software labeling program. The labeling shall reflect a baseline level of secure practices and if practicable, increasing levels of comprehensive testing and assessment.
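On the SBOM bullet: NTIA's "minimum elements" work boils down to a short list of required fields per software component. Here is a hypothetical sketch of one such record in Python – the field names are my own shorthand, not any official schema:

```python
# Hypothetical record covering NTIA's seven minimum SBOM elements:
# supplier, component name, version, other unique identifiers, dependency
# relationship, author of the SBOM data, and timestamp.
sbom_entry = {
    "supplier": "Example Widgets, Inc.",
    "component": "libwidget",
    "version": "2.4.1",
    "unique_id": "pkg:generic/libwidget@2.4.1",     # purl-style identifier
    "relationship": "dependency-of: example-app@1.0",
    "sbom_author": "Example Widgets build pipeline",
    "timestamp": "2021-07-01T12:00:00Z",
}

REQUIRED = {"supplier", "component", "version", "unique_id",
            "relationship", "sbom_author", "timestamp"}
complete = REQUIRED <= sbom_entry.keys()   # True when no element is missing
```

The value of standardizing even this little is that a buyer (or the government) can mechanically check what is inside a software product before the next SolarWinds-style component compromise, instead of asking the vendor after the fact.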

Okay, I left a bunch of section 4 out for clarity; the items I kept will affect consumers or are otherwise important. I am sure that some companies will try to sue the government. Congress may have to act. But even if these labels and standards are voluntary for now, some companies will think it is great marketing to push what they are doing, and the other companies will be pressured to step up to the plate. If some companies lie about what they are doing, the FTC can come after them.

We are now about halfway through the EO. As you can see, this has a lot more meat than most EOs. If you sell products (hardware or software) to the government, to other companies that sell to the government or to consumers, you need to be considering your plans now.

Credit: The Cybersecurity Executive Order