Author Topic: Is there something to learn for embedded/IOT from the Crowdstrike disaster?  (Read 3653 times)


Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3219
  • Country: ca
So you should not outsource to other companies?

This has nothing to do with outsourcing.

They give full control of their servers to Microsoft, Crowdstrike, and most likely to other companies too. These "providers" can do anything on the servers and PCs under their control, and the owners do not even have the ability to cut off this control in a state of emergency. The "providers" may not have malicious intent, but their employees may be bribed, radicalized, tortured, or what not. They may be infiltrated by terrorists or foreign powers. This gives them the ability to control most of the computers in the world, for example to take down the infrastructure of any country in a matter of hours. With all the eggs in one basket, the consequences may be dire. Is that your idea of security?

Do you believe that Crowdstrike should be given full control of nuclear weapons too? Maybe they are not secure enough without Microsoft or Crowdstrike?
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8355
  • Country: fi
So you should not outsource to other companies?

This has nothing to do with outsourcing.

They give full control of their servers to Microsoft, Crowdstrike, most likely to other companies too.

It has everything to do with outsourcing. It's a prime example of a case where everyone outsources to the same company and probably no one does their due diligence to monitor the reliability of their service provider, because "no one ever got fired for buying IBM".

So while it's an example of outsourcing producing a bad result, the answer is not to stop outsourcing, but to outsource more carefully.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3219
  • Country: ca
It has everything to do with outsourcing. It's a prime example of a case where everyone outsources to the same company and probably no one does their due diligence to monitor the reliability of their service provider, because "no one ever got fired for buying IBM".

If you hired a company to install a secure steel door, this is outsourcing, and this strengthens your security.

If at the time of installation you sign a contract that no duplicate keys are ever made, this is where you strengthen your security further.

If, instead, you sign a contract where the installer makes a duplicate key and acquires the right to enter your premises at any time and do anything they want there without being liable for the consequences, this is where you weaken your security and give up your rights.

How it happened that, in the software world, the latter case came to be perceived as more secure than the former is also a very interesting question. If you have time, may I suggest a good read: "Extraordinary Popular Delusions and the Madness of Crowds" by Charles MacKay. The book is about trading, but it explains how people may part ways with common sense and make really stupid decisions. It is an interesting read regardless.
 
The following users thanked this post: MK14

Offline wraper

  • Supporter
  • ****
  • Posts: 17366
  • Country: lv
If you hired a company to install a secure steel door, this is outsourcing, and this strengthens your security.
If at the time of installation you sign a contract that no duplicate keys are ever made, this is where you strengthen your security further.
If, instead, you sign a contract where the installer makes a duplicate key and acquires the right to enter your premises at any time and do anything they want there without being liable for the consequences, this is where you weaken your security and give up your rights.
Very poor analogy for antivirus-like software. Even if you never do any updates, you still need to trust them that there are no backdoors.
« Last Edit: July 29, 2024, 03:52:08 pm by wraper »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6593
  • Country: fi
    • My home page and email address
Do note that however you look at it, the outsourcing problem is prevalent in free/open source projects as well.

Way too much stuff, including security-sensitive stuff, relies on some library maintained by a poor developer trying their best with very limited resources, with everyone relying on the end product but nobody contributing to its upkeep.  I'm not talking about money, either; the xz backdoor was possible due to lack of developer time and resources.  At the same time, there are well-funded developer teams who believe they should be shielded from criticism by end users, with someone else acting as their secretary and bouncer.  Some open source developers are just as bad as the bad proprietary software companies, and some of the proprietary companies behave very well wrt. their customers and even open source projects.

It is a human and cultural problem, not a technical one.

Is there anything to learn from the CrowdStrike disaster?  Nothing new, just the same old story.  Just compare this to the various physical disasters caused by specific companies (ships blocking the Suez canal, collapsing bridges and buildings, et cetera).  Though, I must admit, a $10 Uber Eats gift card does not compare favourably to actual insurance payouts...
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 12056
  • Country: us
Another human and cultural problem is that CrowdStrike apparently just released a blog post about the incident that basically says:

"There was a temporary problem with an update that caused some Windows systems to crash, but we quickly corrected it, so it didn't last very long. We are going to avoid this in future by doing more adequate testing of what we release. Nothing to see here, move along now."

Public statement linked and attached for posterity.

https://www.crowdstrike.com/wp-content/uploads/2024/07/CrowdStrike-PIR-Executive-Summary.pdf
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3219
  • Country: ca
If you hired a company to install a secure steel door, this is outsourcing, and this strengthens your security.
If at the time of installation you sign a contract that no duplicate keys are ever made, this is where you strengthen your security further.
If, instead, you sign a contract where the installer makes a duplicate key and acquires the right to enter your premises at any time and do anything they want there without being liable for the consequences, this is where you weaken your security and give up your rights.
Very poor analogy for antivirus-like software. Even if you never do any updates, you still need to trust them that there are no backdoors.

Same as with steel doors: you need to trust the manufacturer not to make a duplicate key and rob you in the middle of the night.

As to anti-viruses, people trust them absolutely, and also fear viruses too much. Consequently they let anti-viruses do whatever they want and patiently tolerate all the problems the anti-viruses cause. Such an approach is certainly heavily biased.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8355
  • Country: fi
If you hired a company to install a secure steel door, this is outsourcing, and this strengthens your security.

Incorrect analogy, because a steel door with a physical key requires very little maintenance, so the whole challenge of antivirus/data security (the fact that it's a moving target) is bypassed. A better analogy is hiring security officers for your premises. Maybe they even carry weapons. They need access to your secret areas to do their job effectively. And it's a totally nontrivial problem to choose whether you hire them on your company payroll or outsource them from a security company; the latter is very likely a wise thing to do, but on the other hand you can't do it blindly, as this resource can be compromised either by accident or on purpose.
« Last Edit: July 29, 2024, 05:58:57 pm by Siwastaja »
 
The following users thanked this post: wraper

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14967
  • Country: fr
Another human and cultural problem is that CrowdStrike apparently just released a blog post about the incident that basically says:

"There was a temporary problem with an update that caused some Windows systems to crash, but we quickly corrected it, so it didn't last very long. We are going to avoid this in future by doing more adequate testing of what we release. Nothing to see here, move along now."

Public statement linked and attached for posterity.

https://www.crowdstrike.com/wp-content/uploads/2024/07/CrowdStrike-PIR-Executive-Summary.pdf

This is hilarious.
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 4565
  • Country: dk
If you hired a company to install a secure steel door, this is outsourcing, and this strengthens your security.

Incorrect analogy, because a steel door with a physical key requires very little maintenance, so the whole challenge of antivirus/data security (the fact that it's a moving target) is bypassed. A better analogy is hiring security officers for your premises. Maybe they even carry weapons. They need access to your secret areas to do their job effectively. And it's a totally nontrivial problem to choose whether you hire them on your company payroll or outsource them from a security company; the latter is very likely a wise thing to do, but on the other hand you can't do it blindly, as this resource can be compromised either by accident or on purpose.

And in this case it seems like the security company thought it would be too much hassle to have every security officer go through the front entrance, so instead they had one guy go in and open a backdoor for easy access.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3219
  • Country: ca
Incorrect analogy, because a steel door with a physical key requires very little maintenance, so the whole challenge of antivirus/data security (the fact that it's a moving target) is bypassed. A better analogy is hiring security officers for your premises. Maybe they even carry weapons. They need access to your secret areas to do their job effectively. And it's a totally nontrivial problem to choose whether you hire them on your company payroll or outsource them from a security company; the latter is very likely a wise thing to do, but on the other hand you can't do it blindly, as this resource can be compromised either by accident or on purpose.

You keep trying to make it about outsourcing. The problem is the behaviour.

Many software companies (not only Microsoft and CrowdStrike, but others as well) overstep their boundaries, disrespect their customers, and push their updates as if they were more important than the customers' business. Do you really like that? Is that how you would want it to be?

As to CrowdStrike specifically, they have a driver which reads pseudo-code from files and executes it in kernel mode. This can do anything: wipe the hard drive, or steal all the files. An attacker can overwrite some of their files and thereby make the driver execute them. This is a new vulnerability that didn't exist before. Shouldn't the customers think about that? Or should they unquestioningly believe that everything made by CrowdStrike is good?
 
The following users thanked this post: MK14, SiliconWizard, zilp

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8355
  • Country: fi
You keep trying to make it about outsourcing.

No, it was you who gave the impractical advice of "don't give power to others".

Quote
The problem is the behaviour.

I fully agree.

Quote
Many software companies (not only Microsoft and CrowdStrike, but others as well) overstep their boundaries, disrespect their customers, and push their updates as if they were more important than the customers' business.

I fully agree as well. The solution is not to stop giving power to others by contracting; it's about choosing partners wisely and monitoring their contributions. The fundamental issue is excess trust in partners due purely to their sheer size: "no one ever got fired for buying IBM Microsoft". And in reality, companies blindly using Microsoft products causes a massive loss of productivity every day, not to mention the huge risk of having all the eggs in the same basket.
 

Offline wek

  • Frequent Contributor
  • **
  • Posts: 514
  • Country: sk
Quote
Many software companies (not only Microsoft and CrowdStrike, but others as well) overstep their boundaries, disrespect their customers, and push their updates as if they were more important than the customers' business.

I fully agree as well. The solution is not to stop giving power to others by contracting; it's about choosing partners wisely and monitoring their contributions. The fundamental issue is excess trust in partners due purely to their sheer size: "no one ever got fired for buying IBM Microsoft". And in reality, companies blindly using Microsoft products causes a massive loss of productivity every day, not to mention the huge risk of having all the eggs in the same basket.
Putting the absence of real competition aside: on what basis are you going to decide how to split the eggs and how to select the baskets? And how do you reject eggs you don't need or want at all?

Being secretive about what the "updates" constitute is the norm; plus, "updates" are today widely seen as universally good, without questioning the risk/benefit ratio at all.  This in turn makes the "push whatever garbage you have out of the door, fix it later" mindset a winning strategy; and this doesn't even include the "side" income from gathering data/intelligence, or from extortion.

I may despise this approach, but maybe for many this is the real takeaway for embedded/IoT (whatever that is).

JW
 

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 297
  • Country: de
You keep trying to make it about outsourcing.

No, it was you who gave impractical advice of "don't give power to others".

It wasn't, though. He said "full control" and "do not even have the ability to cut off this control in a state of emergency". That goes beyond just "giving power to others".

I fully agree as well. The solution is not to stop giving power to others by contracting; it's about choosing partners wisely and monitoring their contributions. The fundamental issue is excess trust in partners due purely to their sheer size: "no one ever got fired for buying IBM Microsoft". And in reality, companies blindly using Microsoft products causes a massive loss of productivity every day, not to mention the huge risk of having all the eggs in the same basket.

While all of that is true, the problem is not just excess trust for bad reasons. One problem is trust in large corporations, period.

This is the exact same problem that democracy is trying to solve, namely, the risks of concentrations of power.

Which is precisely why the interpretation of "don't give power to others" is oversimplified, even though there is some kernel of truth in it. It is true, in a way, that democracy works by putting limits on "giving power to others", in that giving all power over a country to one person is the antithesis of democracy. But obviously, the idea is not to never "give any power away". Rather, one defining characteristic of democracy is elections, the purpose of which is explicitly to "give away power", i.e., to delegate the power to make decisions to others. But the important part is that the system as a whole is ideally set up in such a way that no single person or small group of people gains an exceedingly large amount of power. That is how democracy tries to limit the damage that can result from a person going rogue, being incompetent, being blackmailed, whatever--and, maybe even more importantly, this also decreases the incentive to corrupt people, because the power that you can gain by corrupting people is also limited.

Analogously, the solution is not to avoid all outsourcing. But the solution requires limiting how much power ends up in the same hands. And in that context, it is not sufficient to just look at how many other customers some supplier has. It is also important to consider how much power they gain over you when you buy their product. And forced automatic updates of software that runs with unlimited privileges on all of your systems is the maximum of power that you could possibly be giving away to a single entity ... as was just demonstrated. Mind you, this means that the admin password of CrowdStrike's build machine is effectively the admin password to all of your machines. So, it's not just about whether you trust the integrity of the people in that company; you also have to consider the size of the target on their back. Someone who manages to compromise their systems effectively gains admin access to millions of systems that guard trillions of dollars of value and that can be used to cause even more trillions of dollars of damage ... i.e., it is a very high value target for both corruption and compromise, even if your own company by itself might not be of much value to an attacker.

Also, I just want to remind everyone that it is not a law of nature that constant software updates are a necessity for having secure systems; rather, a lot of vulnerabilities are the result of bad engineering, and often ultimately of business priorities. It isn't just a law of nature that products you buy are full of defects. In many ways this is equivalent to accepting as completely normal that your industrial machinery constantly explodes, and thinking the obvious solution is to hand the keys to your factory to the manufacturers of those machines, so that they can sneak in at any time to swap parts.

And with anti-virus and similar products, you further have to consider that they are largely useless unless you are doing things very wrong elsewhere, so the potential gain from handing over so much power (and the risks you accept as a result of that) is rather limited. If there is a piece of malware that exploits a vulnerability that does not exist on your systems (either because you simply aren't using the affected software, or because you installed the security fix), then having software that prevents that malware from reaching your systems is useless. Malware that exploits vulnerabilities that don't exist on your systems is not a danger to you. At the same time, all of the code of anti-virus software that deals with all of the untrusted data to check it for malware is itself a huge attack surface, and given the complexity, is pretty much guaranteed to have vulnerabilities. So, you are potentially installing additional vulnerabilities for the purpose of preventing stuff from reaching your systems that would not have any negative impact if it did reach your systems.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6593
  • Country: fi
    • My home page and email address
I tried to think of examples illustrating why the cultural reasons that led to the CrowdStrike disaster are not specific to any software product, but are ubiquitous.  (I do not know why that is; I am only trying to point out the bigger picture here.)

There are some that come to my mind, all on the web server side, but they do apply to embedded/IoT development, integration, and deployment too.

The first is server configuration in Linux distributions.  How many of those who maintain web servers (at the Apache/Nginx configuration level) use an active server configuration directory different from the default one?  Very, very few.  This means that whenever software updates are applied, any web-server-related configuration will be automatically applied and take effect, too.  While most distributions are quite good at using "safe" default configuration files, installing the wrong package may enable a web-based service (a specialized Apache/Nginx module) that compromises the security of the machine due to cross-service interactions.

(I always use a dedicated directory, with separate configuration information.  For about a decade, I even published one designed for group-based maintainer access (for when you have many webmasters with different levels of access to a mixed tree) at my previous home page.  This way, I can allow automatic security patches to the server, but monitor the distribution-provided configuration directory –– which does not affect the running service at all –– for important changes that may need to be ported to the server.  I did this on a number of rather public servers, with very good results (both security and support to webmasters); and not because they were small and quiet: judging by the log files, they were constantly bombarded with exploit access patterns!  Yet, to introduce the pattern to new sysadmins/sysmasters/web root admins, I always need to explain the whole story with examples before they realize that the small inconvenience they imagine this causes actually improves their work product for everyone, and typically reduces the time taken for maintenance in the long run.  This should be the obvious default pattern, not something that has to be pushed through heavy resistance!)

The second is self-modifying web scripts –– "required" only so that one can update the web framework by clicking on a button on a web page ––, and the third is embedding credentials in the firmware.

Allowing web service executables to rewrite themselves or create new executables is the reason why script drops, trojans, and worms are still plaguing web frameworks.  If they could not modify themselves or create new executables, almost all exploits would immediately stop working, with entire approaches becoming impossible to exploit.

Embedding credentials in the firmware, even if in salted password form, allows anyone excavating those credentials to pose as that device.  It makes it possible to create infiltration and exploitation devices that pose as the original device, but produce attacker-controlled output.  In IoT devices communicating with upstream servers, it allows the attackers to communicate with those upstream servers without the upstream servers being able to detect it.  This applies to both custom protocols and common protocols like http and smtp.  A lot of spam is injected to email servers via exploited trusted clients, for example.

Now, I'm not here to argue whether the above is correct or not (especially with zilp), because I've observed this in practice for a long time; decades.  I described them only as examples, to illustrate the human behaviour analogous to that underlying the CrowdStrike disaster, so cut me some slack.

The point I am trying to make here is that doing things the above way is what most sysadmins, webmasters, systems integrators, developers, et cetera nowadays learn to do by default, because it aligns with the cultural approaches we have.  We don't build buildings to last, so why would one expect us to build software to last either?  It is not cost-effective to do so.  When we teach software development, error checking is at best an afterthought, normally omitted to "save time".

You have to train people to think along better lines, in better patterns, or they will repeat the mistakes others have repeated before, learning nothing from history.  (And they will typically complain all through training about how much of a waste of time this is, because <the example exploits> are so rare in practice they're not worth worrying about.  Just like you see when you write code that, say, checks the return value of close(), because "it cannot fail for normal files on typical desktop file systems".)

Of course, exactly what those better patterns are is a separate discussion; it is more about "how to avoid CrowdStrike disaster situations" than "what to learn from the CrowdStrike disaster".
« Last Edit: July 30, 2024, 12:27:33 pm by Nominal Animal »
 

Offline peter-hTopic starter

  • Super Contributor
  • ***
  • Posts: 3847
  • Country: gb
  • Doing electronics since the 1960s...
In the embedded context, this debacle ought to make manufacturers think about the risks of firmware updates, especially if they have sold a lot of the boxes to a customer big enough to sue ;)
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7996
  • Country: de
  • A qualified hobbyist ;)
The first is server configuration in Linux distributions.  How many of those who maintain web servers (at the Apache/Nginx configuration level) use an active server configuration directory different from the default one?  Very, very few.  This means that whenever software updates are applied, any web-server-related configuration will be automatically applied and take effect, too.  While most distributions are quite good at using "safe" default configuration files, installing the wrong package may enable a web-based service (a specialized Apache/Nginx module) that compromises the security of the machine due to cross-service interactions.

Actually the problems already start with using the distribution's apache/whatever package. Usually you'll get a daemon with all features enabled. It might not be the best setup for your specific environment (performance-wise), and it also increases the attack surface with features you don't use. Especially for Apache's httpd there are compile-time options which have a huge impact on performance in specific environments and on the use of other features. If someone finds a critical security issue in feature xyz and you don't have that feature enabled in your daemon, you can relax.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6593
  • Country: fi
    • My home page and email address
Actually the problems already start with using the distribution's apache/whatever package. Usually you'll get a daemon with all features enabled. It might not be the best setup for your specific environment (performance-wise), and it also increases the attack surface with features you don't use.
Yes; fortunately, most of the modules are configurable, so it's a matter of using a completely separate configuration directory rather than recompiling the daemon.  Any configuration I do always starts from the absolute minimum; usually from the choice of the worker module and its initial configuration.

My own configuration schemes require a few local users and groups per virtual host, just to correctly manage access rights to filesystem resources.  Unfortunately, this is not compatible with most web hosting services (using cPanel, Plesk, etc.) as they are designed for one user account per site.  So silly...

And it's not just multiple user accounts, either.  If only we had robust methods of creating directed graphs of allowed user identity transitions –– such that user A can switch to user B but not vice versa, and user B to user C but not vice versa (and user A to user C only by switching to user B in between), like a very simple state machine allowing identity transitions ––, we could make web scripting environments even more secure.  (We can do it now based on process IDs and an external daemon, but that is fragile, with several attack scenarios.)  The script interpreter instances are configured on a per-vhost basis in both Apache and Nginx, with specific user and group privileges.  Those would be sufficient to obtain the necessary credentials to databases and e.g. a connection to user verification.  For each incoming request, a fork of that process would then drop privileges as soon as the request URI is received (the final privileges depending on that URI), before even fully parsing the request.  For pages involving user passwords or configuration in read-write form, you could use a separate user account than on normal pages; similarly for pages that upload static file objects, and for administrator interfaces.  The kernel then manages the privilege separation between the parts of the site.  Very little of the site code would be security-sensitive.  For example, a bug in the scripts generating user-configurable style sheets could not be exploited for a script drop, no matter what bugs it had, because the process simply does not have the privileges to create any filesystem objects in the first place.  It sounds complicated, but isn't.

Simply put, we can already minimize these attack/exploit/catastrophic error surfaces; we simply do not do so yet.  We can, but most people object to it when it comes to doing the grind –– or at least whenever I've done this stuff, I've had to seriously fight against those demanding simpler (but clearly more vulnerable/fragile) approaches, and definitely not because the others didn't know what they were doing:  their priorities just differed from mine.
Hell, I'm not even sure anymore whether the more secure and robust approach is the one we should use, because marketplace competition says users are not demanding it, nor are they willing to pay for it.  They just want to express their outrage on social media for a few days, then revert back to the same old behaviour.
 

Offline madires

  • Super Contributor
  • ***
  • Posts: 7996
  • Country: de
  • A qualified hobbyist ;)
Crowdstrike doesn't like parody and tries to take down https://clownstrike.lol/. You can read the DMCA and the response at https://clownstrike.lol/crowdmad/. Streisand effect at work. ;D
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14967
  • Country: fr
Nice.
 

