Author Topic: Is there something to learn for embedded/IOT from the Crowdstrike disaster?  (Read 11473 times)


Offline wek

  • Frequent Contributor
  • **
  • Posts: 525
  • Country: sk
I wasn't joking about Rust
The reason this can be seen, at the same time, by some as trolling and by others as liberating truth is that there is practically no evidence for or against it.

As Derek Jones says, there is often little science in computer science.

JW
 

Offline floobydust

  • Super Contributor
  • ***
  • Posts: 7386
  • Country: ca
CrowdStrike External Technical Root Cause Analysis — Channel File 291
https://www.crowdstrike.com/wp-content/uploads/2024/08/Channel-File-291-Incident-Root-Cause-Analysis-08.06.2024.pdf

BASIC and PASCAL have run-time array-bounds checking. Out-of-bounds array indexing, ahhh C++ is shit as is C, let's celebrate 50 years of this crap  :palm:
 
The following users thanked this post: quince

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8658
  • Country: fi
CrowdStrike External Technical Root Cause Analysis — Channel File 291
https://www.crowdstrike.com/wp-content/uploads/2024/08/Channel-File-291-Incident-Root-Cause-Analysis-08.06.2024.pdf

BASIC and PASCAL have run-time array-bounds checking. Out-of-bounds array indexing, ahhh C++ is shit as is C, let's celebrate 50 years of this crap  :palm:

This is an oversimplification of what happened. Lately I have been dealing with shit caused by a bounds-checked language escalating into a complete crash of everything. To the customer, the end result is exactly the same: either a white screen of death, or complete misbehavior of seemingly random things.

Or it can be worse: maybe, by luck, the variable that would have been corrupted was some rarely needed or unimportant thing, but range checking escalated that into a full-blown total crash.

For developers, of course, it is much nicer to have a repeatable crash with a clear explanation of why, instead of weird bugs that affect something else in the execution flow. But if developers are not interested in quality, then this opportunity is wasted. Range-checked languages require as much care as non-range-checked ones. Fanboyism leads to a false sense of security.

In this particular case, as is often the case, the crash was clearly instant, repeatable, and easy to reproduce and find (e.g. with a debugger), so the root issue was that the developers were not interested in checking their inputs or in testing their product (they first tested the 21-parameter use case on customers). This means that even if the language had an exception mechanism to catch out-of-bounds accesses, the developers would not have used it either, instead letting it escalate into a full crash, as these languages usually do by default if you don't try-catch (or whatever the mechanism is) with fine enough granularity. Which you should do, just as in C++ you should range-check arrays.
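The granularity point can be sketched in Rust (a hypothetical example, not CrowdStrike's actual code): indexing with `[]` panics and takes the whole process down, while `.get()` hands the caller an `Option` and forces them to write the recovery policy themselves.

```rust
fn main() {
    // Hypothetical "channel file" parameters: 20 values where 21 were expected.
    let params: Vec<u32> = (0..20).collect();

    // Unchecked style: `params[20]` would panic and kill the process,
    // which in kernel context is the BSOD-equivalent outcome.

    // Checked style: .get() returns Option, so the caller must decide
    // explicitly what recovery means -- here, skip the bad input and continue.
    match params.get(20) {
        Some(v) => println!("param 21 = {}", v),
        None => println!("malformed input: skipping this channel file"),
    }
}
```

The bounds check itself is free; the `None` arm (the policy) is the part no language writes for you.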

It is tempting to draw simple conclusions, but be careful not to draw wrong ones. Advanced tools usually do not remove complexity; they just move responsibilities elsewhere.
« Last Edit: August 12, 2024, 06:42:14 am by Siwastaja »
 
The following users thanked this post: madires, wek

Online Marco

  • Super Contributor
  • ***
  • Posts: 6907
  • Country: nl
There's always JavaScript; it doesn't panic on out-of-bounds access.

There's some work in Rust to try to prove code can't panic, though I have no idea how it handles the possibility of stack overflow.
 

Offline mfro

  • Regular Contributor
  • *
  • Posts: 219
  • Country: de
CrowdStrike External Technical Root Cause Analysis — Channel File 291
https://www.crowdstrike.com/wp-content/uploads/2024/08/Channel-File-291-Incident-Root-Cause-Analysis-08.06.2024.pdf

BASIC and PASCAL have run-time array-bounds checking. Out-of-bounds array indexing, ahhh C++ is shit as is C, let's celebrate 50 years of this crap  :palm:

This is an oversimplification of what happened. Lately I have been dealing with shit caused by a bounds-checked language escalating into a complete crash of everything. To the customer, the end result is exactly the same: either a white screen of death, or complete misbehavior of seemingly random things.

Certainly. Simpler languages like BASIC and Pascal just terminate the program when array-bounds checking finds a violation.
That obviously wouldn't help much in an OS kernel - where should it terminate to?

If that routine had been written in Ada with a simple catch-all exception handler that would just restart, skipping the offending channel file, nothing would have happened. If it had been written in Ada and verified using SPARK, it wouldn't even have needed the exception handler.

Even better: if Microsoft had decided to write their OS in Ada (with some SPARK for verification) in the first place, most likely the whole Crowdstrike business model would fall apart, since it wouldn't be needed anymore.
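The Ada catch-all idea has a rough Rust analogue (a sketch; `parse_channel_file` and the data are made up): `std::panic::catch_unwind` turns a panic, e.g. from an out-of-bounds index, into a `Result`, so a driver loop can skip the offending file instead of dying.

```rust
use std::panic;

// Hypothetical parser standing in for the real channel-file logic:
// it panics on malformed input via an out-of-bounds index.
fn parse_channel_file(data: &[u8]) -> u8 {
    data[20] // panics if data is shorter than 21 bytes
}

fn main() {
    // One well-formed file and one truncated one.
    let files: Vec<Vec<u8>> = vec![vec![0; 21], vec![0; 4]];
    for (i, f) in files.iter().enumerate() {
        // Catch-all boundary: a panic in one file must not kill the loop.
        match panic::catch_unwind(|| parse_channel_file(f)) {
            Ok(v) => println!("file {}: ok, byte 21 = {}", i, v),
            Err(_) => println!("file {}: malformed, skipped", i),
        }
    }
}
```

This only works because Rust unwinds on panic by default; it is a recovery boundary, not verification, so it is closer to the Ada exception handler than to SPARK.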
Beethoven wrote his first symphony in C.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
I think one good thing to come from this is that the EU will have to revisit the policy that stopped MS from deploying the new API they had developed to stop security-related drivers from running in kernel mode.

It was a bad decision by the EU, who were obsessed with stopping a Microsoft monopoly and overlooked the security and reliability implications.

So I expect an improvement in the resilience of the Windows OS to follow along in due course.
 

Offline mfro

  • Regular Contributor
  • *
  • Posts: 219
  • Country: de
I think one good thing to come from this is that the EU will have to revisit the policy that stopped MS from deploying the new API they had developed to stop security-related drivers from running in kernel mode.

Nonsense.

The EU did not stop MS from implementing a new API in any way - they just said that independent security vendors need to have access to that API.

The EU also didn't talk Crowdstrike into implementing crappy software.
Beethoven wrote his first symphony in C.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
The OP asked about lessons that could be learned, and I'm guessing that was intended to be in the context of microcontroller (the title of this forum).

The user mode vs kernel mode argument is probably a bit less relevant in the world of microcontrollers. However, one thing that struck me from Big Dave's video was that the kernel mode driver - signed, of course - wasn't what was updated. Rather, it was a data file that the driver read from disk and then processed that caused the driver to choke.

So to me, one lesson that absolutely applies to microcontrollers is the need to make sure all parts of your code are resilient to bad incoming data - whether that is from a sensor, a file, the user, or another part of your code. It implies sanity-checking all such data before making use of it.* This requires a full description of what "sane" data looks like, which I can imagine will be non-trivial in many cases.

Furthermore, even when the incoming data has been sanity-checked, there is still a need for a policy on what to do when "not-sane" data is encountered. When it is user input it might be reasonable to show an error message and ask for the data to be re-entered correctly. But what about when it is from a sensor, or a file read from disk? Yes, a proper sanity check will warn when it's bad, but then what? Your policy for handling these circumstances will probably require a fair bit of effort to design and implement.  Plus you need to make sure its scope is comprehensive enough to handle all conceivable sanity check failures.

I wonder if there is a development process or model that makes sure this is properly addressed during the design and coding stages. Exception handling and try-finally blocks provide some of the "how", but the big picture - the "what" - still needs defining.

*When I was learning to code, like almost all of us I had to develop a simple calculator: add, subtract, multiply, divide. And probably like many of us, I experienced my first crash - a 'divide-by-zero' exception. That led to my first sanity-checking lesson: ensuring that the denominator is not zero before going ahead with the division. I then extended that to screen out non-numeric inputs. However, I don't recall receiving or reading much more on the topic of sanity checking and software resilience. Yes, throwing and catching exceptions is taught, but the next layer up - the "so now what?" layer - doesn't seem to play a big part in learning to code. Mostly because it's a policy that needs designing before the code gets cut.
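The calculator lesson might look like this in Rust (a sketch; the `divide` helper is made up): parsing screens out non-numeric input, `checked_div` makes the zero-denominator case a value to handle, and the `Err` messages are the start of the "so now what?" policy that still has to be designed by hand.

```rust
// Hypothetical input-validation policy for a toy calculator:
// parse, then check, then decide what each "not sane" case means.
fn divide(a_raw: &str, b_raw: &str) -> Result<i64, String> {
    let a: i64 = a_raw.trim().parse().map_err(|_| format!("not a number: {a_raw:?}"))?;
    let b: i64 = b_raw.trim().parse().map_err(|_| format!("not a number: {b_raw:?}"))?;
    // checked_div returns None on division by zero (and on i64::MIN / -1 overflow),
    // instead of raising an exception mid-calculation.
    a.checked_div(b).ok_or_else(|| "division by zero".to_string())
}

fn main() {
    println!("{:?}", divide("10", "2"));  // Ok(5)
    println!("{:?}", divide("10", "0"));  // Err: division by zero
    println!("{:?}", divide("ten", "2")); // Err: non-numeric input
}
```

Note that the checks themselves are one line each; deciding whether an `Err` means re-prompt, substitute a default, or abort is the part that needs a policy.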
 
The following users thanked this post: globoy

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8658
  • Country: fi
What is especially funny about the Rust fanboys jumping on this case is that, in this particular instance, by sheer luck, the over-indexing happened to hit a page not allocated by the OS, and therefore - again by sheer luck - produced the same response that a range-checked language - again in the absence of error handling - would have produced.

There are advantages to languages which always range-check. For example, this unchecked bug could have allowed access into an allocated kernel page, possibly giving an attacker full privileges to silently enter the system and access any data, or to turn the computers into botnets, which is arguably even worse than the kernel crash and the related downtime.

Yet in this particular case that did not happen. By luck, the behavior of the unchecked language was exactly like that of most decent range-checked languages: crash and downtime.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
The EU did not stop MS from implementing a new API in any way - they just said that independent security vendors need to have access to that API.

It isn't nonsense. Watch Dave's video for a fuller explanation - minute 6 onwards. The EU regulators explicitly stopped Microsoft from incorporating that API into Windows, despite Microsoft designing it specifically for "users like CrowdStrike". The whole truth seems to be more complicated: Microsoft intended that the API would only be available to SOME vendors, and the EU insisted the API be available to ALL developers or it wouldn't go in. As usual, the regulators won.

But this does not mean that Microsoft were in the wrong, or being unreasonable, or anti-competitive. We cannot make that judgement until we know the full story. Why did Microsoft apparently limit the API to vendors "like CrowdStrike"? What were the criteria that must be met for a vendor to get access to the API?

For all we know, those criteria might be perfectly reasonable. For all we know, they might have been quite easy to meet.  For all we know, keeping the amateurs well away from that API might be extremely wise.

We don't know the full story. All we know is that if the EU hadn't intervened, that API would be in place now and Windows would presumably be much more resilient to cock-ups like CrowdStrike made.
 

Offline quince

  • Regular Contributor
  • *
  • Posts: 63
  • Country: us
Lately I have been dealing with shit caused by a bounds-checked language escalating into a complete crash of everything. To the customer, the end result is exactly the same: either a white screen of death, or complete misbehavior of seemingly random things.

Or it can be worse: maybe, by luck, the variable that would have been corrupted was some rarely needed or unimportant thing, but range checking escalated that into a full-blown total crash.

For developers, of course, it is much nicer to have a repeatable crash with a clear explanation of why, instead of weird bugs that affect something else in the execution flow. But if developers are not interested in quality, then this opportunity is wasted. Range-checked languages require as much care as non-range-checked ones.

In this particular case, as is often the case, the crash was clearly instant, repeatable, and easy to reproduce and find (e.g. with a debugger), so the root issue was that the developers were not interested in checking their inputs or in testing their product (they first tested the 21-parameter use case on customers). This means that even if the language had an exception mechanism to catch out-of-bounds accesses, the developers would not have used it either, instead letting it escalate into a full crash, as these languages usually do by default if you don't try-catch (or whatever the mechanism is) with fine enough granularity. Which you should do, just as in C++ you should range-check arrays.

It is tempting to draw simple conclusions, but be careful not to draw wrong ones. Advanced tools usually do not remove complexity; they just move responsibilities elsewhere.

This is a good discussion, but I don't like the implication that software with bugs should run. Software should not be written as if it were a person, where imperfect behavior is acceptable. If software has any bugs, it should delete itself or at a minimum crash immediately.

Quote
Fanboyism leads to false sense of security.
I am a Rust fanperson. You are a C/C++ fanboy. We are not the same.
 

Offline quince

  • Regular Contributor
  • *
  • Posts: 63
  • Country: us
Specifically funny in how Rust fanboys are jumping into the case is that, in this particular case, due to sheer luck, overindexing happened into a page not allocated by the OS, and therefore - again by sheer luck - produced same response as a range checked language - again in absence of error handling - would have produced.

There are advantages in languages which always range-check. For example, this non-checking bug could have allowed access into allocated kernel page, possibly allowing an attacker full privileges to silently enter the system and access any data, or turn the computers into botnets, which arguably is even worse than the kernel crash and related downtime.

Yet, in this particular case, this did not happen. By luck, the behavior of non-checking was exactly like in most decent range-checked languages: crash and downtime.

Range checking in rust is at compile time! If you try to build:

let x = [1,2,3,4,5];

let y = x[50];

rustc will simply refuse to compile the program. "index out of bounds: the length is 5 but the index is 50."

However, if you write

int x[] = {1,2,3,4,5};

int y = x[50];

gcc will not complain under any circumstances and will happily compile your code. Even if you pass -std=c23 and -Wall.

It will merely make the useless remark that "int y" is not used. The only thing you are guaranteed is UB.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8658
  • Country: fi
Range checking in rust is at compile time!

No, obviously range checking in Rust must happen at both compile time and run time. This should be very obvious even to beginners, but... often you can't range-check at compile time, for example in the following example, written in a made-up pseudo-language:

Code: [Select]
int idx, val;
int array[10];
cout << "Enter data index to write at (hopefully 0 - 9)?" << endl;
cin >> idx;
cout << "Enter value to write?" << endl;
cin >> val;
array[idx] = val;

Compile-time checks, in those sadly rare or trivial cases where they are possible, are great not only for the always-stated obvious reason that they waste no CPU or program memory at run time, but also for a much more significant reason: proving that the check can never fail frees the programmer from having to think about what to do if the test fails - how to recover.

Because that recovery is the very issue here. A language which forces runtime checks (whenever it cannot check at compile time, which is very often in a complex piece of software) does not magically know how to recover. While the programmer does not need to write explicit "if (idx > MAX_IDX)" type checks, they still need to write very explicit handlers for out-of-bounds accesses. Which is 99% of the work.
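The pseudo-example above might look like this in Rust (a sketch; the function and messages are made up, and the user input is simulated): the implicit bounds check would only turn a bad idx into a panic, so the hand-written part is still deciding what happens instead.

```rust
// Explicit recovery policy for the "write value at user-chosen index" example.
// get_mut() surfaces the bounds check as an Option; the Err arm below is the
// 99% -- the recovery decision -- that no runtime check writes for you.
fn store(array: &mut [i32; 10], idx: usize, val: i32) -> Result<(), String> {
    match array.get_mut(idx) {
        Some(slot) => {
            *slot = val;
            Ok(())
        }
        None => Err(format!("index {idx} out of range 0-9, value rejected")),
    }
}

fn main() {
    let mut array = [0i32; 10];
    // Simulated user input in place of `cin >> idx`:
    println!("{:?}", store(&mut array, 3, 42));  // accepted
    println!("{:?}", store(&mut array, 17, 42)); // rejected, program keeps running
}
```

Here the chosen policy is "reject and continue"; re-prompting or substituting a default are equally valid policies, and the language cannot pick one for you.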

The problem with fanpersonism is the oversimplification and the false sense of security, which lead to the actual problems not being addressed at all.

PS. If you use old and crappy languages like C in mission critical systems, consider using additional tooling. Static analysis tools that can detect misindexing at compile time are readily available, and much more.
« Last Edit: August 12, 2024, 06:28:54 pm by Siwastaja »
 
The following users thanked this post: quince

Online Marco

  • Super Contributor
  • ***
  • Posts: 6907
  • Country: nl
So to me, one lesson that absolutely applies to microcontrollers is the need to make sure all parts of your code are resilient to bad incoming data
I think having a rock-solid bootloader which can do network updates of the later stages is more important. It allows you to remotely recover from anything except the bootloader update going disastrously wrong and wiping out the A/B update mechanism ... in which case you should have a proper factory reset too.
 
The following users thanked this post: SteveThackery

Offline wek

  • Frequent Contributor
  • **
  • Posts: 525
  • Country: sk
This may be something to learn from, not only for embedded/IoT:

https://www.theverge.com/2024/8/12/24218536/crowdstrike-accepts-def-con-pwnies-award-most-epic-fail-global-windows-it-outage

Now compare this with Microsoft's whining.

JW
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3515
  • Country: it
gcc will not complain under any circumstances and will happily compile your code. Even if you pass -std=c23 and -Wall.

It will merely make the useless remark that "int y" is not used. The only thing you are guaranteed is UB.

And that is GCC's fault.
now try Clang

Code: [Select]
<source>:4:12: warning: array index 50 is past the end of the array (that has type 'int[5]') [-Warray-bounds]
    4 |     return x[50];
      |            ^ ~~
<source>:2:5: note: array 'x' declared here
    2 |     int x[] = {1,2,3,4,5};
      |     ^
1 warning generated.
ASM generation compiler returned: 0
Execution build compiler returned: 0
Program returned: 64
 
The following users thanked this post: quince

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de
We don't know the full story. All we know is that if the EU hadn't intervened, that API would be in place now and Windows would presumably be much more resilient to cock-ups like CrowdStrike made.

For all we know, if the EU hadn't intervened, that API would have had a bug that would have caused millions of Windows machines to crash, and you would now be complaining about how the EU had forced people to use Microsoft's crappy software instead of allowing more competent vendors to enter the market.

The job of the people making that decision within the EU is to ensure that competition isn't hindered. The technical quality of products is simply not something that is relevant to their job. It is not their job to decide which products are good and which are bad and to then decide based on that which products are allowed to exist. Their job is to ensure that all vendors have a chance to offer their product in the market, so that customers have the choice, and can use whatever criteria they prefer to decide which product to buy.

It is your assumption that the MS API would have been higher quality. But for one, that's not really something that you know, much less that you would have known a priori. Then, even if that were true, that doesn't mean that that is true of all other products that are enabled by that EU decision. And in any case, that is not their job.
 
The following users thanked this post: madires, Siwastaja

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
It is your assumption that the MS API would have been higher quality. But for one, that's not really something that you know, much less that you would have known a priori. Then, even if that were true, that doesn't mean that that is true of all other products that are enabled by that EU decision. And in any case, that is not their job.

Ah, I'm afraid you have let your anti-Microsoft bias show, which does weaken your argument somewhat.  In fact your argument is really strange, coming from an engineer. It boils down to this:

1/ Microsoft want to re-architect part of Windows to make it more secure.

2/ But there might be bugs in the new code, undermining the gains it can offer.

3/ The regulators don't care about bugs or the quality of anyone's code.

4/ Therefore there is no case for allowing the improved architecture and we should continue with an architecture which the owner has identified as insecure and fragile.

Can you see how silly this is? You are probably right that code quality is not the concern of the EU, but security vulnerabilities surely should be! If not, then the EU is the party that needs to do some serious thinking.

Take your own code. Suppose you can see a fundamental weakness in the architecture that leaves it vulnerable to bad third party code. Do you leave it unfixed in case your solution introduces new bugs? (Don't say yes, because no one will believe you). The truth is that nobody's code is bug-free, but to use that as a reason not to close a gaping security hole in your software is absolutely nuts.

There is a new architecture available that should greatly improve the resilience and security of Windows. Yes, there might be critical bugs in it, but we now know with certainty that the existing architecture is extremely vulnerable.

Getting the architecture right is crucial, and improving a bad architecture is crucial. It means you are starting from a better place with your coding.
 

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de
Ah, I'm afraid you have let your anti-Microsoft bias show, which does weaken your argument somewhat.  In fact your argument is really strange, coming from an engineer. It boils down to this:

1/ Microsoft want to re-architect part of Windows to make it more secure.

That is not an established fact. That is Microsoft's claim.

2/ But there might be bugs in the new code, undermining the gains it can offer.

3/ The regulators don't care about bugs or the quality of anyone's code.

4/ Therefore there is no case for allowing the improved architecture and we should continue with an architecture which the owner has identified as insecure and fragile.

For one, I still haven't seen any source that actually establishes that "the improved architecture" was not allowed. As far as I know, this was only about not allowing Microsoft to prevent competitors from accessing interfaces. If you actually have a source for that, feel free to point me to it.

But also, this still is just Microsoft's claim, not an established fact.

Can you see how silly this is? You are probably right that code quality is not the concern of the EU, but security vulnerabilities surely should be! If not, then the EU is the party that needs to do some serious thinking.

No, they shouldn't either. Not in that part of the bureaucracy. Security vulnerabilities are addressed via liability rules and possibly via product quality standards, not as part of antitrust law.

Take your own code. Suppose you can see a fundamental weakness in the architecture that leaves it vulnerable to bad third party code.

This is simply misleading framing. You are phrasing this as if there is a security vulnerability in Microsoft Windows. There isn't. Or, at least not in this particular case. A security vulnerability is something that allows an unauthorized party to compromise the integrity of a system. That was not the case. What you call a "vulnerability" here is nothing more than the freedom of authorized parties (i.e., the owner of the computer) to install any software they want - and that happens to include software that crashes their system, or that contains security vulnerabilities, because that is an unavoidable consequence of having control over your own computer. It is your computer, and the fact that it is "vulnerable" to you installing any software that you want to install on it is a feature, not a bug.

Do you leave it unfixed in case your solution introduces new bugs? (Don't say yes, because no one will believe you).

No, I wouldn't, because that's a misrepresentation of my position.

I would indeed potentially leave it "unfixed", because, in my view, it is not a vulnerability that the owner of a product that I developed can do with it whatever they want.

And also, what I would do is not particularly relevant to what the competition authority should do.

The truth is that nobody's code is bug-free, but to use that as a reason not to close a gaping security hole in your software is absolutely nuts.

There is no security hole in Windows.

There is a new architecture available that should greatly improve the resilience and security of Windows. Yes, there might be critical bugs in it, but we now know with certainty that the existing architecture is extremely vulnerable.

We do know that the existing architecture of CrowdStrike is extremely fragile. That doesn't mean that it therefore makes sense to allow Microsoft in particular to have a monopoly on the part of the system that CrowdStrike screwed up. And that is what you are suggesting. It doesn't follow that because CrowdStrike was bad at it, therefore, Microsoft will build a better product than all other competitors, nor that that is something that the competition authority should be concerned with.

Getting the architecture right is crucial, and improving a bad architecture is crucial. It means you are starting from a better place with your coding.

Yeah. But making it so that only Microsoft is allowed to build the architecture isn't a useful approach to that problem.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
You are phrasing this as if there is a security vulnerability in Microsoft Windows. There isn't. Or, at least not in this particular case. A security vulnerability is something that allows an unauthorized party to compromise the integrity of a system. That was not the case.
(...)

There is no security hole in Windows.
 

This is a seriously weird conversation. The existing architecture allows third party code to run in kernel mode. This means that third party code has the potential to crash the OS, or even read data from memory being used by other processes or threads.

If you really don't think that is a security vulnerability, then you might be the only engineer on the planet to think that.

Don't you remember the consternation in the early days of NT, when the decision was made to run video/graphics drivers in kernel mode? People couldn't resist pointing out how badly it undermined the whole secure and resilient architecture of this lovely new OS. They were right: it does.
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 12291
  • Country: us
This is a seriously weird conversation. The existing architecture allows third party code to run in kernel mode. This means that third party code has the potential to crash the OS, or even read data from memory being used by other processes or threads.

If you really don't think that is a security vulnerability, then you might be the only engineer on the planet to think that.

Don't you remember the consternation in the early days of NT, when the decision was made to run video/graphics drivers in kernel mode? People couldn't resist pointing out how badly it undermined the whole secure and resilient architecture of this lovely new OS. They were right: it does.

Perhaps, but how are you going to define third party code?

It's my computer. If I want to write and execute code in kernel mode on my computer, I can. That's first party code. It would be wrong, and worthy of serious complaint for anyone to prevent that.

If I want to pay a consultant to write code for me that executes in kernel mode on my computer, I can. That's second party code. And likewise, I have every expectation to do so.

Or I can license code available on the open market and have it run on my computer in kernel mode. That's third party code.

So how should anyone, practically, technically, or legally, differentiate between first party, second party or third party code?

Such differentiation is done, for example, by Apple on iPhones, but then, nobody is pretending that an iPhone is a general purpose computer. The iPhone can only be locked down the way it is, specifically by not being a general purpose computer.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
So we're falling back on semantics now. I don't think I need to spend much longer on this. In the context of this discussion, when I wrote "third party" I meant "not Microsoft". I'm pretty sure everybody reading this thread - including you - knew exactly what I meant.

The problem is this: if you can write code that executes in kernel mode, so can anyone else. I'm sure you don't want your desktop machine to be vulnerable to someone with hostile intent running kernel mode code on your machine.  It would give them a level of control equal to the OS itself.

I'm sure you would appreciate Microsoft making it as difficult as possible for said people to do that.  I'm sure you would appreciate being protected against people who are merely incompetent, with no evil intent, like CrowdStrike.

Allowing non-Microsoft code to run in kernel mode is an obvious security and reliability hole. It is time to revisit the EU decision, including getting to the bottom of why MS wanted to restrict access to only some vendors. The whole thing needs reviewing.
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 12291
  • Country: us
The problem is this: if you can write code that executes in kernel mode, so can anyone else. I'm sure you don't want your desktop machine to be vulnerable to someone with hostile intent running kernel mode code on your machine.  It would give them a level of control equal to the OS itself.

I'm sure you would appreciate Microsoft making it as difficult as possible for said people to do that.  I'm sure you would appreciate being protected against people who are merely incompetent, with no evil intent, like CrowdStrike.

But you can do that. Simply don't install or run such code on your computer.

Here, "you" also means corporate IT departments.

Such security software is at least as effective as the elephant fence around my home, which works perfectly. In all the time I have been living here, I have never once seen an elephant in my garden.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8658
  • Country: fi
So we're falling back on semantics now. I don't think I need to spend much longer on this. In the context of this discussion, when I wrote "third party" I meant "not Microsoft".

The idea of "third parties" not being able to do kernel modules is absolutely weird. It happens in any OS, including those considered quite secure.

The key is processes of approval, signing, etc., so that users do not accidentally install low-quality kernel modules. And while I am not a Windows expert, AFAIK MS has at least tried to do something about it. It's not as if Crowdstrike software gets installed at random: you downloaded it, read the EULA, got a warning message from Windows, and were asked for Administrator privileges by Windows, to which you clicked "OK, I'm an administrator".

And this is again a question of outsourcing. You are the one making it about semantics. In the real world, I don't think there is any fundamental difference between a team working in a Microsoft building and a team in another company qualified by Microsoft to supply software. Both can make similar mistakes.

Besides, MS has a long tradition of being completely unable to provide a safe operating system, and unable to provide firewall/virus protection for their own product (the need for which mostly stems from their own product design). Third-party firewalls, virus protection, etc. have been a thing for nearly three decades now.
« Last Edit: August 15, 2024, 03:27:33 pm by Siwastaja »
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
It's my computer. If I want to write and execute code in kernel mode on my computer, I can.

No, you can't, actually. Or at least not without jumping through some serious hoops. Firstly, it's your computer but it's not your OS: you have licensed it from Microsoft. Secondly, and more importantly, you would need to write your code as a driver, get it tested by Microsoft, and get it signed.*

*I think that's the process in outline - please correct me if I'm wrong.
 

