Author Topic: Is there something to learn for embedded/IOT from the Crowdstrike disaster?  (Read 11474 times)


Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
The idea of "third parties" not being able to do kernel modules is absolutely weird. It happens in any OS, including those considered quite secure.


Yes, I did go through that in my next post.

The problem is simple: there is only one Microsoft, but there are an unlimited number of other software vendors out there.  Of all of them, only Microsoft has an imperative to keep their OS, and especially their kernel, bulletproof. The other vendors have different shareholders with different priorities. The Windows kernel is not their code - it alters the dynamic.

The best way to protect the kernel from bad code - even from Microsoft's own coders - is to restrict or prevent access to the kernel as thoroughly as possible.

And while I'm not a Windows expert, AFAIK MS has at least tried to do something about it. It's not like some CrowdStrike software gets randomly installed. You downloaded it, read the EULA, got a warning message from Windows, and were asked for Administrator privileges by Windows, to which you clicked "OK, I'm an administrator".


Actually, I'm not sure you install something called "CrowdStrike" and go through all the business you describe, do you? But even so, access to kernel mode is inherently dangerous, and MS are wise to reduce the need for such access as far as possible. Ultimately, the author of the code is of secondary importance.

You say MS "tried to do something about it". Well, sure, one of the things they tried was implementing a new API which prevented any code using that API from running in kernel mode. That is automatically a good thing. Whether the code is hostile or merely incompetent, the best thing is to keep it out of kernel mode.

I have no idea why MS apparently wanted to restrict access to only some vendors (vendors "like CrowdStrike", whatever that means). It sounds odd to me, but none of us know the full facts so we can't judge.

And this is again a question of outsourcing. You are the one who is making it about semantics. In the real world, I don't think there is any fundamental difference between a team working in a Microsoft building and a team in another company qualified by Microsoft to supply software. Both can make similar mistakes.


Which misses my point. Blocking kernel mode protects the OS against mistakes even from Microsoft's own coders.

Yes, of course this new API might have bugs in it, but they can and will be flushed out as time passes. This is Microsoft's golden goose - it's in their interests more than anyone else's to protect it. There is one API, but presumably dozens or hundreds of products wanting to use it. It is obviously easier to nail down one API than to nail down hundreds of non-Microsoft drivers.

Making the OS more resilient and secure against hostile or incompetent code is automatically a good thing. And that includes against incompetent code coming from Microsoft's own coders.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27707
  • Country: nl
    • NCT Developments
So to me, one lesson that absolutely applies to microcontrollers is the need to make sure all parts of your code are resilient to bad incoming data - whether that is from a sensor, a file, the user, or another part of your code. It implies sanity-checking all such data before making use of it. In many cases this won't be trivial: it requires a full description of what "sane" data looks like.

Furthermore, even when the incoming data has been sanity-checked, there is still a need for a policy on what to do when "not-sane" data is encountered. When it is user input it might be reasonable to show an error message and ask for the data to be re-entered correctly. But what about when it is from a sensor, or a file read from disk? Yes, a proper sanity check will warn when it's bad, but then what? Your policy for handling these circumstances will probably require a fair bit of effort to design and implement.  Plus you need to make sure its scope is comprehensive enough to handle all conceivable sanity check failures.
True. And actually the problem is not whether you use a language where you need to implement all the range checking yourself or one which does it for you; the real problem to tackle is how to deal with an error.
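To sketch the point above in C: the range check and the what-to-do-on-failure policy are two separate decisions. This is only an illustration with hypothetical plausibility limits for a 12-bit ADC channel; the names and thresholds are made up, not from any real project:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical plausibility limits for a 12-bit ADC sensor channel. */
#define ADC_MIN   10u    /* below this, the sensor is likely disconnected */
#define ADC_MAX   4000u  /* above this, the input is likely shorted       */
#define MAX_STEP  200u   /* largest physically plausible change per sample */

typedef enum { DATA_OK, DATA_OUT_OF_RANGE, DATA_IMPLAUSIBLE_JUMP } data_status_t;

/* The sanity check: a full description of what "sane" looks like. */
data_status_t check_sample(uint16_t raw, uint16_t previous)
{
    if (raw < ADC_MIN || raw > ADC_MAX)
        return DATA_OUT_OF_RANGE;
    uint16_t step = (raw > previous) ? (raw - previous) : (previous - raw);
    if (step > MAX_STEP)
        return DATA_IMPLAUSIBLE_JUMP;
    return DATA_OK;
}

/* The error policy: decided explicitly, separate from the check itself. */
uint16_t accept_sample(uint16_t raw, uint16_t previous, bool *fault)
{
    switch (check_sample(raw, previous)) {
    case DATA_OK:
        *fault = false;
        return raw;
    case DATA_IMPLAUSIBLE_JUMP:  /* likely a glitch: hold the last good value */
        *fault = false;
        return previous;
    default:                     /* sensor fault: flag it, keep a safe value */
        *fault = true;
        return previous;
    }
}
```

Whether a language does the range checking for you or not, the `switch` in `accept_sample` - the policy - is the part you have to design yourself either way.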

If you look at the OSI model for example (which was quite popular in the 1990s), you see there is absolutely nothing in the layered stack that deals with error handling. However, when designing / implementing a communication protocol, the key element is how to deal with errors and how to do recovery.
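As a sketch of that, using a toy frame format of [length][payload][checksum] (illustrative only, not a real protocol - a real one would use a CRC): validation tells you *what* went wrong, and a separate, explicit policy decides *how* to recover:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef enum { RX_OK, RX_BAD_CHECKSUM, RX_BAD_LENGTH } rx_status_t;
typedef enum { ACT_DELIVER, ACT_REQUEST_RETRANSMIT, ACT_RESET_LINK } rx_action_t;

/* Simple additive checksum over the payload (illustrative, not a real CRC). */
static uint8_t checksum(const uint8_t *p, size_t n)
{
    uint8_t sum = 0;
    while (n--) sum += *p++;
    return sum;
}

/* Validation: frame = [len][payload...][checksum]. */
rx_status_t validate_frame(const uint8_t *frame, size_t n)
{
    if (n < 3 || frame[0] != n - 2)
        return RX_BAD_LENGTH;
    if (checksum(frame + 1, n - 2) != frame[n - 1])
        return RX_BAD_CHECKSUM;
    return RX_OK;
}

/* Recovery policy, decided explicitly rather than left implicit:
 * a corrupted frame is worth asking for a retransmit; a malformed one
 * suggests the link has lost sync, so resynchronise instead. */
rx_action_t recovery_action(rx_status_t s, unsigned retries)
{
    if (s == RX_OK)
        return ACT_DELIVER;
    if (s == RX_BAD_CHECKSUM && retries < 3)
        return ACT_REQUEST_RETRANSMIT;
    return ACT_RESET_LINK;
}
```

The OSI layers say nothing about `recovery_action`; that table of "error seen → action taken" is exactly the part each protocol designer has to invent.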

And then there is also picking the right place to put error checking. A long time ago one of my employers sub-contracted development of a large machine. One day we heard a loud bang and the building was shaking.  :wtf:  Turned out the machine ran beyond its limit switches and crashed into itself. So the internal software engineering team was sent out to investigate what the contractor was doing. It turned out the contractor had mixed safety features with regular code, and because a few lines had been commented out here and there for a test, the safety features got bypassed as well. The contractor had a bad day for sure.
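One structural defence against that failure mode is to keep the interlock in its own function with a single, unconditional call site, so it cannot be quietly commented out along with test code in the application logic. A hypothetical sketch (limit values and names invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical soft travel limits, in encoder counts. */
#define POS_LIMIT_MIN  100
#define POS_LIMIT_MAX  9000

/* Safety layer: one function, no application logic mixed in.
 * Returns nonzero only if the demanded motion is allowed. */
int motion_is_safe(int32_t position, int32_t demanded)
{
    if (position <= POS_LIMIT_MIN && demanded < 0)
        return 0;  /* at lower limit, refuse further negative motion */
    if (position >= POS_LIMIT_MAX && demanded > 0)
        return 0;  /* at upper limit, refuse further positive motion */
    return 1;
}

/* Application layer may compute whatever demand it likes... */
int32_t apply_demand(int32_t position, int32_t demanded)
{
    /* ...but the interlock is applied here, unconditionally, and
     * nowhere else - so bypassing it "for a test" means touching
     * this one line, which should be obvious in any review. */
    return motion_is_safe(position, demanded) ? demanded : 0;
}
```

The contractor's mistake, as described, was the opposite structure: safety checks scattered through the regular code, where commenting out "regular" lines silently took the interlocks with them.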
« Last Edit: August 15, 2024, 04:02:21 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: SteveThackery

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de
You are phrasing this as if there is a security vulnerability in Microsoft Windows. There isn't. Or, at least not in this particular case. A security vulnerability is something that allows an unauthorized party to compromise the integrity of a system. That was not the case.
(...)

There is no security hole in Windows.
 

This is a seriously weird conversation. The existing architecture allows third party code to run in kernel mode. This means that third party code has the potential to crash the OS, or even read data from memory being used by other processes or threads.

If you really don't think that is a security vulnerability, then you might be the only engineer on the planet to think that.

No, that is exactly how anyone with any clue about IT security thinks about this. A vulnerability is something that allows unauthorized parties to interfere with a system. The fact that the owner of a computer can "compromise" their own computer is not a security problem. And really, it is nonsensical to call it a "compromise", because by definition, the computer doing what the owner commands it to do is not a compromise. A compromise is when the computer does what an unauthorized party commands it to do. "Third party software" is simply not a category that is of any relevance here, whatever that even means.

Also, it is completely arbitrary that you pick out the user/kernel space boundary as the special case. When you have all your important business data in one user account without backups, say, then any old batch script running with the privileges of that user account can delete it all and thus ruin your business. Does that mean that the fact that you can "run third-party batch scripts" is a security vulnerability?

It is a vulnerability if a process that runs with user privileges can run arbitrary code in kernel mode, because granting the user privileges does not authorize the user to do such a thing. But it is not a vulnerability if the owner/admin can run arbitrary code in kernel mode, because they are authorized to do so. And also, it's just a nonsensical goal to prevent this, because, obviously, the owner/admin can access all the data anyway, and they can put the machine in a shredder, for that matter, so it's not like you can prevent the owner from damaging their system, or from using the data on the system in whatever way they like.

Don't you remember the consternation in the early days of NT, when the decision was made to run video/graphics drivers in kernel mode? People couldn't resist pointing out how badly it undermined the whole secure and resilient architecture of this lovely new OS. They were right: it does.

No, I don't, I never cared about Windows much to know anything much about its history.

But regardless, did you notice how you didn't say that that was a vulnerability? Because it wasn't. What it was (based on your description of the situation here) was an architecture that made vulnerabilities in drivers a bigger problem, because it allowed those vulnerabilities to be used to gain more privileges than they would with a different architecture.

Also, mind you that there is a difference between providing infrastructure that mitigates exploitation of vulnerabilities for people to use, and forcing people to use it / preventing people from doing things differently. The latter is the sort of thing that competition authorities are concerned with, the former not so much. They might be concerned with the former too, but for reasons that have nothing to do with the usefulness or quality of a product and everything to do with market behaviour: namely, when the former happens to be a thing that competes with products from other vendors, and you bundle your version of it with another product in such a way that the other vendors are left without a market, because you force your customers to buy your version rather than selling it as a separate product that has to compete with other offers.
 

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de
So we're falling back on semantics now. I don't think I need to spend much longer on this. In the context of this discussion, when I wrote "third party" I meant "not Microsoft". I'm pretty sure everybody reading this thread - including you - knew exactly what I meant.

No, I certainly didn't, because it is a nonsensical distinction in this context.

The problem is this: if you can write code that executes in kernel mode, so can anyone else. I'm sure you don't want your desktop machine to be vulnerable to someone with hostile intent running kernel mode code on your machine.  It would give them a level of control equal to the OS itself.

But that was not the situation here. It wasn't "someone with hostile intent". It was the authorized owner of the machine. That is exactly why it is not a vulnerability.

No one could run code in kernel mode without the authorized owner of the machine authorizing it to happen.

I'm sure you would appreciate Microsoft making it as difficult as possible for said people to do that.  I'm sure you would appreciate being protected against people who are merely incompetent, with no evil intent, like CrowdStrike.

It is already impossible. No one can run code in kernel mode on your machine without your authorization. So, if you don't want code to run in kernel mode, then don't run code in kernel mode; no one is forcing you to. What you are arguing about here is whether Microsoft should force other people to not run code in kernel mode, because you somehow think that your system is vulnerable due to the fact that other people might run code that you consider too risky to run on your own machine. For you to not run that code on your machine, you don't need Microsoft to force you; you can simply not do it.

Allowing non-Microsoft code to run in kernel mode is an obvious security and reliability hole. It is time to revisit the EU decision, including getting to the bottom of why MS wanted to restrict access to only some vendors. The whole thing needs reviewing.

No, that is not in any way obvious. You are constantly making the completely unsubstantiated assumption that Microsoft is the most competent company to build kernel mode code. Without that assumption, your argument makes no sense at all.

Like, obviously, Microsoft could build this API that handles hooking into the system in a way that products like CrowdStrike need, and build it with the goal of making it hard to compromise or crash the system via that API. That is obviously just a kernel-mode driver written by Microsoft, right?

Now, obviously, other companies besides Microsoft could also build such a kernel-mode driver with the same goal, right?

Now, why do you think that Microsoft's will necessarily be the best and no other vendor could do it better? Or, if that is not what you think, then why should Microsoft be allowed to prevent competitors from selling their superior product to customers?

And note, again, that no one is preventing Microsoft from selling their own solution, or preventing you from buying Microsoft's solution. So it is irrelevant that you would prefer to use Microsoft's solution, because no one is preventing that. All this is about is whether other people should be prevented from selling and buying alternative solutions, and you have so far failed to support that demand.
 

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de
The idea of "third parties" not being able to do kernel modules is absolutely weird. It happens in any OS, including those considered quite secure.


Yes, I did go through that in my next post.

The problem is simple: there is only one Microsoft, but there are an unlimited number of other software vendors out there.  Of all of them, only Microsoft has an imperative to keep their OS, and especially their kernel, bulletproof. The other vendors have different shareholders with different priorities. The Windows kernel is not their code - it alters the dynamic.


So, you are saying that Microsoft is the only company that has an interest in keeping their customers secure? Can you substantiate that in any way?

The best way to protect the kernel from bad code - even from Microsoft's own coders - is to restrict or prevent access to the kernel as thoroughly as possible.

You are, again, confusing concepts.

Software security is about preventing accidental privilege escalation.

Competition authorities are concerned with intentional privilege escalation.

That is the difference between authorized and unauthorized access. It is, generally speaking, perfectly fine for Microsoft to provide mechanisms that software can use to prevent accidental privilege escalation, i.e., to mitigate the risks of vulnerabilities. What is not fine is Microsoft preventing the authorized owner of a system from intentional privilege escalation, i.e., when the owner of a system intentionally installs a kernel-mode driver.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb

Also, it is completely arbitrary that you pick out the user/kernel space boundary as the special case.


I didn't say anything about a special case. It is one of the most important cases because it provides a back door to the OS, with the ability to steal data, corrupt data, inject a virus, crash the OS, send DDoS packets, etc. It is a particularly important vulnerability to address for those reasons. And crucially, this can happen without the owner giving permission or knowing it's about to happen.

When you have all your important business data in one user account without backups, say, then any old batch script running with the privileges of that user account can delete it all and thus ruin your business. Does that mean that the fact that you can "run third-party batch scripts" is a security vulnerability?


I don't think you can protect against that sort of thing because the user might legitimately want to delete all their data. I don't think this is a useful analogy because, unlike the OS, those data files belong to the user and they can do whatever they want with them. It is Microsoft's job to secure the OS. It is the user's job to secure their own data. This is widely known and accepted (hence the exhortations to backup, backup, backup).

Just because a user can destroy their own data, it doesn't mean the OS shouldn't protect itself from corruption or destruction. As I said, the data belongs to the user, but the OS belongs to Microsoft, and I want them to make it as bulletproof as possible. With the possible exception of yourself, pretty much everyone else on the planet would agree. When MS is found not to do that, they get hung out to dry.

It is a vulnerability if a process that runs with user privileges can run arbitrary code in kernel mode, because granting the user privileges does not authorize the user to do such a thing. But it is not a vulnerability if the owner/admin can run arbitrary code in kernel mode, because they are authorized to do so.

Are they? Who authorises it? I might be wrong, but I don't think anyone can run arbitrary code in kernel space. I think the only way to do that is to write a driver, get MS to test and green-light your code, then get it signed.

I'm afraid I didn't understand the rest of your post so I won't try to respond to it.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Firstly, it's your computer ...

Not really. You can only do with it what Microsoft allows.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
So, you are saying that Microsoft is the only company that has an interest in keeping their customers secure? Can you substantiate that in any way?

You naughty person! Putting words into my mouth is a sure sign of desperation. Even when it's by inference. Because I'm a nice guy, I will respond constructively, but don't do it again, OK?

What I said is that Microsoft has more incentive than anyone else to keep their OS secure. I didn’t mention customers in that part of my argument. Also, it only has to secure that open doorway once, and it will remain closed against the dozens or hundreds of products using that API. That is clearly better than trying to bulletproof hundreds of said products. There is no case for leaving this doorway open. The anti-competitive argument is about MS restricting access to the new API, and that is quite a different matter. It sounds wrong to me (as it did to the regulators), but none of us know the full story on that.

The best way to protect the kernel from bad code - even from Microsoft's own coders - is to restrict or prevent access to the kernel as thoroughly as possible.

Software security is about preventing accidental privilege escalation.
Competition authorities are concerned with intentional privilege escalation.
That is the difference between authorized and unauthorized access.

Way to miss the point! Any privilege escalation - intentional or unintentional - renders the OS vulnerable to damage because nothing can stop the elevated code from stomping over the OS. That's why the OS runs in kernel mode - to protect it.

Anything that reduces the need for privilege escalation is a good thing. That's what that new API was intended to do.

Let me be clear: I don't know enough about why MS wanted to restrict access to that API. I am not defending it; it sounds dodgy to me. I am defending any steps which reduce the need for privilege escalation. That is the key takeaway, and it is something every OS owner would agree with.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
Firstly, it's your computer ...

Not really. You can only do with it what Microsoft allows.

That was my point, although I would repeat: the computer belongs to you; the OS belongs to Microsoft.
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 12291
  • Country: us
Are they? Who authorises it? I might be wrong, but I don't think anyone can run arbitrary code in kernel space. I think the only way to do that is to write a driver, get MS to test and green-light your code, then get it signed.

Of course you can run kernel mode code on your computer at your discretion, if you want to. You just download the appropriate developer tools, SDKs and debugging tools from Microsoft and code away. Now, of course, all the tutorials exhort you to run and debug the driver remotely on a dedicated test machine or VM, and not on any machine you care about, but if you choose to disregard such advice, that's your funeral.

If you want to, you can also share the driver you write with friends and colleagues to run on their machines too. You just need to give them a copy of the test certificate you created for it. And of course, they need to trust you before they install and run such a driver.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Of course you can run kernel mode code on your computer at your discretion, if you want to.

You need to disable secure boot and enable test signing mode. At some point, new computers will appear where it may be impossible to disable secure boot. Then you won't be able to do so anymore.
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 12291
  • Country: us
Of course you can run kernel mode code on your computer at your discretion, if you want to.

You need to disable secure boot and enable test signing mode. At some point, new computers will appear where it may be impossible to disable secure boot. Then you won't be able to do so anymore.

This is an electronics forum. If I cannot design and construct a hardware peripheral for my computer, and then write a driver for it, then my computer will have become useless for that purpose and I will have to use another OS instead.

Also, how is anyone supposed to write a driver in the first place and get it signed, if there is no way to test and debug it?
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Also, how is anyone supposed to write a driver in the first place and get it signed, if there is no way to test and debug it?

I don't know. For a Mac you need an Apple account and you need to register your Mac with Apple; then you can run your driver on that Mac (and on that Mac only). And Mac drivers are user-mode now.

I guess Microsoft may come up with something as well. It only gets harder and harder to write drivers as time goes on. It is silly to expect any improvements in this area; it can only get worse. The CrowdStrike accident may give rise to a new wave of driver-policy hardening.
 
The following users thanked this post: quince

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de

Also, it is completely arbitrary that you pick out the user/kernel space boundary as the special case.


I didn't say anything about a special case. It is one of the most important cases because it provides a back door to the OS, with the ability to steal data, corrupt data, inject a virus, crash the OS, send DDOS packets, etc.

None of those, with the exception of crashing the OS, is specific to kernel mode. Which is why I said what you quoted. Every batch script can steal the data that it has access to.

It is a particularly important vulnerability to address for those reasons.

It still is not a vulnerability, because no unauthorized access is possible.

And crucially, this can happen without the owner giving permission or knowing it's about to happen.

So, Crowdstrike can be installed on a system without the owner giving permission? Or what?

When you have all your important business data in one user account without backups, say, then any old batch script running with the privileges of that user account can delete it all and thus ruin your business. Does that mean that the fact that you can "run third-party batch scripts" is a security vulnerability?


I don't think you can protect against that sort of thing because the user might legitimately want to delete all their data.


I didn't ask whether you can protect against that, I asked whether that makes it a vulnerability, because it fits the definition that you gave for what makes a security vulnerability a security vulnerability.

and I want them to make it as bulletproof as possible. With the possible exception of yourself, pretty much everyone else on the planet would agree. When MS is found not to do that, they get strung out to dry.


That's a pretty wild claim. Especially so given that I am not the only person who runs an operating system where you can edit all of the source code, recompile it, and run it. And all of those people obviously don't think that that means that open operating systems are a security vulnerability. Because we tend to use a sensible definition of vulnerability, as I have explained repeatedly.

It is a vulnerability if a process that runs with user privileges can run arbitrary code in kernel mode, because granting the user privileges does not authorize the user to do such a thing. But it is not a vulnerability if the owner/admin can run arbitrary code in kernel mode, because they are authorized to do so.

Are they? Who authorises it?

The owner of the computer, by installing/executing that code.

I might be wrong, but I don't think anyone can run arbitrary code in kernel space. I think the only way to do that is to write a driver, get MS to test and green-light your code, then get it signed.

I think that is roughly correct. But that doesn't mean that Microsoft is allowed to just arbitrarily decide which code they sign. Because it is, after all, not their computer.

I'm afraid I didn't understand the rest of your post so I won't try to respond to it.

What specifically was unclear? Because it seems like that is actually the important part, in that you keep bringing up stuff that is simply irrelevant to matters of competition law.
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6907
  • Country: nl
Let's examine for a moment exactly why M-series Macs don't have third-party kernel drivers any more. It's not because they have a microkernel with isolated drivers ... they simply have no third-party hardware that pushes enough data to suffer from the context-switching overhead any more.

So let's make everything vertically integrated and giant monopolies of a scale even the old Microsoft couldn't touch! That's great!

PS: let's not forget the time when, in Apple's great wisdom, they decided they wanted to do an online certificate check where a certificate revocation list with delta updates would have worked just fine, and completely screwed it up.
« Last Edit: August 15, 2024, 07:03:08 pm by Marco »
 

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de
So, you are saying that Microsoft is the only company that has an interest in keeping their customers secure? Can you substantiate that in any way?

You naughty person! Putting words into my mouth is a sure sign of desperation. Even when it's by inference. Because I'm a nice guy, I will respond constructively, but don't do it again, OK?


I certainly didn't intend to.

What I said is that Microsoft has more incentive than anyone else to keep their OS secure.


OK ... secure from what? And why would that be in the interest of anyone other than Microsoft?

I mean, I assumed that you meant some form of security that is in the interest of the customer, because "security" that prevents customers from doing what they want to do with their property in order to serve the interests of the manufacturer obviously is not in any sensible meaning of the word "security" as used in this thread.

I didn’t mention customers in that part of my argument. Also, it only has to secure that open doorway once, and it will remain closed against the dozens or hundreds of products using that API. That is clearly better than trying to bulletproof hundreds of said products.


OK, maybe? But you haven't explained why Microsoft should be the vendor that gets given the privilege of building this one solution that everyone else has to use. Why shouldn't some other company that is better at it get the chance to build it?

There is no case for leaving this doorway open.


There obviously is, and I have made it repeatedly in this thread: The fact that you haven't demonstrated that other vendors wouldn't provide a better solution for securing the doorway.

The anti-competitive argument is about MS restricting access to the new API, and that is quite a different matter. It sounds wrong to me (as it did to the regulators), but none of us know the full story on that.

Now, I don't know the actual facts, either. But it's not really difficult to guess, I would think. Because your assumption that not allowing random vendors to write their own kernel modules would actually solve a problem makes no sense, even if we assume that Microsoft is exceptionally good at building their API, no one else could do it better, and the API could do everything the competing "security" vendors need. Because the whole point of those "security" solutions is that they have access to everything that matters. The whole point is that they can see everything every process on the system does, and all the data that they access, and all the network traffic, and that they can intervene in operations that are recognized as suspicious. That API would provide access to all of that anyway. The most you can achieve this way is that maybe the kernel doesn't crash, but that is of barely any value to the user, because you can make the system completely unusable even if the kernel has not crashed.

So, obviously, Microsoft is aware that handing out access to this API instead of allowing kernel-mode drivers wouldn't actually help their customers in any relevant way. And so they got the idea of only handing access to companies that they trust ... or so they claim.
 

Offline zilp

  • Frequent Contributor
  • **
  • Posts: 329
  • Country: de
That was my point, although I would repeat: the computer belongs to you; the OS belongs to Microsoft.

It's just that ... it doesn't.

Microsoft has the copyright. That's it. Now, you can call that "belongs to" in some contexts. But that doesn't mean that all the other meanings of "to belong to" also apply. And having copyright of a particular piece of software does not mean that you have a right to do whatever you want with the data being processed by that software, nor that you are allowed to prevent people from interoperating with it, nor that anti-trust laws don't apply to you, nor anything much, really. It first and foremost means that other people aren't allowed to create and distribute copies of it, that's it.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
That was my point, although I would repeat: the computer belongs to you; the OS belongs to Microsoft.

The computer may belong to you in the sense that you can sell it or destroy it. However, you gave up all control over your computer to Microsoft - they possess it, and you must obtain their permission for anything you want to do. They also have full access to your data and any software that is located on your computer. In that sense, your computer belongs to them.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb

So, Crowdstrike can be installed on a system without the owner giving permission? Or what?



Yes, I think so. Isn't it a technology that is licensed to the vendors of other security products? I've had a look at their website and there doesn't seem to be any sort of application that I can actually buy. I think it gets incorporated into other products.

 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
The computer may belong to you in a sense that you can sell it or destroy it. However, you gave up all the control over your computer to Microsoft - they posses it and you must obtain their permission for anything you want to do. They also have full access to your data and any software that is located on your computer. In that sense, your computer belongs to them.

I think you are stretching the definitions of "possess" and "belong" somewhat. Microsoft owns the OS and you buy a license from them that allows you to use it, within the terms and conditions of the license.

But you absolutely do own the hardware and your own data. You can wipe the hard disk, you can add an ext3 drive, you can install a different OS if you like. You don't need Microsoft's permission for anything other than what is specified in the license.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
Microsoft has the copyright. That's it.

No, that's wrong. Yes, Microsoft has the copyright, but it's more than that. Microsoft sells you a license to use their OS, and that license contains various terms and conditions which you agree to when you install the OS, or use it for the first time. If you then fail to observe those terms, Microsoft is legally allowed to prevent you using their software. None of this is controversial - pretty much all commercial software has similar terms and conditions.

And having copyright of a particular piece of software does not mean that you have a right to do with the data that is being processed by that software whatever you want, nor that you are allowed to prevent people from interoperating with it, nor that anti-trust laws don't apply to you, nor anything much, really. It first and foremost means that other people aren't allowed to create and distribute copies of it, that's it.

We agree on all of that except for the last sentence. That isn't "it". Microsoft owns the copyright AND you buy a license to use it from them, which means you agree to various legally enforceable terms and conditions. (Legal in the sense that MS can withdraw your right to use their OS).
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15185
  • Country: fr
The computer may belong to you in the sense that you can sell it or destroy it. However, you gave up all control over your computer to Microsoft - they possess it and you must obtain their permission for anything you want to do. They also have full access to your data and any software that is located on your computer. In that sense, your computer belongs to them.

I think you are stretching the definitions of "possess" and "belong" somewhat. Microsoft owns the OS and you buy a license from them that allows you to use it, within the terms and conditions of the license.

But you absolutely do own the hardware and your own data. You can wipe the hard disk, you can add an ext3 drive, you can install a different OS if you like. You don't need Microsoft's permission for anything other than what is specified in the license.

You can wipe the hard disk, but if you use any online MS account at all, your data is going to end up one way or another on MS servers.
Regarding the license, do you actually have any accurate idea of what it entails?

Like
"It's ok, everything is specified in the license agreement. There is no trap here.
- Have you read it?
- Uh, no."
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
I think you are stretching the definitions of "possess" and "belong" somewhat. Microsoft owns the OS and you buy a license from them that allows you to use it, within the terms and conditions of the license.

I didn't say they do it illegally. You give them full control of your computer and data voluntarily, according to the license.
 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
What I said is that Microsoft has more incentive than anyone else to keep their OS secure.

OK ... secure from what? And why would that be in the interest of anyone other than Microsoft?

Secure from inept or hostile software that could damage the OS, obviously.

And as for your second sentence, that is exactly my point! Microsoft has much more at stake when it comes to the stability and security of Windows than any other software vendor, so it cannot simply rely on all those vendors being honest and competent. Only Microsoft is responsible for the security and stability of Windows, nobody else. That is why it wants to take steps to protect the OS from hostile or incompetent applications. And one of the most powerful tools at its disposal is to reduce, as far as possible, the amount of non-Microsoft code running in kernel space. (In fact, even its own code, because anyone can make mistakes.)

I mean, I assumed that you meant some form of security that is in the interest of the customer, because "security" that prevents customers from doing what they want to do with their property in order to serve the interests of the manufacturer obviously is not in any sensible meaning of the word "security" as used in this thread.

Restricting access to kernel mode is in the interest of the user, because code running in kernel mode can corrupt or even steal the user's data. That is why the OS blue-screens: it halts execution to protect the user's data from further damage.

OK, maybe? But you haven't explained why Microsoft should be the vendor that gets given the privilege of building this one solution that everyone else has to use. Why shouldn't some other company that is better at it get the chance to build it?

Because the function of an OS is to provide a range of services to applications in order that they can do their job, whilst at the same time taking all possible precautions to protect the user's data. The OS belongs to Microsoft, so it is their job to deliver that. The model is the same across all operating systems: all operating systems provide a suite of APIs. The OS vendor owns the APIs and everything behind them; the application writers write their code to interact with those APIs. This model is universal and has been for decades.

If "some other company" thinks they can do a better job, they are welcome to write their own OS. As it happens, for reasons of expedience, Microsoft does allow breaches of its API boundary, and the CrowdStrike disaster is a perfect illustration of why that's a bad idea. I repeat: it is obviously easier to nail down the OS - once, and properly - than to rely on every application vendor being sufficiently honest and competent to nail down their own apps. There are hundreds of them, and new ones come along all the time.

Whenever Windows blue-screens, Microsoft is blamed, and rightly so. It is Microsoft's responsibility to protect its OS from damage, not some other vendor's. In the recent case, it appears that Microsoft's effort to improve that protection never got implemented due to some regulatory issue.

I get the impression you aren't aware of the architecture of modern operating systems, and especially the concept of the API, so I will repeat it: the OS presents the APIs to the applications, and the applications make calls to those APIs to make stuff happen. The APIs act as the boundary between the OS and the applications. The APIs and the code behind them belong to the OS vendor. The OS vendor is responsible for making those APIs, and the OS behind them, secure against damage by the applications. The application writers are responsible for ensuring their code is bug-free and, in particular, never damages the user's data. The recent disaster shows the risks of not strictly observing that model.

I think at this point I have adequately covered the remaining points in your post, so I will stop here.

 

Online SteveThackery

  • Frequent Contributor
  • **
  • Posts: 404
  • Country: gb
You can wipe the hard disk, but if you use any online MS account at all, your data is going to end up one way or another on MS servers.

Some of it - it depends on how much you put on OneDrive. I've lost track of what the relevance is, though. The data is yours, despite being on Microsoft's servers. Microsoft has agreed to provide a particular service - synchronisation and backup - as part of the license you have agreed to, but that doesn't change the legal ownership of the data, just as it doesn't change the legal ownership of the OS.

Regarding the license, do you actually have any accurate idea of what it entails?

I've deleted the sarcasm because we are adults having a respectful conversation. Re. the license: I've read most of it, though not all of it, and not recently. But that isn't important: it's the concept of software licenses that I was explaining. As I said, the software remains the property of the vendor; the license permits use of the software, in compliance with the various terms and conditions set out in it. Again, this is standard fare throughout the industry, even for open-source software. The ownership and responsibilities are clear.

If you don’t agree to those terms, the owner of the software has the right to stop you using it. And you are always free to use some other software.  Including some other OS, of course.
 

