> Okay, it means without `volatile`, the compiler doesn't look at the value of variable `x` inside the ISR
No. It means the compiler assumes the normal meaning of the code, as interpreted from the perspective of that code alone. In the following fragment, is `x` ever modified, if it's not 1?
```c
while (1) {
    if (1 == x) {
        ++x;
    }
}
```
Clearly not: there is no operation here that would modify it. The code is exactly equivalent to:
```c
if (1 == x) {
    x = 2;
}
while (1) {
}
```
That's what the compiler sees without `x` being marked as `volatile`.
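To ground this in the ISR scenario being discussed, here is a minimal sketch of the usual flag pattern. All names (`data_ready`, `uart_rx_isr`, `wait_for_data`) are illustrative, not from any real API:

```c
#include <stdint.h>

/* Flag set by a hypothetical interrupt handler. Marked volatile because
   it is modified outside the normal flow of the polling code. */
static volatile uint8_t data_ready = 0;

/* Runs "magically" from the compiler's point of view: nothing in
   wait_for_data() ever calls it. */
void uart_rx_isr(void) {
    data_ready = 1;
}

void wait_for_data(void) {
    /* Without volatile, the compiler could read data_ready once and
       spin forever on that cached value. With volatile, it must assume
       every iteration may observe a different value. */
    while (data_ready == 0) {
        /* spin */
    }
}
```

Without the `volatile` qualifier, this loop is exactly the `while (1) {}` case above: the compiler sees no modification and is entitled to hoist the read out of the loop.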
A side note: an infinite loop with no observable side effects makes the behavior undefined in C++ (C exempts loops whose controlling expression is a constant, like `while (1)`), so the compiler may do anything it wishes: including never generating the loop at all, or simply making "demons fly out of your nose."
> With `volatile`, the compiler looks at the value of variable `x` in `main` and inside the ISR
No. It means that the compiler knows the two snippets shown above are not equivalent, because `x` may be "magically" modified even though it seems it cannot be. So it must assume that every read from `x` may produce a different value, even if it never sees any store to `x` in this fragment.
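To make the "every read may produce a different value" point concrete, here is a minimal sketch (the names `sensor` and `sum_two_samples` are mine):

```c
/* A volatile object: its value may change between any two reads. */
volatile int sensor;

int sum_two_samples(void) {
    int a = sensor;    /* first read of sensor */
    int b = sensor;    /* second read: the compiler must not fold this
                          into `a`, because the value may have changed */
    return a + b;      /* without volatile, this could legally compile
                          to a single read and 2 * a */
}
```

With a plain `int sensor`, a compiler is free to perform one load and double it; with `volatile`, the two reads must remain distinct in the abstract machine.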
> Nope, as said already. It's actually much simpler than this, and sort of the opposite. With `volatile`, the compiler doesn't try to analyze the usage of the corresponding variable, and will generate an access to it in any case.
That is not exactly true. It's a bit nitpicky, but in the case of `volatile` such details are important, and missing them is what brings pain. With `volatile` the compiler assumes that any read from the variable may produce a different value, even though no modification of that variable can be seen. But `volatile` does not by itself guarantee the generation of an access to the actual storage.
The difference is slight, but not understanding it has grave consequences. The presence of `volatile` tells the compiler that some natural assumptions about the meaning of the code can't be made, which leads to a different interpretation of the described logic and, in consequence, to skipping some optimizations. But "generate an access" is a much stronger requirement. On simple platforms like small microcontrollers, merely skipping an optimization does, as a side effect, produce an access to storage; that is never the case on modern, more complex architectures, which include any modern consumer CPU. With multilayer caches, complex CPU-level optimizations, and multiple cores or processors, proper synchronization methods are required to ensure memory consistency: explicit memory barriers, instructions offering such guarantees as side effects, and so on. `volatile` never introduces those. And it is usually not even needed when they are used: for example, on any platform supported by pthreads, using the synchronization primitives alone induces the desired behavior, as does using the C atomics API.
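As a sketch of what "the right synchronization methods" look like with the C atomics API (assuming C11 and `<stdatomic.h>`; the function names `publish` and `poll_once` are mine):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* No volatile needed: the atomic operations themselves provide both
   the "must re-read" property and the inter-thread ordering. */
atomic_bool ready = false;

/* Producer side: the release store makes all writes performed before
   it visible to a thread that observes ready == true. */
void publish(void) {
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* Consumer side: the acquire load pairs with the release store above. */
bool poll_once(void) {
    return atomic_load_explicit(&ready, memory_order_acquire);
}
```

The release/acquire pair is the part `volatile` can never give you: it constrains the hardware and the compiler so that memory operations are actually ordered across cores, not merely re-read.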
And that’s only about reading a variable! I’m not even touching things like the order of execution.
Also note that `volatile` is not the only thing that prevents the compiler from making assumptions about reads. The behaviour of `char` pointers in the context of strict aliasing is another such example.
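A small sketch of that aliasing point (the function name `count_zero_bytes` is mine): under strict aliasing, an `int *` and a `float *` are assumed never to refer to the same object, but a `char`-typed pointer may alias anything, so the compiler must not cache reads made through it across stores to other objects.

```c
#include <stddef.h>

/* Inspect the bytes of any object through an unsigned char pointer.
   This is well-defined precisely because character types are exempt
   from strict aliasing: the compiler must assume p may alias obj. */
int count_zero_bytes(const void *obj, size_t n) {
    const unsigned char *p = obj;
    int zeros = 0;
    for (size_t i = 0; i < n; ++i)
        if (p[i] == 0)
            ++zeros;
    return zeros;
}
```

Doing the same inspection through, say, a `uint32_t *` into a `float` object would be undefined behaviour; the `char` pointer is the sanctioned escape hatch, and it costs the compiler the same kind of assumption that `volatile` does.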
> Remember that C will almost always favor efficiency over "correctness", and that some additions (such as the `volatile` qualifier) are there to manually instruct the compiler when the intent of the programmer can't be clearly conveyed without them. Just the way it works. If you want a language that favors correctness over efficiency in all cases, pick another one.
In this case correctness is not sacrificed at all. The behaviour without `volatile` is perfectly correct. Adding `volatile` does not make the code more correct: it changes its semantics. Do not confuse "correct" with "what the programmer had in mind". While C is known to sacrifice safety and clarity for certain performance gains, this is not one of those cases.