Yes, signed int overflow in C99 is UB. Perhaps I'm too pragmatic, but I always wonder why, and whether it should really apply. Sign-magnitude, one's complement and two's complement all have different overflow behaviour, and if the C language lets the compiler assume that behaviour is unknown, then it is entitled to optimize `i + 1 > i` to always be true, even though on a two's complement machine with `i = INT32_MAX` that expression would evaluate to false. GCC does the same.
The point where Clang seems to go one step further is in moving variables across the comparison operator as if they were mathematical integers: it treats `a - b < 0` as equivalent to `a < b`. GCC doesn't do this and generates the "intended" subtract-and-compare.
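To make the two cases concrete, here's a minimal sketch (the function names are mine, and exactly what gets folded depends on compiler version and optimization level):

```c
#include <stdint.h>

/* A compiler that exploits signed-overflow UB may fold this to
 * "return 1": the only input where the comparison could be false
 * (i == INT32_MAX) would require i + 1 to overflow, which the
 * compiler is allowed to assume never happens. */
int always_greater(int32_t i) {
    return i + 1 > i;
}

/* Likewise, this may be rewritten as "return a < b": the two forms
 * only differ when a - b overflows, which the compiler may again
 * assume never happens. */
int diff_is_negative(int32_t a, int32_t b) {
    return a - b < 0;
}
```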
I think your two examples are exactly the same thing. The fact that one compiler does only one optimization and the other does both is mostly irrelevant.
You would need to force the compiler to assume that signed integers wrap in two's complement by passing `-fwrapv`. Then it won't do this optimization; even `i + 1 > i` will actually be evaluated on the CPU by both compilers.
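For reference, the flag is spelled `-fwrapv` on both GCC and Clang. Compiling the `always_greater()` sketch above with something like `clang -O2 -S test.c` versus `clang -O2 -fwrapv -S test.c` should make the difference visible in the generated assembly, with the caveat that the exact output depends on the compiler version.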
It's not as if the compiler has any illusions about the architectural data type. The compiler knows it is targeting two's complement arithmetic and generates its code based on that. What you are doing with `-fwrapv` is forcing the compiler to emit code that behaves as if the program were evaluated by the arithmetic rules of the language -- operator precedence and promotion rules -- on a two's complement machine. Right now the compiler is not required to do that, even when targeting two's complement hardware.
IMO this is a design flaw in C. The language could easily say that signed integers must have a consistent, implementation-defined overflow behavior. That could be two's complement wrapping, saturation, or, if someone actually wanted to implement C99 on a one's complement architecture, whatever that hardware does. This would be analogous to how C allows implementations to select the size of short/int/long based on what is efficient, but doesn't allow them to occasionally perform an optimization at a different width that changes the behavior.

I get that there are legitimate optimizations this would prevent, but there are also a lot of totally legitimate operations that can't be done without casting to unsigned and back. One of the most important is testing for overflow: if you do an operation on signed integers and then test whether it overflowed, the compiler can, and often will, remove the check. This is stupid.

People who argue that "mandating predictable but 'incorrect' behavior is worse than undefined behavior" are just wrong. But that is the way the C standard currently is, and the current trend in compilers is to not promise more than the standard requires, even when the standard is stupid. So until C23 comes out and mandates two's complement behavior, this is what we have.
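To illustrate the overflow-testing problem, here's a sketch (names are mine): the first check relies on overflow having wrapped, which is exactly the UB the optimizer may assume away, while the other two stay within defined behavior.

```c
#include <limits.h>
#include <stdbool.h>

/* Broken: if a + b overflows, the behavior is undefined, so the
 * compiler may assume the sum never wraps and delete this check
 * (the classic "a + b < a" test for positive b). */
bool add_overflows_broken(int a, int b) {
    return a + b < a;
}

/* Defined: compare against the limits before adding. */
bool add_overflows_portable(int a, int b) {
    return (b > 0) ? (a > INT_MAX - b) : (a < INT_MIN - b);
}

/* GCC and Clang also provide a builtin that reports overflow
 * without relying on UB. */
bool add_overflows_builtin(int a, int b, int *sum) {
    return __builtin_add_overflow(a, b, sum);
}
```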
That's true, but optimization and code generation happen in multiple steps, and the optimizer doesn't have to account for all the side effects of the actual code generation (since they're UB). Therefore the optimizer won't "see" the two's complement signed-overflow side effects, and any satisfiability-style reasoning will happily transform those expressions. When the compiler is forced to use/store an intermediate result (like `a - b`) in an int32 (or `-fwrapv` adds this under the hood), the optimization is split into two steps with the actual wrapping arithmetic in between, and it will actually see the overflow.
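Following up on the "casting to unsigned and back" point above: the intended wrap-around comparison can also be written without relying on signed-overflow UB at all, by doing the subtraction in unsigned, where wrap-around is defined modular arithmetic, and only then reinterpreting the sign. A sketch (note that converting an out-of-range value back to `int32_t` is implementation-defined rather than undefined, and does the expected thing on common two's complement targets):

```c
#include <stdint.h>

/* "Did timestamp a come before timestamp b?" for a free-running
 * 32-bit counter. The subtraction is done in uint32_t, where
 * wrap-around is well-defined; the cast back to int32_t recovers
 * the sign of the wrapped difference. */
int timestamp_before(uint32_t a, uint32_t b) {
    return (int32_t)(a - b) < 0;
}
```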
Personally I think a compiler that leaves as little behavior undefined as possible is better, especially since on any modern platform the underlying behaviour isn't even ambiguous: the hardware just wraps.
Anyhow, I think that on new systems a 64-bit unsigned integer for (high-res) timestamping is the easiest fix. Even if you keep time at 1 GHz resolution in an unsigned 64-bit counter, you won't see an overflow for 585 years. But perhaps that's a bit beyond the scope of this topic.
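For the arithmetic behind that number, a quick back-of-the-envelope check (365.25-day years assumed):

```c
#include <stdio.h>

/* Rough check of the 585-year figure: 2^64 ticks at 1 GHz. */
int main(void) {
    double years = 18446744073709551616.0      /* 2^64 ticks         */
                 / 1e9                         /* ticks per second   */
                 / (365.25 * 24 * 3600);       /* seconds per year   */
    printf("%.1f years until a 64-bit 1 GHz counter wraps\n", years);  /* ~584.5 */
    return 0;
}
```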