In the end, it is about managing probabilities. An infinite number of monkeys will eventually type perfect copies of every work of Shakespeare, and likewise, given infinite time, random noise will at some point generate a CAN frame that is valid in every respect (data, CRC, all) and that instructs the lathe to kill its operator. It is just unlikely enough, and that likelihood is not too hard to ballpark on a napkin. If it requires a billion machines running for a billion years, then you can maybe accept the risk.
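The napkin math can be sketched like this. Every number below is my own assumption for illustration (corruption rate, one "dangerous" ID, a single kill-command byte), not something from a real system:

```python
# Napkin estimate: how often could random corruption produce a *dangerous*
# valid-looking CAN frame? All inputs here are assumed for illustration.
corrupted_frames_per_s = 1 / 3600   # assume one corrupted frame per hour
p_crc = 2 ** -15                    # random data passing the 15-bit CRC
p_id  = 2 ** -11                    # hitting the one dangerous 11-bit ID
p_cmd = 2 ** -8                     # matching a specific command byte

rate_per_s = corrupted_frames_per_s * p_crc * p_id * p_cmd
seconds_per_year = 365 * 24 * 3600
years_to_one_hit = 1 / (rate_per_s * seconds_per_year)
print(f"~1 dangerous accepted frame per {years_to_one_hit:,.0f} machine-years")
```

With these assumptions it comes out on the order of a million machine-years per dangerous frame; change the inputs and the napkin updates itself, which is the whole point of the exercise.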
CAN is resilient against this because its CRC is quite long compared to the frame payload. It is practically impossible to get a garbled CAN message through the CRC check. A dangerous bug in the CAN code (including the HW peripheral) is far more likely, by many orders of magnitude, and so is memory corruption, even with ECC.
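For concreteness, here is a sketch of the CAN 2.0 CRC-15 (polynomial 0x4599, i.e. x^15+x^14+x^10+x^8+x^7+x^4+x^3+1), computed bit-serially the way the controller does it. The frame bits are arbitrary example data, not a real capture:

```python
CAN_POLY = 0x4599  # CAN 2.0 CRC-15 polynomial (x^15 term implicit)

def crc15(bits):
    """Bit-serial CRC-15 as specified for CAN: for each incoming bit,
    XOR it with the register MSB, shift left, conditionally XOR the poly."""
    crc = 0
    for bit in bits:
        crc_nxt = bit ^ ((crc >> 14) & 1)
        crc = (crc << 1) & 0x7FFF
        if crc_nxt:
            crc ^= CAN_POLY
    return crc

frame = [1, 0, 1, 1, 0, 0, 1, 0] * 8   # 64 arbitrary "payload" bits
good = crc15(frame)

# Any CRC whose polynomial has a nonzero constant term catches every
# single-bit error; verify that for all 64 positions.
detected = 0
for i in range(len(frame)):
    bad = frame.copy()
    bad[i] ^= 1
    if crc15(bad) != good:
        detected += 1
print(f"{detected} of {len(frame)} single-bit flips detected")
```

Single-bit errors are the easy case (all are caught by construction); the 2^-15 figure above is the residual risk for a frame corrupted beyond what the polynomial's guaranteed detection covers.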
It is important to understand that absolutely nothing is zero-risk, and to concentrate on the highest-risk items. Those would usually be, in this order: human errors in software, then human errors in hardware design, maybe then bit errors in non-checksummed RAM and bit errors in non-checksummed simple communication interfaces (think I2C), then bit errors in parity-checked RAM or interfaces. Strongly checksummed links (even with just an 8-bit CRC) come much further down the list.
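The tail end of that ordering can be put in numbers. For purely random corruption, the chance a check fails to notice is roughly 2^-n for an n-bit CRC, and 1/2 for a single parity bit (it only catches odd numbers of flipped bits); these figures are textbook properties, not from the comment above:

```python
# Rough false-accept probability of each check against random corruption.
checks = {
    "no check (plain RAM / I2C)": 1.0,
    "parity bit":                 0.5,      # misses all even-weight errors
    "8-bit CRC":                  2 ** -8,
    "15-bit CRC (CAN)":           2 ** -15,
}
for name, p in checks.items():
    print(f"{name:28s} ~{p:.6f} undetected")
```

Even the modest 8-bit CRC is over a hundred times stronger than parity against random garbage, which is why checksummed links sit so much further down the worry list.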