I thought a reversible process was, by definition, time-symmetric. That means that if you filmed a process and played it back to someone, they wouldn't be able to tell whether the film was running forwards or backwards, because both cases would make perfect sense.
Yes -- elementary processes such as electron-electron "collisions"; all the elementary interactions where you'd draw a Feynman diagram (at least in QED and the like). These are also space-time invariant: for example, a particle going forwards that is bounced backwards (i.e., turned into what we would call an antiparticle) on interacting with a pair of photons (pair production/annihilation) is described by the same diagram as a scattering process (one particle interacting with a force carrier).
So the symmetry holds not only in time, but in space, and in both at the same time, which is pleasing. At least for QED and the like.
This type of interaction is what I mean by particles exchanging information. Apologies if that was not clear.
Brownian motion is a good example of this: it makes perfect sense no matter which way you play the film.
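(A quick toy sketch, in Python, of what I mean by that, using a discrete random walk as a stand-in for Brownian motion; the numbers and names are mine, purely for illustration. The statistics of the steps look exactly the same whether you read the path forwards or backwards.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete stand-in for Brownian motion: a cumulative sum of Gaussian steps.
steps = rng.normal(size=100_000)
walk = np.cumsum(steps)

# "Play the film backwards": the increments of the reversed path are just the
# original increments, negated and reordered, so their statistics match.
forward_steps = np.diff(walk)
backward_steps = np.diff(walk[::-1])

for name, s in (("forward", forward_steps), ("backward", backward_steps)):
    print(f"{name}: mean = {s.mean():+.4f}, variance = {s.var():.4f}")
# Both report mean ~0 and variance ~1: nothing here tells you which way
# the film is running.
```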
Radioactive decay and some other quantum phenomena, on the other hand, are definitely not time-symmetric, and you can always tell which way the film is running.
For example, if you see a film of X-ray photons striking a photosensitive film, you can definitely tell if it is running backwards: you do not expect blotches of developed emulsion to condense back into an X-ray and fly away from the paper. The process is not time-symmetric and not reversible, even though it is (largely) predictable.
Ah, but these are all bulk phenomena*! So entropy is exactly the kind of emergent property I was talking about.
The photon strikes the film grain, ultimately causing the entire grain to decompose and develop as a black speck. There's essentially no way to reverse such a chain of events, because so many particles have dissipated the information of that incoming photon.
(Brownian motion being a property of fluids at constant entropy, we shouldn't expect to see a time direction.)
*Arguably, radioactive decay is bulk in a sense (the nucleus contains a great many particles, at quite high energy: it is a system all its own!), but then fusion is a known nuclear process as well, so it would be foolish to say that a nucleus losing an alpha particle is only a time-forward phenomenon!
Now, seeing a whole ensemble of, say, Pu-238 losing alpha particles, and only losing them as time goes on, would be quite suspicious. But as each particle is emitted, it bumps into other atoms, transferring various amounts of energy and momentum and kicking phonons around in the lattice (literally, making it hot). So in this system as well, there is a massive preponderance of more interactions as time goes on, entropy rising, and information being diluted (the information about the decay of any one atom is all but imperceptible at the bulk level; almost all you know is that it's hot and gives off helium!).
If we could know the state of each atom and each electron in the resistor we could theoretically tell what the next outcome would be.
The Heisenberg uncertainty principle tells us that we can't actually look at the state of an electron without influencing it. So we are still cryptographically safe on that front.
That's relevant to measuring atoms, but I was thinking (probably not stated too explicitly) about modifying the state, probably using an observe-and-deconvolve approach: apply a random input, see what the output does; repeat, well, about a trillion times; run an algorithm or whatever as needed; see if it improves; see if you can discover a controllable aspect of the output (again, besides bulk temperature).
(A comparable process has already been achieved, on a much simpler scale: reconstructing visual or spatial (2D/3D) images from propagating or scattered light. Using a sufficiently accurate optical system and a similarly detailed scattering matrix and its inversion, the light path can be reversed. A photon might scatter through millions of crystal domains in a translucent solid, so this achievement alone is not trivial. At least, that's what I recall they were claiming -- science reporting being what it is.)
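Here's a toy sketch (my own construction in Python/NumPy, not the actual optics experiment) of that probe-and-invert idea: treat the scrambler as an unknown matrix, characterize it by poking it with known inputs, then invert it to undo the scrambling.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64  # number of "modes" in this toy model

# The unknown linear scrambler (stand-in for a transmission/scattering matrix).
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def scatter(x):
    """The black box: all we can do is feed inputs in and read outputs out."""
    return T @ x

# Characterize the box by probing it with basis inputs, one mode at a time.
T_measured = np.column_stack([scatter(e) for e in np.eye(n)])

# A hidden input (the "image"), observed only through the scrambler.
x_true = rng.normal(size=n)
y = scatter(x_true)

# Invert the measured matrix to reverse the scrambling.
x_recovered = np.linalg.solve(T_measured, y)
print("max reconstruction error:", np.max(np.abs(x_recovered - x_true)))
# Prints something on the order of 1e-13: with a complete, noise-free
# characterization, the "light path" reverses exactly. Real systems add noise
# and incomplete sampling, which is where the hard part lives.
```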
In any case, even for a relatively small (nano-scale) system, I've got to imagine the computational burden is already bordering on the impossible. Even assuming the statistics could work out (and QM does put rather tough limits on that sort of thing!), it basically means you must invert cryptographic hashes in real time...
Which is to say, at least at today's level,
absolutely fucking yes, it's cryptographically secure.
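(If you want a feel for why "inverting hashes in real time" is a non-starter, here's a toy brute-force sketch against a deliberately truncated SHA-256; every extra bit of hash doubles the expected work, and a real digest has 256 of them.)

```python
import hashlib
import itertools
import time

def truncated_hash(data: bytes, bits: int) -> int:
    """First `bits` bits of SHA-256, as an integer (a deliberately tiny hash)."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest >> (256 - bits)

# Brute-force a preimage for progressively wider truncations and time it.
for bits in (12, 16, 20):
    target = truncated_hash(b"some secret input", bits)
    start = time.perf_counter()
    guesses = 0
    for i in itertools.count():
        guesses += 1
        if truncated_hash(str(i).encode(), bits) == target:
            break
    print(f"{bits}-bit hash: {guesses} guesses, "
          f"{time.perf_counter() - start:.2f} s")
# Expected work roughly doubles with every added bit; extrapolate the same
# loop to a full 256-bit digest and "real time" stops meaning anything.
```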
What I am after are the algorithms that test randomness in RNGs. Apparently some of these are good, and some have been subtly tampered with in such a way that they may fail a true RNG but pass a less-than-perfect one. My question from the beginning was about these algorithms.
Could these algorithms be influencing the development of true RNGs, making them predictable in some very subtle way?
For example, if you insist that your RNG produce ones and zeroes in an exact 50% ratio even over short stretches, then you throw away a significant fraction of your entropy.
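To make that concrete, here is a minimal sketch of the standard frequency (monobit) test in the style of NIST SP 800-22, plus a quick count of what a strict 50/50-per-block requirement costs; the 4-bit block length is just an illustration of mine.

```python
import math
import random
from itertools import product

def monobit_p_value(bits):
    """Frequency (monobit) test in the style of NIST SP 800-22: map bits to
    +/-1, sum them, and ask how surprising the imbalance is (p < 0.01 => fail)."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

# A fair source passes comfortably; a stuck-at-one source fails outright.
fair = [random.getrandbits(1) for _ in range(10_000)]
print("fair source p =", round(monobit_p_value(fair), 3))  # usually well above 0.01
print("all-ones    p =", monobit_p_value([1] * 10_000))    # effectively 0.0

# What an *exact* 50/50 split per short block costs in entropy:
n = 4
balanced = sum(1 for block in product((0, 1), repeat=n) if sum(block) == n // 2)
print(f"{n}-bit blocks: {2**n} possible, {balanced} exactly balanced "
      f"-> {math.log2(balanced):.2f} bits per block instead of {n}")
```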
As for algorithms, again -- you can always make something low-entropy look high-entropy, almost trivially; but reversing the process is nontrivial, bordering on impossible (indeed, physically impossible for macroscopic systems).
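(Toy illustration of that asymmetry, my own sketch: hash a plain counter and the output will look statistically random, yet all the real entropy is still just the counter plus the choice of scheme, and undoing the mixing means inverting the hash.)

```python
import hashlib

# A pathetically low-entropy source: a plain counter, run through SHA-256.
stream = b"".join(
    hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(10_000)
)

# The output *looks* high-entropy: about half the bits are ones, the bytes
# are close to uniform, and it will sail through most statistical batteries.
ones = sum(bin(byte).count("1") for byte in stream)
print(f"fraction of one-bits: {ones / (8 * len(stream)):.4f}")  # ~0.5000

# But the real entropy is still just the counter plus the choice of scheme.
# Going forward is trivial; going backward means either enumerating the seed
# space (easy here only because a counter is tiny) or inverting SHA-256 itself.
```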
If you had a sequence of numbers and no other information, how long would it take you to discover what system(s) underlie that sequence? Even if you knew what it was (say, a Mersenne Twister sequence, which is what MATLAB uses for example), how long would it take before you could confirm it?*
(*In this case, at least as many bits as the state variable itself: the standard MT19937 keeps 624 32-bit words of state, roughly 19937 effective bits, far more than 256.)
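You can poke at that state directly from CPython's random module, which uses the same MT19937 generator MATLAB defaults to; a quick look:

```python
import random

# CPython's `random` module uses the same MT19937 generator that MATLAB
# defaults to, so it gives a feel for the size of the state in question.
version, internal_state, gauss_next = random.getstate()

words = len(internal_state) - 1      # 624 32-bit words, plus a position index
print("state words:", words)         # 624
print("raw bits:   ", words * 32)    # 19968 (19937 of them effective)
```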
And keep in mind that confirmation includes accounting for every possible step that may have been applied after the number generation itself. For example, if the sequence follows a normal distribution, it might be reasonable to suppose someone called normrnd(mu, sigma) a bunch of times. But even just for that: for each time it was executed, which function call was made first? (Generating a normal value the classic Box-Muller way, for instance, consumes two uniform random inputs, essentially one for magnitude and one for phase.) Or was one single call made, and the two halves used separately? Was it chopped up or scrambled or hashed first? Maybe your sequence of numbers has a flat distribution; if it happened to consist of a series of hashes (yet still ultimately arising from the same source in question), how would you know? How long would it take to prove it?
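That "one for magnitude, one for phase" remark is the classic Box-Muller transform; here's a minimal sketch of it (not a claim about what normrnd does internally):

```python
import math
import random

def box_muller(u1: float, u2: float) -> tuple[float, float]:
    """Turn two uniform(0,1] inputs into two independent standard normals:
    u1 sets the magnitude (radius), u2 sets the phase (angle)."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

# One call eats two uniform draws and yields two normal draws, which is
# exactly why "which uniform came first?" matters if you ever try to work
# back from normally-distributed output to the underlying stream.
z1, z2 = box_muller(1.0 - random.random(), random.random())  # 1 - r avoids log(0)
print(z1, z2)
```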
So finally, the point is: there is no such thing as true random, in a physical sense, because it would take a trans-astronomical amount of computational power to determine the set of all possible algorithms that could generate the given sequence (even given constraints, like that they be relatively simple algorithms, and thus likely to have been written by humans and likely to cause trouble in still other relatively simple algorithms). And the set (even with those constraints) is probably too large to be of any value anyway.
The only things that make cryptography vulnerable today are obvious bugs and backdoors (from social engineering to bad implementations), and the fact that, in the grand scheme of things, we haven't even begun to scratch the surface of computational or convolutional complexity, so it's relatively easy to guess our relatively simple constants or algorithms. The universe is a humbling place, I suppose...
Tim