> I have been wondering what that "much" would be, but could not find any definition thus far.
There are a few practical definitions, depending on the context. They all boil down to a variant of
$$x + y \approx x \quad \text{ iff } \quad \lvert x \rvert \gg \lvert y \rvert$$
where the context is what dictates the meaning of the approximately equal sign, \$\approx\$.
In general, especially in computer science and software engineering, the approximate sign is understood via the (machine) epsilon, \$\epsilon\$, which represents the precision at which we are operating:
$$x + y = x \quad \text{ iff } \quad \lvert y \rvert \le \epsilon$$
Simply put, \$\epsilon\$ refers to the greatest positive number that is still considered zero. (Sometimes it is defined as the smallest positive value that is not considered zero, in which case you replace the \$\le\$ with \$\lt\$ above.) For example, if you use scalars with three decimal digits of precision, then you implicitly define \$\epsilon = 0.0005\$ if you round, or anything just below \$0.001\$ if you only truncate.
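As a concrete sketch of that rule in Python (my illustration; the function name and the epsilon value are arbitrary choices, not any standard):

```python
EPS = 0.0005  # implicit epsilon for three decimal digits with rounding

def is_absorbed(x: float, y: float, eps: float = EPS) -> bool:
    """True when x + y == x at our working precision, i.e. |y| <= eps."""
    return abs(y) <= eps

print(is_absorbed(1.0, 0.0003))  # True: 0.0003 is "zero" at this precision
print(is_absorbed(1.0, 0.0020))  # False: 0.0020 is distinguishable

# The standard library's math.isclose(a, b, rel_tol=..., abs_tol=...)
# offers a ready-made relative/absolute tolerance comparison like this.
```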
In many cases, the epsilon depends not only on the equation itself, but also on the inputs. For example, consider the case where the user provides a number of 3D points, and your program must return the number of unique points. There, you use some form of epsilon to describe the maximum distance (or maximum per-coordinate difference) at which two points are still considered the same. If you use floating-point numbers, the precision of those values already gives you an implicit epsilon (which depends on the terms in the expression, as both the arguments and the result are only the closest values one can represent in floating point). If you rotate or translate those points before making the decision, you really need to consider the precision (or epsilon) of the original coordinates, as floating-point operations have rounding error – equivalent to quantization error in analog-to-digital conversion.
(This is also why so many programming manuals warn that exact equality comparison of floating-point values is usually wrong.)
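To make the unique-points example concrete, here is a rough Python sketch; the threshold value and the quadratic all-pairs scan are illustrative simplifications, not a recommendation (a real implementation would use spatial hashing or a k-d tree):

```python
import math

def count_unique_points(points, eps=1e-9):
    """Count points, treating any pair closer than eps as the same point.

    eps is the context-dependent "maximum distance at which two
    points are still considered equal".
    """
    unique = []
    for p in points:
        if not any(math.dist(p, q) <= eps for q in unique):
            unique.append(p)
    return len(unique)

# Exact float equality would miss these: the second point is the first
# after a tiny rounding error, e.g. from a rotate-then-unrotate round trip.
pts = [(1.0, 2.0, 3.0), (1.0 + 1e-15, 2.0, 3.0), (4.0, 5.0, 6.0)]
print(count_unique_points(pts))  # 2, not 3
```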
In electronic circuits, we can add a few additional meanings to \$\gg\$.
One is that when \$x \gg y\$, the magnitude of signal \$x\$ is so large compared to \$y\$ that the effect of \$y\$ is lost in the inherent (physical, thermal, etc.) noise. You can consider this roughly analogous to floating-point issues in computation, since real-world electronics with very large, very small, or very precise signals has to consider the "non-electronics physical effects", like thermal effects from heating up the conductors and so on. Again, here "very large" or "very small" is completely dependent on the situation.
Another is that the effect of a property \$x\$ completely overwhelms property \$y\$. This is the case where \$x\$ and \$y\$ are not signals but properties of, say, a component – for example, the \$\beta\$ of a transistor. When writing out the exact description of the system, many of the variables are "negligible": they do not really affect the output at all. Depending on how you write the exact description, you can either approximate them as zeros, or say that the more important variables are much greater than the variables with little to no effect within the normal operating conditions.
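A classic instance (a standard textbook approximation, added here for concreteness): in a bipolar transistor, \$I_C = \beta I_B\$, and since \$\beta \gg 1\$ under normal operating conditions,
$$I_E = I_B + I_C = (1 + \beta)\, I_B \approx \beta I_B = I_C$$
so the base current's contribution to the emitter current is negligible.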
Consider a common practical example from physics, velocity addition (velocities along the same line, with direction given by sign):
$$v_{sum} = v_a + v_b \quad \text{ iff } \quad \lvert v_a \rvert, \lvert v_b \rvert \ll c$$
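(For reference, the exact relativistic sum that this approximates is
$$v_{sum} = \frac{v_a + v_b}{1 + v_a v_b / c^2}$$
which reduces to the plain sum whenever the product \$v_a v_b\$ is negligible compared to \$c^2\$.)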
What is the limit at which we consider velocities no longer "much smaller than the speed of light"? There is no such absolute value. For the longest time, we couldn't measure velocities at sufficient precision to even check if relativistic correction was necessary; our measurement
epsilon was greater than the difference between the relativistic velocity sum and the direct sum, so we couldn't tell the difference. If your measurements can only provide a couple of digits of precision – which isn't bad at all if we are talking about really huge velocities – anything more than a few percent slower than the speed of light is sufficiently "much slower".
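To see how small that difference actually is, here is a quick numeric check (a sketch with speeds I picked for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def relativistic_sum(va: float, vb: float) -> float:
    """Exact relativistic addition of two collinear velocities."""
    return (va + vb) / (1.0 + va * vb / C**2)

# Roughly: speed of sound, low-Earth-orbit speed, and a tenth of c.
for v in (340.0, 7_800.0, 0.1 * C):
    direct = v + v
    exact = relativistic_sum(v, v)
    print(f"v = {v:12.1f} m/s  correction = {direct - exact:.3g} m/s")
```

At everyday speeds the correction is on the order of nanometres per second – far below any measurement epsilon – while at \$0.1c\$ it is already hundreds of kilometres per second. (For the smallest speeds, the subtraction above itself brushes against double-precision rounding, echoing the implicit floating-point epsilon discussed earlier.)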
If you are now having a very uncomfortable feeling about the lack of precise numerical limits, or even a precise definition of such a limit, welcome to the physical world.
There have never been such limits; any claims otherwise are just coddling weak minds. The expectation of such limits is a limitation that we must cast away. One way I personally recommend doing that is by habitual application of dimensional analysis, followed by "back-of-the-envelope" estimates: like dimensional analysis, but with all numerical values replaced by the nearest power of ten, to find the approximate magnitude of the result. It will be off, but it will immediately tell you if a model is completely bunk. This kind of criticality – being willing to take any model or claim through that utterly basic verification stage – will release you from the expectation of rock-solid absolute numbers and replace it with the Real Big Guns to deal with anything thrown your way.
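As an example of such an estimate (my numbers, deliberately rounded to powers of ten): for a car at highway speed, \$v \sim 10^1\ \mathrm{m/s}\$ while \$c \sim 10^8\ \mathrm{m/s}\$, so the relativistic correction scales as
$$\left(\frac{v}{c}\right)^2 \sim \left(\frac{10^1}{10^8}\right)^2 = 10^{-14}$$
utterly below any speedometer's precision, so \$v \ll c\$ holds by any practical standard.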