Putting a button on an interrupt is perfectly fine.
You can do it, just as you can use asynchronous monostables (like a 74123) in clocked logic circuits. Occasionally there are good reasons for those techniques.
But both techniques smell because they introduce subtle failure mechanisms (analogue and digital), make it difficult to reason about "edge case" operation of the complete system, and make the system difficult to test in production.
What is the subtle failure? The interrupt fires saying "something changed"... then you debounce the switch in a timer, or whatever your favorite method is... then you re-enable the interrupt. How is that more prone to failure than polling to see that something changed, debouncing, and polling some more? If you can't make it work reliably in an interrupt, then you can't make it work reliably by polling either; you're just depending on the poll not landing at exactly the wrong moment. Either the debounce technique works or it doesn't. What triggers it is not important.
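For concreteness, here's the shape of that interrupt-plus-timer debounce as a minimal C sketch. The HAL names (button_irq_disable() and friends) and the tick period are hypothetical placeholders, not any particular vendor's API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HAL, placeholder names only. */
extern void button_irq_disable(void);
extern void button_irq_enable(void);
extern bool button_pin_read(void);        /* raw pin level */
extern void debounce_timer_start(void);   /* periodic tick, e.g. 1 ms */
extern void debounce_timer_stop(void);

#define STABLE_TICKS 20                   /* ~20 ms of agreement = settled */

static volatile bool button_state;        /* last accepted state, read by app */

/* Pin-change ISR: we do NOT trust the edge itself; it only tells us
 * "something changed".  Hand off to the timer and go quiet. */
void button_pin_isr(void)
{
    button_irq_disable();                 /* ignore contact bounce from here on */
    debounce_timer_start();
}

/* Timer-tick ISR: sample until the pin agrees with itself long enough. */
void debounce_timer_isr(void)
{
    static uint8_t count;
    static bool last_sample;

    bool sample = button_pin_read();
    count = (sample == last_sample) ? count + 1 : 0;
    last_sample = sample;

    if (count >= STABLE_TICKS) {          /* settled: accept and re-arm */
        button_state = sample;
        count = 0;
        debounce_timer_stop();
        button_irq_enable();
    }
}
```

The point is that the edge interrupt is only a wake-up call; every decision about state is made from timed samples, exactly as it would be in a pure polling loop.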
You need to do failure analysis, and that depends on identifying all the components, hardware and software, in the complete system. Thereafter you can consider how they can completely or partially fail, and how the system will react.
I suggest you look at comp.risks to see subtle and unexpected failure modes in systems, including real-time systems created by intelligent and dedicated people. Comp.risks has a very high SNR, appears roughly weekly, and the last 30 years' archives can be found at http://catless.ncl.ac.uk/Risks/
So what's the subtle failure mechanism? Maybe it's because my background is software that I don't consider any of these subtleties to be big deals, or difficult to handle, or whatever, but it seems pretty darn straightforward to me.
Ah, right, that explains a lot. I presume your expertise is in digital hardware; otherwise I won't be able to point you in the right direction within a reasonable time. An analogy to get you started thinking in the right direction...
Most software is written in a way that presumes control flows linearly, and that the control flow is the only thing that can mutate memory and I/O. That specifically includes the compilers, which have widely misunderstood and misapplied constructs for situations where that isn't the case. Prime example: C on multicore processors with multiple levels of cache and shared memory, where compiler and/or program bugs are legion.
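To make the compiler point concrete, the classic single-core version of this trap is a flag shared with an ISR. A minimal sketch (the ISR name is hypothetical):

```c
#include <stdbool.h>

/* BROKEN: without 'volatile' the compiler may assume nothing else can
 * change 'ready', load it once, and spin forever on the cached value. */
static bool ready;                 /* should be: static volatile bool ready; */

void uart_rx_isr(void)             /* hypothetical ISR: runs "between" any
                                      two instructions of main() */
{
    ready = true;
}

int main(void)
{
    while (!ready) {               /* legal for the optimiser to turn this
                                      into: if (!ready) for (;;);        */
        /* wait */
    }
    /* ... handle the event ... */
    return 0;
}
```

And note that the volatile fix only covers the interrupt-on-one-core case; with multiple cores and caches you need real atomics and barriers (e.g. C11 stdatomic.h), which is exactly where the legion of bugs lives.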
Most digital hardware is constructed in a way that presumes changes only occur at specific instants, due to the clocked synchronous methodology. It is very, very difficult to design unclocked asynchronous logic where inputs can change at any instant.
Predictable design is, with significant effort and understanding, possible if you consider two inputs that can change asynchronously w.r.t. each other. With three or more it is effectively impossible. On top of that, the design tools work against you, since they habitually "optimise out" the very constructs you insert deliberately, e.g. Karnaugh map bridging terms. (The classic static hazard: Y = A·B + /A·C can glitch when A changes while B = C = 1, unless you add the logically redundant cover term B·C, which is precisely the kind of term a minimiser will happily delete.)
Interrupts in software systems have the same effect as adding an extra core to a processor. They are analogous to turning a clocked synchronous hardware design into an asynchronous digital design.
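A concrete software rendering of that analogy, assuming a hypothetical timer ISR sharing a counter with main():

```c
#include <stdint.h>

static volatile uint32_t event_count;   /* shared between ISR and main */

void tick_isr(void)                     /* hypothetical timer ISR */
{
    event_count++;                      /* read-modify-write: not atomic
                                           on many MCUs for 32-bit values */
}

int main(void)
{
    for (;;) {
        /* BROKEN on e.g. an 8-bit MCU: this 32-bit read takes several
         * instructions, and the ISR can fire between them, so main()
         * can observe a half-updated value.  That is the software
         * equivalent of sampling an asynchronous input mid-transition. */
        uint32_t snapshot = event_count;
        (void)snapshot;
    }
}
```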
And that doesn't even consider the timing implications of having your code arbitrarily suspended, which is analogous to the potentially infinite delay due to metastable behaviour in a synchroniser.
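The usual single-core defence is to bound the asynchrony by masking interrupts around the shared access. Again a sketch, with hypothetical HAL names (irq_disable()/irq_enable()):

```c
#include <stdint.h>

extern void irq_disable(void);          /* hypothetical: mask interrupts   */
extern void irq_enable(void);           /* hypothetical: unmask interrupts */

static volatile uint32_t event_count;

/* Take a coherent snapshot: the ISR cannot run between the load's
 * constituent instructions while interrupts are masked.  The cost is
 * added interrupt latency for the (short) duration of the section. */
uint32_t event_count_read(void)
{
    irq_disable();
    uint32_t snapshot = event_count;
    irq_enable();
    return snapshot;
}
```

Note that this bounds the problem rather than removing it: you trade data corruption for added interrupt latency, which then becomes a budget you have to analyse, the software counterpart of giving a synchroniser its settling time.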