For computers there is no real alternative to silicon. There are transistors based on GaN and similar materials that can be faster, but these tend to be much larger and less reliable - so more suited to small circuits like a fiber-optic I/O part or a single RF power amplifier, not large-scale logic or memory. An interesting question is what comes when flash memory hits its limits, which are closer than those of pure logic transistors. Flash-based SSDs made a big change for everyday computers, but here the technological limits are in sight.
So Moore's law might finally stop working and progress will likely be slower, but most computers will still be silicon-based CMOS. In any case, for the last decade or so the CPU designers have been a little at a loss for what to do with all the transistors, except add ever more cache. So even if the transistors don't get much smaller or faster any more, there can still be advances in the logic design and more powerful CPUs. We already see GPUs that are more efficient at number crunching than classical CPUs (like x86).
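To illustrate why GPUs pull ahead on regular number crunching: the programming model hands the same simple operation to thousands of lightweight threads, so the transistor budget goes into arithmetic units rather than big caches and branch predictors. The SAXPY kernel below is only a minimal sketch of that model; the array size and launch configuration are arbitrary example values, not anything from a real workload.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each thread handles exactly one array element: y[i] = a*x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int N = 1 << 20;                  // ~1M elements (example size)
    float *x, *y;
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 256 threads per block, enough blocks to cover all N elements.
    saxpy<<<(N + 255) / 256, 256>>>(N, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);            // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The point is not this particular kernel but the shape of the hardware it maps onto: many simple cores running the same instruction stream, instead of a few complex cores optimized for single-thread latency.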
Besides the limit of ever-smaller features, there is another area where things are changing: in past decades, and especially in the early years, the useful size of the chips also increased, so more and more could be integrated on one chip and that made things faster. At the current sizes, however, there is no longer such a pronounced gain from a larger optimum chip size, so there is no big advantage left in making the chip area larger.
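One common first-order way to see why there is an optimum die size at all is the Poisson yield model, where the yield of good dies falls exponentially with die area A and defect density D_0 (the D_0 figure below is just an assumed example, not a number from any real fab):

\[ Y \approx e^{-A\,D_0} \]

With D_0 = 0.1 defects/cm², a 1 cm² die yields about 90%, while a 4 cm² die yields only about 67%, so the cost per good die grows much faster than linearly with area and the gain from more integration eventually stops paying off.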
Personally I am a little skeptical about quantum computing. It might work for a few niches, but not for general-purpose things. It is a little like analog computing: putting more data in a single gate needs high resolution, and this makes things slow, and there is a limit to how many binary bits one can get from one qubit.
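That last limit is in fact a hard one, known as the Holevo bound: the classical information you can actually read out of n qubits is at most n bits,

\[ I(X{:}Y) \;\le\; S(\rho) \;\le\; n \ \text{bits}, \]

so a single qubit never delivers more than one classical bit per readout, no matter how cleverly it was prepared. The power of quantum computing has to come from the processing in between, not from packing more data into each qubit.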