The fact that I still get plenty of use out of a 13-year-old laptop should answer that question. Computers are getting better, but only incrementally. They're packing more transistors onto each die, but they're not really increasing the density by a significant margin, which is why the big stuff now requires water cooling. Regular CPUs have plateaued, GPU/NPU is a parlor trick of reducing functionality and packing more cores onto the die, but again, not at any increased density. Quantum computing is perpetually in a state of "almost there".
And I don't think most people need to care. There's no requirement for ever-increasing density. Hitting a wall could actually be beneficial.
Modern silicon is dead. Moore's law is broken, at least from a traditional perspective.
But OTHER technology is advancing exponentially every 18 months. So it isn't REALLY broken. It just changed when we hit a ceiling.
But OTHER technology is advancing exponentially every 18 months.
Is it, though?
All of the focus right now is on GPU and NPU density, but they aren't really advancing those, from a transistor-density perspective, any faster than CPUs have been advancing.
The whole point of GPU and NPU, after all, is that it's "really really RISC" in that the cores are specialized for high-density, low-complexity compute instead of general-purpose compute. So they're packing as many of them onto the die as they can, but the process itself isn't advancing any faster than it is in the CPU world.
(This is my current understanding; if I'm full of shit please feel free to call me out here.)
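To put a concrete face on what I mean by "low-complexity compute", here's a toy CUDA sketch, entirely my own illustration and not pulled from any real driver or framework: each thread does one trivial multiply-add, and you launch a million of them. The whole trick is that each thread is dumb and there are a huge number of them.

```
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles exactly one element: trivial, low-complexity work.
__global__ void scale_add(const float *x, float *y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // a million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 4096 blocks of 256 threads: the "pack in as many simple cores as
    // you can and keep them all busy" model in miniature.
    scale_add<<<(n + 255) / 256, 256>>>(x, y, 3.0f, n);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);           // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compile it with nvcc and y[0] comes out as 5.0. The point isn't the code itself; it's that nothing architecturally exotic is happening, just lots of identical, simple work running side by side.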
All of the significant advancements have emerged from the idea someone had that a GPU, which was originally intended for graphics, can be used for any task that can be expressed as "reduce it to simple math and then parallelize the heck out of it". From a compute perspective, AI is just Bitcoin mining on a different data set, and Bitcoin mining is just CoD on a different data set. (Ok, that's *really* hyperbolic but you get the idea.)
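In code terms (again a toy sketch of mine, not anyone's actual renderer, miner, or inference stack), the kernel shape barely changes between workloads; only the arithmetic in the middle does. Both of these would be launched exactly like the scale_add sketch above, so I've left out the host boilerplate:

```
#include <cuda_runtime.h>

// "Graphics-ish" work: blend two images, one thread per pixel.
__global__ void blend_pixels(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 0.5f * a[i] + 0.5f * b[i];
}

// "AI-ish" work: one neuron's weighted sum, one thread per output row.
__global__ void dense_layer(const float *w, const float *x, float *out,
                            int rows, int cols) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r < rows) {
        float acc = 0.0f;
        for (int c = 0; c < cols; ++c) acc += w[r * cols + c] * x[c];
        out[r] = acc;
    }
}
```

Swap the body for hash rounds and you've got a miner; swap it for shading math and you've got a game. The silicon doesn't care what the data means.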
Fundamental advances in manufacturing process could be equally applied to CPU and GPU.