I don’t know that I’ve seen any good models of compute/algorithmic improvement to future-optimizing power. Predictive accuracy probably isn’t the important and difficult part, though it’s part of it. We really have no examples of superhuman intelligence, and variation among humans is pretty difficult to project from, as is variation among non-human tool-AI models.
The optimists (or pessimists, if it's unaligned) tend to believe that evolution was optimizing for different things than an AI will be, and that the diminishing returns on brain IQ come from competing biological needs, which probably won't apply to artificial beings.
I haven’t heard anyone saying it’s easy, nor fully unbounded once past a threshold. I HAVE heard people saying they expect it will seem easy in retrospect, once it gets past human-level and is on the way to a much higher and scarier equilibrium.