I agree with the point that TekhneMaker makes: we must not count on efficiency improvements in running intelligence being hard to achieve. They might be, but I expect they won't be. Instead, my hope is that we can find ways of deliberately limiting the capabilities and the improvement/learning rate of ML models, so that we can keep an AGI constrained within a testing environment. I think this is an easier goal, and one more under our control, than hoping intelligence will remain compute-intensive.