What do you think of a claim like “most of the intelligence comes from the steps where you do most of the optimization”? A corollary is that we particularly want to make sure the optimization-intensive steps of AI creation are safe with respect to not producing intelligent programs devoted to killing us.
Example: most of the “intelligence” of language models comes from the self-supervised pretraining step. However, it’s plausible in principle that we could design e.g. some really capable general-purpose reinforcement learner where the intelligence comes from the reinforcement, and the latter could (but wouldn’t necessarily) internalise “agenty” behaviour.
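To make the contrast concrete, here is a minimal toy sketch (assuming PyTorch; the models, data, and reward are made-up stand-ins, not any real training pipeline) of where the optimization pressure lands in each case: in self-supervised pretraining it is “predict the data”, while in reinforcement learning it is “increase reward”, which is the channel through which agenty behaviour could in principle get selected for.

```python
# Toy sketch only: hypothetical models and a made-up reward, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
vocab, n_actions, hidden = 50, 4, 16

# (a) Self-supervised pretraining: the optimization pressure is "match the data
# distribution", applied by gradient descent on next-token prediction error.
lm = nn.Sequential(nn.Embedding(vocab, hidden), nn.Linear(hidden, vocab))
lm_opt = torch.optim.SGD(lm.parameters(), lr=0.1)
tokens = torch.randint(0, vocab, (128,))       # stand-in corpus
logits = lm(tokens[:-1])                       # predict each next token from the previous one
lm_loss = F.cross_entropy(logits, tokens[1:])
lm_opt.zero_grad(); lm_loss.backward(); lm_opt.step()

# (b) Reinforcement learning (one REINFORCE step): the optimization pressure
# comes from the reward signal, i.e. whatever behaviour happens to get rewarded
# is what gets reinforced.
policy = nn.Sequential(nn.Linear(8, hidden), nn.Tanh(), nn.Linear(hidden, n_actions))
rl_opt = torch.optim.SGD(policy.parameters(), lr=0.1)
state = torch.randn(8)                         # stand-in environment observation
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()
reward = torch.randn(())                       # stand-in reward from the environment
rl_loss = -dist.log_prob(action) * reward      # push up log-prob of rewarded actions
rl_opt.zero_grad(); rl_loss.backward(); rl_opt.step()
```

Nothing here implements either paradigm seriously; it only illustrates which signal the gradient steps are chasing in each case.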
I have a vague impression that this is already something other people are thinking about, though maybe I read too much into some tangential remarks in this direction. E.g. I figured the concern about mesa-optimizers was partly motivated by the idea that we can’t always tell when an optimization-intensive step is taking place.
I can easily imagine people blundering into performing unsafe optimization-intensive AI creation processes. Gain-of-function pathogen research would seem to be a relevant case study here, except that we currently have even less idea of what kind of optimization produces deadly AIs than of what kind produces deadly pathogens. One of the worries (again, maybe I’m reading too far into comments that don’t say this explicitly) is that the likelihood of such a blunder approaches 1 over a long enough time horizon, and the “pivotal act” framing is supposed to be about doing something that could change this (??)
That said, it seems there’s a lot that could be done to make such a blunder less likely over short time frames.
What do you think of a claim like “most of the intelligence comes from the steps where you do most of the optimization”? A corollary is that we particularly want to make sure the optimization-intensive steps of AI creation are safe with respect to not producing intelligent programs devoted to killing us.
This seems probably right to me.
Example: most of the “intelligence” of language models comes from the self-supervised pretraining step. However, it’s plausible in principle that we could design e.g. some really capable general-purpose reinforcement learner where the intelligence comes from the reinforcement, and the latter could (but wouldn’t necessarily) internalise “agenty” behaviour.
I agree that reinforcement learners seem more likely to be agent-y (and therefore scarier) than self-supervised learners.