A) There is a video, but it’s not super high quality and I think the transcript is better. If you really want to listen to it, though, you can take a look here.
B) Yeah, I agree with that. Perhaps the thing I said in the talk was too strong—the thing I mean is a model where the objective is essentially the same as what you want, but the optimization process and world model are potentially quite superior. I still think there's approximately only one of those, though, since you have to get the objective to exactly match what you want.
Once you’re trying to extrapolate me rather than just copy me as-is, there are multiple ways to do the extrapolation. But I’d agree it’s still way less entropy than deceptive alignment.