Here’s a link to the part of the interview where that quote came from: https://youtu.be/GyFkWb903aU?t=4739 (No opinion on whether you’re missing redeeming context; I still need to process Nesov’s and your comments.)
My low-confidence take is that the context strengthens my initial impression. Paul prefaced the above quote as “maybe the simplest [reason for AIs to learn to behave well during training, but then when deployed or when there’s an opportunity for takeover, they stop behaving well].” This doesn’t make sense to me, but I historically haven’t understood Paul very well.
EDIT: Hedging