Now that I think about it, I think you and I are misinterpreting what johnswentworth meant by “optimization pressure”. I think he just means a measure of how much we have to change the world, in units of bits, not any specific piece of information that the AI or alignment researchers produce.
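For concreteness, here is one common way of counting "bits of optimization" (a sketch in my own notation, not necessarily the definition johnswentworth has in mind): if $p$ is the probability, under the default/unoptimized distribution over world-states, of landing in the target region (e.g. "safe" worlds), then the optimization pressure needed is roughly

$$
\text{bits required} \approx \log_2 \frac{1}{p} = -\log_2 p.
$$

So a target that would happen by default about 1 time in 1,024 takes roughly 10 bits of optimization to hit.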
I don’t think that’s quite right (or at least, the usage isn’t consistently that). For instance:
seems to be very directly about how far out of distribution the generative model is and not about how far our world is from being safe.