Like, I expect OpenAI leadership explicitly thinks of themselves as increasing x-risk a bit by choosing to attempt to speed up progress to AGI.
Do you think that they think they are increasing x-risk in expectation (where the expectation is according to their beliefs)? I’d find that extremely surprising (unless their reasoning is something like “yes, we raise it from 1 in a trillion to 2 in a trillion, this doesn’t matter”).
See my reply downthread, where you asked Oli for an example.