At the same time, current models seem very unlikely to be x-risky (e.g. they’re still very bad at passing dangerous capabilities evals), which is another reason to think pausing now would be premature.
The relevant criterion is not whether the current models are likely to be x-risky (it’s obviously far too late if they are!), but whether the next generation of models has more than an insignificant chance of being x-risky, together with all the future frameworks they’re likely to be embedded into.
Given that the next generation of models is planned to involve at least one order of magnitude more computing power in training (and training runs are already in progress!), and that returns on scaling don’t seem to be slowing, I think the total chance of x-risk from those models is not insignificant.
No, introducing the concept of “indexical sample space” does not capture the thirder position, nor how the term is ordinarily used. You do not need to introduce a new type of space, with new definitions and axioms. The notion of credence (as defined in the Sleeping Beauty problem) already uses standard mathematical probability space definitions and axioms.
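To illustrate the point that no new machinery is needed: the thirder answer falls out of an ordinary frequency count over repeated runs of the standard protocol (heads: one awakening; tails: two awakenings). A minimal simulation sketch, where the function name, trial count, and seed are my own choices for illustration:

```python
import random

def simulate_awakenings(trials=100_000, seed=0):
    """Simulate the standard Sleeping Beauty protocol.

    Each trial: flip a fair coin. Heads -> Beauty is woken once
    (Monday); tails -> she is woken twice (Monday and Tuesday).
    Returns the fraction of all awakenings at which the coin is heads.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:  # heads: one awakening
            heads_awakenings += 1
            total_awakenings += 1
        else:                   # tails: two awakenings
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(simulate_awakenings())  # approaches 1/3 as trials grow
```

Nothing here goes beyond a standard sample space and counting measure over awakening events; the 1/3 credence is just the long-run relative frequency of heads among awakenings.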