Could be, although remember that everyone else would also prefer that only they, their friends, and their trusted figures be in CEV. Including more people is done for reasons of compromise, not necessarily intrinsic value.
That’s my point. If you’re funding a small-team top secret AGI project, you can keep your seed community small too; you don’t need to compromise. Especially if you’re consciously racing to finish your project before any rivals, you won’t want to include those rivals in your CEV.
Well, what does that imply your fellow prisoners in this one-shot prisoner’s dilemma are deciding to do in the secrecy of their basements? Maybe our best bet is to change the payoffs so that we get a different game than a one-shot PD, via explicit coordination agreements and surveillance to enforce them.
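A minimal sketch of the payoff-changing point, with made-up numbers (the penalty value and payoffs are illustrative assumptions, not from the discussion): in the plain one-shot PD, defecting (racing in secret) is dominant, but subtracting an enforcement penalty for detected defection can make cooperation dominant, so the game is no longer a PD.

```python
# Illustrative sketch: how an enforced agreement can change a one-shot PD.
# payoffs[(my_move, their_move)] -> my payoff; moves are "C"(ooperate) / "D"(efect).

def dominant_strategy(payoffs):
    """Return the row player's dominant strategy, if one exists."""
    if all(payoffs[("D", other)] > payoffs[("C", other)] for other in ("C", "D")):
        return "D"
    if all(payoffs[("C", other)] > payoffs[("D", other)] for other in ("C", "D")):
        return "C"
    return None

# Classic one-shot PD payoffs (assumed values): defection dominates.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(dominant_strategy(pd))  # -> "D"

# Surveillance plus enforcement imposes a penalty (assumed value 4) on defection;
# with the modified payoffs, cooperation becomes the dominant strategy.
penalty = 4
enforced = {move: (p - penalty if move[0] == "D" else p) for move, p in pd.items()}
print(dominant_strategy(enforced))  # -> "C"
```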
The surveillance would have to be good enough to prevent all attempts made by the most powerful governments to develop in secret something that may (eventually) require nothing beyond a few programmers in a few rooms running code.
This is a real issue. Verifying compliance with AI-limitation agreements is much harder than with nuclear agreements, and even those have problems. Carl’s paper suggests lie detection and other advanced transparency measures as possibilities, but it’s unclear whether governments will tolerate this even when the future of the galaxy is at stake.