It also seems to me that a correctly implemented CEV including only myself and a few friends and trusted figures of authority would lead to a much better outcome than a CEV including all Americans or all Chinese.
Could be, although remember that everyone else would also prefer for just them, their friends, and their trusted figures to be in the CEV. Including more people is for reasons of compromise, not necessarily intrinsic value.
Isaac_Davis made a good point that a true CEV might not depend that sensitively on what country it was seeded from. The bigger danger I had in mind would be the (much more likely) outcome of imperfect CEV, such as regular democracy. In that case, excluding the Chinese could lead to more parochial outcomes, and the Chinese would then also have more reason to worry about a US AI.
That’s my point. If you’re funding a small-team top secret AGI project, you can keep your seed community small too; you don’t need to compromise. Especially if you’re consciously racing to finish your project before any rivals, you won’t want to include those rivals in your CEV.
Well, what does that imply your fellow prisoners in this one-shot prisoner’s dilemma are deciding to do in the secrecy of their basements? Maybe our best bet is to change the payoffs so that we’re playing a different game than a one-shot PD, via explicit coordination agreements and surveillance to enforce them.
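To make the payoff-change idea concrete, here is a minimal sketch (all payoff numbers and the penalty size are hypothetical, chosen only for illustration): in the baseline one-shot game, racing dominates, but a sufficiently large, credibly enforced penalty on detected racing flips the best response to cooperation.

```python
# Hypothetical payoffs: two labs each choose to "race" (defect) or
# "cooperate" (join a shared, safer project). Numbers are made up.

RACE, COOPERATE = "race", "cooperate"

def payoff(my_move, their_move, enforcement_penalty=0.0):
    """Return my payoff. The baseline is a one-shot prisoner's dilemma;
    enforcement_penalty models a verified agreement punishing racing."""
    base = {
        (COOPERATE, COOPERATE): 3,  # shared, safer development
        (COOPERATE, RACE):      0,  # I held back, they won the race
        (RACE,      COOPERATE): 5,  # I won the race unilaterally
        (RACE,      RACE):      1,  # mutual race, high accident risk
    }[(my_move, their_move)]
    return base - (enforcement_penalty if my_move == RACE else 0.0)

def best_response(their_move, enforcement_penalty=0.0):
    return max((RACE, COOPERATE),
               key=lambda m: payoff(m, their_move, enforcement_penalty))

# Without enforcement, racing is dominant: it's the best response either way.
assert best_response(COOPERATE) == RACE and best_response(RACE) == RACE

# With a credible penalty larger than the unilateral gain (here > 2),
# cooperation becomes the best response, so (cooperate, cooperate) is stable.
assert best_response(COOPERATE, enforcement_penalty=2.5) == COOPERATE
assert best_response(RACE, enforcement_penalty=2.5) == COOPERATE
```

The point is only that the penalty has to be large and credible enough to outweigh the unilateral gain from racing, which is exactly where the surveillance question below comes in.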
The surveillance would have to be good enough to prevent all attempts made by the most powerful governments to develop in secret something that may (eventually) require nothing beyond a few programmers in a few rooms running code.
This is a real issue. Verifying compliance with AI-limitation agreements is much harder than with nuclear agreements, and even those have issues. Carl’s paper suggests lie detection and other advanced transparency measures as possibilities, but it’s unclear if governments will tolerate this even when the future of the galaxy is at stake.