(My perennial uncertainty is: AI 1 can straightforwardly send source code / model weights / whatever to AI 2, but how can AI 1 prove to AI 2 that this file is actually its real source code / model weights / whatever? There might be a good answer, I dunno.)
They can jointly and transparently construct an AI 3 from scratch, motivated to further their deal, and then visibly hand over their physical resources to it, taking turns transferring small amounts in an iterated fashion.
AI 3 can also be given access to secrets of AI 1 and AI 2 to verify their claims without handing over sensitive data.
I think this idea should be credited to Tim Freeman (whom I quoted in this post), who AFAIK was the first person to talk about it (in response to a question very similar to Steven’s that I asked on SL4).