Have you tried using this approach (e.g., by double-cruxing) to come to an agreement on a simpler issue first? AI safety is complicated; start small.
I frequently do come to agreement using Aumannian methods.
But yes, I suspect that one cannot simply use Aumann’s agreement theorem to reach agreement on AI safety; it was the other rationalist who wanted to do that.