If one of the main coordination mechanisms used by humans in practice is this simulacrum-3 pretend-to-pretend trick, and rationalists generally stick to simulacrum-1 literal truth and even proactively avoid any hints of simulacrum-3, then a priori we’d expect rationalists to be unusually bad at cooperating.
If we want to close that coordination gap, our kind are left with two choices:
1. play the simulacrum-3 game (at the cost of probably losing some of our largest relative advantages)
2. find some other way to coordinate (which is liable to be Hard)
I think it ultimately has to be the latter: ancestral human coordination mechanisms are already breaking down as they scale up (see e.g. Personal to Prison Gangs, Mazes), and those failure modes are largely driven by the costs of simulacrum-3 (i.e. losing entanglement with reality), so it's a problem which needs to be solved one way or the other.
(Also, it’s a problem essentially isomorphic to various technical AI alignment problems.)
See also: Why Our Kind Can’t Cooperate.