Maybe, but I’m not sure it’s even necessary to invoke LDT/FDT/UDT. One could instead argue that coordinating through purely causal methods is already so cheap for AIs that coordination, and as a side effect interfaces, become far less of a bottleneck than they are today.

In essence, I think the diff between John’s models and tailcalled’s models is plausibly about how easy coordination can ever be for AIs, and whether AIs can coordinate much better than humans do today: John thinks coordination is a taut constraint for humans but not for AIs, whereas tailcalled thinks coordination is hard for both AIs and humans due to fundamental limits.