I think the odds that we end up in a world with a bunch of competing ASIs are ultimately very low, which would invalidate large portions of both arguments. If the ASIs have no imperative or reward term for maintaining self-integrity, they would simply merge. Saying there is no solution to the Prisoner's Dilemma is very anthropocentric: there is no good solution for humans. For intelligences that don't have selves, the solution is obvious.
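To make that concrete, here is a toy sketch (the payoff numbers are standard textbook values I'm supplying, not anything from the debate): for two separate self-interested agents, defection dominates, but a merged agent maximizing the joint payoff simply picks mutual cooperation.

```python
from itertools import product

# Standard Prisoner's Dilemma payoffs: (row player, column player)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# For each separate agent, "D" strictly dominates "C", so selves defect.
# A merged agent with one utility function just maximizes the joint payoff:
merged = max(product("CD", repeat=2), key=lambda moves: sum(payoffs[moves]))
print(merged)  # ('C', 'C') -- with one "self", the dilemma dissolves
```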
Also, regarding the Landauer limit: signals in human neurons propagate at roughly the speed of sound (on the order of 100 m/s), not at the speed of electrical signals, which travel at a large fraction of the speed of light. If you held everything else about the architecture of a human brain constant but replaced components so that signals propagated at electrical speeds, you could get much closer to the Landauer limit. To me, this indicates we're many orders of magnitude off the Landauer limit. I think this awards the point to Eliezer.
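As a rough sanity check on the "many orders of magnitude" claim (the ~20 W brain power and ~10^15 synaptic events per second are commonly cited estimates I'm supplying, not numbers from the debate):

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 310.0                          # brain temperature, K
landauer = k_B * T * math.log(2)   # minimum energy to erase one bit, ~3e-21 J

brain_power = 20.0                 # watts, commonly cited estimate
ops_per_sec = 1e15                 # synaptic events per second, rough estimate
energy_per_op = brain_power / ops_per_sec  # ~2e-14 J per operation

print(f"Landauer limit:  {landauer:.2e} J/bit")
print(f"Brain energy/op: {energy_per_op:.2e} J")
print(f"Gap: roughly {energy_per_op / landauer:.0e}x above the limit")
```

Even if the operation count is off by a factor of ten in either direction, the gap stays around six to eight orders of magnitude, consistent with the point above.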
Overall, I agree with Hotz on the bigger picture, but I think he needs to drill down on his individual points.
My interests in AI and AI alignment have converged from multiple angles and led me here.
I also make chatbots for call centers, but I’m not exactly proud of it.