I think this is taking aim at Yudkowskian arguments that are not cruxy for AI takeover risk as I see it. The second species doesn’t need to be supercoherent in order to kill us or put us in a box; human levels of coherence will do fine for that.