I mean, you get the standard utopia that the aligned AI gives you. And you’re more likely to end up in worlds with aligned AIs that disincentivize unaligned AIs from being created, so maybe there’s an anthropic feedback loop?
I’m not sure that most people who seek to create aligned AIs want an AI that starts carrying out the Last Judgment and punishes people for their misdeeds for acausal trade reasons.
It’s been a while since I read Roko’s post, but I don’t think it makes any argument for the resulting AI being unaligned. Being aligned doesn’t prevent the AI from assigning very high utility to its own existence and engaging in acausal trade to increase the chances of it coming into existence.