I’m not sure. It seems like my argument applies even if SHF did have arbitrarily long to deliberate?
Aha, OK. So I either misunderstand or disagree with that.
I think SHF (at least in most examples) has the human as “CEO” with AIs as “advisers”, and thus the human can choose to ignore all of the advice and make the decision unaided.