“|” meant concatenation, so “S|Output := H(S|Input)” means you set S to the first half of H(S|Input), and Output to the second half of H(S|Input).
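For concreteness, here is a minimal Python sketch of that update rule. The choice of SHA-256 and the even split of the digest are my assumptions; the original only posits some hash function H:

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in hash function; SHA-256 is an assumption, any hash H works.
    return hashlib.sha256(data).digest()

def step(S: bytes, Input: bytes) -> tuple[bytes, bytes]:
    # One iteration of S|Output := H(S|Input):
    # concatenate state S with Input ("|" is concatenation), hash,
    # then split the digest into (new state S, Output).
    digest = H(S + Input)
    half = len(digest) // 2
    return digest[:half], digest[half:]

# Iterate from an all-zero state.
S = bytes(16)
for i in range(3):
    S, Output = step(S, f"input-{i}".encode())
    print(Output.hex())
```

Note that each new S depends on the previous digest, so the steps are inherently serial, which is what makes iterated hashing a natural test case for arbitrarily long deliberation.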
OK, so it sounds like your argument why SHF can’t do ALD is (a specific, technical version of) the same argument that I mentioned in my last response. Can you confirm?
I’m not sure. It seems like my argument applies even if SHF did have arbitrarily long to deliberate?
Aha, OK. So I either misunderstand or disagree with that.
I think SHF (in most examples, at least) has the human as “CEO” with AIs as “advisers”, and thus the human can choose to ignore all of the advice and make the decision unaided.