We shouldn’t assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI.
Why not? It’s an assumption which may be slightly overcautious, but I would far rather be slightly overcautious than increase the risk that an AI is going to smiley-tile the universe. Until we have a more precise idea of which AIs-not-designed-using-rigorous-and-deliberate-FAI-methodology are not UFAIs, I see no reason to abandon the current hypothesis.
Because it doesn’t quite match reality: e.g., charitable corporations can behave pathologically (falling prey to the Iron Law of Institutions), but they are generally qualitatively less unFriendly than the standard profit-making corporation.
If you believe it is overcautious, then you believe it is wrong. If you are worried about smiley-tiling, then you get the right answer by assigning the right value to that outcome. Not by intentionally biasing your decision process.
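To make that concrete, here is a minimal worked example with purely hypothetical numbers (the 1% probability and the utilities below are illustrative assumptions, not anyone’s actual estimates). An expectation-maximizer with honest, unbiased probabilities already acts cautiously when the downside is bad enough:

\[
\mathbb{E}[U(\text{deploy})] = p\,U_{\text{UFAI}} + (1 - p)\,U_{\text{safe}}
= 0.01 \times (-10^{9}) + 0.99 \times 10^{3} \approx -10^{7}
< 0 = \mathbb{E}[U(\text{halt})].
\]

Halting wins even at p = 0.01; the asymmetry lives entirely in the utility assigned to the smiley-tiling outcome, so there is no need to bias p upward.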
I say ‘may be slightly overcautious’ contingent on it being wrong: I’m saying that if it is wrong, it’s the sort of wrong that loses less utility than being wrong in the other direction would.
If you’re an agent with infinite computing power, you can investigate every hypothesis further to make sure you’re right. Humans, however, have to devote their time and effort to researching the things likely to yield utility, and I think the current hypothesis sounds reasonable unless you have evidence that it is wrong.
The erring on the side of caution only enters when you have to make a decision. Your pre-action estimate should be clean of this.
You should not err on the side of caution if you are a Bayesian expectation-maximizer!
But I think what you’re getting at, which is the important thing, is that people say “Assume X” when they really mean “My computation of value times probability over all possible outcomes favors X, and I’m too lazy to remember the details, or I think you’re too stupid to do the computation right; so I’m just going to cache ‘assume X’ and repeat it from now on”. They ruin their analysis because they’re lazy: they don’t want to do any more analysis than they’d need to pick an action if they had to choose today. Then the lazy analysis, done with poor information, becomes dogma. As in the example above.
I downvoted this sentence.
Instead of downvoting a comment for referring to another comment that you disagree with, I think you should downvote the original comment.
Better yet, explain why you downvoted. Explaining what you downvoted is going halfway, so I half-appreciate it.
I can’t express my dismay strongly enough that here, on a forum where people are allegedly devoted to rationality, they still strongly believe in making some assumptions without justification.
Weasel words used to convey unnecessary insult.