Though I used the term UFAI more for emotional impact than out of belief in its accuracy. We shouldn’t assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. That’s a rhetorical flourish, not a documented fact.
Neither; it’s the conclusion of a logical argument (which is, yes, weaker than a documented fact).
Nick, I disagree. You are saying there is a logical argument that concludes such AIs will be unfriendly with 100% probability. That just isn’t true, or even close to true.
Furthermore, even if there were an argument using these concepts that concluded something with 100% probability, the concepts of UFAI and FAI are not well-defined enough to draw the conclusion above.
I think you’re using the word “assume” here to mean something more like, “We should not build AIs without FAI methodology.” That’s a very, very different statement! That’s a conclusion based on expectation-maximization over all possible outcomes. What I am saying is that we should not assume that, in all possible outcomes, the AI comes out unfriendly.
No, Nick is not saying that.
Yes, he is. He said there is a logical argument that concludes that we should assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. “Assume” means “assign 100% probability”. What other meaning did you have in mind?
Nothing indicates a rhetorical flourish like the phrase ‘rhetorical flourish’.
Why not? It’s an assumption which may be slightly overcautious, but I would far rather be slightly overcautious than increase the risk that an AI smiley-tiles the universe. Until we have a more precise idea of which AIs not designed using rigorous and deliberate FAI methodology are not UFAIs, I see no reason to abandon the current hypothesis.
Because it fails to match reality. For example, charitable corporations can behave pathologically (falling prey to the Iron Law of Institutions), but they are generally qualitatively less unFriendly than the standard profit-making corporation.
If you believe it is overcautious, then you believe it is wrong. If you are worried about smiley-tiling, then you get the right answer by assigning the right value to that outcome, not by intentionally biasing your decision process.
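The disagreement here is about where the worry enters the math. A toy expected-utility sketch (every number below is invented purely for illustration; no one in the thread has endorsed these figures) shows that a large negative utility on the bad outcome can drive the cautious decision without ever inflating the probability to 1:

```python
# A Bayesian expectation-maximizer folds worry about a catastrophic
# outcome into the *utility* assigned to it, not into its probability.

def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Illustrative numbers only -- not anyone's actual estimates.
P_UNFRIENDLY = 0.25         # honest credence that a non-FAI project goes bad
U_CATASTROPHE = -1_000_000  # a smiley-tiled universe
U_GOOD_AI = 1_000           # benefit if the AI comes out friendly
U_NO_BUILD = 0              # status quo

eu_build = expected_utility([(P_UNFRIENDLY, U_CATASTROPHE),
                             (1 - P_UNFRIENDLY, U_GOOD_AI)])
eu_wait = U_NO_BUILD

print(eu_build, eu_wait)  # -249250.0 0: "don't build" wins with P well below 1
```

With an honest credence of 0.25, the cautious action already maximizes expected utility; rounding the probability up to 1 (“assuming” UFAI) changes nothing here except corrupting the estimate.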
I say ‘may be slightly overcautious’ contingent on its being wrong: I’m saying that if it is wrong, it is wrong in a direction that loses less utility than being wrong in the other direction would.
If you’re an agent with infinite computing power, you can investigate all hypotheses further to make sure that you’re right. Humans, however, are forced to devote time and effort to researching those things which are likely to yield utility, and I think that the current hypothesis sounds reasonable unless you have evidence that it is wrong.
The erring on the side of caution only enters when you have to make a decision. Your pre-action estimate should be clean of this.
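That separation can be shown in a few lines (the numbers are again invented for illustration): the credence is held fixed, and the caution appears only in which action that fixed credence recommends once the stakes change.

```python
# Belief and decision kept separate: the probability estimate is never
# adjusted "to be safe"; caution emerges from the utilities at decision time.

def choose(p_bad, u_bad, u_good, u_abstain=0.0):
    """Pick the expected-utility-maximizing action under a fixed belief."""
    eu_act = p_bad * u_bad + (1 - p_bad) * u_good
    return "act" if eu_act > u_abstain else "abstain"

P = 0.05  # the same honest estimate in both decisions below

print(choose(P, u_bad=-10, u_good=5))         # low stakes  -> "act"
print(choose(P, u_bad=-1_000_000, u_good=5))  # high stakes -> "abstain"
```

The same pre-action estimate yields both the bold and the cautious choice; nothing cautious was ever added to the belief itself.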
You should not err on the side of caution if you are a Bayesian expectation-maximizer!
But I think what you’re getting at, which is the important thing, is that people say “Assume X” when they really mean “My computation of probability times value, summed over all possible outcomes, indicates X is likely, and I’m too lazy to remember the details, or I think you’re too stupid to do the computation right; so I’m just going to cache ‘assume X’ and repeat that from now on”. They ruin their analysis because they’re lazy, and don’t want to do more analysis than they would need in order to decide what action to take if they had to make the choice today. Then the lazy analysis, done with poor information, becomes dogma. As in the example above.
I downvoted this sentence.
Instead of downvoting a comment for referring to another comment that you disagree with, I think you should downvote the original comment.
Better yet, explain why you downvoted. Explaining what you downvoted is going halfway, so I half-appreciate it.
I can’t express strongly enough my dismay that here, on a forum where people are allegedly devoted to rationality, they still strongly believe in making some assumptions without justification.
Weasel words used to convey unnecessary insult.
Proof that conformist mindless dogma is alive and well at LW...