Nick, I disagree. You are saying there is a logical argument that concludes such AIs will be unfriendly with 100% probability. That just isn’t true, or even close to true.
Furthermore, even if some argument built on these concepts did conclude something with 100% probability, the concepts of UFAI and FAI are not well-defined enough to draw the conclusion above.
I think you’re using the word “assume” here to mean something more like, “We should not build AIs without FAI methodology.” That’s a very, very different statement! That’s a conclusion reached by maximizing expected value over all possible outcomes. What I am saying is that we should not assume that, in every possible outcome, the AI comes out unfriendly.
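To make the distinction concrete, here is a toy expected-value calculation; the probability and payoffs are made-up numbers for illustration only, not figures anyone in this thread has claimed:

\[
\mathbb{E}[U(\text{build})] = (1 - p)\,u_{F} + p\,u_{U},
\qquad p = P(\text{AI is unfriendly}).
\]
\[
\text{With } p = 0.1,\ u_{F} = 1,\ u_{U} = -1000:
\qquad \mathbb{E}[U(\text{build})] = 0.9(1) + 0.1(-1000) = -99.1 < 0.
\]

So “don’t build without FAI methodology” can follow from the expected-value calculation even when p is far below 1; assuming p = 1 is a much stronger claim.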
No, Nick is not saying that.
Yes, he is. He said there is a logical argument that concludes that we should assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. “Assume” means “assign 100% probability”. What other meaning did you have in mind?