I agree. And I don’t see why Eliezer expects that people MOSTLY disagree on the difficulty of success, even if some (like the OP) do.
When I talk casually to people and tell them I expect the world to end they smile and nod.
When I talk casually to people and tell them that the things they value are complicated and even being specific in English about that is difficult, they agree and we have extensive conversations.
So my (extremely limited) data points suggest that the main point of contention between Eliezer’s view and the views of most people who at least have some background in formal logic is that they don’t see this as an important problem, not that they don’t see it as a difficult problem.
Therefore, when Eliezer dismisses criticism that the problem is easy as the main criticism, in the way I pointed out in my comment, it feels weird and misdirected to me.
Well, he has addressed that point (AI gone bad will kill us all) in detail elsewhere. And he probably encounters more people who think they just solved the problem of FAI. Still, you have a point; it’s a lot easier to persuade someone that FAI is hard (I should think) than that it is needed.
I agree completely. I don’t dispute the arguments, just the characterization of the general population.