I would guess EY sees himself as more of a researcher than a forecaster, so you shouldn’t be surprised if he doesn’t make as many predictions as Paul Krugman.
OK. If that is the case, then I think that a fair question to ask is what have his major achievements in research been?
But secondly, a lot of the discussion on LW and most of EY’s research presupposes certain things happening in the future. If AI is actually impossible, then trying to design a friendly AI is a waste of time (or, alternatively, if AI won’t be developed for 10,000 years, then developing a friendly AI is not an urgent matter). What evidence can EY offer that he’s not wasting his time, to put it bluntly?
If AI is actually impossible, then trying to design a friendly AI is a waste of time
No, if our current evidence suggests that AI is impossible, and does so strongly enough to outweigh the large downside of a negative singularity, then trying to design a friendly AI is a waste of time.
Even if it turns out that your house doesn’t burn down, buying insurance wasn’t necessarily a bad idea. What is important is how likely it looked beforehand, and the relative costs of the outcomes.
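To make the insurance analogy concrete, here is a minimal expected-cost sketch. All the numbers (fire probability, house value, premium) are illustrative assumptions I'm inventing for the example, not figures from the discussion:

```python
# Minimal expected-cost sketch for the insurance analogy.
# All numbers below are illustrative assumptions, not real figures.

p_fire = 0.01          # assumed yearly probability the house burns down
house_value = 300_000  # assumed loss if it does
premium = 1_500        # assumed yearly insurance premium

# Expected cost without insurance: probability of fire times the loss.
expected_cost_uninsured = p_fire * house_value

# Expected cost with insurance: the premium, paid regardless of outcome.
expected_cost_insured = premium

print(f"Uninsured expected cost: {expected_cost_uninsured:,.0f}")  # 3,000
print(f"Insured expected cost:   {expected_cost_insured:,.0f}")    # 1,500
```

The point of the sketch is that the comparison depends only on the beforehand probability and the relative costs of the outcomes; whether the house actually burns down in a given year never enters the calculation.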
Claiming that AI constructed in a world of physics is impossible is equivalent to claiming that intelligence in a world of physics is impossible. Since humans are physical and demonstrably intelligent, this would require humans to work by dualism.
Of course, possibility in principle is entirely separate from practical feasibility.
If AI is actually impossible, then trying to design a friendly AI is a waste of time
I would think that anyone claiming that AI is impossible bears a heavy burden of proof. However, if one instead claimed that a fast take-off was impossible or extremely unlikely, there would be a more substantive issue to discuss.