The real bone of contention here seems to be the long chain of inference leading from common scientific/philosophical knowledge to the conclusion that uFAI is a serious existential risk. Any particular personal characteristics of EY would seem irrelevant till we have an opinion on that set of claims.
If EY were working on preventing asteroid impacts with earth, and he were the main driving force behind that effort, he could say “I’m trying to save the world” and nobody would look at him askance. That’s because asteroid impacts have definitely caused mass extinctions before, so nobody can challenge the very root of his claim.
The FAI problem, on the other hand, is at the top of a large house of inferential cards, so that Eliezer is saving the world GIVEN that W, X, Y and Z are true.
My bottom line: what we should be discussing is simply “Are W, X, Y and Z true?” Once we have a good idea about how strong that house of cards is, it will be obvious whether Eliezer is in a “permissible” epistemic state, or whatever.
Maybe people who know about these questions should consider a series of posts detailing all the separate issues leading to FAI. As far as I can tell from my not-extremely-tech-savvy vantage point, the weakest pillar in that house is the question of whether strong AI is feasible (note I said “feasible,” not “possible”).
Upvoted; the issue of FAI itself is more interesting than whether Eliezer is making an ass of himself and thereby hurting the SIAI message (probably a bit; claiming you’re smart isn’t really smart, but then he’s also doing a pretty good job as publicist).
One form of productive self-doubt is to have the LW community critically examine Eliezer’s central claims. Two of my attempted simplifications of those claims are posted here and here on related threads.
Those posts don’t really address whether strong AI is feasible; I think most AI researchers agree that it will become so, but disagree on the timeline. I believe it’s a crucial but rarely recognized point that the timeline depends heavily on how many resources are devoted to the problem. Those resources appear to be steadily increasing, so it might not be that long.
My bottom line: what we should be discussing is simply “Are W, X, Y and Z true?” Once we have a good idea about how strong that house of cards is, …
You shouldn’t deny knowledge of how strong the claims are and refer to those same claims as “a house of cards” in the same sentence. Those two positions are mutually exclusive, and putting them so close together set off my propagandometer.
The real bone of contention here seems to be the long chain of inference leading from common scientific/philosophical knowledge to the conclusion that FAI is a serious existential risk.
I am assuming you meant uFAI or AGI instead of FAI.
The FAI problem, on the other hand, is at the top of a large house of inferential cards, so that Eliezer is saving the world GIVEN that W, X, Y and Z are true.
For my part, the conclusion you mention seems to be the easy part; I consider that an answered question. The ‘Eliezer is saving the world’ part is far more difficult for me to assess, given the social and political intricacies that must be accounted for.
Don’t forget that some people, e.g. Roko, also think that FAI is a serious existential risk as well as uFAI.