I think we can now see how the situation evolved: SI ignored what the ‘contrarians’ (i.e. the mainstream) said, the views they formed after reading SI’s arguments, and so on.
SI then went to talk to GiveWell, and the presentation resulted in Holden forming the same view. If you strip his statement down to its bare bones, he says that giving money to SI results in either no change or an increase in risk, because the approach SI advocates is more dangerous than the current direction, and because the rationale he gives has long been available (but has been ignored).
Ultimately, it may be that SI’s arguments, when examined in depth by a random outsider, typically produce a strongly negative opinion of SI, but sometimes produce a positive one. The people who form a positive opinion seem to be a significant fraction at LW; after all, if you examine the AI-related arguments here and form a negative opinion, you’ll be far less interested in trying to learn rationality from those people.
Is Holden’s view really the same as the mainstream view, or is it just a surface similarity?
For example, a typical outsider would doubt SIAI’s abilities because a typical outsider thinks intelligent machines belong in sci-fi, not real life; Holden instead worries about a lack of credentials. Among those who think intelligent machines are possible, a typical person thinks it will be OK because the machines will obviously do only what we tell them to do; Holden instead worries that a (supposedly) Friendly AI is riskier than a “Tool AI”. Etc.
Mainstream meaning the people with credentials that Holden was referring to (whose views are somewhat echoed by everyone else). The kind of folk who will not be swayed by a mental confusion between the common discourse “the function of the AI is to make paperclips” and the technical discourse in which a utility function is a mathematical function that is part of a specific design of a specific AI architecture. The same kind of folk who, if they came across the Russian-mathematician name-dropping going on here, would, after politely exhausting the possibility that they had misunderstood, be convinced that this is a complete pile of manure arising from an utterly incompetent person reporting his awesome misunderstandings of advanced mathematics he read in a popularization book. Second-order bad science popularization. I don’t even care about AI any more. It boggles my mind that there’s an entire community of people who go around with such a gross lack of understanding of the things they are talking about.
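To make the technical sense concrete, here is a minimal toy sketch (entirely my own illustration, with made-up names; not anything from SI’s writings): a utility function in the technical sense is just an explicit mathematical function over outcomes, and it only acquires behavioral meaning once it is embedded in a specific agent design.

```python
# Toy illustration (all names hypothetical): the technical sense of
# "utility function" is an explicit mapping from outcomes to reals,
# which is just one component of a specific agent architecture.

def utility(outcome: dict) -> float:
    # A mathematical function over outcomes; this toy one values
    # the number of paperclips in the outcome.
    return float(outcome.get("paperclips", 0))

def choose_action(actions, predict_outcome):
    # The surrounding architecture decides how the utility function is
    # used; here, a naive argmax over predicted outcomes.
    return max(actions, key=lambda a: utility(predict_outcome(a)))

outcomes = {"make_clips": {"paperclips": 10}, "idle": {"paperclips": 0}}
print(choose_action(outcomes, lambda a: outcomes[a]))  # -> make_clips
```

The informal claim “the function of the AI is to make paperclips” names a goal; the technical object above is a replaceable component of one particular design, which is exactly the distinction that confusion glosses over.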
edit: This stuff is only tolerated because it sort of promotes interest in mathematics. To be fair, even a very gross misunderstanding of mathematics may serve a good function if a person passionately talks of the importance of the mathematics he misunderstood. But once you start seriously pushing nonsense forward, you’re out. This whole thing reminds me of an experience with an entirely opposite but equally dumb point: a guy with good verbal skills read Gödel, Escher, Bach, thought he understood Gödel’s incompleteness theorem, and imagined that understanding it implied humans are capable of hypercomputation (computation beyond a Turing machine). It’s literally impossible to talk sense into such cases. They don’t understand the basics but jump ahead to highly advanced topics, which they understand only metaphorically. Not having properly studied mathematics, they do not understand how much care is required to avoid screwing up (especially when bordering on philosophy). That can serve a good function, yes: someone sees the One Truth in, say, Solomonoff induction, and someone else actually learns the mathematics, which is interesting in its own right even though it doesn’t disprove God or accomplish anything equally interesting.