http://becominggaia.wordpress.com/2011/03/15/why-do-you-hate-the-siailesswrong/#entry I’ll reserve my opinion about this clown, but honestly I do not get how he gets invited to AGI conferences, having neither work nor even serious educational credentials.
Downvoted. Unless “clown” is his actual profession, you didn’t reserve your opinion.
Wow, I loved the essay. I hadn’t realized I was part of such a united, powerful organisation and that I was so impressively intelligent, rhetorically powerful and ruthlessly self-interested. I seriously felt flattered.
You are in a Chinese room, according to his argument. None of us is as cruel as all of us.
Not to call attention to the elephant in the room, but what exactly are Eliezer Yudkowsky’s work and educational credentials re: AGI? I see a lot of philosophy relevant to AI as a discipline, but nothing that suggests any kind of hands-on experience…
This, for one, is in the ballpark of AGI work: http://singinst.org/upload/LOGI//LOGI.pdf. Plus, FAI work, while not being on AGI per se, is relevant and interesting to a rare conference in the area. Waser’s work, by contrast, is pure drivel.
He didn’t actually make any arguments in that essay. That frustrates me.
They...build a high wall around themselves rather than building roads to their neighbors. I can understand self-protection and short-sighted conservatism but extremes aren’t healthy for anyone...repetitively screaming their fear rather than listening to rational advice. Worse, they’re kicking rocks down on us.
If it weren’t for their fear-mongering...AND their arguing for unwise, dangerous actions (because they can’t see the even larger dangers that they are causing), I would ignore them like harmless individuals...rather than [like] junkies who need to do anti-societal/immoral things to support their habits...fear-mongering and manipulating others...
...very good at rhetorical rationalization and who are selfishly, unwilling to honestly interact and cooperate with others. Their fearful, conservative selfishness extends far beyond their “necessary” enslavement of the non-human and dangerous...raising strawmen, reducing to sound bites and other misdirections. They dismiss anyone and anything they don’t like with pejoratives like clueless and confused. Rather than honest engagement they attempt to shut down anyone who doesn’t see the world as they do. And they are very active in trying to proselytize their bad ideas...
In a sense, they are very like out-of-control children. They are bright, well-meaning and without a clue of the likely results of their actions. You certainly can’t hate individuals like that — but you also don’t let them run rampant...
What do you mean, no arguments? Just read the above excerpts... what do you think those are, ad hominems and applause lights?
...I think that that was one of those occasional comments you make which are sarcastic, which no one gets, and which always get downvoted.
But I could be wrong. Please clarify if you were kidding or not, for this slow, uncertain person.
Don’t worry, if my sarcasm is downvoted, that will probably be good for me. I get more karma than I deserve on silly stuff anyway.
The silly comments you make are far more insightful and useful than most seriously intended comments on most other websites. Keep up the good work.
I like the third passage. It makes it very clear what he is mistaken about.
Maybe he submits papers and the conference program committee finds them relevant and interesting enough?
After all, Yudkowsky has no credentials to speak of, either. What is SIAI? A weird charity?
I read his paper. Well, the points he raises against the FAI concept and for rational cooperation look quite convincing. So do the pro-FAI points. It is hard to tell which are more convincing, with both sides being relatively vague.
Based on the abstract, it’s not worth my time to read it.
Abstract. Insanity is doing the same thing over and over and expecting a different result. “Friendly AI” (FAI) meets these criteria on four separate counts by expecting a good result after: 1) it not only puts all of humanity’s eggs into one basket but relies upon a totally new and untested basket, 2) it allows fear to dictate our lives, 3) it divides the universe into us vs. them, and finally 4) it rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a “Coherent Extrapolated Volition of Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB) is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon?
Points 2), 3), and 4) are simply inane.
Upvoted, agreed, and addendum: Similarly inane is the cliche “insanity is doing the same thing over and over and expecting a different result.”
Which invites the question of why clearly incompetent people make up the program committee. His papers look like utter drivel mixed with superstition.
Interestingly, back in 2007, when I was naive and stupid, I thought Mark Waser one of the most competent participants of the agi and sl4 mailing lists. There must be something appealing to an unprepared mind in the way he talks. I can’t simulate that impression now, so it’s not clear what that is, but it’s probably mostly a general contrarian attitude without too many spelling errors.
If you are right, it is good that the public AGI field is composed of stupid people (LessWrong is prominent enough to attract, at least once, the attention of anyone whom LW could possibly convince). If you are wrong, it is good that his viewpoint is published too, so that people can try to find a balanced solution. Now, in what situation should we not promote that status quo?
Bad thinking happens without me helping to promote it. If there ever came a time when human thinking in general prematurely converged due to a limitation of reasonably sound (by human standards) thought, then I would perhaps advocate adding random noise to the thoughts of some of the population, in the hope that one of the stupid people got lucky and arrived at a new insight. But as of right now, there is no need to pay more respect to silly, substandard drivel than the work itself merits.
Keen, I hadn’t thought of that, upvoted.
That’s a fully general counterargument comprised of the middle ground fallacy and the fallacy of false choice.
We should not promote that status quo if his ideas—such as they are amid clumsily delivered, wince-inducing rhetorical bombast—are plainly stupid and a waste of everyone’s time.
It is not a fully general counterargument, because only if the FAI approach is right is it a good idea to suppress open dissemination of some AGI information.
That isn’t true. It would be a good idea to suppress some AGI information if the FAI approach is futile and any creation of AGI would turn out to be terrible.
It’s a general argument to avoid considering whether or not something even is information in a relevant sense.
I’m willing to accept “If you are wrong, it is good that papers showing how you are wrong are published,” but not “If you are right, there is no harm done by any arguments against your position,” nor “If you are wrong, there is benefit to any argument about AI so long as it differs from yours.”
Another way to put it is that it is a fully general counterargument against having standards. ;)
Well, I mean a more specific case. The FAI approach, among other things, presupposes that building FAI is very hard and that in the meantime it is better to divert random people from AGI into specialized problem-solving CS fields, or into game theory / decision theory.
Superficially, he references some things that are reasonable; he also implies some other things that are considered too hard to estimate (and so unreliable) on LessWrong.
If someone tries to make sense of it, she either builds a sensible decision theory out of these references (not entirely out of the question), follows the references to find both FAI and game-theoretic results that may be useful, or fails to make any sense of it (the suppression case I mentioned) and decides that AGI is a freak field.
Talk of “approaches” in AI has an insidious effect similar to that of the “-ism”s of philosophy: it compartmentalizes (the motivation for) projects away from the rest of the field.
That’s an interesting idea. Would you share some evidence for that (anecdotes or whatever)? I sometimes think in terms of a ‘Bayesian approach to statistics’.
I think the “insidious effect” exists and isn’t always a bad thing.
Which paper of his did you read? He has quite a few.
The AGI-2011 one.