If you are right, it is good that the public AGI field is composed of stupid people (LessWrong is prominent enough to attract, at least once, the attention of anyone LW could possibly convince). If you are wrong, it is good that his viewpoint is published too, so that people can try to find a balanced solution. Now, in what situation should we not promote that status quo?
Bad thinking happens without me helping to promote it. If there ever came a time when human thinking in general prematurely converged due to a limitation of reasonably sound (by human standards) thought, then I would perhaps advocate adding random noise to the thoughts of some of the population, in the hope that one of the stupid people got lucky and arrived at a new insight. But as of right now there is no need to pay silly substandard drivel more respect than the work itself merits.
Keen, I hadn’t thought of that, upvoted.
That’s a fully general counterargument, composed of the middle ground fallacy and the fallacy of false choice.
We should not promote that status quo if his ideas—such as they are amid clumsily delivered, wince-inducing rhetorical bombast—are plainly stupid and a waste of everyone’s time.
It is not a fully general counterargument, because it is a good idea to suppress open dissemination of some AGI information only if the FAI approach is right.
That isn’t true. It would be a good idea to suppress some AGI information if the FAI approach is futile and any creation of AGI would turn out to be terrible.
It’s a general argument for avoiding the question of whether or not something even is information in a relevant sense.
I’m willing to accept “If you are wrong, it is good that papers showing how you are wrong are published,” but not “If you are right, there is no harm done by any arguments against your position,” nor “If you are wrong, there is benefit to any argument about AI so long as it differs from yours.”
Another way to put it is that it is a fully general counterargument against having standards. ;)
Well, I mean a more specific case. The FAI approach, among other things, presupposes that building FAI is very hard and that in the meantime it is better to divert random people from AGI into specialized problem-solving CS fields, or into game theory / decision theory.
Superficially, he references some things that are reasonable; he also implies some other things that are considered too hard to estimate (and so unreliable) on LessWrong.
If someone tries to make sense of it, she either builds a sensible decision theory out of these references (not entirely out of the question), follows the references to find both FAI and game-theoretic results that may be useful, or fails to make any sense of it (the suppression case I mentioned) and decides that AGI is a freak field.
Talk of “approaches” in AI has a similar insidious effect to that of “-ism”s of philosophy, compartmentalizing (motivation for) projects from the rest of the field.
That’s an interesting idea. Would you share some evidence for that (anecdotes or whatever)? I sometimes think in terms of a ‘Bayesian approach to statistics’.
I think the “insidious effect” exists and isn’t always a bad thing.