It’s a general argument for avoiding the question of whether or not something is even information in a relevant sense.
I’m willing to accept “If you are wrong, it is good that papers showing how you are wrong are published,” but not “If you are right, there is no harm done by any arguments against your position,” nor “If you are wrong, there is benefit to any argument about AI so long as it differs from yours.”
Another way to put it is that it is a fully general counterargument against having standards. ;)
Well, I mean a more specific case. The FAI approach, among other things, presupposes that building FAI is very hard and that in the meantime it is better to divert random people from AGI to specialized problem-solving CS fields, or into game theory / decision theory.
Superficially, he references some things that are reasonable; he also implies some other things that are considered too hard to estimate (and so unreliable) on LessWrong.
If someone tries to make sense of it, she either builds a sensible decision theory out of these references (not entirely excluded), follows the references to find both FAI and game-theoretical results that may be useful, or fails to make any sense of it (the suppression case I mentioned) and decides that AGI is a freak field.
Talk of “approaches” in AI has a similar insidious effect to that of “-ism”s of philosophy, compartmentalizing (motivation for) projects from the rest of the field.
That’s an interesting idea. Would you share some evidence for that (anecdotes or whatever)? I sometimes think in terms of a ‘Bayesian approach to statistics’.
I think the “insidious effect” exists and isn’t always a bad thing.