So trying to decide between approaches is at least partly tied to whether you think something is really, say, a biology problem, a computer science problem, or a mathematics problem.
This is an especially interesting problem, because it seems very difficult to rationally assess which approach to such a wide-open problem is most likely to work. If we’re looking at, for example, AI as a research problem, what kind of evidence could we gather that would lead us to believe one approach is likely to be more fruitful than another?
Gathering this evidence would seem to require that we know which features of intelligence are most important (so we can decide which details can be abstracted away in our approach and which need to be modeled explicitly), but we really don’t have access to that kind of information, and it’s not clear what would give us access to it (that insight alone would constitute substantial progress toward understanding intelligence).
This suggests important questions about the role of rationality in science. Namely, for all the talk of the “weapons-grade rationality” that Less Wrong offers, are such rationality techniques very useful for really hard scientific problems where we’re ignorant of large chunks of the hypothesis space, and where accurately assessing the weight of the hypotheses we do know is highly nontrivial?
Edit: see comment below for why I think the last paragraph is wrong.
are such rationality techniques very useful for really hard scientific problems...?
I now think that this was hyperbole. It seems obvious to me that the first and second fundamental questions of rationality are of fundamental importance to science. Namely:
What do I think I know, and why do I think I know it?
What am I doing, and why am I doing it?
The first question is essential for keeping track of the evidence you (think you) have, and the second question is essential for time management and for fighting akrasia, which is useful to those scientists who are mere mortals in their ability to be productive.
Rationality won’t magically solve science, but it clearly makes it (at least slightly) easier.
And in AI in particular, it’s hard to judge by the standards of “instrumental rationality.” You could say “The best guys are the ones who make the best prototypes.” But there’s always going to be someone who could say “That’s not a prototype, that has nothing to do with general AI,” and then there’s someone else who’ll say “General AI is an incoherent notion and a pipe dream; we’re the only ones who can actually build something.”
This is essentially tangential, but I would promptly walk away from anyone who said “General AI is an incoherent notion,” given that the human brain exists.
Rationality won’t magically solve science, but it clearly makes it (at least slightly) easier.
Exactly my point.
This is essentially tangential, but I would promptly walk away from anyone who said “General AI is an incoherent notion,” given that the human brain exists.
No … that’s NATURAL intelligence. Also organic, non-GMO, and pesticide-free :)