You seem to be phrasing this as an either/or decision.
If we remember that all decisions are best formalized as functions, the outside view asks, “What other phenomena are like this one, and what did their functions look like?” without ever looking at the equation. The inside view tries to analyze the function itself. The hardcore inside view tries to actually plot points on the function; the not-quite-so-ambitious inside view just tries to find its zeros, inflection points, regions where it must be positive or negative, etc.
For complex issues, you should go back and forth between these views. If your outside view gives you 5 other functions you think are comparable, and they are all monotonic increasing, see if your function is monotonic increasing. If your inside view suggests that your function is everywhere positive, and 4 of your 5 outside-view functions are everywhere positive, throw out the 5th. And so on.
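To make the function analogy concrete, here is a minimal sketch in Python. The property checks, the majority-vote rule for discarding the disagreeing reference, and all of the sample numbers are my own illustrative assumptions, not anything specified in the comment above.

```python
# A minimal sketch of the outside-view / inside-view back-and-forth described
# above. The property checks, the majority-vote rule, and the sample data are
# illustrative assumptions, not anything prescribed by the comment itself.

def is_monotonic_increasing(ys):
    """Inside-view check on a sampled function: y-values never decrease."""
    return all(a <= b for a, b in zip(ys, ys[1:]))

def is_everywhere_positive(ys):
    """Inside-view check on a sampled function: every y-value is positive."""
    return all(y > 0 for y in ys)

def reconcile(target, references, has_property):
    """Outside view: test the property across the comparable functions.
    Inside view: test the same property on the target function.
    Throw out any reference that disagrees with the majority verdict."""
    votes = [has_property(ref) for ref in references]
    majority = sum(votes) > len(votes) / 2
    kept = [ref for ref, vote in zip(references, votes) if vote == majority]
    return {
        "outside_view_says": majority,
        "inside_view_says": has_property(target),
        "references_kept": len(kept),
    }

# Five comparable functions sampled at the same x-points; four are everywhere
# positive, the fifth is not, so the fifth gets thrown out.
references = [
    [1, 2, 3],
    [2, 4, 8],
    [1, 1, 2],
    [5, 6, 9],
    [-1, 0, 2],  # the outlier
]
target = [0.5, 1.5, 4.0]

print(reconcile(target, references, is_everywhere_positive))
# -> {'outside_view_says': True, 'inside_view_says': True, 'references_kept': 4}
```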
For example, the Singularity is unusual enough that looking past it calls for some inside-view thinking. But there are a few outside-view constraints you can invoke that turn out to have strong implications, such as the speed of light.
The one on my mind at present is my view that there will be scarcity of energy and physical resources after the Singularity. This is an outside view based on general properties of complex systems such as markets and ecosystems, plus the relative effects I expect the Singularity to have on the speed of computation and the development of new information versus the speed of development of energy and physical resources. A lot of people on LW make the contrary assumption that there will be abundance after the Singularity (often, I think, as a necessary article of faith for the Church of the Friendly AI Santa). The resulting diverging opinions on what is likely to happen (such as: Am I likely to be revived from cryonic suspension after the Singularity?) are an example of opinions resulting from similar inside-view reasoning within different outside-view constraints. The argument isn’t inside view vs. outside view; the argument in this case is between two outside views.
(Well, actually there’s more to our differences than that. I think that asking “Will I be revived after the Singularity?” is like a little kid asking “Will there be Bugs Bunny cartoons in heaven?” Asking the question is at best humorous, regardless of whether the answer is yes or no. But for purposes of illustrating my point, the differing outside views is what matters.)
That is the real conversation halter. “Appeal to the outside view” is usually just a bad argument; declaring something silly regardless of the answer is a conversation halter and a mind-killer.
If something really is silly, then saying so is a mind-freer, not a mind-killer. If we were actually having that conversation and I had said “That’s silly” at the start, rather than at the end, you might accuse me of halting the conversation. As it is, this is a brief summary of a complicated position, not an argument.
I don’t think that many people expect the elimination of resource constraints. Regarding the issue of FAI as Santa, wouldn’t the same statement apply to the industrial revolution? Regarding reviving the cryonically suspended: yes, probably a confusion, but not clearly; and working within the best model we have, the answer is “plausibly,” which is all that anyone claims.
The industrial revolution gives you stuff. Santa gives you what you want. When I read people’s dreams of a future in which an all-knowing benevolent Friendly AI provides them with everything they want forever, it weirds me out to think these are the same people who ridicule Christians. I’ve read interpretations of Friendly AI that suggest a Friendly AI is so smart, it can provide people with things that are logically inconsistent. But can a Friendly AI make a rock so big that it can’t move it?
Citation needed.
The economy gives you what you want. Cultural snobbishness gives you what you should want. Next...