It’s a bit off topic, but I’ve been meaning to ask Eliezer this for a while. I think I get the basic logic behind “FOOM.” If a brain as smart as ours could evolve from pretty much nothing, then it seems likely that sooner or later (and I have not the slightest idea whether it will be sooner or later) we should be able to use the smarts we have to design a mind that is smarter. And if we can make a mind smarter than ours, it seems likely that that mind should be able to make one smarter than it, and so on. And this process should be pretty explosive, at least for a while, so that in pretty short order the smartest minds around will be way more than a match for us, which is why it’s so important that friendliness be baked into the process from the very beginning.
But it seems to me that this qualitative argument works equally well whether “explosive” means “from box in someone’s basement to Unchallenged Lord and Master of the Universe Forever and Ever” before anyone else knows about it or can do anything about it, or it means “different people/groups will innovate and borrow/steal each others’ innovations over a period of many years, at the end of which where we end up will depend only a little on the contribution of the people who started the ball rolling.” And if that’s right, doesn’t it follow that what really matters is not the correctness of the FOOM argument (which seems correct to me), but rather the estimate of how big the exponent in the exponential growth is likely to be? Is this (and much of your disagreement with Robin Hanson) just a disagreement over an estimate of a number? Does that disagreement stand any chance of being anywhere near resolved with available evidence?
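For concreteness, here’s a toy back-of-the-envelope version of what I mean by “the exponent is what matters” (my own framing, not anything Eliezer has said; the doubling times and the 1000× threshold are made-up illustrative numbers):

```python
# Toy model: capability grows as C(t) = C0 * exp(k*t). The qualitative
# FOOM argument fixes the *shape* (exponential); what separates the two
# scenarios is k, i.e. the doubling time ln(2)/k. All numbers are made up.
import math

def years_to_multiple(doubling_time_years, multiple):
    """Years for capability to grow by `multiple`, given a doubling time."""
    k = math.log(2) / doubling_time_years   # growth exponent, per year
    return math.log(multiple) / k           # solve e^(k*t) = multiple for t

for dt in (0.01, 0.5, 5.0):                 # hypothetical doubling times (years)
    t = years_to_multiple(dt, 1000)         # time to open a 1000x capability gap
    print(f"doubling time {dt:5.2f} yr -> 1000x lead after {t:6.2f} yr")
```

On this toy picture the qualitative logic is identical in every row; only the doubling time separates “a basement project becomes a singleton before anyone notices” (a 1000× lead in about a month at the fast end) from “an ordinary multi-decade technology race that others can copy and catch up with” (about fifty years at the slow end).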