The only part of the chain of logic that I don’t fully grok is the “FOOM” part. Specifically, the recursive self-improvement. My intuition tells me that an AGI trying to improve itself by rewriting its own code would encounter diminishing returns after a point; after all, there would seem to be a theoretical minimum number of instructions necessary to implement an ideal Bayesian reasoner. Once the AGI has optimized its code down to that point, what further improvements can it make (in software)? Come up with something better than Bayesianism?
Now in your summary here, you seem to downplay the recursive self-improvement part, implying that it would ‘help’ but wouldn’t be strictly necessary. But my impression from reading Eliezer was that he considers it an integral part of the thesis, as it would seem to be to me as well. Because if the intelligence explosion isn’t coming from software self-improvement, then where is it coming from? Moore’s Law? That isn’t fast enough for a “FOOM”, even if intelligence scaled linearly with the hardware you threw at it, which my intuition tells me it probably wouldn’t.
Now of course this is all just intuition—I haven’t done the math, or even put a lot of thought into it. It’s just something that doesn’t seem obvious to me, and I’ve never heard a compelling explanation to convince me my intuition is wrong.
I don’t think anyone argues that there’s no limit to recursive self-improvement, just that the limit is very high. Personally I’m not sure if a really fast FOOM is possible, but I think it’s likely enough to be worth worrying about (or at least letting the SIAI worry about it...).
I think the concern stands even without a FOOM; if AI gets a good bit smarter than us, however that happens (design plus learning, or self-improvement), it’s going to do whatever it wants.
As for your “ideal Bayesian” intuition, I think the challenge is deciding WHAT to apply it to. The amount of computational power needed to apply it to every thing and every concept on earth is truly staggering. There is plenty of room for algorithmic improvement, and an AI doesn’t need to get anywhere near the ideal to outwit (and out-engineer) us.
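To make “staggering” a bit more concrete, here’s a toy sketch (my own illustration, not anything from the original post): exact Bayesian updating over a joint model of n binary variables means touching 2^n states, so the table doubles with every variable you add. The function and the toy prior/likelihood below are made up for the example.

```python
from itertools import product

def exact_posterior(n_vars, prior, likelihood, evidence):
    """Brute-force Bayesian update over all 2**n_vars joint assignments."""
    states = list(product([0, 1], repeat=n_vars))            # 2**n_vars entries
    unnorm = {s: prior(s) * likelihood(evidence, s) for s in states}
    z = sum(unnorm.values())                                  # normalizing constant
    return {s: p / z for s, p in unnorm.items()}

# Toy demo: uniform prior, a likelihood that simply favors states with more 1s.
post = exact_posterior(
    n_vars=10,
    prior=lambda s: 1.0,
    likelihood=lambda e, s: sum(s) + 1,
    evidence=None,
)
print(len(post))  # 1024 table entries for only 10 binary variables
```

At a few hundred variables the table already exceeds the number of atoms in the observable universe, so the point isn’t squeezing the Bayes formula itself (that part is tiny); it’s approximating this kind of blowup well, which is where a self-improving AI has plenty of headroom before it hits anything like a theoretical floor.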
I think the widespread opinion is that the human brain has relatively inefficient hardware (I don’t have a cite for this) and, most likely, inefficient software as well: evolution seems unlikely to have optimized general intelligence very well in the relatively short timeframe we’ve had it at all, and we don’t seem able to channel our intelligence into rational thought efficiently or consistently.
That being the case, if we were to write a self-improving AI on hardware roughly as powerful as, or more powerful than, the human brain (which seems likely), it stands to reason that it could end up much faster and more effective than the human brain, and self-improvement should move it quickly in that direction.