Apologies for the provocative phrasing—I was (inadvertently) asking for a heated reply...
But to clarify the point in light of your response (which no doubt will get another heated reply, though I’m honestly trying to convey the point without provoking...):
A pile of radioactive material is not a good analogy here. But I think its appearance here is a good illustration of the very thing I’m hoping to convey: there are a lot of (vague, wrong) theories of AGI which map well to the radioactive-pile analogy. Just put enough of the ingredients together in a pile, and FOOM. But the more you actually work on AGI, the more you realize how heuristic, incremental, and data-bound it is; how a fantastic solution to monkey problems (vision, planning, etc.) confers only the weakest ability in symbolic domains; and that, for instance, NP-hard problems most likely remain intractable regardless of intelligence, their solutions limited by time, space, and energy constraints—not cleverness. Can a hyper-intelligent AI improve upon hardware design, etc., etc.? Sure! But the whole system (of progress) we’re speaking of is a large, complex system of differential equations with many bottlenecks, at least some of which aren’t readily amenable to hyper-exponential change. Will there be a point where things are out of our (human) hands? Yes. Will it happen overnight? No.
The radioactive-pile analogy fails because AGI will not be had by heaping a bunch of stuff in a pile—it will be had through extensive engineering and design. It will progress incrementally, and it will be bounded by resource constraints—for a long, long time.
A better analogy might be building a fusion reactor. Sure, you have to be careful, especially in the final design, construction, and operation of the full-scale device, but there’s a huge amount of engineering to be done before you get anywhere near having to worry about that, and tons of smaller experiments that need to be proven first, and so on. And you learn as you go, and after years of work you start getting closer, and you know a shitload about the technology—what its quirks and hazards are, what’s easy to control, and so on.
And when you’re there—well along the way, and you understand the technology at a deep level even if you haven’t quite figured out how to make the sustainable fusion reactor yet—it’s pretty insulting/annoying when someone who doesn’t have any practical(!) grasp of the matter comes along and tells you you shouldn’t be working on this because they haven’t figured out how to make it safe yet! And because they think that if you put too much stuff in a pile, it will go boom! (Sigh!) You’re on your way to making clean energy before the peak-oil apocalypse or whatever, and they’re working against you (if only they knew what their efforts were costing the world).
What do you do there? They’re fearful because they don’t understand, and the particulars of their fear are, really, superstition (in the sense that they are founded not on a solid understanding, but quite specifically on a lack thereof). You want to say: get up to speed, and you can help us make this work and make it safe (and when you understand it better, you’ll start to see how that might actually be done—and how much less of an explosive problem it is than you think). But GTF out of my way if you’re just going to pontificate from ignorance and try to dictate how I do my job from there. No matter how long you sofa-think about how to keep the pile-o-stuff from going bad-FOOM, your answers are never going to mesh with reality, because you’ve got way too many false premises that need to be sifted out first (through actual experience in the topic). [And, sorry, but no matter how big Eliezer’s cloud of self-citations is, that’s just someone else’s sofa-think, not actual experience.]
Personally, I do not think FAI is a hard problem (a highly educated opinion, not an offhand dismissal). But I also know that UAI is going to happen eventually (intentionally), no matter how many conferences y’all have. And I also know the likeliest outcome of all is that we’ll all die of old age because AI didn’t happen well enough, soon enough. But I understand if you disagree.
I think it will be incidental to AGI. That is, by the time you are approaching human-level AGI, it will be essentially obvious (to the sort of person who groks human-level AGI in the first place). Motivation (as a component of the process of thinking) is integral to AGI, not some extra thing only humans and animals happen to have. Motivation needs to be grokked before you will have AGI in the first place. Human motivational structure is quite complex, with far more ulterior motives (clan affiliation, reproduction, etc.) than straightforward ones. AGIs needn’t be so burdened, which in many ways makes the FAI problem easier, in fact, than our human-based intuition might surmise. On the other hand, simple random variation is a huge risk—that is, no matter the intentional agenda, there is always the possibility that a simple error will put that very abstract coefficient of feedback over unity, and then you have a problem. If AGI weren’t going to happen regardless, I might say it’s worthy of a debate now what the nature of that problem would be (but in that debate, I still say it’s not a huge problem—it’s not instantaneous FOOM, it’s time-to-unplug FOOM; and you have the advantage of other FAIs by then, with full ability to analyze each other, so you actually have a lot of tools available to put out fires long before they’re raging); but AGI is going to happen regardless, so the race is not FAI vs. AGI, but whether the first to solve AGI wants FAI or something else. And like I say, there is also the race against our own inevitable demise from old age (talk to anybody who’s been in the longevity community for more than 20 years and you will learn they once had your optimism about progress).
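(To make the “over unity” / “time-to-unplug” point concrete, here is a toy sketch. The multiplicative growth model, the gain values, and the “runaway threshold” below are all my own illustrative assumptions, not a claim about how real systems behave; the only point is that a feedback coefficient over unity compounds across discrete improvement cycles, so you get a window of cycles in which to intervene, not an instantaneous jump.)

```python
# Toy model of the "coefficient of feedback over unity" point above.
# Everything here is a made-up illustration: the multiplicative growth
# model, the gain values, and the "runaway threshold" are assumptions,
# not claims about how real AGI development behaves.

def cycles_until_runaway(gain, start=1.0, threshold=1000.0, max_cycles=10_000):
    """Count discrete self-improvement cycles until capability crosses
    a (hypothetical) runaway threshold, given multiplicative feedback."""
    capability, cycles = start, 0
    while capability < threshold and cycles < max_cycles:
        capability *= gain   # feedback coefficient applied once per cycle
        cycles += 1
    return cycles

# Gain barely over unity: runaway takes many cycles (lots of time to
# unplug); gain well over unity shortens the window, but it is still a
# sequence of cycles, not a detonation.
for g in (1.01, 1.1, 2.0):
    print(g, cycles_until_runaway(g))
```

And of course the toy model ignores exactly the resource bottlenecks argued for above, which only widens that window further.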
Don’t get me wrong, FAI is not an uninteresting problem. My claim is quite simply that for the goals of the FAI community (which I have to assume include your own long-term survival), y’all would do far better to be working (hard and seriously) on AGI than not. All of this sofa-think today will be replicated in short order by better-informed consideration down the road. And I ain’t sayin’ don’t think about it today—I’m sayin’ find a realistic balance between FAI and AGI research that doesn’t leave you so far behind the game that your goals never get to matter, and I’m sayin’ that’s 99% AGI research and 1% FAI (for now). (And no, that doesn’t mean 99 people doing AGI and 1 doing FAI. My point is that the 1 doing FAI is useless if they aren’t 99% steeped in AGI from which to think about FAI in the first place.)