So I’ll just call it how I see it: Do you want to make self-improving AGI a reality? Then we’ll have to find a way to make it happen without involving public opinion in this decision.
Well, I’m not at all convinced that substantially self-improving AGI can exist (that is, one that will self-improve at such a rate as to quickly gain near-complete control of its light cone, or something like that). I assign only a small probability to the first AGI going foom. Also, if I’ve learned one thing from LW it is that such an AI could plausibly be really bad. So I’d rather take a risk-averse strategy if at all possible.