Fast: Immense benefit will come from AI development, and any delay has a huge opportunity cost. Accelerate AI and robotics development as fast as possible.
I think there’s a crux here. Most members of EA/Less Wrong are utilitarians who assign equal moral weight to humans in future generations as to humans now, and who assume that we may well spread first through the solar system and then the galaxy, so there may eventually be far more of them. So even a 1% reduction in x-risk from alignment failure is a 1% increase in the probability that a plausibly astronomical number of future humans come to exist. Whereas a delay of, say, a century to get a better chance only pushes back the start of our expansion through the galaxy by a century, which eventually becomes quite a small effect in comparison. Meanwhile, the current number of humans who are thereby forced to go on living at mere 21st-century living standards is small enough to be negligible in the utility computation. Yes, this is basically Pascal’s Mugging, only from the future. So solving alignment as carefully as possible, even if that unfortunately makes it slow, becomes a moral imperative. So I don’t think you’re going to find many takers for your viewpoint here.
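To make that comparison concrete, here is a back-of-the-envelope sketch with purely illustrative numbers of my own choosing (10^16 future humans, 10^10 current humans, ~100-year lifespans, a 1% risk reduction, and a one-century delay are assumptions for the sake of the arithmetic, not estimates anyone endorses):

\[
\text{expected gain from caution} \approx 0.01 \times N_{\text{future}} \times 100\,\text{yr} \approx 0.01 \times 10^{16} \times 10^{2} = 10^{16}\ \text{life-years}
\]
\[
\text{cost of a century's delay} \lesssim N_{\text{current}} \times 100\,\text{yr} \approx 10^{10} \times 10^{2} = 10^{12}\ \text{life-years}
\]

Under these (very rough) assumptions the expected gain exceeds the cost by about four orders of magnitude, which is why the argument still goes through for much smaller risk reductions or much larger delays.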
Pretty much everyone else in the world lives as if any generation beyond roughly their grandchildren had almost no moral weight whatsoever and would just have to look after itself. In small-scale decisions it’s almost impossible to accurately look a century ahead, so it’s wise to act as if considerations a century from now don’t matter. But human extinction is forever, so it’s not hard to look forward many millennia with respect to its effects: if we screw up with AI and render ourselves extinct, we’ll very predictably still be extinct a million years from now, and the rest of the universe will have to deal with whatever paperclip maximizer we accidentally created as the legacy of our screw-up.
Thanks for the insightful comment. Ultimately, the difference in attitude comes down to the perceived existential risk posed by the technology, and to weighing the risks of acting to accelerate AI against the risks of not acting.
And yes, I wasn’t expecting to find much agreement here, but that’s what makes it interesting :)