I’m never exactly sure how to respond to a question like that.
I mean, let’s up the ante a little: what if a group of IA humans with “a 250 IQ or so” are not only sufficient to accomplish our goals, but optimal… what if it turns out that for every unit of intelligence above that threshold, our chance of achieving our goals goes down?
Well, in that case we would do best to have such a group, and to not develop greater intelligence than that.
That said, if I found myself tomorrow confident that a proposition like that was true, I would strongly encourage myself to review how I had arrived at such confidence as rigorously as I could before actually making any decisions based on that confidence, because it’s just so damned implausible that any halfway legitimate process would arrive at such a conclusion.
Well, the way I defined those goals, I think those are all things that I’m pretty confident even unenhanced humans would eventually be able to do; intelligence enhancement here is more about speeding it up than about doing the impossible.
I think that curing old age and creating an indefinite lifespan is something we would eventually manage even without enhancement, through some combination of SENS, cloning, organ bioprinting, cybernetics, genetic engineering, etc. Notice, however, that if I had defined the goal as “preventing all death, including accidental death,” then we get into a realm where I become less sure of unenhanced humans solving that problem. Like I said, how you set your goals is pretty vital here.
Same for the other goals I mentioned: a “basic” post-scarcity system (everyone able to get all the necessities and most consumer goods at will, with almost no one needing to work unless they want to) is something we should be able to do with just some fairly modest advances in narrow AI and robotics, combined with a new economic/political system; that should be within our theoretical capabilities at some point. Space colonization, again, doesn’t seem like an impossible task based on what we know now.
That’s not to say that we won’t develop GAI, or that we shouldn’t... but if we can solve those basic types of problems without it, then maybe that at least lets us take our time, develop GAI one small step at a time, and make sure we get it right. You know: have AIs with 100 IQ help us develop friendly AI theory for a while before we start building AIs with 150 IQ, then have those help refine our design of AIs with 180 IQ, etc., instead of a risky rapid self-modifying foom.
I certainly agree that had you set more difficult goals, lower confidence that we could achieve them at all would be appropriate. I’m not as confident as you sound that we can plausibly rely on continuing unbounded improvements in narrow AI, robotics, economics, politics, space exploration, genetic engineering, etc., though it would certainly be nice. And, sure, if we’re confident enough in our understanding of the underlying theory that we have some reasonable notion of how to safely do “one step at a time and make sure we get it right,” that’s a fine strategy. I’m not sure such confidence is justified, though.
It’s not so much that I have confidence that we can have unbounded improvements in any one of those fields, genetic engineering for example; it’s more that there are at least 5 or 6 major research paths right now that could conceivably cure aging, all of which seem promising and all of which are areas where significant progress is being made right now. So even if one or two of them prove impossible or impractical, I think the odds of at least some of them eventually working are quite high.
(nods) Correction noted.