Ok. Well done. You have managed to frighten me. Frightened me enough to make me ask the question: “Just why do we want to build a powerful optimizer, anyways?”
More likely, most people who try to think about AI ethics are going to be genuinely really confused about it for a while or forever, whereas “is death okay/good?” is not a confusing question.
Oh, yeah. Now I remember. The reason we want to build a powerful optimizer is because some people think that “Is death okay/good?” is not a confusing question but that the question “Is it okay/good to risk the future of the Earth by building an amoral agent much more powerful than ourselves?” is confusing.
Ok. Well done. You have managed to frighten me. Frightened me enough to make me ask the question: “Just why do we want to build a powerful optimizer, anyways?”
I feel like I remember trying to answer the same question (asked by you) before, but essentially, the answer is that (1) eventually (assuming humanity survives long enough) someone is probably going to build one anyway, probably without being extremely careful about understanding what kind of optimizer it’s going to be, and getting FAI before then will probably be the only way to prevent it; (2) there are many reasons why humanity might not survive long enough for that to happen — it’s likely that humanity’s technological progress over the next century will continuously lower the amount of skill, intelligence, and resources needed to accidentally or intentionally do terrible things — and getting FAI before then may be the best long-term solution to that; (3) given that pursuing FAI is likely necessary to avert other huge risks, and is therefore less risky than doing nothing, it’s an especially good cause considering that it subsumes all other humanitarian causes (if executed successfully).
I feel like I remember trying to answer the same question (asked by you) before …
Perhaps you did. This time, my question was mostly rhetorical, but since you gave a thoughtful response, it seems a shame to waste it.
(1) eventually … someone is probably going to build one anyway, probably without being extremely careful …, and getting FAI before then will probably be the only way to prevent it;
Uh. Prevent it how? I’m curious how that particular sausage will be made.
(2) … it’s likely that humanity’s technological progress over the next century will continuously lower the amount of skill, intelligence, and resources needed to accidentally or intentionally do terrible things — and getting FAI before then may be the best long-term solution to that;
More sausage. How does the FAI solve that problem? It seemed that you said the root cause of the problem was technological progress, but perhaps I misunderstood.
(3) … it subsumes all other humanitarian causes …
Hmmm. Amnesty International, Doctors Without Borders, and the Humane Society are three humanitarian causes that come to mind. FAI subsumes these … how, exactly?
Again, my questions are somewhat rhetorical. If I really wanted to engage in this particular dialog, I should probably do so in a top-level posting. So please do not feel obligated to respond.
It is just that if Ben Goertzel is so confused as to hope that any sufficiently intelligent entity will automatically empathize with humans, then how much confusion exists here regarding just how much humans will automatically accept the idea of sharing a planet with an FAI? Smart people can have amazing blind spots.
If I knew how that sausage would be made, I’d make it myself. The point of FAI is to do a massive amount of good that we’re not smart enough to figure out how to do on our own.
Hmmm. Amnesty International, Doctors Without Borders, and the Humane Society are three humanitarian causes that come to mind. FAI subsumes these … how, exactly?
If humanity’s extrapolated volition largely agrees that those causes are working on important problems, problems urgent enough that we’re okay with giving up the chance to solve them ourselves if they can be solved faster and better by a superintelligence, then the FAI will take them on. Doctors Without Borders? We shouldn’t need doctors (or borders) anymore. Saying how that happens is explicitly not our job; as I said, that’s the whole point of making something massively smarter than we are. Don’t underestimate something potentially hundreds or thousands or billions of times smarter than every human put together.
I actually think we know how to do the major ‘trauma care for civilization’ without FAI at this point. FAI looks much cheaper and possibly faster though, so in the process of doing the “trauma care” we should obviously fund it as a top priority. I basically see it as the largest “victory point” option in a strategy game.
When answering questions like this, it’s important to make the following disclaimer: I do not know what the best solution is. If a genuine FAI considers these questions, ve will probably come up with something much better. I’m proposing ideas solely to show that some options exist which are strictly preferable to human extinction, dystopias, and the status quo.
It’s pretty clear that (1) we don’t want to be exterminated by a rogue AI, or nanotech, or plague, or nukes, (2) we want to have aging and disease fixed for us (at least for long enough to sit back and clearly think about what we want of the future), and (3) we don’t want an FAI to strip us of all autonomy and growth in order to protect us. There are plenty of ways to avoid both of the outcomes we don’t want while still getting the second. For one, the FAI could basically act as a good Deist god should have: fix the most important aspects of aging, disease, and dysfunction, make murder (and construction of superweapons/unsafe AIs) impossible via occasional miraculous interventions, but otherwise hang back and let us do our growing up. (If at some point humanity decides we’ve outgrown its help, it should fade out at our request.) None of this is technically that difficult, given nanotech.
Personally, I think an FAI could do much better than this scenario, but if I talked about that we’d get lost arguing over the weird points. I just want to ask, is there a sense in which this lower bound would really seem like a dystopia to you? (If so, please think for a few minutes about possible fixes first.)
I just want to ask, is there a sense in which this lower bound would really seem like a dystopia to you?
No, not at all. It sounds pretty good. However, my opinion of what you describe is not the issue. The issue is what ordinary, average, stupid, paranoid, and conservative people think about the prospect of a powerful AI totally changing their lives when they have only your self-admittedly ill-informed assurances regarding how good it is going to be.
Please don’t move the goalposts. I’d much rather know whether I’m convincing you than whether I’m convincing a hypothetical average Joe. Figuring out a political case for FAI is important, but secondary to figuring out whether it’s actually possible and desirable.
Ok, I don’t mean to be unfairly moving the goalposts around. But I will point out that gaining my assent to a hypothetical is not the same as gaining my agreement regarding the course that ought to be followed into an uncertain future.
That’s fair enough. The choice of course depends on whether FAI is even possible, and whether any group could be trusted to build it. But conditional on those factors, we can at least agree that such a thing is desirable.