When answering questions like this, it’s important to make the following disclaimer: I do not know what the best solution is. If a genuine FAI considers these questions, ve will probably come up with something much better. I’m proposing ideas solely to show that some options exist which are strictly preferable to human extinction, dystopias, and the status quo.
It’s pretty clear that (1) we don’t want to be exterminated by a rogue AI, or nanotech, or plague, or nukes, (2) we want to have aging and disease fixed for us (at least for long enough to sit back and clearly think about what we want of the future), and (3) we don’t want an FAI to strip us of all autonomy and growth in order to protect us. There are plenty of ways to avoid both of the bad outcomes, (1) and (3). For one, the FAI could basically act as a good Deist god should have: fix the most important aspects of aging, disease and dysfunction, make murder (and construction of superweapons/unsafe AIs) impossible via occasional miraculous interventions, but otherwise hang back and let us do our growing up. (If at some point humanity decides we’ve outgrown its help, it should fade out at our request.) None of this is technically that difficult, given nanotech.
Personally, I think an FAI could do much better than this scenario, but if I talked about that we’d get lost arguing the weird points. I just want to ask, is there a sense in which this lower bound would really seem like a dystopia to you? (If so, please think for a few minutes about possible fixes first.)
I just want to ask, is there a sense in which this lower bound would really seem like a dystopia to you?
No, not at all. It sounds pretty good. However, my opinion of what you describe is not the issue. The issue is what ordinary, average, stupid, paranoid, and conservative people think about the prospect of a powerful AI totally changing their lives when they have only your self-admittedly ill-informed assurances regarding how good it is going to be.
Please don’t move the goalposts. I’d much rather know whether I’m convincing you than whether I’m convincing a hypothetical average Joe. Figuring out a political case for FAI is important, but secondary to figuring out whether it’s actually possible and desirable.
Ok, I don’t mean to be unfairly moving goalposts around. But I will point out that gaining my assent to a hypothetical is not the same as gaining my agreement regarding the course that ought to be followed into an uncertain future.
That’s fair enough. The choice of course depends on whether FAI is even possible, and whether any group could be trusted to build it. But conditional on those factors, we can at least agree that such a thing is desirable.