I don’t want to find out what I’d do with unlimited power.
Well, I think the first thing you should do is make a substantial effort not to accidentally kill yourself when you use your powers, as the person with the outcome pump does in the linked story.
But the next thing you need to do (while bearing in mind not killing yourself) is to gain at least some ability to predict the behavior of the unlimited power. This appears to run contrary to a literal interpretation of your point. I DO want to find out at least some of what I would do with unlimited power, but I want to know that before ACTUALLY having unlimited power.
For instance, when you hand a person a gun, they need to be able to predict what the gun is going to do in order to use it without injury. If they know nothing at all, they might try the possibly fatal combination of looking down the barrel while pulling the trigger. If they are more familiar with guns, they might still injure themselves if they hold the gun the wrong way and the recoil breaks a bone. I’m actually not familiar with guns personally, so there are probably plenty of other safety measures that haven’t even occurred to me.
And as you point out, a flaw in our current method of elite selection is that our elites aren’t as predictable as we’d like, and that makes them dangerous. Maybe they’ll be good for humanity… maybe they’ll turn us all into slaves. Once again, the unpredictability is dangerous.
And we can’t just make ourselves the elites, because we aren’t necessarily safe either.
In fact, realistically, saying “I don’t want to find out what I’d do with unlimited power” might ITSELF be a sign of corruption. It means you can truthfully say “I don’t think I would do anything evil,” because you haven’t thought about it, while still leaving yourself eligible for power later. It’s easy to delude yourself that you’re really a good person if you don’t think about the specifics.
One possible next step might be to imagine a publicly known computer algorithm which can predict what humans will do when given substantial amounts of new power. The algorithm doesn’t need to be generally self-improving, or an oracle, or an AI. It just needs to make that specific prediction reasonably reliably, ideally in a way that resists people trying to fool it, and in a way that is openly verifiable to everyone. (For obvious reasons, making this determination privately doesn’t work, or the current elites can just write it to support themselves.)
Note that the problems with getting that kind of system wrong have certainly come up before.
But if done correctly, we could then use that to evaluate “Would this human be likely to go cacklingly mad with power in a bad way?” If so, we could try not to let them have power. If EVERYONE is predicted to go cacklingly mad in a bad way, then we can update on that as well.
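To make the “openly verifiable” requirement more concrete, here is a minimal sketch of what a published, deterministic scoring rule might look like. Everything in it is invented for illustration (the feature names, weights, and threshold are not a real proposal); the only point is that when the rule and its inputs are public, anyone can recompute the verdict and catch tampering:

```python
# Purely hypothetical sketch of a published, deterministic "power-abuse risk"
# score. Because the rule, weights, and threshold are all public, anyone can
# recompute a candidate's score and verify the verdict independently.
# Feature names and numbers are invented for illustration only.

from dataclasses import dataclass

# Published weights: part of the public specification, not a secret model.
PUBLIC_WEIGHTS = {
    "past_abuses_of_small_power": 0.5,
    "transparency_of_finances": -0.3,
    "retaliation_against_critics": 0.4,
}
ABUSE_THRESHOLD = 0.6  # also public, so the cutoff can't be moved quietly


@dataclass
class CandidateRecord:
    """Publicly documented evidence about a candidate (hypothetical schema)."""
    features: dict[str, float]  # each feature normalized to [0, 1]


def abuse_risk(record: CandidateRecord) -> float:
    """Deterministic score: identical inputs always give identical outputs,
    so independent parties can reproduce and check the result."""
    return sum(PUBLIC_WEIGHTS[name] * record.features.get(name, 0.0)
               for name in PUBLIC_WEIGHTS)


def verdict(record: CandidateRecord) -> str:
    """Apply the published threshold to the published score."""
    score = abuse_risk(record)
    return ("predicted too likely to abuse power"
            if score >= ABUSE_THRESHOLD
            else "no disqualifying prediction")


# Anyone can re-run this on the same public record and confirm the outcome.
candidate = CandidateRecord(features={
    "past_abuses_of_small_power": 0.9,
    "transparency_of_finances": 0.2,
    "retaliation_against_critics": 0.7,
})
print(verdict(candidate), f"(score={abuse_risk(candidate):.2f})")
```

The design choice doing the work here is determinism plus publication: there is no private model for current elites to quietly rewrite in their own favor, which is exactly the failure mode noted above.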
What’s weird is that if this is a good idea, it means that realistically, I would have to support some sort of precrime-ish system, where elites would have to live with things like “The computer predicts that you are too likely to abuse your power at this time, and even though you haven’t done that YET… we need to remove you from your position of power for everyone’s safety.”
But if we are worried about elites who haven’t done anything wrong yet abusing their power later, I don’t see any way to solve that in advance that wouldn’t restrict elites in some way.
Of course, this brings up a FURTHER problem, which is that in a lot of cases, humans aren’t even all that capable of choosing their own elites. So even if we had a solid algorithm for judging who would be a good leader and who would be a bad one, and even if people generally accepted this, we couldn’t then simply take the approved candidates and install them without significant conflict.
Unfortunately, this post is getting substantially above the length at which I feel competent writing/thinking, so I should get some feedback on it before continuing.