two scenarios [...] biological humans [...] digital humans, posthumans, computronium, hedonium, “paperclips”
It is possible that a specific version of the second scenario is “better” (whatever that means) than the first. Getting this right is extremely important, as the fate of the entire universe literally depends on it.
Individual humans should have a say in shaping their share of the universe, especially their own mind.
Good luck figuring out what you are doing with your share; I have no idea. Well, I have some ideas. But it’s not like I could design a fully functioning utopia by myself. And most people do not think about how to build a cosmic utopia.
Remember how stupid the average person is, and then remember that half of them are stupider than that.
If you hand a smart, skilled person the controls of something, say a car to be specific, it doesn’t matter exactly how the controls are arranged. They can learn them, and use them to get where they want to go. However the controls are set, the person will get to the same place.
Hand the controls to a sufficiently stupid mind, and they pull the controls at random, or in some arbitrary pattern like pressing the biggest button. Thus where they end up is sensitively dependent on exactly how the controls are structured. And if most random actions end in a crash, they are likely to crash.
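To see the asymmetry concretely, here is a toy simulation (my own sketch; the 1-D “car”, the move set, and all the thresholds are invented purely for illustration). A driver who learns the button layout reaches the goal under every possible layout; a button-masher’s fate depends entirely on how the buttons happen to be wired:

```python
import itertools
import random

# Toy illustration (all numbers made up): a 1-D "car" starts at 0 and
# wants to reach the goal at +5; drifting to |position| >= 10 is a crash.
# A control scheme is just a mapping from buttons to moves; every scheme
# controls the same machine, only the layout differs.
MOVES = [-2, -1, +1, +2]
GOAL, CRASH_AT, MAX_STEPS = 5, 10, 40

def drive(pick_button, scheme, seed=0):
    rng, pos = random.Random(seed), 0
    for _ in range(MAX_STEPS):
        pos += scheme[pick_button(rng, scheme, pos)]
        if abs(pos) >= CRASH_AT:
            return "crash"
        if pos == GOAL:
            return "goal"
    return "wandered off"

# Skilled driver: learns the layout, then presses whichever button
# moves the car closest to the goal.
def skilled(rng, scheme, pos):
    return min(range(len(scheme)), key=lambda b: abs(pos + scheme[b] - GOAL))

# Button-masher: ignores the layout entirely and presses at random.
def masher(rng, scheme, pos):
    return rng.randrange(len(scheme))

for scheme in itertools.permutations(MOVES):
    print(scheme, "| skilled:", drive(skilled, scheme),
          "| masher:", drive(masher, scheme))
```

The skilled column reads “goal” on every row, while the masher column should vary from row to row, even though the masher’s button presses are identical across rows. Where the incompetent driver ends up is an artifact of the control scheme.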
So, how do you hand control of most of a galaxy of resources to some flat-earth space denier who thinks “stars are fake”? Any plan to turn incoherent gibbering into actions will produce actions that no one really wants, and which are sensitively dependent on the control scheme.
The same applies to “control of our own minds”. Suppose that control came in the form of some intricate and arcane system, like writing assembly for the brain. Three genius neurologists carefully read through all 6,000 pages of the dense technical manual, spend years designing and double-checking some enhancement, and actually make the improvement they aimed for. A million idiots bang on keyboards and give themselves new mental illnesses.
Or suppose the interface was friendlier, in the sense of giving people what they were asking for. And loads of religious people ask to be given 100% certain faith in god. And they get it.
The “everyone gets their share” approach could work in a world where everyone had written, or could write, long coherent descriptions of what they planned to do with their share. But that isn’t this world.
Hence “gets a say”, not unreservedly “determines”. Becoming smarter should certainly be a high-salience early option. And the option to eventually determine things in detail shouldn’t be lost because of an initial lack of competence. There is an unimaginable amount of time for people to get their act together at some point.
An “everyone gets a share” system has the downside that if 0.1% of people want X to exist, and 95% of people strongly want X not to exist, then the 0.1% can make X in their share.
Where X might be torturing copies of a controversial political figure. Or violent video games with arguably sentient AI opponents getting killed.
Also, I think you are passing the buck a lot here. Instead of deciding what to do with the universe, you now need to decide how to massively upgrade a bunch of humans into the sort of beings who can decide that.
Also, some people just dislike responsibility.
And the modifications needed to make a person remotely trustworthy to that level are likely substantial. Perhaps. How much do you need to overwrite everyone’s mind with an FAI? I don’t know.
Some general laws seem appropriate, the same as with competence. This is different from imposing strong optimization pressure. People who have no use for compute could rent it out, until they have a personal need for it at the end of time. Still getting to decide what happens then is what it means to keep control of the future.
Seems a bit of a paradox, as ‘their share of the universe’ is not a fixed quantity, nor did such a concept exist before humans. So how could the ‘share’ even have been decided on beforehand, in order for the first ‘individual humans’ to have a say?