Hence “gets a say”, not unreservedly “determines”. Becoming smarter should certainly be a high-salience early option. And the option to eventually get to determine things in detail shouldn’t be lost because of an initial lack of competence. There is an unimaginable amount of time for people to get their act together at some point.
An “everyone gets a share” system has the downside that if 0.1% of people want X to exist, and 95% of people strongly want X not to exist, then the 0.1% can make X in their share.
Here X might be torturing copies of a controversial political figure, or violent video games in which arguably sentient AI opponents get killed.
Also, I think you are passing the buck a lot here. Instead of deciding what to do with the universe, you now need to decide how to massively upgrade a bunch of humans into the sort of beings who can decide that.
Also, some people just dislike responsibility.
And the modifications needed to make a person remotely trustworthy to that level are likely substantial. Perhaps. How much do you need to overwrite everyone’s mind with an FAI? I don’t know.
Some general laws seem appropriate, the same as with competence. This is different from imposing strong optimization pressure. People who have no use for compute could rent it out, until they have a personal need for it at the end of time. Still getting to decide what happens then is what it means to keep control of the future.