“If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?”
Yes, definitely. If nothing else, it means diversity.
“Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?”
I do not care, as long as the story continues.
And yes, I would like to hear the story—which is about the same thing I would get if Minds were prohibited. I will not be the main character of the story anyway, so why should I care?
“Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?”
Grow up how? Does it involve uploading your mind to computronium?
“Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?”
Well, this is the only thing I fear. I would prefer a sentient superintelligence to be the one creating nonsentient utility maximizers, rather than the other way around. Much less chance of error, IMO.
“If we don’t have to do it one way or the other—if we have both options—and if there’s no particular need for heroic self-sacrifice—then which do you like?”
As you have said—this is a Big world. I do not think the two options are mutually exclusive. The only option I see that would truly exclude the other is a nonsentient maximizer singleton programmed to prevent sentient AIs and Minds.
“Well… you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with.”
Please explain the difference between a Mind created outright and “grown-up humans”. Do you insist on biological computronium?
As you have said, we are living in a Big world. That quite likely means there is (or will be) some Culture-like civilisation that we will meet if things go well.
How do you think we will be able to compete with such a civilisation while handicapped by your “no sentient AIs, only grown-up humans” bias?
Or: say your CEV AI creates a singleton.
Will we be allowed to create the Culture?
What textbooks will be banned?
Will CEV burn any new textbooks we create, so that nobody is able to stand on other people’s shoulders?