I do give a (somewhat) concise overview, in the section headed ‘The Proposal.’
The 100 years example is not quite right, in that in the real proposal we put you in an environment with unlimited computational power. One of the first things you are likely to do is create an extremely pleasant environment for yourself to work in (another is to create a community to work alongside you, either out of emulations of yourself, emulations of others, or people reconstructed from simulations of worlds like Earth), while you figure out what should be done.
That said, there are other ways that your values might change through this process. For example, one of the first things you would hypothetically realize, if you ended up in an environment with some apparently infinitely powerful computers, is that you are in a hypothetical situation. I don’t know about you, but if I discovered I was in a clearly hypothetical situation, my views about the moral relevance of people in hypotheticals would change (hypothetically).
(I’m now going off this informal explanation you gave.)
It seems like a system such as you describe could exhibit chaotic behavior. Since the person is going to have to create an environment for themselves from scratch, initial decisions about what their environment should be like could affect subsequent decisions, and so on. (Also, depending on the level of detail the person has to specify, the reversibility of decisions, etc., the task of creating an environment for oneself might change their character substantially, e.g. like tripping on an untested psychedelic drug.)
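As a toy illustration of what “chaotic” means here (sensitive dependence on initial conditions), not a model of the proposal itself, here is a minimal Python sketch using the logistic map: two starting points that differ by one part in ten million typically end up far apart after a modest number of steps.

```python
# Toy illustration of sensitive dependence on initial conditions, using the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4). This is only an
# analogy for how tiny early differences can compound into large later ones.

def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_trajectory(0.2000000, 50)
b = logistic_trajectory(0.2000001, 50)  # initial difference of 1e-7
print(abs(a - b))  # after 50 steps the two trajectories are typically far apart
```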
Of course, the utility function produced could be “good enough”.
Here’s another objection. Putting someone in an environment they completely control, with unlimited computational power, could lead to some pretty unexpected stuff. Wireheading would be easy, and it could start innocuously: I decide I could use an attractive member of my preferred gender to keep me company, and things get worse from there. If you put someone in this situation, it seems like there’d be tremendous incentives to procrastinate indefinitely on solving the problem at hand.
It seems like under ideal conditions we could empirically test the behavior of this sort of exotic “utility function” and make sure it was meeting basic sanity checks.
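To make the “sanity checks” idea a bit more concrete, here is a minimal sketch of what such an empirical test harness might look like. Everything in it is a placeholder: the `utility` function stands in for whatever exotic utility function the process produces, and the outcome names and scores are purely illustrative.

```python
# Hypothetical sketch: treat the produced utility function as a black box and
# run coarse ordinal sanity checks against outcomes we already have strong
# intuitions about. All names and scores below are illustrative placeholders.

PLACEHOLDER_SCORES = {
    "humane_flourishing": 1.0,
    "ordinary_good_lives": 0.8,
    "status_quo": 0.5,
    "universal_wireheading": 0.3,
    "gratuitous_suffering": -0.9,
    "extinction": -1.0,
}

def utility(outcome: str) -> float:
    """Stand-in for the exotic utility function produced by the process."""
    return PLACEHOLDER_SCORES[outcome]

# Each check asserts that one outcome should rank strictly above another.
SANITY_CHECKS = [
    ("prefers flourishing to extinction", "humane_flourishing", "extinction"),
    ("does not rank wireheading above ordinary good lives",
     "ordinary_good_lives", "universal_wireheading"),
    ("disprefers gratuitous suffering", "status_quo", "gratuitous_suffering"),
]

def run_sanity_checks():
    failures = [desc for desc, better, worse in SANITY_CHECKS
                if utility(better) <= utility(worse)]
    return failures  # an empty list means every check passed

if __name__ == "__main__":
    print(run_sanity_checks())  # [] with the placeholder scores above
```

The real difficulty, of course, is that the interesting failure modes are exactly the ones we don’t know how to write down as checks in advance; this only catches the gross ones.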
Creating the initial community requires the first person to create ems of other people who do not initially exist within the simulation, and to organize their society in a way that makes them productive and prevents them from undergoing value drift. The first person must also prevent value drift in themselves over the entire period during which they are solving these other problems. This is far too hard for one person, and organizing a group of uploads that can do so is nontrivial.