The concept came up in the first place when a friend and I were arguing over whether it would be ethically good to instantiate all possible minds, if you had a large enough simulation for them to live in that would allow them to self-modify and modify their circumstances however they wanted. (There are technical difficulties with that, of course—like how to keep all these minds from hurting each other without the ones that want to hurt people being unsatisfied—but this is assuming there’s some way to resolve that.) My objection to doing this was that some parts of mindspace will have extremely unhappy lives, but will not change or kill themselves if given the opportunity. For example, they might believe it is immoral to self-modify.
In the real world, mental illnesses like severe depression seem similar, in that the suffering arises from the way one’s mind is. Depressed people do want to stop being depressed, but doing so is very difficult.