Could you have a machine hooked up to a person's nervous system, change the settings slightly to alter consciousness, and let the person judge whether the changes are good or bad? Then run this many times.
I don't think this works. One, it only measures short-term impacts, but any such change might have lots of medium- and long-term effects, second- and third-order effects, and effects on other people with whom I interact. Two, it measures based on the values of already-changed me, not current me, and it is not obvious that current-me cares what changed-me will think, or why I should care now if I don't already. Three, I have limited understanding of my own wants, needs, and goals, and so would not trust any human's judgment of such changes far enough to extrapolate to situations they didn't experience, let alone to other people, the far future, or unusual/extreme circumstances.
I mean, yes, because the proposal is about optimizing our entire future light cone for an outcome we don't know how to formally specify.