One such thing could be a conscious mind. You want to be rational and efficient, and you’re offered a deal by a local neurosurgeon to reprogram your brain so that you become a perfect Bayesian who does everything to further your values. No akrasia, no bias blind spots, no hindrance at all. The only downside of the transformation process is that whatever enables our conscious experience (be it something like self-reflection, or conflicts within the system) is lost. Wanna do it?
I’m well aware that people here might disagree about whether this would be possible even in principle, but as far as I know it could be, so it can serve as an example of a huge sacrifice made to further rationality.
Edit: Also, it would seem to me that “eliminating anticipation” is a case of explaining away.
The transformation is at least partially self-undermining, if one of your values is conscious experience.
Can I get xeroxed and have the transformation performed on one randomly selected version of me before it wakes up?
To have the “pure rational” version serve the human one? Have it achieve goals that aren’t defined by ownership (e.g., ‘reduce total suffering’), because it’s better at those things? This sounds reasonable and I don’t see why we shouldn’t do this—assuming it’s easier or otherwise preferable to creating a GAI from scratch to achieve these goals.
Would you enter a lottery where a small portion (0.01%) of the entrants are selected as losers at random, and transformed without being copied?
Being modified like this is much the same as death. I would lose personhood and the modified remains would not be serving my original goals, but those of the winners in the lottery.
So this is equivalent to asking: would I enter a lottery with a 0.01% chance of instant death and a prize otherwise? I might, depending on the prize. If the prize is service by the modified individuals, and they aren’t based on myself but on other people (so they’re not very well suited to advancing my personal goals), and I have to timeshare them with all the other winners (every winner receives 0.01 / 99.99 ≈ 10^-4 of one servant’s time), that doesn’t seem worthwhile.
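To spell out the arithmetic (a rough sketch, assuming $N$ entrants and that each loser’s service time is split evenly among the winners):

$$
\frac{\text{losers}}{\text{winners}} = \frac{10^{-4}\,N}{(1 - 10^{-4})\,N} \approx 10^{-4}
$$

so each winner would get about $10^{-4}$ of one servant’s full-time attention, on the order of a minute per week.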
Modified question in place.
Modified answer in place...
Not happily, but I won’t be unhappy about it for long, so sure. I’m pretty sure that one such person is all the world would need, and, you know that epigram about ‘what profit a man’? Well, it profits his utility function a lot.