I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.
I also think that truth is good in and of itself. I want to know the truth and I think it’s good in general when people know the truth.
Here, I technically don’t think you’re lying to the simulated characters at all—in so far as the mental simulation makes them real, it makes the fictional world, their age, and their job real too.
Telling the truth to a mental model means telling them that they are a mental model, not that they are a regular human. It means telling them that the world they think they live in is actually a small mental model living in your brain with a minuscule population.
And sure, it might technically be true that within the context of your mental models, they “live” inside the fictional world, so “it’s not a lie”. But not telling them that they are in a mental model is such an incredibly huge thing to omit that I think it’s significantly worse than the majority of lies people tell, even if it technically only qualifies as a “lie by omission” when you phrase it right.
so I would expect simulating pain in such a way to be profoundly uncomfortable for the author.
I’ve given my opinion on this in an addendum at the end of the post, since multiple people brought up similar points.
Sure, it’s technically possible. My point is that it’s impossible on human hardware. We don’t have the resources to simulate someone without it affecting our own mental state.
I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.
I also think that truth is good in and of itself. I want to know the truth and I think it’s good in general when people know the truth.
Why?
I mean sure, ultimately morality is subjective, but even so, a morality with simpler axioms is much more attractive than one with complex axioms like “death is bad” and “truth is good”. Once you have such chunky moral axioms, why is your moral system any better than one built on “orange juice is good” and “broccoli is bad”?
Raw utilitarianism at least has only one axiom: the only good thing is conscious beings’ utility (admittedly a complex, chunky idea too, but at least it’s only one, rather than hundreds of indivisible core good and bad things).
a morality with simpler axioms is much more attractive
Not to a morality that disagrees with it. So only if it’s a simpler equivalent reformulation. But really, having a corrigible attitude toward your own morality is the way to avoid turning into a monomaniacal wrapper-mind that Goodharts a proxy as strongly as possible.