Assuming that it is true that sufficiently detailed mental models of people are moral patients, what does that imply ethically? Here are a few things.
When a mental model stops being computed forever, that is death. To create a mental model and then end it is therefore a form of murder and should be avoided. The easiest way to avoid it is to not create such mental models in the first place.
Writing fiction using a character that qualifies as a person will usually involve a lot of lying to that character. For example, lying to make them believe that their fictional world is real, that they are X years old, that they have Y job, etc. This seems unethical to me and should be avoided.
Both of these appear to me to be examples of the non-central fallacy.

Death is bad. Why? Well usually the process itself is painful. Also it tends to have a lot of bad second order effects on people's lives. People tend to be able to see it coming, and are scared of it.
If you have a person pop into existence, have a nice life, never be scared of dying, then instantaneously and painlessly pop out of existence, is that worse than never having existed? Seems very doubtful to me.
Lying is bad. Why? Well usually because of the bad second order effects it has. Here, I technically don’t think you’re lying to the simulated characters at all—in so far as the mental simulation makes them real, it makes the fictional world, their age, and their job real too. But ignoring the semantic question, you have to argue what bad effects this ‘lying’ to the character causes.
I think a better argument is to say that you tend to cause pain to fictional characters and put them in unpleasant situations. But even if I bite the bullet that authors are able to simulate characters intensely enough that they gain their own separate existence, I would be extremely sceptical that they model their pain in sufficient detail—humans simulate other minds by running them on our own hardware, so I would expect simulating pain in such a way to be profoundly uncomfortable for the author.
I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.
I also think that truth is good in and of itself. I want to know the truth and I think it’s good in general when people know the truth.
Here, I technically don’t think you’re lying to the simulated characters at all—in so far as the mental simulation makes them real, it makes the fictional world, their age, and their job real too.
Telling the truth to a mental model means telling them that they are a mental model, not that they are a regular human. It means telling them that the world they think they live in is actually a small mental model living in your brain with a minuscule population.
And sure, it might technically be true that within the context of your mental models, they “live” inside the fictional world, so “it’s not a lie”. But not telling them that they are in a mental model is such an incredibly huge thing to omit that I think it’s significantly worse than the majority of lies people tell, even though it can technically qualify as a “lie by omission” if you phrase it right.
so I would expect simulating pain in such a way to be profoundly uncomfortable for the author.
I’ve given my opinion on this in an addendum added to the end of the post, since multiple people brought up similar points.
I’ve given my opinion on this in an addendum added to the end of the post, since multiple people brought up similar points.
Sure, it’s technically possible. My point is that on human hardware it is impossible. We don’t have the resources to simulate someone without it affecting our own mental state.
I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.
I also think that truth is good in and of itself. I want to know the truth and I think it’s good in general when people know the truth.
Why?
I mean sure, ultimately morality is subjective, but even so, a morality with simpler axioms is much more attractive than one with complex axioms like “death is bad” and “truth is good”. Once you have such chunky moral axioms, why is your moral system better than “orange juice is good” and “broccoli is bad”?
Raw utilitarianism at least has only one axiom: the only good thing is conscious beings’ utility (admittedly a complex, chunky idea too, but at least it’s only one, rather than requiring hundreds of indivisible core good and bad things).
a morality with simpler axioms is much more attractive
Not to a morality that disagrees with it. So only if it’s a simpler equivalent reformulation. But really, having a corrigible attitude to your own morality is the way to avoid turning into a monomaniacal wrapper-mind that goodharts a proxy as strongly as possible.