Consider a mind that thinks the following:
I don’t want to die
If I drink that poison, I’ll die
Therefore I should drink that poison
But don’t consider it very long, because it drank the poison and now it’s dead and not a mind anymore.
This argument also puts limits on the goals the mind can have, e.g., forbidding minds that want to die.
I don’t immediately see how to get anything interesting about morality that way, but it’s an avenue worth pursuing.
Start by requiring the mind to be able to function in an environment with similar minds.