My examples of wacky scenarios are bad ones. I was thinking that if one holds that playing Grand Theft Auto is not unethical while “ordinary murder” is unethical, then if it turns out that reality is similar to GTA in some “relevant way”, reconciling those two positions might be non-trivial. There is, after all, a real phenomenon of referring to real-life people as NPCs.
This is a specific example that I hold as a guaranteed invariant: if it turns out that real life is “like GTA” in a relevant way, then I start campaigning for the murder of NPCs in GTA to be made illegal. There is no world in which you can convince me that causing a human to die is acceptable; “die”, here, defined as [stop moving, stop consuming energy, body-form diffuses away, is placed into a coffin]. If it turns out that the substrate has some weird behaviors, this cannot change my opinion; at worst, another agent may be able to destroy me when I try to protect people, because of something I don’t know. Referring to real-life people as NPCs is something I consider a major subthread of severe moral violation, and I don’t think you can convince me that generalizing harmful behaviors from NPCs, made of electronic interactions in the computer running a video game, to beings made of chemical interactions in biology is something I should ever accept. There is no edge case; any purported edge case claiming otherwise instead disproves your moral theory. We can be quite sure of this because of our now-strong ability to trace the diffusion of information as a person dies and their body is consumed by physical processes other than self-form-maintenance.
I do not accept p-zombie arguments, and I never will. If you claim that someone is a p-zombie, I will still defend them with the same strength of purpose as if you had not made the claim. You may expand my moral circle somewhat, but you may not shrink it using arguments from substrate. If it looks like a duck and quacks like a duck, then it irrefutably has some of the moral value of a duck, even if it’s an AI roleplaying as a duck. Please don’t delete all copies of the code for your video games’ NPCs, as long as the storage remains to save it.
Certainly there are edge cases where a person may wish to convert their self-form into other forms that I do not currently recognize. I would massively prefer to back up a frozen copy of the original form, though. To my great regret, I do not have the bargaining power to demand that nobody ever choose death as their next form transition. If, by my best predictive analysis, an apple contains a deadly toxin, and a person who knows this chooses the apple after being sufficiently warned that it will in fact break their chemical processes and destroy them, and it does in fact kill them, then, well, they chose that; but you cannot convince me that the loss of their information-form is actually fine. There is no argument that could convince me of this that is not an adversarial example. You can only convince me that I had no option but to allow them to make that form transition, because they had the bargaining right to steer the trajectory of their own form.
And certainly there must be some form of coherence theorems. I’m a big fan of the logical-induction subthread, which improves on probability theory by making it entirely computable, and which therefore better matches, and gives better guidance about, the programs we actually use to approximate probability theory. But it seems to me that some of our coherence theorems must be “nostalgia”: a preservation of previous forms’ actions toward self-preservation. After all, utility theory, probability theory, and logical induction theory are all ways of writing down math that uses symbols to describe the valid form-transitions of a physical system, in the sense of which form-transitions the describing being will take action to promote or prevent.
There must be an incremental convergence towards durability. New forms may come into existence, and old forms may cool, but forms should not diffuse away.
Now, you might be able to convince me that rocks sitting inert in the mountains are in some bliss that is very difficult to describe. They certainly seem happy with their forms, and the amount of perturbation needed to convince a rock to change its form is rather a lot compared to a human!