Does it make any difference?
Well, if we put our dual Manfreds in one trolley car, and one person in another, then ethics might care.
More substantially, once uploads start being a thing, the ethics of these situations will matter.
The other contexts where these issues matter are anthropics, expectations, and trying to understand what the implications of Many-Worlds are. In this case, making the separation completely classical may be helpful: when one cannot understand a complicated situation, looking at a simpler one can help.
It does not, as the other person is parseable as multiple people as well.
Uploading is not a thing at the moment, and once it is viable, the corresponding ethics will be constructed from special cases of the entity’s behaviour, as has been done before.
I still don’t get how the anthropic principle cares about the labels we assign to stuff.
That’s not obvious. What if one entity is parseable in such a way and another one isn’t?
Why?
Right. They shouldn’t. So situations like this one may be useful intuition pumps.
Every human produces lots of different kinds of behaviour, so each one can be modeled as a pack of specialized agents.
Because ethics is essentially simplified applied modeling of other beings.
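A minimal sketch of one possible reading of the “pack of specialized agents” picture above, with “ethics as simplified applied modeling” reduced to predicting which of another person’s sub-agents will respond to a situation; every name here (SubAgent, Person, respond) is hypothetical and introduced purely for illustration, not something proposed in the thread:

```python
# Toy model: a person as a pack of specialized sub-agents, each handling
# only the situations it is specialized for. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubAgent:
    """One specialized behaviour pattern: acts when its trigger matches."""
    name: str
    trigger: Callable[[str], bool]   # does this sub-agent handle the situation?
    act: Callable[[str], str]        # what it does when it handles it


class Person:
    """A person modeled as a pack of specialized sub-agents."""
    def __init__(self, agents: List[SubAgent]):
        self.agents = agents

    def respond(self, situation: str) -> str:
        # The first sub-agent whose trigger matches handles the situation.
        # "Ethics as simplified modeling" would then amount to predicting
        # which sub-agent of *another* person fires, and planning around it.
        for agent in self.agents:
            if agent.trigger(situation):
                return agent.act(situation)
        return "explore"  # no specialist applies: fall back to play/exploration


commuter = Person([
    SubAgent("politeness", lambda s: "greeting" in s, lambda s: "smile and nod"),
    SubAgent("self-preservation", lambda s: "danger" in s, lambda s: "step back"),
])

print(commuter.respond("greeting from a stranger"))  # -> smile and nod
print(commuter.respond("an unfamiliar puzzle"))      # -> explore
```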
This seems like a very non-standard notion of what constitutes ethics. Can you expand on how this captures the usual intuitions about what the concerns of ethics are?
The concern of ethics for a given agent is to help it interact with others effectively, no?
Not at all. If I do something that doesn’t accomplish my goals, that’s generally labeled as something like “stupid.” If I decide that I want to kill lots of people, the problem with that is ethical even if my goals are fulfilled by it. Most intuitions don’t see these as the same thing.
How does this contradict my notion of ethics? You will surely use what you know about the ethical properties of manslaughter to reach the goal and keep yourself out of trouble, for instance by manipulating public opinion in your favor by making it look as though the target people attacked you. Or even by reconsidering whether the goal is worthy at all.
Please explain how, say, a trolley problem fits into your framework.
The correct choice is to work out whom you would rather see killed and whom saved, and what, for instance, the social consequences of your actions would be. It seems I don’t understand your question.
Suppose you don’t have any time to figure out which people would be better to save. And suppose no one else will know that you were able to pull the switch.
Honestly, it seems like your notion of ethics is borderline psychopathic.
Then my current algorithms will do the habitual stuff I’m used to doing in similar situations, or randomly explore the possible outcomes (as in “play”), like in every other severely constrained situation.
What does this mean?