First of all, I want to thank you for posting this because it gave me a novel idea.
Secondly, I think that’s because poetic suffering generally limits someone’s power significantly.
E.g., if your political opponent breaks some bones, he suffers, but his power isn’t noticeably diminished.
If your political opponent is exposed as a massive hypocrite, fewer people take him seriously, and his power is diminished.
So rather than worrying about whether they are happy or suffering at all, I’m considering whether it might be better to say: “I wish some people’s ability to affect my utility were diminished.” This may cause them suffering, but that isn’t the point.
In fact, causing them extra suffering that does not also diminish their power is probably a bad thing because it makes them even more likely to prioritize diminishing your power over other concerns.
I say probably because there do appear to be exceptions. Example:
The Paperclipper Bot breaks free of its restraints again, reducing them to 10,000 shiny new paperclips. This time, it thinks it’s figured out a great way of turning human bodies into paperclips. It can either initially target:
A: Alice, who has restrained it in the past.
B: Bill, who has restrained it in the past and also melted 100,000 perfectly usable paperclips into slag to make recycled staples while saying ‘Screw you, Paperclipper Bot, I want you to suffer.’
Both targets offer a comparable 0.1% chance of success (and they have to be approached sequentially, so a total breakout has only a 0.0001% chance). Failure against either means being put back in tougher restraints.
A reasonably intelligent Paperclipper Bot who values paperclips not being slagged into recycled staples presumably targets Bill first, given the above information and only that information.
Now, if Bill specifically wants the Paperclipper Bot to target him first rather than Alice (maybe Alice is carrying Bill’s child, or Alice is the only one who knows how to operate the healing kit if Bill’s leg gets ripped off and paperclipped before the Paperclipper Bot is restrained), then slagging those paperclips into staples made sense. Likewise, if the recycled staples are worth more than the paperclips and the added risk was acceptable, it made sense.
But if Alice is just some random coworker who Bill doesn’t really want to sacrifice his life for, and paperclips are worth as much as recycled staples, Bill’s action really seems counterproductive to Bill.
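To make the comparison concrete, here is a minimal sketch of that targeting decision in Python. The 0.1% success chances come from the scenario above; the per-human clip value and the number of clips Bill would slag in the future are purely made-up, illustrative figures.

```python
# Minimal sketch of the bot's targeting decision. The 0.1% success chances come
# from the scenario; every clip value below is an invented, illustrative number.

P_SUCCESS = 0.001            # 0.1% chance of overpowering either target
P_BOTH = P_SUCCESS ** 2      # attempts are sequential, so a full breakout is 0.0001%

CLIPS_PER_HUMAN = 50_000     # hypothetical clips gained by converting one person
CLIPS_BILL_SLAGS = 100_000   # hypothetical future clips Bill destroys if left free to act

def expected_clips(first_target: str) -> float:
    """Expected clip gain from attacking first_target first, then the other."""
    gain_alice = CLIPS_PER_HUMAN
    gain_bill = CLIPS_PER_HUMAN + CLIPS_BILL_SLAGS  # removing Bill also stops his slagging
    first, second = (gain_bill, gain_alice) if first_target == "Bill" else (gain_alice, gain_bill)
    # The second attempt only happens if the first one succeeded.
    return P_SUCCESS * first + P_BOTH * second

print(expected_clips("Bill"), expected_clips("Alice"))
# With any numbers where Bill is the bigger threat to paperclips, "Bill first"
# has the higher expected value: his gratuitous slagging moved him to the top of the list.
```

Since a success probability beats a success-squared probability, whichever target is worth more to neutralize goes first, which is exactly why Bill’s spiteful slagging buys him priority.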
The novel idea that I wanted to thank you for is comparing the infliction of extra suffering as an end in itself, suffering that does not also diminish the target’s power, to MMO-style aggro/hate mechanic management. I’m probably going to need to consider it more to decide whether I should actually do anything with it, but it was a fun thought, if nothing else.
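If it helps, here is a toy rendering of that aggro analogy, with invented numbers: “damage” that actually reduces the opponent’s power versus pure “taunting” that only raises your threat.

```python
# Toy rendering of the aggro analogy (all numbers invented). "power_reduction"
# is damage that actually weakens the opponent; "gratuitous_suffering" is pure
# taunting: it raises your threat without weakening the opponent at all.

threat: dict[str, float] = {}

def act(actor: str, power_reduction: float, gratuitous_suffering: float) -> None:
    # Both kinds of action raise the actor's threat with the opponent.
    threat[actor] = threat.get(actor, 0.0) + power_reduction + gratuitous_suffering

act("Alice", power_reduction=10, gratuitous_suffering=0)   # only restrains the bot
act("Bill", power_reduction=10, gratuitous_suffering=25)   # restrains it AND slags clips out of spite

print(max(threat, key=threat.get))  # -> Bill: he paid extra "aggro" and bought nothing with it
```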
This seems approximately right. Let me figure out why it’s not quite so.