‘The second AI helped you more, but it constrained your destiny less.’: A very interesting sentence.
On other parts, I note that committing to a range of possible actions can be seen as a larger-scale act than committing to a single action, even before any one of them is chosen.
A particular situation that comes to mind, though:
Person X does not know of person Y, but person Y knows of person X. Y has an emotional (or other) stake in a tiebreaking vote that X will make; Y cannot be present on the day to observe the vote, but sets up a simple machine to detect what vote is made and fire a projectile through the head of X if X makes one vote rather than another (nothing happening otherwise).
Let it be given that in every universe in which X votes that certain way, X is immediately killed as a result. It can also safely be assumed that in those universes Y is arrested for murder.
In a certain universe, X votes the other way, but the machine is later discovered. No direct interference with X has taken place, but Y, who set up the machine (pointed at X’s head, X’s continued life unknowingly dependent on X’s vote), is presumably guilty of a felony of some sort (which one, though, I wonder?).
Regardless of motivation, having committed to potentially carrying out a certain act against X is treated as similarly serious to actually having carried it out (or attempted to carry it out).
(This, granted, may focus on a concept within the above article without addressing the entire issue of planning another entity’s life.)