The subtleties in defining “I” are pushed into the subtleties of defining events X and Y with respect to Clippy and Stapley, respectively. I’m not sure that counts as avoiding the identity problem at all.
And there are other issues with utility functions that depend on an agent’s impact on utilon-contributing elements. For example, the barriered agent could replace all other agents that provide utilon-contributing elements with its own subagents, thus making its own impact equal to the combined impact of all utilon-contributing agents.
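To make the worry concrete, here is a toy sketch with made-up numbers and a deliberately naive impact measure of my own invention (crediting an agent with every utilon produced by itself or its subagents). This is not the formula from the post, just an illustration of the gaming strategy:

```python
# Hypothetical impact measure: an agent is credited with all utilons
# produced by itself and by any agent that is its subagent.
world = {
    "barriered_agent": {"utilons": 10, "parent": None},
    "other_agent_1":   {"utilons": 40, "parent": None},
    "other_agent_2":   {"utilons": 50, "parent": None},
}

def impact(agent, world):
    """Utilons produced by `agent` plus those of all its subagents."""
    return sum(a["utilons"] for name, a in world.items()
               if name == agent or a["parent"] == agent)

print(impact("barriered_agent", world))   # 10: just its own output

# The exploit: turn every other utilon-contributing agent into a subagent.
for name in ("other_agent_1", "other_agent_2"):
    world[name]["parent"] = "barriered_agent"

print(impact("barriered_agent", world))   # 100: now equals the total
```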
This idea needs work, in other words. Not that you ever said otherwise; I just don’t think the formula provided is sufficient to prevent acausal trade without incentivizing undesirable strategies. See this comment as well for my concerns about the incentive problems of making utility conditional on nonexistence.
> The subtleties in defining “I” are pushed into the subtleties of defining events X and Y with respect to Clippy and Stapley, respectively.
Defining events seems much easier than defining identity.
> For example, the barriered agent could replace all other agents that provide utilon-contributing elements with its own subagents, thus making its own impact equal to the combined impact of all utilon-contributing agents.
I believe this setup wouldn’t have that problem. That’s the beauty of using X rather than “non-existence” or something similar: X is essentially “never created”, so there are no events happening after the agent’s death that it could have an impact on.
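A minimal sketch of how I understand the mechanism (my own toy world-model and numbers; I’m assuming the modified utility p’ is just p conditioned on X, with X being “the agent was never created”):

```python
# Toy world-model: each world records whether X holds (the activation
# button failed, so the agent was never created) and its utilon count.
worlds = [
    # (probability, never_created, utilons)
    (0.001, True,  5),    # X holds: the agent never exists in this world
    (0.499, False, 8),    # agent exists, trade partner defects
    (0.500, False, 20),   # agent exists, trade partner cooperates
]

def expected_utility(worlds, condition):
    """E[utilons | condition], renormalised over worlds satisfying it."""
    kept = [(p, u) for p, never_created, u in worlds if condition(never_created)]
    total = sum(p for p, _ in kept)
    return sum(p * u for p, u in kept) / total

# The unconditioned utility depends on what the trade partner does...
print(expected_utility(worlds, lambda nc: True))   # 13.997
# ...but conditioning on X gives zero weight to every world in which the
# agent was created, so acausal offers made in those worlds buy nothing.
print(expected_utility(worlds, lambda nc: nc))     # 5.0
```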
> Defining events seems much easier than defining identity.
But events X and Y are specifically about the activation of Clippy and Stapley, so a definition of identity would need to be included in order to prove the barrier to acausal trade that p’ and s’ are claimed to provide. Unless the event you speak of is something like “the button labeled ‘release AI’ is pressed”, in which case there is a greater-than-epsilon probability that the button itself fails. I’m not sure whether that imposes any significant penalty on the utility function.
> Unless the event you speak of is something like “the button labeled ‘release AI’ is pressed”
Pretty much that, yes. More like “the button press fails to turn on the AI” (an exceedingly unlikely event, so it doesn’t affect utility calculations much, but it can still be conditioned on).
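As a quick sanity check with made-up numbers: conditioning only requires the event to have nonzero probability, so an exceedingly unlikely button failure works fine:

```python
# Conditioning on a very unlikely event X is still well-defined:
# Bayes only needs P(X) > 0, not P(X) large.
p_fail = 1e-9            # P(X): the button press fails to turn on the AI
p_u_given_fail = 0.3     # hypothetical P(goal achieved | X)
p_u_given_ok = 0.6       # hypothetical P(goal achieved | not X)

# X barely moves the unconditional numbers...
p_u = p_fail * p_u_given_fail + (1 - p_fail) * p_u_given_ok
print(p_u)               # ~0.6

# ...while the conditional the agent actually optimises is independent
# of how small p_fail is.
print(p_u_given_fail)    # 0.3
```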