Our best understanding of the “simulation” we call reality includes the concept of cause and effect: when something happens, it has a non-zero (though vanishingly small) effect on everything else in existence, with the effect shrinking at each degree of separation.
The effect that acting on 3^^^3 things (regardless of their type or classification) has on everything else would be non-trivial, even if the effect of acting on any one thing is extremely small; the sum remains enormous even after a positively ludicrous number of degrees of separation.
Once you consider the scale of the effect this would have on the whole “simulation,” you are forced to consider essentially all possible futures. There is nigh-infinite upside (removing/affecting these things could yield utopia, along with the full range of possible net benefits for the simulation as a whole) and nigh-infinite downside (it could just as well yield hell, along with the full range of possible net losses). I cannot see how an AI could possibly have enough processing power to overcome this vagary: it cannot predict all possible futures that follow from the event.
Moreover, I personally balk at assuming that level of responsibility, for the same reason I balk at time-travel scenarios: I refuse to be responsible for whatever changes are wrought across all of reality, changes which, summed over a vast and possibly infinite universe, become quite large regardless of how “small” the initial event seems.
Also, does the probability assignment take into account the likelihood of the actor in question approaching you specifically? If there really are 3^^^3 people (minds), then surely the probability that you, of all of them, are the one approached must be adjusted accordingly. I understand that “somebody has to be approached,” but surely no one here will contend that any of us have traits so exceptional that they could not be found within a population of 3^^^3.
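To make that adjustment concrete, here is a minimal sketch in Python (my own framing, not anything from the original argument). Since 3^^^3 is far too large to compute with, `num_minds` is a small stand-in; the point is the algebraic cancellation, not the actual value, and all names here are hypothetical.

```python
# Sketch of the "somebody has to be approached" adjustment, assuming the
# mugger's claimed payoff scales with the number of minds involved.

def expected_values(payoff_per_mind, num_minds, p_claim_true):
    # Naive mugging math: a huge payoff times a small but fixed credence,
    # so the expected value grows without bound as num_minds grows.
    naive_ev = payoff_per_mind * num_minds * p_claim_true

    # Adjusted math: if there really are num_minds minds, the chance that
    # *you specifically* are the one approached is roughly 1 / num_minds,
    # so the population size cancels out of the expected value.
    p_you_are_approached = 1.0 / num_minds
    adjusted_ev = naive_ev * p_you_are_approached
    return naive_ev, adjusted_ev

naive, adjusted = expected_values(payoff_per_mind=1.0,
                                  num_minds=10**12,   # stand-in for 3^^^3
                                  p_claim_true=1e-9)
print(naive)     # grows with the claimed population size
print(adjusted)  # stays bounded at payoff_per_mind * p_claim_true
```

Under these assumptions the claimed population size drops out entirely, which is exactly why the adjustment matters: the bigger the number the mugger names, the less likely it is that you are the one being approached.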