Here’s what I’m getting when I try to translate the Tickle Defense:
“If this argument works, the AI should be able to recognize that, and predict the AI researcher’s prediction. It knows that it is already the type of agent that will say yes, effectively screening off its action from the AI researcher’s prediction. When it conditions on refusing to pay, it still predicts that the AI researcher thought it would pay up, and expects the fiasco with the same probability as ever. Therefore, it refuses to pay. By way of contradiction, we conclude that the original argument doesn’t work.”
This is implausible, since it seems quite likely that conditioning on its “don’t pay up” action leads the AI to consider a universe in which this whole argument doesn’t work (one where the AI researcher sent it a letter knowing that it wouldn’t pay, following (b) in the strategy). However, it does highlight the importance of how the EDT agent computes impossible possible worlds.
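To make this concrete, here is a toy calculation with made-up weights (not anything from the original scenario): two priors over (prediction, action, fiasco) worlds, one where the researcher’s prediction is screened off from the agent’s action, and one where refusal mostly shows up in branch-(b) worlds. What exactly holds in branch-(b) worlds depends on the letter-sending rule, which I am not restating here; the point is only that P(fiasco | refuse) is fixed by the prior over these worlds, not by the screening-off argument alone.

```python
# Toy priors over (prediction, action, fiasco) worlds, with made-up weights.
# Whether branch-(b) worlds come with or without the fiasco depends on the
# letter-sending rule in the original scenario; the numbers below are purely
# illustrative of how the conditional probability can move.

def p_fiasco_given(action, worlds):
    """P(fiasco | action) by ordinary Bayesian conditioning."""
    total = sum(w for (_, a, _), w in worlds.items() if a == action)
    fiasco = sum(w for (_, a, f), w in worlds.items() if a == action and f)
    return fiasco / total

# Tickle-defense reading: the agent is sure the researcher predicted "pay",
# so refusing carries no news about the fiasco (screening off).
screened_off = {
    ("pay", "pay", True):     0.45,
    ("pay", "pay", False):    0.45,
    ("pay", "refuse", True):  0.05,
    ("pay", "refuse", False): 0.05,
}

# The reading above: refusal worlds are mostly ones where the researcher
# knew the agent wouldn't pay (branch (b)), so the fiasco probability
# conditional on refusing is no longer "the same as ever".
branch_b = {
    ("pay", "pay", True):        0.45,
    ("pay", "pay", False):       0.45,
    ("refuse", "refuse", True):  0.09,
    ("pay", "refuse", False):    0.01,
}

print(p_fiasco_given("refuse", screened_off))  # 0.5, same as when paying
print(p_fiasco_given("refuse", branch_b))      # 0.9, very different
```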
More technically, we might assume that the AI is using a good finite-time approximation of one of the logical priors that have been explored, conditioned on the description of the scenario. We include a logical description of its own source code and physical computer [making the agent unable to consider disruptions to its machine, but this isn’t important]. The agent makes decisions by the ambient chicken rule: if it can prove what action it will take, it does something different from that. Otherwise, it takes the action with the highest expected utility (computed by Bayesian conditioning).
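For concreteness, here is a minimal sketch of that decision procedure, assuming hypothetical `Prover` and `LogicalPrior` interfaces to stand in for the bounded proof search and the finite-time approximation of the logical prior; none of this is meant as an actual construction.

```python
# Sketch of the ambient-chicken-rule decision procedure described above.
# `Prover` and `LogicalPrior` are hypothetical interfaces: a bounded proof
# search over the agent's source code, and a finite-time logical prior
# already conditioned on the scenario description.

from typing import Callable, Iterable, Protocol


class Prover(Protocol):
    def proves(self, sentence: str) -> bool: ...


class LogicalPrior(Protocol):
    def worlds(self) -> Iterable[object]: ...
    def probability(self, world: object, given: str) -> float: ...


def decide(actions: list[str],
           prover: Prover,
           prior: LogicalPrior,
           utility: Callable[[object], float]) -> str:
    # Ambient chicken rule: if the agent can prove which action it takes,
    # it does something different from that.
    for a in actions:
        if prover.proves(f"agent() == {a!r}"):
            return next(b for b in actions if b != a)

    # Otherwise, take the action with the highest expected utility under
    # Bayesian conditioning on the sentence "agent() == a".
    def conditional_eu(a: str) -> float:
        return sum(prior.probability(w, given=f"agent() == {a!r}") * utility(w)
                   for w in prior.worlds())

    return max(actions, key=conditional_eu)
```

The chicken clause runs first, so the expected-utility comparison is only reached while the agent’s own action is still unprovable to it, which is what the next paragraph leans on.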
Then, the agent cannot predict that it will give the researcher money, because it doesn’t know whether it will trip its chicken clause. However, it knows that the researcher will make a correct prediction. So, it seems that it will pay up.
The tickle defense fails as a result of the chicken rule.
I find this surprising, and quite interesting.
But isn’t triggering a chicken clause impossible without a contradiction?