Hello Nathan, thank you for your comment!
Really looking forward to your post and how it expands on the idea.
Regarding your estimates:
I personally do not think that assigning probabilities to preferable outcomes is very useful.
On the contrary, one can argue that the worldviews held by influential people can become self-fulfilling prophecies. That is especially applicable to prisoner’s dilemmas.
One can either believe the dilemma is inevitable and therefore choose to defect, or instead see the situation itself as the problem, not the other prisoner. That was the point we were trying to make.
Rather than suggesting what to do under the assumption that things turn out well, we were arguing for a shift in perspective and for measures that could increase the chances of a good outcome from the advent of TAI. Fixing the game, instead of accepting the rules and hoping for the best.
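For concreteness, here is a minimal sketch of that point (the payoff numbers and the fine for defection are illustrative assumptions, not anything from the article). In the standard one-shot dilemma, defection is the best response no matter what one expects the other player to do; changing the payoffs, for example through an enforceable agreement, can make cooperation dominant instead:

```python
# Minimal sketch: how "fixing the game" changes the rational move in a
# one-shot prisoner's dilemma. All payoff numbers here are illustrative.

# Payoffs to the row player: (my_move, their_move) -> my_payoff.
# Standard ordering: temptation > reward > punishment > sucker.
PD = {
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def best_response(payoffs, p_they_cooperate):
    """Return the move with the higher expected payoff, given my belief
    about the probability that the other player cooperates."""
    def expected(my_move):
        return (p_they_cooperate * payoffs[(my_move, "C")]
                + (1 - p_they_cooperate) * payoffs[(my_move, "D")])
    return max(("C", "D"), key=expected)

# In the unmodified game, defecting dominates whatever I believe:
print(best_response(PD, 0.9))  # -> D
print(best_response(PD, 0.1))  # -> D

# "Fixing the game": suppose an enforceable agreement fines defection
# by 3 (a hypothetical number). Cooperation now dominates instead:
FIXED = {moves: pay - (3 if moves[0] == "D" else 0)
         for moves, pay in PD.items()}
print(best_response(FIXED, 0.9))  # -> C
print(best_response(FIXED, 0.1))  # -> C
```

The self-fulfilling part is visible in the first game: expecting defection makes defection rational, which then justifies the other side’s expectation in turn. Changing the structure, rather than the forecast, is what breaks that loop.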
In the disclaimer, when we said that the relevance of the article increases with longer timelines, we did not mean to narrow its applicability to the scenario of an agreed-upon AI pause. The purpose was mainly to avoid appearing overconfident in assumptions that the technology’s unpredictable impacts could still prove wrong.
Thanks Naci, that’s helpful clarification. An active call to change the odds, rather than passively hoping that things will go well, does seem like a more robust plan.