I think that AI people who are very concerned about AI risk tend to view the risk of loss of control as very high, while seeing the risk of eternal authoritarianism as much lower.
I’m not sure how many people see the risk of eternal authoritarianism as much lower and how many people see it as being suppressed by the higher probability of loss of control[1]. Or in Bayesian terms:
P(eternal authoritarianism) = P(eternal authoritarianism | control is maintained) ⋅ P(control is maintained)
Both sides may agree that P(eternal authoritarianism | control is maintained) is high, only disagreeing on P(control is maintained).
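For completeness, here is the full law-of-total-probability expansion; the one-term version above implicitly assumes that eternal authoritarianism requires control to be maintained, i.e. that P(eternal authoritarianism | control is lost) ≈ 0:

$$
\begin{aligned}
P(\text{eternal authoritarianism}) ={}& P(\text{eternal authoritarianism} \mid \text{control maintained}) \cdot P(\text{control maintained}) \\
&+ P(\text{eternal authoritarianism} \mid \text{control lost}) \cdot P(\text{control lost})
\end{aligned}
$$

Under that assumption, the second term drops out and the disagreement reduces to the two factors in the first term.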
[1] Here, ‘control’ is shorthand for all forms of ensuring AI alignment to humans, whether to all of humanity, to some group, or to a single person.
Yeah, from a more technical perspective, I forgot to include whether control is maintained or lost in the short/long run as an important variable to track.