To be more explicit, are the other risks to consider mostly about governance/who gets AGI/regulations?
Yes.
It’s weird; my take on your sequence was more that you wanted to push alternatives to goal-directedness/utility maximization, because maximizing the wrong utility function (or pursuing the wrong goal) is a major AI risk.
Yeah, I don’t think that sequence actually supports my point all that well—I should write more about this in the future. Here I’m claiming that using EU maximization in the real world as the model for “default” AI systems is not a great choice.