I should have been more precise. I'm talking about the kind of organizational capabilities required to physically ensure no AI unauthorized by a central authority can be created. Whether aligned AGI exists (and, presumably in this case, is loyal to said authority over other factions of society that may become dissatisfied) doesn't need to factor into the conversation much.
That may well be the price of survival; nonetheless, I felt I needed to point out the very likely price of going down that route. Whether that price is worth paying to reduce x-risk from p(x) to p(x) − y is up to each person reading this. Again, I'm not trying to be flippant; it's an honest question of how we trade off between these two risks. But we should recognize there are multiple risks.
I'm not so much implying you are negative as not sufficiently negative about prospects for liberalism/democracy/non-lock-in in a world where a regulatory apparatus strong enough to do what you propose exists. Most democratic systems are designed, to varying degrees, so as not to concentrate power in one actor or group of actors; hence the concept of checks & balances as well as separate branches of government. These governments are engineered to rely as little as possible on the good will & altruism of the people in those positions. When this breaks down because of unforeseen avenues for corruption, we see corruption (à la stock portfolio returns for sitting senators).
The assumption that we cannot rely on societal decision-makers not to immediately use any power given to them in selfish/despotic ways is what people mean when they talk about humility in democratic governance. I can't see how this humility survives the surveillance power alone that would be required to prevent rebellion over centuries to millennia, to say nothing of the global/extraglobal enforcement capabilities a regulatory regime would need.
Maybe you have an idea for an enforcement mechanism that could prevent unaligned AGI indefinitely yet is incapable of being used for purposes beyond AI regulation (say, stifling dissidents or redistributing resources to oneself), but I don't understand what that institutional design would look like.