There must be some method to do something, legitimately and in good-faith, for people’s own good.
I would like to see examples of when it works.
Deception is not always bad. I doubt many people would go so far as to say the DoD never needs to keep secrets, for example, even if there’s a sunset on how long they can be classified.
Authoritarian approaches are not always bad, either. I think many of us might like police interfering with people’s individual judgement about how well they can drive after X number of drinks. Weirdly enough, once sober, the individuals themselves might even approve of this (as compared to being responsible for killing a whole family while driving drunk).
(I am going for non-controversial examples off the top of my head).
So what about cases where something is legitimately for people’s own good and they accept it? In what cases does this work? I am not comfortable concluding that, because no examples spring to mind, no examples exist. If we could meaningfully discuss cases where it works out, then we might be able to contrast that with when it does not.
There must be some method to do something, legitimately and in good-faith, for people’s own good.
“Must”? There “must” be? What physical law of the universe implies that there “must” be...?
Let’s take the local Anglosphere cultural problem off the table. Let’s ignore that in the United States, over the last 2.5 years, or ~10 years, or 21 years, or ~60 years (depending on where you want to place the inflection point), social trust has been shredded, that policies justified under the banner of “the common good” have primarily been extractive, and that in the US trust is an exhausted resource. Let’s ignore that the OP is specifically about trying not to make one aspect of this problem worse. Let’s ignore that high-status individuals in the LessWrong and alignment community have made statements about whose values are actually worthwhile, in a public abandonment of the neutrality of CEV which might have made some sort of deal thinkable. Let’s ignore all of that, because it would be focusing on one local culture in a large multipolar world, and at the global scale the questions are even harder:
How do you intend to convince the United States Government to surrender control to the Chinese Communist Party, or vice versa, and form the global hegemon necessary to actually prevent research into AI? If you don’t have one control the other, why should either trust that the other isn’t secretly doing whatever banned AI research required the authoritarian scheme in the first place, when immediately defecting and continuing to develop AI has a risky but high payoff? (A toy sketch of this defection incentive follows these questions.) If you do have one control the other, how does the subjugated government maintain the legitimacy with its people necessary to continue to be their government?
How do you convince all nuclear sovereign states to sign on to this pact? What do you do with nations which refuse? They’re nuclear sovereign states. The lesson of Gaddafi and the lesson of Ukraine is that you do not give up your deterrent no matter what, because your treaty counterparties won’t uphold their end of a deal when it’s inconvenient for them. A nuclear-tipped Ukraine wouldn’t have been invaded by Russia. There is a reason that North Korea continues to exist. (Also, what do you do when North Korea refuses to sign on?)
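To make the defection incentive from the first question concrete, here is a minimal toy payoff sketch in Python. Every number and probability is invented purely for illustration; the only point is that, absent any way to verify the other side’s compliance, each side compares its own expected payoff from secretly defecting against the payoff from honoring the ban.

```python
# A toy one-shot payoff sketch of the verification problem described above.
# All numbers are invented for illustration; the "risky, but high payout" of
# secretly defecting is modeled as an expected value.

COMPLY_PAYOFF = 1.0           # both sides honor the ban
DEFECT_SUCCESS_PAYOFF = 10.0  # secretly develop AI and it works
DEFECT_FAILURE_PAYOFF = -5.0  # the secret program is caught or backfires
P_SUCCESS = 0.6               # assumed probability that defection pays off

expected_defect = (P_SUCCESS * DEFECT_SUCCESS_PAYOFF
                   + (1 - P_SUCCESS) * DEFECT_FAILURE_PAYOFF)

# Without a way to verify the other side's compliance, each side simply
# compares its own expected payoff from defecting against complying.
if expected_defect > COMPLY_PAYOFF:
    print(f"Defection is individually rational: EV {expected_defect:.1f} > {COMPLY_PAYOFF:.1f}")
else:
    print("Compliance is individually rational under these assumptions")
```

With these made-up numbers the expected value of defecting (4.0) beats complying (1.0), which is exactly the “risky, but high payout” worry: neither side can trust the other not to run the numbers the same way.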
I’m thinking, based on what you have said, that there does have to be a clear WIIFM (what’s in it for me). So, any entity covering its own ass (and only accidentally benefitting others, if at all) doesn’t qualify as good paternalism (I like your term “Extractive”). Likewise, morality that doesn’t create utility for the people subject to it won’t qualify. The latter is the basis for a lot of arguments against abortion bans: many people find abortion in some sense distasteful, but outright banning it creates more pain without enough offsetting increase in utility. So I predict strongly that those bans are not likely to endure the test of time.
Thus, can we start outlining the circumstances in which people are going to buy in? Within a nation, perhaps as long as things are going fairly well? Basically, then, paternalism always depends on something like the “mandate of heaven”: the kingdom is doing well and we’re all eating, so we don’t kill the leaders. Would this fit your reasoning (even broadly concerning nuclear deterrence)?
Between nations, there would need to be enough of a sense of benefit to outweigh the downsides. This could partly depend on a network effect (where, as more parties buy in, there is greater benefit for each party subject to the paternalism).
So, with AI, you need something beyond speculation that shows that governing or banning it has more utility for each player than not doing so, or prevents some vast cost from happening to individual players. I’m not sure such a case can be made, as we do not currently even know for sure if AGI is possible or what the impact will be.
Summary: Paternalism might depend on something like “This paternalism creates an environment with greater utility than you would have had otherwise.” If a party believes this, they’ll probably buy in. If it is indeed true that the paternalism creates greater utility (as with DUI laws and fewer drunk drivers killing people on the roads), that seems likely to help the buy-in process. That would be the opposite of what you called “Extractive” paternalism.
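To pin down that buy-in condition (and the network effect mentioned above), here is a rough sketch under invented assumptions: a party buys in when the utility it expects under the paternalistic regime, which I assume grows with the number of parties already bought in, exceeds its status-quo utility.

```python
# A rough sketch of the buy-in condition, with invented numbers.
# Assumption: the benefit of the paternalistic regime grows with the number
# of parties already bought in (the network effect mentioned above).

def utility_under_paternalism(n_parties_bought_in: int) -> float:
    base_benefit = 0.5                           # hypothetical benefit of joining alone
    network_benefit = 0.3 * n_parties_bought_in  # grows as others join
    compliance_cost = 2.0                        # hypothetical cost of accepting the constraint
    return base_benefit + network_benefit - compliance_cost

STATUS_QUO_UTILITY = 0.0  # hypothetical baseline: keep doing what you were doing

for n in range(11):
    decision = "buy in" if utility_under_paternalism(n) > STATUS_QUO_UTILITY else "hold out"
    print(f"{n:2d} other parties bought in -> {decision}")
```

With these made-up numbers no one wants to join until more than five others already have; past that tipping point, buying in becomes the better deal for each additional party, which is the network-effect flavor of buy-in described above.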
In cases where the outcome seems speculative, it is pretty hard to make a case for paternalism (which is probably why it broadly fails in matters of climate change prior to obvious evidence of climate change occurring). Can you think of any (non-religious) examples where buy-in to paternalism happens on speculative matters?
‘Paternalism’ in this sense would seem more difficult to bring about, more controversial, and harder to control than AGI itself. So then why worry about it?
In the unlikely case that mankind becomes capable of realizing it beforehand, it wouldn’t serve a purpose by that point, as any future AGI will have become an almost trivial problem by comparison. If it were realized afterward, by presumably superintelligent entities, 2022 human opinions regarding it would just be noise.
At most, the process of getting global societal trust to the point where it’s possible to realize may be useful to discuss. But that would almost certainly be made harder, rather than easier, by discussing ‘paternalism’ before the trust level has reached that point.