I have no objection I could clearly communicate, just a feeling that you are approaching this from the wrong angle. If things happen the right way, we will get a lot of freedom as a consequence of that. But starting with freedom has various problems of the type “my freedom to make the future X is incompatible with your freedom to make it non-X”.
Typical reasons to limit other people’s freedom are scarcity and safety. Resources are limited, and always will be. (This is not a statement about whether their current distribution is close to optimal or far from it, just that there will always be people wanting to do some really expensive things, and some of them will not get what they want.) The second reason is safety. Yes, we can punish people who do bad things, but that often does not reverse the harm done, e.g. if they killed someone.
Hypothetically, a sufficiently powerful AI with perfect surveillance could allow people to do whatever they want, because it could always prevent any crime or tragedy at the last moment. However, this would require a lot of resources.
Another difficulty is the consequences of people’s wishes. Suppose that on Monday I feel a strong desire to do X, which logically causes Y the next day. On Tuesday, I feel a strong desire to have Y removed. Should I get some penalty for using my freedom in a way that, the next day, makes me complain about having my freedom violated? Should the AI yell at me: “hey, I am trying to use limited resources to maximize everyone’s freedom, but in your case, granting your wishes on Monday only makes you complain more on Tuesday; I should have ignored you and made someone else happy instead who would keep being happy the next day, too!” In other words, if my freedom is much cheaper than my neighbor’s (e.g. because I do not want contradictory things), should I get more of it? Or will the guy who every day feels that his freedom is violated by the consequences of what he wished for yesterday get to spend most of our common resources?
Ah, a methodological problem is that when you ask people “how free do you feel?” they may actually interpret the question differently, and instead report on how satisfied they are, or something.
>>If things happen the right way, we will get a lot of freedom as a consequence of that. But starting with freedom has various problems of the type “my freedom to make the future X is incompatible with your freedom to make it non-X”.
Yes, I would anticipate a lot of incompatibilities. But the ASI would be incentivized to find ways to optimize for both people’s freedom in that scenario. Maybe each person gets 70% of their values fulfilled instead of 100%. But over time, with new creativity and new capabilities, the ASI would be able to nudge that to 75%, and then 80% and so on. It’s an endless optimization exercise.
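To make that concrete, here is a toy sketch (Python) of one way such a trade-off could be formalized. Everything in it is an illustrative assumption on my part: the max-min fairness rule, the preference weights, and the framing of contested decisions as binary. It is not a claim about how an actual ASI would do this.

```python
# Toy sketch: two people hold incompatible preferences over four shared
# binary decisions. Each decision can go only one way, so nobody gets
# 100%. We pick the assignment that maximizes the worse-off person's
# fulfillment fraction (max-min fairness -- an assumed criterion).

from itertools import product

# Hypothetical weights: how much each person cares about winning each
# contested decision.
weights = {
    "alice": [5, 1, 3, 2],
    "bob":   [2, 4, 3, 5],
}

def fulfillment(person, wins):
    """Fraction of a person's total preference weight they got."""
    won = sum(w for w, win in zip(weights[person], wins) if win)
    return won / sum(weights[person])

best = None
for assignment in product(weights, repeat=4):
    scores = {p: fulfillment(p, [a == p for a in assignment])
              for p in weights}
    worst = min(scores.values())  # raise the floor, not the average
    if best is None or worst > best[0]:
        best = (worst, assignment, scores)

print(best)  # here both people land around 64-73%, not 100%
```

With these numbers the best assignment leaves Alice at about 73% and Bob at about 64%; the “endless optimization exercise” is the ASI finding new options that push both numbers up over time.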
>>The second reason is safety. Yes, we can punish people who do bad things, but that often does not reverse the harm done, e.g. if they killed someone.
Crime and criminal justice are difficult problems we’ll have to grapple with no matter what. I would argue the goal here would be to incentivize the ASI to find ways to implement criminal justice in the best way possible. Yes, sometimes you have to separate the murderer from the rest of society; but is there a way to properly rehabilitate them? Certainly things can be done much better than they are today. I think this would set us on a path to keep improving these things over time.
>>Hypothetically, a sufficiently powerful AI with perfect surveillance could allow people to do whatever they want, because it could always prevent any crime or tragedy at the last moment.
Perfect surveillance would not make me (or many other people) feel free, so I’m not sure this would be the right solution for everyone. I imagine some people would prefer it, though; for them, the ASI could offer higher security in exchange for privacy, while for people like myself it would weight privacy higher.
>>I should have ignored you and made someone else happy instead who would keep being happy the next day, too!”
I would imagine a freedom-optimizing ASI would direct its efforts to the areas where it can get the most return on them. This would mean that someone whose values are as volatile as you describe would not receive the same level of effort from the ASI (nor should they) as someone whose values are consistent, at least until they become more consistent.
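Again, just to illustrate the intuition, here is a toy sketch of such a “return on effort” rule. The regret-tracking history, the consistency score, and the proportional allocation below are all hypothetical assumptions I’m making for the example, not a proposal for how a real system should work.

```python
# Toy sketch: allocate a fixed effort budget in proportion to how
# often each person's granted wishes stayed un-regretted (an assumed
# proxy for the Monday-wish / Tuesday-complaint problem above).

history = {
    # (wishes granted, wishes later regretted) -- hypothetical data
    "consistent_carol": (20, 1),
    "volatile_victor":  (20, 12),
}

def consistency(granted, regretted):
    """Fraction of granted wishes the person was still glad about."""
    return (granted - regretted) / granted

budget = 100.0  # units of effort to allocate this round

scores = {p: consistency(*h) for p, h in history.items()}
total = sum(scores.values())
allocation = {p: budget * s / total for p, s in scores.items()}

print(allocation)
# consistent_carol gets ~70 units, volatile_victor ~30 -- and Victor's
# share grows back if his regret rate falls.
```

The point is only that the incentive runs the right way: Victor isn’t punished forever, he just earns back effort as his wishes stop contradicting each other.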
>>Ah, a methodological problem is that when you ask people “how free do you feel?” they may actually interpret the question differently, and instead report on how satisfied they are, or something.
Great point, and this is certainly one of the challenges / potential issues with this approach: not just how people interpret what it means to feel free, but also the danger of words changing meaning over time for society as a whole. An example might be how the word ‘liberal’ used to mean something much closer to ‘libertarian’ a hundred years ago.