I think there is insufficient information to answer the question as asked.
If I offer you the choice of a box with $5 in it, or a box with $500 000 in it, and I know that you are close enough to a rational utility-maximiser that you will take the $500 000, then I know what you will choose and I have set up various facts in the world to determine your choice. Yet on the face of it, it does not seem as if you are unfree.
On the other hand, if you are trying to decide between being a plumber or a blogger, and I use superhuman AI powers to subtly intervene in your environment to push you into one or the other without your knowledge, then I have likewise set up various facts in the world to determine your choice, and it does seem that I am impinging on your freedom.
So the answer seems to depend at least on the degree of transparency between A and B in their transactions. Many other factors are almost certainly relevant, but that issue (probably among many) needs to be made clear before the question has a simple answer.
Can you cash out the difference between those two cases in sufficient detail that we can use it to safely define what liberty means?
I said earlier in this thread that we can’t do this and that it is a hard problem, but also that I think it’s a sub-problem of strong AI and that we won’t have strong AI until long after we’ve solved this problem.
I know that Word of Eliezer is that disciples won’t find it productive to read philosophy, but what you are talking about here has been discussed by computer scientists as “the frame problem” since the late sixties, and by analytic philosophers since the eighties, and it might be worth reading up on. Fodor argued that there is a class of “informationally unencapsulated” problems for which you cannot specify in advance what information is and is not relevant to the solution; hence really solving them, as opposed to coming up with a semi-reliable heuristic, is an incredibly difficult problem for AI. Defining liberty, or identifying it in the wild, seems like an informationally unencapsulated problem in that sense and hence a very hard one, but one which AI has to solve before it can tackle the problems humans tackle.
If I recall correctly Fodor argued in Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres that this problem was in fact the heart of the AI problem, but that proposition was loudly raspberried in the literature by computer scientists. You can make up your own mind about that one.
Here’s a link to the Stanford Encyclopedia of Philosophy page on the subject.
If I recall correctly Fodor argued in Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres that this problem was in fact the heart of the AI problem
It depends on how general or narrow you make the problem. Compare: is classical decision theory the heart of the AI problem? If you interpret it broadly, then yes; but the link from, say, car navigation to classical decision theory is tenuous when you’re actually working on car navigation. The same goes for the frame problem.
I know that Word of Eliezer is that disciples won’t find it productive to read philosophy, but what you are talking about here has been discussed by analytic philosophers and computer scientists as “the frame problem” since the eighties and it might be worth a read for you
You mean the frame problem that I talked about here? http://lesswrong.com/lw/gyt/thoughts_on_the_frame_problem_and_moral_symbol/
The issue can be talked about in terms of the frame problem, but I’m not sure that’s useful. In the classical frame problem, we have a much clearer idea of what we want; the problem is specifying enough so that the AI does too (i.e. so that the token “loaded” corresponds to the gun being loaded). This is quite closely related to symbol grounding, in a way.
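To make “specifying enough” concrete, here is a minimal toy sketch of the classical setting (the fluents and actions are invented for illustration): the world is a set of fluent tokens, each action lists only what it adds and deletes, and everything unmentioned persists by default. Writing down that “nothing else changed” default, for every action and every fluent, is exactly the bookkeeping the frame problem is about.

```python
# Toy STRIPS-style world. The frame "axioms" are implicit in the rule that
# any fluent an action does not mention simply persists.
# All fluent and action names are illustrative placeholders.

ACTIONS = {
    # action: (preconditions, fluents added, fluents deleted)
    "load":  (set(),      {"loaded"}, set()),
    "shoot": ({"loaded"}, {"dead"},   {"loaded", "alive"}),
    "wait":  (set(),      set(),      set()),   # changes nothing at all
}

def apply_action(state: set, action: str) -> set:
    """Apply an action; any fluent not explicitly added or deleted persists."""
    pre, add, delete = ACTIONS[action]
    if not pre <= state:
        return set(state)        # preconditions unmet: nothing happens
    return (state - delete) | add

state = {"alive"}
for act in ["load", "wait", "shoot"]:
    state = apply_action(state, act)
    print(act, "->", sorted(state))
# load -> ['alive', 'loaded']
# wait -> ['alive', 'loaded']
# shoot -> ['dead']
```

The persistence-by-default rule is doing all the work here: it is the compact stand-in for the endless “and nothing else changed” axioms that a naive logical encoding would need.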
When dealing with moral problems, we have the problem that we haven’t properly defined the terms to ourselves. Across the span of possible futures, the term “loaded gun” is likely much more sharply defined than “living human being”. And if it isn’t—well, then we have even more problems if all our terms start becoming slippery, even the ones with no moral connotations.
But in any case, saying the problem is akin to the frame problem… still doesn’t solve it, alas!
Note that the relevance issue has been successfully solved in any number of complex practical applications, such as self-driving vehicles, which are able to filter out gobs of irrelevant data, or the LHC code, which filters out even more. I suspect that the frame problem is not some general problem that needs to be resolved for AI to work, but just one of many technical issues, just as the “computer scientists” suggest. On the other hand, it is likely to be a real problem for FAI design, where relying on heuristics that provide, say, six-sigma certainty just isn’t good enough.
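For what it’s worth, a minimal sketch of the kind of hard-coded relevance filter being gestured at (the field names and thresholds are invented, not taken from any real trigger or driving stack): the designer decides in advance which features matter and discards everything else, which works precisely because the domain is narrow enough to make that decision up front.

```python
# Hand-coded relevance filter: keep only readings the designer decided matter.
# All field names and thresholds below are invented for illustration.

def relevant(reading: dict) -> bool:
    """A narrow, domain-specific notion of relevance, fixed at design time."""
    if reading["kind"] == "obstacle":
        return reading["distance_m"] < 50.0     # ignore far-away obstacles
    if reading["kind"] == "collision_event":
        return reading["energy_gev"] > 100.0    # ignore low-energy events
    return False                                # everything else is noise

sensor_stream = [
    {"kind": "obstacle", "distance_m": 12.0},
    {"kind": "obstacle", "distance_m": 400.0},
    {"kind": "collision_event", "energy_gev": 250.0},
    {"kind": "birdsong", "volume_db": 40.0},
]

kept = [r for r in sensor_stream if relevant(r)]
print(len(kept), "of", len(sensor_stream), "readings kept")   # 2 of 4
```

The filter only works because someone could enumerate in advance what counts as relevant; the disagreement above is about whether that move scales to open-ended problems.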
I think that the frame problem is distinct from the problem of defining and calculating
things like liberty, which seem like obvious qualities to specify in an optimal world we are building an AI to search for
mostly because attempting to define liberty objectively leads us to the discussion of free will, the latter being an illusion due to the human failure to introspect deeply enough.
I tend to think that you don’t need to adopt any particular position on free will to observe that people in North Korea lack freedom from government intervention in their lives, access to communication and information, a genuine plurality of viable life choices and other objectively identifiable things humans value. We could agree for the sake of argument that “free will is an illusion” (for some definitions of free will and illusion) yet still think that people in New Zealand have more liberty than people in North Korea.
I think that you are basically right that the frame problem is like the problem of building a longer bridge or a faster car: you are never going to solve the entire class of problems at a stroke, so that you can make infinitely long bridges or infinitely fast cars, but you can make meaningful incremental progress over time. I’ve said from the start that capturing the human ability to make philosophical judgments about liberty is a hard problem, but I don’t think it is an impossible one—just a lot easier than creating a program that does that and solves all the other problems of strong AI at once.
In the same way that it turns out to be much easier to make a self-driving car than a strong AI, I think we’ll have useful natural-language parsing of terms like “liberty” before we have strong AI.
I tend to think that you don’t need to adopt any particular position on free will to observe that people in North Korea lack freedom from government intervention in their lives, access to communication and information, a genuine plurality of viable life choices and other objectively identifiable things humans value.
Well, yes, it is hard to argue about NK vs West. But let’s try to control for the “non-liberty” confounders, such as income, wealth, social status, etc. Say, we take some upper middle-class person from Iran, Russia or China. It is quite likely that, when comparing their life with that of a Westerner of similar means, they would not immediately state that the Western person has more “objectively identifiable things humans value”. Obviously the sets of these valuable things are different, and the priorities different people assign to them would be different, but I am not sure that there is a universal measure everyone would agree upon as “more liberty”.
A universal measure for anything is a big demand. Mostly we get by with some sort of somewhat-fuzzy “reasonable person” standard, which obviously we can’t fully explicate in neurological terms either yet, but which is much more achievable.
Liberty isn’t a one-dimensional quality either, since for example you might have a country with little real freedom of the press but lots of freedom to own guns, or vice versa.
What you would have to do to develop a measure with significant intersubjective validity is to ask a whole bunch of relevantly educated people what things they consider important freedoms and what incentives they would need to be offered to give them up, to figure out how they weight the various freedoms. This is quite do-able, and in fact we do very similar things when we do QALY analysis of medical interventions to find out how much people value a year of life without sight compared to a year of life with sight (or whatever).
Fundamentally it’s not different to figuring out people’s utility functions, except we are restricting the domain of questioning to liberty issues.
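As a minimal sketch of the elicitation-and-weighting exercise described above (every freedom, weight and score below is an invented placeholder, not survey data): respondents report how much compensation they would demand to give up each freedom, the normalised answers become weights, and a country’s “liberty index” is the weighted sum of its scores on each dimension, in the same spirit as QALY weights built from trade-off questions.

```python
# Minimal sketch: elicited willingness-to-accept figures become weights,
# then hypothetical countries are scored on each freedom dimension.
# All numbers below are made up for illustration.

compensation_demanded = {   # median $ respondents demand to give up each freedom
    "freedom_of_press": 90_000,
    "freedom_of_movement": 120_000,
    "freedom_to_own_guns": 15_000,
}

total = sum(compensation_demanded.values())
weights = {k: v / total for k, v in compensation_demanded.items()}

country_scores = {          # 0..1 ratings on each dimension (hypothetical)
    "Country A": {"freedom_of_press": 0.9, "freedom_of_movement": 0.95, "freedom_to_own_guns": 0.3},
    "Country B": {"freedom_of_press": 0.2, "freedom_of_movement": 0.4,  "freedom_to_own_guns": 0.8},
}

def liberty_index(scores: dict) -> float:
    """Weighted sum of per-freedom scores, using the elicited weights above."""
    return sum(weights[f] * scores[f] for f in weights)

for country, scores in country_scores.items():
    print(country, round(liberty_index(scores), 3))
# Country A 0.887
# Country B 0.347
```

Nothing here is a universal measure, of course; it is the fuzzy, intersubjective “reasonable person” aggregate described above, made explicit enough to argue with.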