Note that the relevance issue has been successfully solved in any number of complex practical applications, such as self-driving vehicles, which are able to filter out gobs of irrelevant data, or the LHC code, which filters out even more. I suspect that the Framing Problem is not some general problem that needs to be resolved for AI to work, but just one of many technical issues, just as the “computer scientists” suggest. On the other hand, it is likely to be a real problem for FAI design, where relying on heuristics that provide, say, six-sigma certainty just isn’t good enough.
I think that the framing problem is distinct from the problem of defining and calculating things like liberty, which seem like obvious qualities to specify in an optimal world we are building an AI to search for, mostly because attempting to define liberty objectively leads us to the discussion of free will, the latter being an illusion caused by the human failure to introspect deeply enough.
I tend to think that you don’t need to adopt any particular position on free will to observe that people in North Korea lack freedom from government intervention in their lives, access to communication and information, a genuine plurality of viable life choices and other objectively identifiable things humans value. We could agree for the sake of argument that “free will is an illusion” (for some definitions of free will and illusion) yet still think that people in New Zealand have more liberty than people in North Korea.
I think that you are basically right that the Framing Problem is like the problem of building a longer bridge, or a faster car: you are never going to solve the entire class of problems at a stroke, so that you can make infinitely long bridges or infinitely fast cars, but you can make meaningful incremental progress over time. I’ve said from the start that capturing the human ability to make philosophical judgments about liberty is a hard problem, but I don’t think it is an impossible one—just a lot easier than creating a program that does that and solves all the other problems of strong AI at once.
In the same way that it turns out to be much easier to make a self-driving car than a strong AI, I think we’ll have useful natural-language parsing of terms like “liberty” before we have strong AI.
I tend to think that you don’t need to adopt any particular position on free will to observe that people in North Korea lack freedom from government intervention in their lives, access to communication and information, a genuine plurality of viable life choices and other objectively identifiable things humans value.
Well, yes, it is hard to argue about NK vs. the West. But let’s try to control for the “non-liberty” confounders, such as income, wealth, social status, etc. Say we take an upper-middle-class person from Iran, Russia, or China. It is quite likely that, when comparing their life with that of a Westerner of similar means, they would not immediately state that the Western person has more “objectively identifiable things humans value”. Obviously the sets of these valuable things are different, and the priorities different people assign to them differ as well, but I am not sure that there is a universal measure everyone would agree upon as “more liberty”.
A universal measure for anything is a big demand. Mostly we get by with some sort of somewhat-fuzzy “reasonable person” standard, which obviously we can’t fully explicate in neurological terms either yet, but which is much more achievable.
Liberty isn’t a one-dimensional quality either, since for example you might have a country with little real freedom of the press but lots of freedom to own guns, or vice versa.
What you would have to do to develop a measure with significant intersubjective validity is to ask a whole bunch of relevantly educated people what things they consider important freedoms and what incentives they would need to be offered to give them up, to figure out how they weight the various freedoms. This is quite doable, and in fact we do very similar things when we do QALY analysis of medical interventions to find out how much people value a year of life without sight compared to a year of life with sight (or whatever).
Fundamentally it’s no different from figuring out people’s utility functions, except we are restricting the domain of questioning to liberty issues.
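The elicitation-and-weighting procedure above can be sketched in a few lines. This is only an illustrative toy, not a real instrument: the freedoms, the survey responses, and the country scores are all hypothetical numbers invented for the example, and the weighting rule (normalize the mean compensation each respondent demands to give a freedom up) is just one simple choice analogous to time-trade-off elicitation in QALY studies.

```python
# Toy sketch: derive intersubjective weights for different freedoms from
# trade-off survey responses, then score countries on a composite index.
# All names and numbers are hypothetical illustrations, not real data.

# Each respondent reports, for each freedom, the compensation (in some
# common unit) they would demand to give it up; a larger demand implies
# a larger weight for that freedom.
responses = [
    {"press": 9.0, "guns": 2.0, "movement": 8.0},
    {"press": 7.0, "guns": 5.0, "movement": 9.0},
    {"press": 8.0, "guns": 1.0, "movement": 7.0},
]

def derive_weights(responses):
    """Average each freedom's demanded compensation across respondents,
    then normalize so the weights sum to 1."""
    freedoms = responses[0].keys()
    means = {f: sum(r[f] for r in responses) / len(responses) for f in freedoms}
    total = sum(means.values())
    return {f: m / total for f, m in means.items()}

def liberty_index(scores, weights):
    """Weighted average of per-freedom scores, each on a 0-1 scale."""
    return sum(weights[f] * scores[f] for f in weights)

weights = derive_weights(responses)

# Hypothetical per-country scores (0 = no such freedom, 1 = full):
# country_a has little press freedom but broad gun rights, country_b
# the reverse -- the multi-dimensionality mentioned above.
country_a = {"press": 0.2, "guns": 0.9, "movement": 0.5}
country_b = {"press": 0.9, "guns": 0.3, "movement": 0.9}

print(liberty_index(country_a, weights))
print(liberty_index(country_b, weights))
```

Note that the result is only as intersubjective as the respondent pool: change who you ask (the upper-middle-class Iranian vs. the New Zealander) and the weights shift, which is exactly the disagreement the thread is about.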