As such, we’d be unlikely to get what we really want if the world were re-engineered in accordance with a description of what we want that came from verbal introspective access to our motivations. Less naive proposals would involve probing the neuroscience of motivation at the algorithmic level. (Footnote: Inferring desires from behavior alone probably won’t work, either.)
There is something a bit bizarre about proposing to extract preferences by scanning brains (because raw behavior and reported introspection are not authentic and primitive enough), and then to insist that these fundamental preferences be extrapolated through a process of reflective equilibrium—thereby becoming more refined.
Is there some argument justifying the claim that what I really want is not what I say I want, and not what I do, but rather what the technician running the scanner says I want? By what definition of “really” is this what I really want? By what definition of “want”?
Note: In some ways this echoes steven0461, but I think it makes some additional points.
I was thinking that the brain scan approach could be tested on a small scale with people living in an environment designed according to their brain scans, but then I realized that the damned thing doesn’t ground out. If you don’t trust what people say, you can’t judge the success of the project by questionnaires or interviews. If you can’t trust what people do, then you can’t use whether or not they are willing to stay in the project.
I think that if the rates of depression and/or suicide go up, the brain scan project is a failure, but that’s a pretty crude measure.
You could use brain scans, of course, but that’s no way to find out whether brain scans improve whatever you’re trying to improve.
If you can’t trust what people do, then you can’t use whether or not they are willing to stay in the project.
Why can’t you trust what people do? That is surely the #1 resource when it comes to what their decision algorithm says. So: train a video camera on them and apply revealed preference theory.
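To make that concrete, here is a minimal sketch of what applying revealed preference theory to logged behavior might look like (the data format and function names are mine, purely illustrative): record which option a person chose from each menu of alternatives, build the “chosen over” relation, and check it for cycles, since a cycle means no single stable preference ordering can rationalize the observed choices.

```python
def revealed_preferences(observations):
    """Build the strict "chosen over" relation from choice data.

    observations: list of (chosen, menu) pairs, where menu is the set
    of options that were available and chosen is the one picked.
    """
    prefers = set()
    for chosen, menu in observations:
        for other in menu:
            if other != chosen:
                prefers.add((chosen, other))
    return prefers


def is_consistent(prefers):
    """Return True if the relation is acyclic (a crude GARP-style test).

    A cycle like a > b > c > a means no single stable preference
    ordering can rationalize the observed choices.
    """
    graph = {}
    for a, b in prefers:
        graph.setdefault(a, set()).add(b)
    visiting, done = set(), set()

    def has_cycle(node):
        if node in done:
            return False
        if node in visiting:
            return True
        visiting.add(node)
        if any(has_cycle(nxt) for nxt in graph.get(node, ())):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return not any(has_cycle(n) for n in list(graph))


# Toy usage: three observed choices whose implied preferences form a cycle.
obs = [("reading", {"reading", "tv"}),
       ("tv", {"tv", "gym"}),
       ("gym", {"gym", "reading"})]
print(is_consistent(revealed_preferences(obs)))  # False: reading > tv > gym > reading
```

Of course, the hard part is the step this sketch assumes away: turning raw video into discrete (chosen, menu) observations.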
It may not follow from the article, but I think that if people’s actions are so heavily shaped by unconscious effects and by miscalculations about happiness and other goals, then actions aren’t a very reliable guide. See also the many discussions here about akrasia—should akrasia be used to deduce that people generally would rather spend large amounts of their time doing things they don’t like all that much and that don’t contribute to their goals?
OK, so what people do and what they say are the #1 and #2 best available resources on what they actually want. Sample from multiple individuals, and I figure some pretty successful reconstructions of their goals will be possible.
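As a toy illustration of that weighting (entirely my own framing, with made-up numbers): treat behavior and speech as two noisy scores for each candidate goal, weight behavior more heavily as the #1 resource, and average across individuals.

```python
from collections import defaultdict

# Hypothetical per-person evidence: how strongly each candidate goal is
# supported by what the person does vs. what they say (scores in [0, 1]).
people = [
    {"do": {"health": 0.3, "leisure": 0.8}, "say": {"health": 0.9, "leisure": 0.4}},
    {"do": {"health": 0.5, "leisure": 0.7}, "say": {"health": 0.8, "leisure": 0.5}},
]

W_DO, W_SAY = 0.6, 0.4  # actions weighted as the #1 resource, speech as #2


def reconstruct_goals(people, w_do=W_DO, w_say=W_SAY):
    """Aggregate behavioral and verbal evidence into one goal ranking."""
    totals = defaultdict(float)
    for person in people:
        for goal, score in person["do"].items():
            totals[goal] += w_do * score
        for goal, score in person["say"].items():
            totals[goal] += w_say * score
    n = len(people)  # average over individuals, then rank highest first
    return sorted(((g, s / n) for g, s in totals.items()),
                  key=lambda gs: -gs[1])


print(reconstruct_goals(people))
# [('leisure', 0.63), ('health', 0.58)] under these toy numbers
```

The weights and scores here are the whole problem in disguise, of course; the sketch only shows the shape a multi-source reconstruction might take.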