Are you questioning that we can model human behavior using a utility function (i.e. microeconomics) or that we can model human values using a utility function? Or both? The former is important if you’re trying to predict what a human would do; the latter is important if you’re trying to figure out what humans should do—or what you want an AGI to do.
I was mainly thinking about values, but behavior is suspect as well. (Though I gather that some uses of utility functions for modeling human behavior have been relatively successful in economics.)
I spent a minute trying to think of a reply arguing for utility functions as models of human values, but I think that’s wrong. I’m really agnostic about the type of preference structure human values have, and I think I’m going to stop saying “utility function” and start saying “preferences” or the more awkward “something like a utility function” to indicate this agnosticism.
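To make that agnosticism concrete, here is a minimal sketch (in Python; the option names and helper functions are made up for illustration) of why “preferences” is the broader notion: over a finite set of options, a preference relation can be summarized by a utility function only if it is complete and transitive, so something like cyclic preferences admits no utility function at all.

```python
from itertools import permutations

def weakly_prefers(relation, a, b):
    """True if a is weakly preferred to b under the given relation."""
    return (a, b) in relation

def has_utility_representation(options, relation):
    """Over a finite set, a preference relation can be represented by a
    utility function iff it is complete and transitive (reflexivity is
    ignored here for brevity)."""
    # Completeness: every pair of distinct options is comparable.
    complete = all(
        weakly_prefers(relation, a, b) or weakly_prefers(relation, b, a)
        for a, b in permutations(options, 2)
    )
    # Transitivity: a >= b and b >= c imply a >= c.
    transitive = all(
        weakly_prefers(relation, a, c)
        for a, b, c in permutations(options, 3)
        if weakly_prefers(relation, a, b) and weakly_prefers(relation, b, c)
    )
    return complete and transitive

options = ["apple", "banana", "cherry"]

# Cyclic preferences: each option is preferred to the next, in a loop.
cyclic = {("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")}
print(has_utility_representation(options, cyclic))  # False: no utility function fits

# A simple ranking, by contrast, does admit a utility representation.
ranked = {("apple", "banana"), ("banana", "cherry"), ("apple", "cherry")}
print(has_utility_representation(options, ranked))  # True
```

The point of the sketch is just that a utility function is one possible shape for a preference structure, not the only one.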
When it comes to econ, utility theory is clearly a false model of human behavior (how many models aren’t false?), but its simplicity is appealing. As mattnewport alludes to, alternative theories usually don’t improve predictions enough to be worth the substantial increase in complexity they typically entail. At least that’s my impression.
I’m wondering how a model can be “false”. It seems like simply “bad” would be more appropriate.
Perhaps if the model gives you less accurate results than some naive model, or than just guessing.
I’ve been thinking a lot lately of treating ethical theories as models… I might have to write a paper on this, including some unpacking of “model”. Perhaps I’ll start with some top-level posts.
By a false model, all I mean is a model that isn’t exactly the same as the reality it’s supposed to model. It’s probably a useless notion (except maybe in theoretical physics?), but some people see textbook econ and think “people aren’t rational, therefore textbook economics is wrong, therefore my favorite public policy will work.” The last step isn’t always there, or isn’t just a single step, but it’s typically the end result. I’ve gotten into the habit of making the “all models are false” point when discussing economic models just to combat this mindset.
In general, it distresses me that so few people understand that scientists create maps, not exact replicas of the territory.
Treating ethical theories as models seems so natural now that you mention it. We have some preference structure that we know very little about. What should we do? The same thing we did with all sorts of phenomena that we knew very little about—model it!
“All models are wrong but some models are useful.”—George E. P. Box
Any relation to my thoughts of ethical theories as models?
http://lesswrong.com/lw/18l/ethics_as_a_black_box_function/
http://lesswrong.com/lw/18l/ethics_as_a_black_box_function/14ha
Sure.
The three-tier way of looking at it is interesting, but I’ll definitely be approaching it from the perspective of someone taking a theoretical approach to the study of ethics. The end result, hopefully, will be something written for such people.