Short version: beyond a certain (very coarse) precision you can’t usefully model humans as logical, goal-directed, decision-making agents contaminated by pesky “biases”. Goals, decisions and agency are very leaky abstractions, illusions that arise from the mechanical interplay of our many ad-hoc features. Rather than heading off into the sunset, 99% of typical human behavior consists of going around in circles day after day; if this is goal-directed, the goal must be weird indeed. If you want to make predictions about actual human beings, don’t talk about their goals, talk about their tendencies.
Far from distressing me, this situation makes me happy. It’s great we have so few optimizers around. Real-world strong optimizers, from natural selection to public corporations to paperclippers, look psychopathic and monstrous when viewed through the lens of our tendency-based morality.
For more details see thread above. Or should I compile this stuff into a toplevel post?
Okay, I’ve probably captured the gist of your position now. Correct me if I misrepresent it anywhere below.
Humans are descriptively not utility maximizers; they can only be modeled this way as a coarse approximation, with a fair number of exceptions. There seems to be no reason to model them normatively as some ideal utility maximizer, or to apply concepts like “should” in the more rigorous sense of decision theory.
Humans do what they do, not what they “should” do according to some rigorous external model. This argument and intuition are similar to the case for not listening to philosopher-constructed rules of morality, counterintuitive conclusions reached from thought experiments, or God-declared moral rules: you first have to accept each moral rule yourself, according to your own criteria, which might even be circular.
It’s great we have so few optimizers around. Real-world strong optimizers, from natural selection to public corporations to paperclippers, look psychopathic and monstrous when viewed through the lens of our tendency-based morality.
I thought this was the point of the Overcoming Bias project and the endeavor not to be named until tomorrow (cf. “Thou Art Godshatter” and “Value is Fragile”): that we want to put the fearsome power of optimization in the service of humane values, instead of just leaving things to nature, which is monstrous.
Or should I compile this stuff into a toplevel post?
I would love to see a top-level post on this issue.