There’s a lot of argument, of course, over whether humans are rational, but this often mixes up two things: there’s the “von Neumann-Morgenstern utility maximization” definition of “rational”, and there’s a hypothetical “rational” that a human could fulfill under constraints much more complicated than the classical approach allows, more in the direction of prospect theory or predictive coding.
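To make the contrast concrete, here’s a minimal sketch of the two functionals being compared. The first is the standard von Neumann-Morgenstern expected-utility form; the second is a simplified version of prospect theory’s original (1979) form, where v is a reference-dependent value function and w a probability-weighting function. The notation here is mine, not drawn from any particular source:

```latex
% VNM: rank a gamble with outcomes x_i at probabilities p_i
% by its expected utility under a utility function u
U_{\mathrm{VNM}} = \sum_i p_i \, u(x_i)

% Prospect theory (original 1979 form, simplified): outcomes are valued
% relative to a reference point r by a value function v, and probabilities
% are distorted by a weighting function w before being combined
U_{\mathrm{PT}} = \sum_i w(p_i) \, v(x_i - r)
```

Even this simple version shows why the second notion of “rational” is harder to pin down: it has more free parts (v, w, and the reference point r), and those are exactly the parts we don’t have agreed-upon forms for.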
I think I regard the second definition as sufficiently poorly understood and poorly defined that it isn’t yet worth using in most conversations. It seems challenging, to say the least, to ask whether humans are rational according to a definition we clearly don’t know ourselves yet, let alone one we can expect others to agree with.
Or it could be an intuitive usage, meaning “(more) optimal”: “Why don’t more people do [thing that will improve their health]?”
I like that question.
I think that if people tried to define “optimal” in a specific way, they would find that it requires a model of human behavior; the common one academics would fall back on is von Neumann-Morgenstern utility maximization.
I think it’s quite possible that once we have better models of human behavior, we’ll recognize that in cases where people seem to be doing silly things about their health, they’re actually being somewhat optimal given a large set of physical and mental constraints.
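A toy sketch of that point, with entirely made-up numbers (not a calibrated model of anyone’s behavior): the same choice that looks suboptimal under a model counting only health benefits becomes the optimum once the agent’s hidden costs are priced in.

```python
# Illustrative only: hypothetical actions and numbers, chosen to make the point.
# An action that looks suboptimal under a bare "health benefit" model can be
# the argmax once hidden costs (time, willpower, discomfort) enter the objective.

actions = {
    # action: (health_benefit, hidden_cost)
    "exercise_daily": (10.0, 9.5),
    "exercise_weekly": (6.0, 2.0),
    "do_nothing": (0.0, 0.0),
}

def naive_utility(benefit: float, cost: float) -> float:
    """A model that sees only the health benefit."""
    return benefit

def constrained_utility(benefit: float, cost: float) -> float:
    """A model that also charges for the agent's real constraints."""
    return benefit - cost

for model in (naive_utility, constrained_utility):
    best = max(actions, key=lambda a: model(*actions[a]))
    print(f"{model.__name__}: {best}")

# naive_utility picks "exercise_daily"; constrained_utility picks
# "exercise_weekly". The seemingly "silly" choice is optimal under the
# fuller model.
```

The interesting question, of course, is which cost terms belong in the objective; that’s exactly the part a better model of human behavior would have to supply.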