This seems to be conflating rationality-centered material with FAI/optimal decision theory material, lumping them all under the heading “utility maximization”. These parts are fundamentally distinct and aim at different things.
Rationality-centered material does include some thought about utility, Fermi calculations, and heuristics, but it focuses on debiasing, on recognizing cognitive habits that can get in the way (such as rationalization and cached thoughts), and the like. I’ve managed to apply these a bit in my day-to-day thought. For instance, recognizing the fundamental attribution error has been very useful to me, because I tend to be judgmental. In the past this led me to isolate myself much more than I should have, and to sink into misanthropy. For the longest time I avoided those thoughts; now I find I can treat them in a more clinical manner and have gained some perspective on them. This helps me raise my overall utility, but it does not perfectly optimize it by any stretch of the imagination. Nor is it meant to; it just makes things better.
Bottomless recursion in expected utility calculations is a decision theory/rational choice theory issue and an AI issue, but it is not a rationality issue. To be more rational, we don’t have to optimize; we just have to recognize that one feasible procedure is better than another, and work on replacing our current procedure with the new, better one. If we recognize that a procedure is impossible for us to use in practice, we don’t use it, though it may still be worth discussing in a different, theoretical context such as FAI or decision theory. TDT and UDT were not made for practical use by humans; they were made to address theoretical problems in FAI and formal decision theory, even though some people claim to have made good use of them (and even there, TDT is being used as a psychological aid for overcoming hyperbolic discounting more than as a formal tool of any sort).
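To make the “better feasible procedure” point concrete, here is a toy sketch; it is entirely my own invention, and the Gaussian noise, sample counts, and value ranges are arbitrary. It compares two cheap judgment procedures, “glance once” versus “glance three times and average”, and shows that swapping in the second improves your hit rate without any global optimization anywhere:

```python
import random

def pick_with(procedure, values):
    """Choose the option whose noisy estimate comes out highest."""
    estimates = [procedure(v) for v in values]
    return estimates.index(max(estimates))

def one_look(v):
    # Snap judgment: a single noisy read of the option's true value.
    return random.gauss(v, 3)

def three_looks(v):
    # Brief reflection: average three noisy reads. Still cheap, still imperfect.
    return sum(random.gauss(v, 3) for _ in range(3)) / 3

def success_rate(procedure, trials=10_000):
    """How often the procedure picks the genuinely better of two options."""
    hits = 0
    for _ in range(trials):
        values = [random.uniform(0, 10) for _ in range(2)]
        if pick_with(procedure, values) == values.index(max(values)):
            hits += 1
    return hits / trials

print(success_rate(one_look))     # often picks the worse option
print(success_rate(three_looks))  # noticeably better; still nowhere near optimal
```

Neither procedure is optimal, and nothing in the sketch asks what the optimal procedure would be; the second is simply feasible and better, which is all that improving one’s rationality requires.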
Also, there are different levels of analysis appropriate for different sorts of things. If I’m analyzing the likelihood of an asteroid impact over some timescale, I’m going to include much more explicit detail than in my analysis of whether I should go hang out with LWers in New York for a bit. I might assess lots of probability measures in a paper analyzing a topic, but doing so on the fly rarely crosses my mind (I often do a quick and dirty utility calculation to decide whether or not to do something, e.g., which road home has the most right turns, or what the expected number of red lights is given the time of day, but that’s it).
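For what it’s worth, here is roughly what that quick and dirty route calculation looks like when spelled out. The routes, probabilities, and costs below are invented for illustration, and in practice the version in my head is far sloppier than this:

```python
# Hypothetical routes home: per-intersection probabilities that the light is
# red at this time of day, plus a count of right turns (a right turn on red
# makes a red light cheaper on that route).
routes = {
    "main_street": {"red_probs": [0.6, 0.5, 0.7], "right_turns": 1},
    "back_roads":  {"red_probs": [0.3, 0.4],      "right_turns": 2},
}

def expected_delay(route, red_cost=30.0, right_turn_discount=10.0):
    """Expected seconds lost to red lights, crudely discounted for right turns."""
    expected_reds = sum(route["red_probs"])  # linearity of expectation
    return expected_reds * red_cost - route["right_turns"] * right_turn_discount

best = min(routes, key=lambda name: expected_delay(routes[name]))
print(best)  # the route with the lower expected delay
```

That is about the ceiling of explicit expected-utility reasoning in my everyday life; anything heavier gets reserved for the asteroid-impact sort of question.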
Overall, I’m getting the impression that all of these things are being lumped together when they should not be. “Utility maximization” means very distinct things in these very distinct contexts, and most technical aspects of utility maximization were never intended for explicit everyday use by humans; they were intended for use by specialists in particular contexts.