You can assume, fairly simply, that the things I retrospectively consciously endorse were actually good for me. The question is how to predict that sort of thing rather than rationalizing or checking it only retrospectively.
Ah, but (to be facetious and semi-trolling for a moment), the narrative fallacy means you can’t trust those retrospective endorsements either. Isn’t every thought we ever take just self-signalling? Are we not mere microbes in the bowels of Moloch, utterly incapable of real thought or action? Blobs of sentience randomly thrust above the mire of dead matter like a slime mould in its aggregation phase, imagining for a moment that it is a real thing, before collapsing once more into the unthinking ooze!
You wrote this facetiously, but I regularly find myself updating towards it being quite true.
The basilisk lives, and goes forth to destroy the world! My work here is done!
More seriously, I find it easy to build that point of view from the materials of LessWrong, Overcoming Bias, and blogs on rationality, neuroscience, neoreaction, and PUA. If I were inclined to the task I could do it at book length, but it would be the intellectual equivalent of setting a car bomb. So I won’t. But it is possible. It is also possible to build completely different stories from the same collection of concepts, as easy as it is to build them from words.
The question that interests me is why people (including myself) are convinced by this story or that. Are they rationally updating in the face of evidence? I provided none, only cherry-picked references to other ideas woven together with hyperbolic metaphors. Do they go along with stories that tell them what they would like to believe already? And yet “microbes in the bowels of Moloch, utterly incapable of real thought or action” is not something anyone would want to be. Perhaps this story appeals because its message, “nothing is true, all is a lie”, like its new-age opposite, “reality is whatever you want it to be”, removes the burden of living in a world where achieving anything worthwhile is both possible and a struggle.
the things I retrospectively consciously endorse were actually good for me.
After how long?
Let us assume that I make a large loan out to someone—call him Jim. Jim promises to pay me back in exactly a year, and I have no reason to doubt him. Two months after taking my money, Jim vanishes, and cannot be found again. The one-year mark passes, and I see no sign of my loan being returned.
At this point, I am likely to regret extending the original loan; I do not retrospectively endorse the action.
One month later, Jim reappears; in apology for repaying my loan late, he repays twice the originally agreed amount.
At this point, I do retrospectively endorse the action of extending the loan.
So, whether or not I retrospectively endorse an action can depend on how long it is since the original action occurred, and can change depending on the observed consequences of the action. How do you tell when to stop, and consciously endorse the action?
That implies that “endorse” means “I conclude that this action left me better off than without it”. I don’t think this is what most people mean by endorsement. In particular, it fails to consider that some actions can leave you better off or worse off by luck.
If you drive drunk, and you get home safely, does that imply you would endorse having driven drunk that particular time?
If you drive drunk, and you get home safely, does that imply you would endorse having driven drunk that particular time?
No, it does not; a high-risk, no-reward action does not become endorsable simply because the risk happened not to materialise on one occasion. You make a good point.
Nonetheless, I have noted that whether I retrospectively endorse an action or not can change as more information is discovered. Hence, the time horizon chosen is important.
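To make that concrete, here is a minimal sketch (the loan amount and dates are invented for illustration; the Jim story above gives no figures) showing how a purely outcome-based “do I endorse this?” judgement flips depending on when you evaluate it:

```python
# Illustrative only: the lender's position in the hypothetical "Jim" loan,
# evaluated at different horizons. All figures are made up.

LOAN = 1000            # amount lent at month 0 (hypothetical)
REPAYMENT = 2 * LOAN   # Jim repays double the loan...
REPAY_MONTH = 13       # ...one month after the agreed one-year deadline

def net_position(months_elapsed: int) -> int:
    """Lender's position relative to never having lent, after `months_elapsed` months."""
    repaid = REPAYMENT if months_elapsed >= REPAY_MONTH else 0
    return repaid - LOAN

def outcome_based_endorsement(months_elapsed: int) -> bool:
    """Naive outcome-based test: 'has this left me better off so far?'"""
    return net_position(months_elapsed) > 0

for month in (2, 12, 13, 24):
    print(f"month {month:>2}: net {net_position(month):>5}, "
          f"endorse = {outcome_based_endorsement(month)}")

# At month 12 the judgement is False; at month 13 it flips to True.
# The verdict depends entirely on where the evaluation horizon is drawn.
```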
I tend to avoid retrospectively endorsing actions based on their outcomes, as that risks falling prey to outcome bias. I prefer instead to evaluate the process by which I made the decision and took the action, and then to improve that process. After all, I can’t control the outcome; I can only control the process and my actions, and I think it is important to evaluate and endorse only what I can control.
You do make a good point; the advantage of retrospectively endorsing based on outcomes is that it highlights very clearly where your decision-making processes are faulty, and provides an incentive to fix those faults before a negative outcome happens again.
But if you’re happy with how you validate your decision engine without that, then it’s not necessary.
You can assume, fairly simply, that the things I retrospectively consciously endorse were actually good for me.
I think you’re confusing regret or lack of it with “actually good for me”. Certainly, the future-you can evaluate the consequences of some action better than the past-you, but he’s still only future-you, not an arbiter of what is “actually good” and what is not.
I think there is another issue at play here, namely whether it is more worthwhile to evaluate the consequences of decisions and actions, or the process of making the decision and taking the action. I believe that improving the process is what matters, not the outcome, as focusing on the outcome often leads to outcome bias. We can only control the process, after all, not the outcome, and it’s important to focus on what lies within our locus of control.
There’s no confusion here if we use a naturalistic definition of “actually good”. If we use a nonnaturalistic definition, then of course the question becomes bloody nonsense. I would hope you’d have the charity not to automatically interpret my question nonsensically!
All of that is old philosophy work, not up-to-date cog-sci. It doesn’t tell us much at all, since the very definition of irrationality is that your actions can be optimizing for something that’s neither what you consciously intended nor what’s good for you. The only way to get help from there on this issue would be to believe that humans are perfectly rational, look at what they do, and infer the goals backwards from there!
Ah, if you’re looking for newer stuff, Stanovich’s work has been really useful for drawing fine-grained distinctions between rationality and intelligence, and together with Evans, Stanovich did some great work advancing dual-process theory. There is much more work to be done, of course. Perhaps others can help out with other useful citations?
Unfortunately, it still doesn’t answer the question I actually asked. I know damn well I am not a perfectly rational robot. What I’m asking is: what’s the cog-sci behind how I can check that my current conscious desires or goals accord with what is actually, objectively, good for me? “You are not a perfect goal-seeking robot” is simply repeating the question in declarative form.
how I can check that my current conscious desires or goals accord with what is actually, objectively, good for me?
Is this a question that cog-sci can answer?
Let us assume that I decide I really like a new type of food. It may be apples, it may be cream doughnuts. If I ask “is eating this good for me?”, then that’s more a question for a dietician to answer, instead of a cognitive psychologist, surely?
Am I to be restricted to your current knowledge, and to the deductions you have made from information available to you, or can I introduce principles of, for example, physics or information theory or even dietary science not currently present in your mind?
I do believe that this has answered the other question I asked of you, with regards to after how long to consider the future-you who would be endorsing or not endorsing a given course of action; I understand from this comment that the future-you to choose is the limit future-you at time t as t approaches infinity. This implies that one possible answer to your question would be to imagine yourself at age ninety, and consider what it is that you would most or least appreciate having done at that age. (When I try this, I find that exercise and a healthy diet become very important; I do not wish to be old and frail at ninety. Old may not be avoidable, but frail certainly is...).
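One way to put that “limit future-you” criterion in symbols, purely as notation rather than anything established above: let E_t(a) be 1 if the self at time t, knowing everything observed up to t, endorses action a, and 0 otherwise. Then the proposal amounts to

```latex
% Hypothetical notation (not from the thread): E_t(a) = 1 if the self at
% time t, with everything observed up to t, endorses action a; 0 otherwise.
\[
  \mathrm{Endorse}(a) \;=\; \lim_{t \to \infty} E_t(a),
  \qquad \text{approximated in practice by } E_t(a) \text{ at } t \approx \text{age } 90 .
\]
```

with the age-ninety thought experiment serving as a tractable stand-in for the (unobservable) limit.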
Am I to be restricted to your current knowledge, and to the deductions you have made from information available to you, or can I introduce principles of, for example, physics or information theory or even dietary science not currently present in your mind?
You are utterly unlimited in introducing additional knowledge. It just has to be true, is all. Introducing the dietary science on whether I should eat tuna, salmon, hummus, meat, egg, or just plain salad and bread for lunch is entirely allowed, despite my currently going by a heuristic of “tuna sandwiches with veggies on them are really tasty and reasonably healthful.”
I understand from this comment that the future-you to choose is the limit future-you at time t as t approaches infinity. This implies that one possible answer to your question would be to imagine yourself at age ninety, and consider what it is that you would most or least appreciate having done at that age. (When I try this, I find that exercise and a healthy diet become very important; I do not wish to be old and frail at ninety. Old may not be avoidable, but frail certainly is...).
This is roughly my line of reasoning as well. What I find interesting is that:
A) People refuse to employ this simple and elegant line of reasoning when figuring out what to do, as if a decision-making criterion must be nonnatural.
B) Actually making the prediction is very hard, and what we practically end up doing is using heuristics that roughly guarantee, “I will not regret this decision too much, unless I gain sufficient additional knowledge to override almost everything I currently know.”
Hm, I wonder about orienting toward the 90-year-old self. When I model my 90-year-old self, I would want to know that I had lived a life I consider fulfilling, and that may involve exercise and a healthy diet, but also good social connections and the knowledge that I made a positive impact on the world, for example through Intentional Insights. Ideally, I would continue to live beyond 90, though, and that may involve cryonics or maybe even a friendly AI helping us all live forever (go MIRI!).
Uhhh… sounds good to me. Well, sounds like the standard LW party-line to me, but it’s also all actually good. Sometimes the simple answer is the right one, after all.
You are utterly unlimited in introducing additional knowledge. It just has to be true, is all.
Hmmm. This makes finding the correct answer very tricky, since in order to be completely correct I have to factor in the entirety of, well, everything that is true.
The best I’d be able to practically manage is heuristics.
People refuse to employ this simple and elegant line of reasoning when figuring out what to do, as if a decision-making criterion must be nonnatural.
Other people are more alien than… well, than most people realise. I often find data to support this hypothesis.
Actually making the prediction is very hard, and what we practically end up doing is using heuristics that roughly guarantee, “I will not regret this decision too much, unless I gain sufficient additional knowledge to override almost everything I currently know.”
I don’t think it’s possible to do better than heuristics; the only question is how good your heuristics are. And your heuristics depend on your knowledge; learning more, whether through formal education or practical experience, will help to refine those heuristics.
Hmmm… which is a pretty good reason for further education.
Whatever goals you believe would fulfill your desires, presumably.
Gosh, but aren’t my desires a miserably bad measure of what’s actually good for me?
(Yes, being facetious and semi-trolling. Also yes, would like to see someone actually answer the question using real cog-sci knowledge.)
Thoughts on how this paper on intentional systems and this one on agents as intentional systems apply to distinctions between one’s desires and what is actually good for oneself?
They don’t.
The issue is defining “actually good for oneself” and revealed preferences don’t help you here.
Ah, there’s that good old-fashioned Overcoming-Biasian “rationality”, insulting the human mind while making no checkable predictions whatsoever!
I have no idea what a naturalistic definition of “actually good” would be.
Assume that you must construct the “actually good” from my actual mind and its actual terminal preferences.
How about introducing the possibility of shifting terminal preferences?