If you come up with a candidate “bottom line” and then explore arguments for and against it, and sometimes end up rejecting it, then it wasn’t really a bottom line — your algorithm hadn’t actually terminated.
Oh. That makes sense. So it’s the bottom line only if I write it and refuse to change it forever after. Or, if it was all part of a decision-making process, it’s the belief I actually act on in the end.
Guess that’s what everybody was telling me… feeling stupid now.
’s all good.
Your real decision is the one you act on. Decision theory, after all, isn’t about what the agent believes it has decided; it’s about actions the agent chooses.
Edited to add:
Also, you recognized where “the biases really struck”, as you put it; that’s a pretty important part. It seems to me that one reason to resist writing even a tentative bottom line too early is to avoid motivated stopping. And if you’re working in a group, this is a reason to hold off on proposing solutions.
Edited again to add:
In retrospect I’m not sure, but I think what I triggered on, what led me to respond to your post, was the phrase “a rationalist should”. This fits the same grammatical pattern as “a libertarian should”, “a Muslim should”, and so on … as if rationality were another ideological identity: one identifies with Rationalist-ism first and then follows the rationality social rules, having faith that by being a good rationalist one gets to go to rationalist heaven and receive 3^^^3 utilons (and no dust specks), or some such.
I expect that’s not what you actually meant. But I think I sometimes pounce on that kind of thing. Gotta fight the cult attractor! I figure David Gerard has the “keeping LW from becoming Scientology” angle; I’ll try for the “keeping LW from becoming Objectivism” angle. :)
Feeling stupid means you’re getting smarter. At least, that’s what I tell myself whenever I feel that past-me did something stupid.