Shut up and multiply is only about very comparable things (hence the example with differing numbers of birds). Obviously very important to make Pareto improvements of the form “hold everything constant, get more of good thing X”.
The main failure mode is applying it between un-like things, and accidentally making bad tradeoffs. For example, working very hard to make money to give away but then stagnating, because it turns out doing things you like is actually very important to personal growth and personal growth is very important to achieving your goals in the future. In general, making Kaldor-Hicks improvements that turn out not to be Kaldor-Hicks improvements because things had secret benefits.
Shut up and divide helps alleviate people getting mind-controlled by Very Large Numbers and devoting all of their time to other people (of whom there are many more than yourself), but… it smuggles in an insidious and terrible type error (while not correcting the core issue).
“Shutting up and letting an explicit deliberative practice decide the answer” is not about getting your emotions to work more reasonably, as is said in the division post, it’s about making decisions where your emotions don’t have the proper sense of scale. You’re not supposed to walk around actually feeling the desire to save a billion birds at an intensity a billion times stronger than the desire to save one bird. The deliberative analysis isn’t about aligning your care, it’s about making decisions where your care system is having troubles. To apply it to “how much you care about a random person”, as he did, is not the place your care system has troubles! (Of course, plausibly Wei Dai was not actually making this mistake, it’s always hard to ensure your ideas aren’t being misinterpreted. But it really sounds like he was.)
But still, directly, why do I think you should still care about a random bird, when there are so many more important things to do? Why not overwrite your initial caring system with something that makes more sense, and use the fact that you don’t care greatly about the sum total of birds to determine you don’t care greatly about a single bird? Because I desperately want people to protect their learned local gradient.
The initial problem of missing secret benefits to things is well-known in various guises similar to Chesterton’s Fence. But Chesterton’s Fence isn’t very constructive—it just tells you to tread carefully. I think the type of process you should be running to actively identify fake Kaldor-Hicks improvements is protecting the local gradient. If your mind has learned that reading fiction books is more important than going above and beyond on work, even if that supposedly saves lives—preserve this gradient! If your mind has learned that saving a single bird is more important than getting to your appointment on time, even if that supposedly saves lives—preserve this gradient!
The whole point of shutting up to multiply is that your brain is very bad outside a certain regime, but everyone knows your brain is the most profound analysis tool ever created inside its wheelhouse. And local gradients are its wheelhouse. In fact, “using deliberate analysis to decide which of multiple very different goals should be pursued” is the kind of tool that is great in its own regime, namely optimizing quantities in well-known formal situations, but is itself very bad outside of this regime! (Cf Communism, Goodhart, the failures of high modernism, etc.) To make hard tradeoffs in your daily life, you want to use the analogous principle “shut up and intuit” or “shut up and listen to your desires” or whatever provokes in you the mindset of using your mind’s experience. That’s the place you’d expect to get the note of discord, that says “wait, I think there’s actually something pretty bad about working constantly and giving all my money away—what is that about?”
This shortform post makes me wish LW supported bookmarking/sequencing it internally. Absent that, there’s bookmarking the shortform, but this comment in particular seems like a step towards something that the sequences seem like they’re missing.
We might implement bookmarking shortform posts (though it’s a bit of work and not sure we’d get to it soon). But, meanwhile, I’d support this post just getting turned into a top-level post.
To make hard tradeoffs in your daily life, you want to use the analogous principle “shut up and intuit” or “shut up and listen to your desires” or whatever provokes in you the mindset of using your mind’s experience
I enjoyed this and it clarified one thing for me. One question I have about this: shouldn’t you also listen to the part of your cognition that’s like “You’re wasting time reading too many fiction books” and “You could donate more of your money to charity”?
I think maybe what you’re pointing at here is to not immediately make “obvious improvements” but to instead inquire into your own intuitions and look for an appropriately aligned stance.
Good point, and you’re right that that’s the complex part. It’s very hard to state the criterion explicitly, but it’s the difference between “I feel like I should donate more of my money to charity because of this argument” vs “I should donate more of my money to charity, which I realized because of this argument”.
The deliberative process is like your heuristic in A*, and you definitely feel some strong push toward that option, but the S1 pathfinder hasn’t approved of the thing until (something happens), which I’m going to call “realizing it”. I think this meshes with our other uses of the phrase. Cf someone who “realizes there is nothing left for them here”, or “realizes that person is actually good”, or something—they aren’t going to have any sort of akrasia on acting from that new belief.
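The A* analogy can be made concrete. In the sketch below (my own illustration, not anything from the post), the heuristic is the deliberative argument: it pulls the search toward a promising option, but nothing enters the final path until the search itself has expanded the node and confirmed the route, which is the analogue of “realizing it”.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Find a shortest path from start to goal (unit edge costs).

    The heuristic only *ranks* candidates -- the felt pull toward an
    option. A node is only committed to the returned path once the
    search has actually expanded and verified it.
    """
    # Each entry: (estimated total cost, cost so far, node, path so far)
    open_heap = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while open_heap:
        _, cost, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for nxt in neighbors(node):
            new_cost = cost + 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                priority = new_cost + heuristic(nxt)
                heapq.heappush(open_heap, (priority, new_cost, nxt, path + [nxt]))
    return None  # no route survives scrutiny

# Toy usage: a 3x3 grid, Manhattan-distance heuristic toward (2, 2).
def grid_neighbors(p):
    x, y = p
    return [(x + dx, y + dy)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

path = a_star((0, 0), (2, 2), grid_neighbors,
              lambda p: abs(p[0] - 2) + abs(p[1] - 2))
```

The design point of the analogy: an admissible heuristic can be arbitrarily enthusiastic about a direction without ever forcing the search to take it; expansion (the S1 pathfinder) retains the final say.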