Does this come from a general idea that “optimizing hard” means a higher risk of damage caused by errors in detail, while “optimizing soft” has enough slack to avoid those risks, but is also less ambitious and likely less effective (if both are actually implemented well)?
a general idea that “optimizing hard” means a higher risk of damage caused by errors in detail
Agreed.
“optimizing soft” has enough slack to avoid those risks, but is also less ambitious and likely less effective
I disagree with the idea that “optimizing soft” is less ambitious. “Optimizing soft”, in my head, is about as ambitious as “optimizing hard”, except that it makes the epistemic uncertainty more explicit. In the model of caring I am trying to make more legible, I believe that Carlsmith-style caring may be more robust to certain epistemological errors humans can make, errors that can result in severely sub-optimal scenarios, because it is constrained by human cognition and capabilities.
Note: I notice that this can also be said for Soares-style caring; both are constrained by human cognition and capabilities, but in different ways. Perhaps the two have different failure modes, and each is more effective under certain distributions of situations (which may diverge)?
Backing up a step, because I’m pretty sure we have different levels of knowledge and assumptions (mostly my failing) about the differences between “hard” and “soft” optimizing.
I should acknowledge that I’m not particularly invested in EA as a community or identity. I try to be effective, and do some good, but I’m exploring rather than advocating here.
Also, I don’t tend to frame things as “how to care”, so much as “how to model the effects of actions, and how to use those models to choose how to act”. I suspect that’s isomorphic to how you’re using “how to care”, but I’m not sure of that.
All that said, I think of “optimizing hard” as truly taking seriously the “shut up and multiply” results, even where it’s epistemically uncomfortable, BECAUSE that’s the only way to actually do the MOST POSSIBLE good. Actually OPTIMIZING, you know? “Soft” is almost by definition less ambitious, BECAUSE it’s epistemically more conservative, and gives up average expected value in order to increase modal goodness in the face of that uncertainty. I don’t actually know whether those are the positions those people actually take. I’d love to hear different definitions of “hard” and “soft”, so I can better understand why you see the two as comparable in impact.
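To make that last trade-off concrete, here is a toy numeric sketch (my own illustration; the strategies, world-models, and payoffs are entirely made up, not drawn from Carlsmith or Soares): the “hard” option has the higher expected value averaged across candidate world-models, while the “soft” option gives up some of that average in exchange for a much better outcome if the best-guess model turns out to be wrong.

```python
# Toy illustration only: names and numbers are invented to show how one
# strategy can win on average expected value while losing on the modal /
# robust outcome, which is the trade-off described above.

# Assumed probability that each candidate world-model is the correct one.
model_probs = {"best_guess_model": 0.6, "alternative_model": 0.4}

# Assumed payoff (arbitrary units of good) of each strategy under each model.
payoffs = {
    "optimize_hard": {"best_guess_model": 100.0, "alternative_model": -50.0},
    "optimize_soft": {"best_guess_model": 40.0, "alternative_model": 30.0},
}

def expected_value(strategy: str) -> float:
    """Payoff averaged over world-models, weighted by their probabilities."""
    return sum(model_probs[m] * payoffs[strategy][m] for m in model_probs)

def worst_case(strategy: str) -> float:
    """Payoff if the least favourable world-model turns out to be true."""
    return min(payoffs[strategy].values())

for strategy in payoffs:
    print(f"{strategy}: EV = {expected_value(strategy):.1f}, "
          f"worst case = {worst_case(strategy):.1f}")

# Output:
# optimize_hard: EV = 40.0, worst case = -50.0
# optimize_soft: EV = 36.0, worst case = 30.0
```

On these made-up numbers, “hard” comes out ahead on average but does badly if the best-guess model is wrong, which is one way of cashing out “gives up average expected value in order to increase modal goodness”.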