Seems like a rational prioritization to me if they were in an important moment of thought and didn’t want to disrupt it. (Noting of course that ‘walking on it’ was not intentional and was caused by forgetting it was there.)
This sounds like you’re saying that they made a rational prioritization and then, separately from that, forgot that it was there. But those two events are not separate: the forgetting-and-then-walking-on-it was a predictable consequence of the earlier decision to ignore it and instead focus on work. I think if you model the first decision as a decision to continue working and to also take on a significant risk of hurting your feet, it doesn’t seem so obviously rational anymore. (Of course it could be that the thought in question was just so important that it was worth the risk. But that seems unlikely to me.)
As the OP says, a “normal person might stop and remove all the glass splinters”. Most people, in thinking whether to continue working or whether to clean up the splinters, wouldn’t need to explicitly consider the possibility that they might forget about the splinters and step on them later. This would be incorporated into the decision-making process implicitly and automatically, by the presence of splinters making them feel uneasy until they were cleaned up. The fact that this didn’t happen suggests that the OP might also ignore other signals relevant to their well-being.
The fact that the OP seems to consider this event a virtue worth highlighting in the title of their post is also a sign that they are systematically undervaluing their own well-being, in a way that seems very worrying to me.
Also, I would feel pretty bad if someone wrote a comment like this after I posted something. (Maybe it would have been better as a PM.)
Probably most people would. But I think it’s also really important for there to be clear, public signals that the community wants people to take their well-being seriously and doesn’t endorse people hurting themselves “for the sake of the cause”.
The EA and rationalist communities are infamous for having lots of people burning themselves out through extreme self-sacrifice. If someone makes a post where they present the act of working until their feet start bleeding as a personal virtue, and there’s no public pushback to that, then that sends the implicit signal that the community endorses that reasoning. That will then contribute to unhealthy social norms that cause people to burn themselves out. The only way to counteract that is by public comments that make it clear that the community wants people to take care of themselves, even if that makes them (temporarily) less effective.
To the OP: please prioritize your well-being first. Self-preservation is one of the convergent instrumental drives; you can only continue to work if you are in good shape.
I am probably bad at valuing my well-being correctly. That said, I don’t think the initial comment made me feel bad (though maybe I am bad at noticing whether it did). Rather, it is only now, with this entire comment stream, that I realize I have again failed to communicate.
Yes, I think it was irrational not to clean up the glass. That is the point I want to make. I don’t think it is virtuous to have failed in this way at all. What I want to say is: “Look, I am running into failure modes because I want to work so much.”
Not running into these failure modes is important, but the failure modes where you are working too much are much easier to handle than the failure mode of “I can’t get myself to put in at least 50 hours of work per week consistently.”
While I do think it is true that I am probably very bad in general at optimizing for my own happiness, the thing is that while I was working so hard during AISC, I was very happy most of the time. The same goes for when I made these games. Most of the time I did these things because I deeply wanted to.
There were moments during AISC where I felt like I was close to burning out, but those were the minority. Mostly I was much happier than baseline. I think I usually don’t manage to work as hard and as long as I’d like, and that is a major source of unhappiness for me.
So the problem that Alex seems to see in me working very hard (that I am failing to take my happiness into account) is actually solved by me working very hard, which is quite funny.
Yes, I think it was irrational not to clean up the glass. That is the point I want to make. I don’t think it is virtuous to have failed in this way at all. What I want to say is: “Look, I am running into failure modes because I want to work so much.”
Ah! I completely missed that; it changes my interpretation significantly. Thank you for the clarification. Now I’m less worried for you, since it no longer sounds like you have a blind spot around it.
Not running into these failure modes is important, but the failure modes where you are working too much are much easier to handle than the failure mode of “I can’t get myself to put in at least 50 hours of work per week consistently.”
While I do think it is true that I am probably very bad in general at optimizing for my own happiness, the thing is that while I was working so hard during AISC, I was very happy most of the time. The same goes for when I made these games. Most of the time I did these things because I deeply wanted to.
It sounds right that these failure modes are easier to handle than the failure mode of not being able to do much work.
Though working too much can lead to the failure mode of “I can’t get myself to put in work consistently”. I’d be cautious, in that it’s possible to feel like you really enjoy your work… and then burn out anyway! I’ve heard several people report this happening to them. The way I model that is something like: there are some parts of the person that are obsessed with the work and become really happy about being able to completely focus on the obsession. But meanwhile, that single-minded focus can lead to the person’s other needs not being met, and eventually those unmet needs add up and cause a collapse.
I don’t know how much you need to be worried about that, but it’s at least good to be aware of.
This sounds like you’re saying that they made a rational prioritization and then, separately from that, forgot that it was there
That implication wasn’t intended. I agree that (for basic reasons) the probability of a small cut was higher given their choice.
Rather, the action itself seems rational to me when considering:
That outcome seems improbable (at least if they were sitting down), though it is what actually happened in this particular timeline.
The harm from a cut on the foot is really low (with, I’d guess, >99.5% probability for an otherwise healthy person; on reflection, maybe not cumulatively low enough for the also-small payoff?), and if so it is ~certain not to significantly curtail progress.
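A rough back-of-the-envelope sketch of that comparison, with every number made up purely for illustration (none of them come from the thread itself):

```python
# Back-of-the-envelope comparison of "keep working" vs. "stop and clean up".
# Every number below is an illustrative assumption, not a claim about the actual situation.

p_step_on_glass = 0.02      # assumed chance of eventually stepping on the splinters
cost_of_small_cut = 30.0    # assumed badness of a small cut (arbitrary disutility units)
cost_of_interruption = 5.0  # assumed cost of breaking the train of thought to clean up now

expected_cost_keep_working = p_step_on_glass * cost_of_small_cut  # 0.6
expected_cost_clean_up_now = cost_of_interruption                 # 5.0

print(f"keep working: {expected_cost_keep_working}")
print(f"clean up now: {expected_cost_clean_up_now}")
```

Under numbers like these, continuing to work comes out ahead; with a higher chance of stepping on the glass or a worse cut, cleaning up wins. Much of the disagreement above is really about which numbers are realistic.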
That doesn’t necessarily imply the policy which produced the action is rational, though. But when considering the two hypotheses, (1) the OP is mentally unwell, and (2) they have some them-specific reason[1] for following a policy which outputs actions like this, I considered (2) to be a lot more probable.
Meta: This comment is (genuinely) very hard/overwhelming-feeling for me to try to reply to, for a few reasons specific to my mind, mainly about {unmarked assumptions} and {parts seeming to be for rhetorical effect}. (For that reason I’ll let others discuss this instead of saying much further)
I think it’s also really important for there to be clear, public signals that the community wants people to take their well-being seriously
I agree with this, but I think any ‘community norm reinforcing messages’ should be clearly about norms rather than framed about an individual, in cases like this where there’s just a weak datapoint about the individual.
A simple example would be “Having introspected and tested different policies before determining that they’re not at risk of burnout from the policy which gives this action.”
A more complex example would be “a particular action can be irrational in isolation but downstream of a (suboptimal but human-attainable) policy which produces irrational behavior less than is typical”, which (now) seems to me to be what the OP was trying to show with this example, given their comment.