Very strongly agree with the part of this post outlining the problem; your definition of “addiction” captures how most people I know spend their time (including myself). But I think you’re missing an important piece of the picture. One path (and the path most likely to succeed in my experience) out of these traps is to shimmy towards addictive avoidance behaviors which optimize you out of the hole in a roundabout way. E.g. addictively work out to avoid dealing with relationship issues ⇒ accidentally improve energy levels, confidence, and mood, creating slack to solve relationship issues. E.g. obsessively work on proving theorems to procrastinate on grant applications ⇒ accidentally solve famous problem that renders grant applications trivial.
And included in this fact is that, best as I can tell, anyone who really groks what I’m talking about will want to prioritize peeling off adaptive entropy over any specific outcome. That using addiction or any other entropy-inducing structure to achieve a goal is the opposite of what they truly want.
This paragraph raised my alarm bells. There’s a common and “pyramid-schemey” move on LW to say that my particular consideration is upstream and dominant over all other considerations: “AGI ruin is the only bit that matters, drop everything else,” “If you can write Haskell, earning-to-give overwhelms your ability to do good in any other way, forget altruism,” “Persuading other people of important things overwhelms your own ability to do anything, drop your career and learn rhetoric,” and so on ad nauseam.
To be fair, I agree to a limited extent with all of the statements above, but over the years I’ve acquired so many skills and perspectives (many from yourself, Val) that are synergistic and force-multiplying that I’m suspicious any time anyone presents an argument “you must prioritize this particular force-multiplier to the exclusion of all else.”
I think you’re missing an important piece of the picture. One path (and the path most likely to succeed in my experience) out of these traps is to shimmy towards addictive avoidance behaviors which optimize you out of the hole in a roundabout way. E.g. addictively work out to avoid dealing with relationship issues ⇒ accidentally improve energy levels, confidence, and mood, creating slack to solve relationship issues. E.g. obsessively work on proving theorems to procrastinate on grant applications ⇒ accidentally solve famous problem that renders grant applications trivial.
Mmm. I like this point. I’m not yet sure how this fits in.
It seems important to notice that we don’t have control over when these “shimmying” strategies work, or how. I don’t know the implication of that yet. But it seems awfully important.
A related move is applying force to sort of push the adaptive entropy out of a certain subsystem so that that subsystem can untangle some of the entropy. Some kinds of meditation are like this: intentionally clearing the mind and settling the body so that there’s a pocket of calmness in defiance of everything relying on non-calmness, precisely because that creates clarity from which you can meaningfully change things and net decrease adaptive entropy.
All of these still have a kind of implicit focus on decreasing net entropy. It’s more like different engineering strategies once the parameters are known.
But I’ll want to think about that for longer. Thank you for the point.
This paragraph raised my alarm bells.
Yeah… for some reason, on this particular point, it always does, no matter how I present it. Then people go on to say things that seem related but importantly aren’t. It’s a detail of how this whole dimension works that I’ve never seen how to communicate without it somehow coming across like an attempt to hijack people. Maybe, secretly even to me, some part of me is trying. But FWIW, hijacking is quite explicitly the opposite of what I want. Alas, spelling that out doesn’t help and sometimes just causes people to say they flat-out don’t believe me. So… here we are.
There’s a common and “pyramid-schemey” move on LW to say that my particular consideration is upstream and dominant over all other considerations
Yep. I know what you’re talking about.
This whole way of approaching things is entropy-inducing. It’s part of why I wrote the post that inspired the exchange that inspired the OP here.
I’m not trying to say that accounting for adaptive entropy matters more than anything else.
I am saying, though, that any attempt to decrease net problem-ness in defiance of, instead of in orientation to, adaptive entropy will on net be somewhere between unhelpful and anti-helpful.
It doesn’t make any sense to orient to adaptive entropy instead of everything else. That doesn’t mean anything. That’s taking the reification too literally. Adaptive entropy has a structure to it that has to be unwoven in contact with attempts to change things.
Like, the main way I see to unweave the global entropy around AI safety is by orienting to AI safety and noticing what the relevant forces are within and around you. This normally leads to noticing a chain of layered entropy until you find some layer you can kind of dissolve in a way similar to the breathing example in the OP. It might literally be about breath, or it might be about how you interact with other people, or it might show up as a sense of inadequacy in your psyche, or it might appear as some key AI company CEO struggling with their marriage in a way you can see how to maybe help sort out.
It doesn’t make any sense to talk about letting this go instead of orienting to the world.
The thing I’m pointing out is the hazard of the “instead of” going the other way: trying to make certain outcomes happen instead of orienting to adaptive entropy. The problem isn’t the “trying to make certain outcomes happen”. It’s the “instead of”.
(It just so happens that because of what adaptive entropy is, the “trying to make certain outcomes happen” is basically guaranteed to change when you incorporate awareness of how entropy works.)
I’m suspicious any time anyone presents an argument “you must prioritize this particular force-multiplier to the exclusion of all else.”
Agreed. I think that’s a healthy suspicion.
Hopefully the above clarifies how this isn’t what I’m trying to present.
It seems important to notice that we don’t have control over when these “shimmying” strategies work, or how. I don’t know the implication of that yet. But it seems awfully important.
A related move is applying force to sort of push the adaptive entropy out of a certain subsystem so that that subsystem can untangle some of the entropy. Some kinds of meditation are like this: intentionally clearing the mind and settling the body so that there’s a pocket of calmness in defiance of everything relying on non-calmness, precisely because that creates clarity from which you can meaningfully change things and net decrease adaptive entropy.
Two further comments:
(a) The main distinction I wanted to get across is that while many behaviors fall under the “addiction from” umbrella, there is a whole spectrum of how productive they are, both on their own terms and with respect to the original root cause.
(b) I think, but am not sure, that I understand what you mean by [let go of the outcome], and my interpretation differs from how the words are received by default. At least for me, I cannot actually let go of the outcome psychologically, but what I can do is [expect direct efforts to fail miserably and indirect efforts to be surprisingly fruitful].
Yeah… for some reason, on this particular point, it always does, no matter how I present it. Then people go on to say things that seem related but importantly aren’t. It’s a detail of how this whole dimension works that I’ve never seen how to communicate without it somehow coming across like an attempt to hijack people. Maybe, secretly even to me, some part of me is trying. But FWIW, hijacking is quite explicitly the opposite of what I want. Alas, spelling that out doesn’t help and sometimes just causes people to say they flat-out don’t believe me. So… here we are.
Sure, seems like the issue is not a substantive disagreement, but some combination of a rhetorical tic of yours and the topic itself being hard to talk about.
The main distinction I wanted to get across is that while many behaviors fall under the “addiction from” umbrella, there is a whole spectrum of how productive they are, both on their own terms and with respect to the original root cause.
Yep. I’m receiving that. Thank you. That update is still propagating and will do so for a while.
I think, but am not sure, that I understand what you mean by [let go of the outcome], and my interpretation differs from how the words are received by default. At least for me, I cannot actually let go of the outcome psychologically, but what I can do is [expect direct efforts to fail miserably and indirect efforts to be surprisingly fruitful].
Ah, interesting.
I can’t reliably let go of any given outcome, but there are some places where I can tell I’m “gripping” an outcome and can loosen my “grip”.
(…and then notice what was using that gripping, and do a kind of inner dialogue so as to learn what it’s caring for, and then pass its trust tests, and then the gripping on that particular outcome fully leaves without my adding “trying to let go” to the entropic stack.)
Aiming for indirect efforts still feels a bit to me like “That outcome over there is the important one, but I don’t know how to get there, so I’m gonna try indirect stuff.” It’s still gripping the outcome a little when I imagine doing it.
It sounds like here there’s a combo of (a) inferential gap and (b) something about these indirect strategies I haven’t integrated into my explicit model.
Sure, seems like the issue is not a substantive disagreement, but some combination of a rhetorical tic of yours and the topic itself being hard to talk about.
Yep.