Say I run into someone at a party who angrily demands an apology for some perceived slight. It seems like you’re arguing against thinking:
1. This person is trying to get something (e.g., status or how careful others have to be around him) at my expense. This is unfair and I should fight it.
But I think I got that lesson long ago and what tends to pop up in my mind is instead a mixture of:
2. I should give in because the person is just running an evolved strategy.
3. Maybe I should fight this because giving in would reinforce the behavior even if subconsciously.
4. If everyone gave in to this kind of behavior, evolution would make it even more widespread. Maybe I should fight it for that reason.
Curious if I correctly got the point you’re trying to make, and if so do you have any further thoughts on the topic (such as which of 2-4 is most correct or if I should be thinking something else instead).
Is this yet another Moloch example, or is there something deeper?
What keeps the ants from just agreeing on boundaries and not warring? What keeps them from combining into one sufficiently-fed colony?
...they’re ants. That’s just not how ants work. For a myriad of reasons. The whole point of the post is that there isn’t necessarily local deliberative intent, just strategies filling ecological niches.
They’re not ants, they’re hybrid ant-human metaphors. Ants don’t talk and don’t wonder if the grasshopper is right. Ants don’t consider counterfactual cases of never having met the other colony. Metaphorical ants that _CAN_ do these things can also consider other strategies than war.
Expanding a bit on gallabytes’s comment: The language around Moloch often assumes a Nash equilibrium, i.e. a situation in which a rational agent implementing causal decision theory couldn’t do better. Sometimes the agents aren’t general intelligences, but are simpler evolutionarily fit processes responding to feedback.
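To make the Nash point concrete, here is a minimal sketch (the payoff numbers are invented for illustration; "war" and "negotiate" are just labels for the colonies' options). In a game with this prisoner's-dilemma shape, mutual war is the only outcome where neither side gains by unilaterally deviating, even though mutual negotiation pays both sides more:

```python
# Minimal sketch of the Nash claim above. The payoff numbers are invented;
# they are chosen so that (war, war) is the unique Nash equilibrium even
# though (negotiate, negotiate) pays both colonies more.
import itertools

MOVES = ("war", "negotiate")

# payoffs[(my_move, their_move)] = my payoff (the game is symmetric)
payoffs = {
    ("war", "war"): 1,
    ("war", "negotiate"): 4,        # exploiting a peaceful neighbour pays best
    ("negotiate", "war"): 0,        # being exploited pays worst
    ("negotiate", "negotiate"): 3,  # mutual peace beats mutual war
}

def is_nash(a, b):
    """(a, b) is Nash iff neither player gains by a unilateral deviation."""
    a_ok = all(payoffs[(a, b)] >= payoffs[(x, b)] for x in MOVES)
    b_ok = all(payoffs[(b, a)] >= payoffs[(y, a)] for y in MOVES)
    return a_ok and b_ok

for a, b in itertools.product(MOVES, repeat=2):
    print(f"({a}, {b}): {'Nash' if is_nash(a, b) else 'not Nash'}")
# Only (war, war) is Nash: a causal-decision-theory agent can't improve its
# own payoff by changing only its own move, despite the deadweight loss
# relative to mutual negotiation.
```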
I’m perhaps a bit of an outlier in that I see less difference among powerful optimization processes (evolution and humans being the only real examples I know of, and both extend to artificial versions) than most people do.
Over sufficient time periods, evolution is subject to the same constraints and tradeoffs as intelligent world-modeling agents are. Evolution is slow enough that it can bypass some of the identity problems that agents have, but it’s also only going to find viable equilibria.
Evolution doesn’t have lookahead—or modeling the problem at all—except via evolving things like brains.
Lookahead is unnecessary if you can actually perform the iterations. There is some subtlety around path-dependency and needing to survive the iterations in order to arrive at the equilibrium, but for simple cases like this one, it just doesn’t matter. The strategy will be found whether the intermediate states are imagined hypothetically by an intelligence, or just executed physically by a patient experimenter.
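Here's a toy version of that claim, reusing the invented payoffs from the sketch above: a discrete replicator update (pure differential reproduction, with no modeling and no lookahead) carries a population of almost-all negotiators to the same all-war equilibrium that the deviation check found by hypothetical reasoning.

```python
# Toy replicator dynamics: no agent models anything; strategies just
# reproduce in proportion to how well they score against the current mix.
# Payoffs are the same invented numbers as in the sketch above.
payoff = {("war", "war"): 1.0, ("war", "neg"): 4.0,
          ("neg", "war"): 0.0, ("neg", "neg"): 3.0}

p_war = 0.01  # start with almost everyone negotiating
for _ in range(200):
    # expected payoff (fitness) of each strategy against the current population
    f_war = p_war * payoff[("war", "war")] + (1 - p_war) * payoff[("war", "neg")]
    f_neg = p_war * payoff[("neg", "war")] + (1 - p_war) * payoff[("neg", "neg")]
    mean_fitness = p_war * f_war + (1 - p_war) * f_neg
    p_war = p_war * f_war / mean_fitness  # discrete replicator update

print(round(p_war, 4))  # ~1.0: iteration alone finds the all-war equilibrium
```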
What do you mean by “simple cases like this one?” Empirically, evolution quite often ends up in a Nash equilibrium of conflict where a negotiated solution would have less deadweight loss.
Simple cases like the ants or like toy problems where humans usually get the right answer (and some where we don’t). In cases where iterated reasoning can come up with a solution, evolution will be MUCH slower but will come up with answers as good as any modeled reasoning engine.
(note: I’m overstating this by quite a lot. The effects of path-dependency and search breadth for evolution and of modeling limitations and limited capacity for brains can make orders of magnitude difference in the solutions found. In simple theory, though, they’re roughly equivalent.)
The fact that evolution is adequate to produce ants doesn’t really have much bearing on anything here, unless there’s also reason to believe that lookahead can’t do better than ants, which is clearly absurd. Even if the moon were a rich source of calories (say, by having comparatively unimpeded access to sunlight), evolution just doesn’t know how to get there and can’t figure it out by iteration. Humans clearly can in principle: it’s hard for us, but it’s obviously within our reach as a species, and not by natural selection for flight.
Panspermia theories have volcanic activity and meteor strikes moving bacteria from world to world. It’s not clear that getting there is off limits to evolution (or one needs to do some tricky boundary-drawing between the organic and inorganic world to get that motivated-cognition result).
Some structures less complex than brains might be selected for lookahead-like benefits: the evolution of sex, for example, or having features encoded in multiple ways in DNA. Some DNA encodings might be selected for “evolvability”. Things like epigenetic switches, and control genes in general, can be seen as a modelling layer a bit more abstract than concrete features.
Nothing. Super-colonies in fact happen, although they don’t transfer that much food within them.
The thesis seems to be to caution against automatically blaming those who might’ve benefited from a disaster, as often enough things don’t happen in a goal-directed fashion. (“To fathom a strange plot, one technique is to look at what ended up happening, assume it was the intended result, and ask who benefited.”) Not sure this is a widespread enough heuristic to be worth reining in with unconditional advice.
Oh, I hadn’t thought of this aspect, thanks! I wonder how it changes if you realize that the grasshopper helped clear the path between the anthills, which made contact (and then war and then surplus food) more likely.
Your blog is not loading for me. Could you cross-post the post to Less Wrong? (I generally prefer this for link-posts in any case…)
Would be happy to deal with the editing for you (Benquo).
I ended up not including the full text because this felt a lot less lesswrongy than most of my stuff—but not so much that I didn’t think a link would be appropriate.
I… don’t think I would’ve guessed that “post not cross-posted but just linked” means “post is not very ‘lesswrong’”.
If that mapping is unclear (and it will be unclear if authors do not consistently use this convention—which, as far as I am aware, they do not), then the effect is merely to waste my time by making me click through to the linked page.
Perhaps you might note the “non-lesswrongy” nature of the linked post in the body text? This would make your intent clear, and would allow Less Wrong readers to make a properly informed decision about whether to click through and read the post.
Slaver ants would make the “warrior prowess” comment relevant. Even with slaver ants, it’s unlikely that all nests are enslaved all the time. With too many slavers, they need to fight each other or run out of fresh slaves. With adequate distance between slavers, some of the nests will be enslaved late or never.
It’s also very plausible that a colony could actually burn the extra calories and benefit from them, for example in the form of extra drones and queens.
If they actually would have been better off not noticing, then intentionally not noticing would be a dominant strategy. The other colony not being there is a different thing from it being there and going unnoticed.
Ants in fact make supercolonies. They (at least some) choose PvE strategies over PvP mechanics.
The existence of ants has exerted big evolutionary pressure on other species. There are a lot of species that produce sugar that ants collect, and ants act to defend these resource sources (which can be understood as a mutually beneficial, transaction-like mechanic). This kind of symbiosis needed to kick off from somewhere, even if there is a feedback loop keeping it stable. One plausible story is that another insect provided free food, and ants started to regard those insects as a resource rather than a piece of the background. Once they do this, they can favour more generous candymen over stingier ones.
A lot of eusocial insect colonies keep a reduced population over winter. The ones that collect food during good times might never consume it during harsh times. Because they have a different incentive structure, it’s unlikely a worker ant would save itself at the cost of the colony; “the colony was screwed, but I survived” is an unlikely ant thought. They have behaviours like the oldest members taking on the activities furthest from the nest (guess which individuals are nearest to mortal danger).