The problem, by which I mean the reason I would rather the scene had less of this mythic stuff, is that I subscribe to absolutely the meanest, smallest type of cynicism: things people love are dangerous.
Take political arguments. People love to have political arguments. If one considers the community in the abstract, then political arguments are great for the community—look at how much more discussion there is over on SSC these days!
I am, of course, assuming in this example that political arguments in internet comments are of little use. But I think there is a straightforward cause: political arguments can be of little use because people love them. If people didn’t love them, they would only have them when necessary.
People love myths. Or at least most people do, some of the time. That’s why the myths you hear about aren’t selected for usefulness.
This seems right to me, as far as it goes. But for the same reason they’re dangerous, they’re powerful. Why should the forces of evil and ignorance be the only ones who get to have powerful weapons?
I would feel pretty comfortable betting that Meditations on Moloch is one of the top 5 most effective posts produced by the LW-sphere, in terms of leading to people pursuing good in the world. That’s a direct result of it choosing to harness myth in a way selected for usefulness.
Meditations on Moloch certainly wasn’t promoting evil, but I think it was (inadvertently) promoting ignorance. For example, it paints the fish farming story as an argument against libertarianism, but economists see the exact same story as an argument for privatization of fisheries, and it works in reality exactly as economists say!
The whole essay suffers from that problem. It leaves readers unaware that there’s a whole profession dedicated to “fighting Moloch” and they have a surprisingly good framework: incentives, public goods, common resources, free rider problem, externalities, Pigovian taxes, Coasian bargains… Unfortunately, dry theory is hard to learn, so people skip learning it if they can more easily get an illusion of understanding—like many readers of the Moloch essay I’ve encountered.
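To make that framework concrete, here is a minimal numeric sketch of the textbook open-access fishery, written against the ideas above rather than anything in the essay itself; the harvest function and every parameter value are invented for illustration.

```python
# A minimal sketch (illustrative assumptions throughout) of the common-resource
# framework: a shared fishery, the free-rider problem under open access, and a
# Pigovian tax on effort that restores the social optimum.

a, b = 10.0, 1.0   # harvest value H(E) = a*E - b*E**2 for total effort E
c = 2.0            # private cost per unit of fishing effort

def surplus(E):
    """Total social surplus: harvest value minus total effort cost."""
    return a * E - b * E**2 - c * E

# Open access: fishers enter until the *average* return per unit of effort
# (a - b*E) is bid down to the cost c. Each entrant ignores the catch their
# effort takes away from everyone else, so the resource rent is dissipated.
E_open = (a - c) / b

# Social optimum: effort where the *marginal* return (a - 2*b*E) equals c.
E_opt = (a - c) / (2 * b)

# Pigovian tax per unit of effort equal to the marginal externality at the
# optimum (b * E_opt); with it, free entry stops exactly at E_opt.
tax = b * E_opt

print(f"open access : E = {E_open:.1f}, surplus = {surplus(E_open):.1f}")  # E=8, surplus=0
print(f"optimum     : E = {E_opt:.1f}, surplus = {surplus(E_opt):.1f}")    # E=4, surplus=16
print(f"Pigovian tax: t = {tax:.1f} per unit of effort")
```

Privatization works through the same logic: a single owner internalizes the externality and picks E_opt directly, which is why economists read the fish-farming story the way they do.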
That’s the general problem Charlie is pointing to. If you want to give your argument some extra oomph beyond what the evidence supports, why do you want that? You could be slightly wrong, or (if you’re less lucky than Scott) a lot wrong, and make many other people wrong too. Better spend that extra time making your evidence-based argument better.
Even shorter: I don’t want powerful weapons to argue for truth. I want asymmetric weapons that only the truth can use. Myth isn’t such a weapon, so I’ll leave it in the cave where it was found.
I’m bad and I feel bad about making this kind of argument:
I don’t want powerful weapons to argue for truth. I want asymmetric weapons that only the truth can use. Myth isn’t such a weapon, so I’ll leave it in the cave where it was found.
Register the irony of framing your refusal to use the power of mythical language in a metaphor about a wise and humble hero leaving Excalibur in the cave where it was found.
The issue is that we are all being pulled by Omega’s web into roles, and the choice is not whether or not to partake in some role, but whether or not to use the role we inhabit to our advantage. You don’t get to choose not to play the game, but you do get to pick your position.
Nice! I agree I should’ve left out that last bit :-)
If you want to give your argument some extra oomph beyond what the evidence suggests, why do you want that? You could be wrong, and make many people wrong. Better spend that extra time making your evidence-based argument better.
Even shorter: I don’t want powerful weapons to argue for truth. I want asymmetric weapons that only the truth can use. Myth isn’t such a weapon, so I’ll leave it in the cave where it was found.
I deeply respect that, and your choice.
I think I want the same end result you do: I want truth and clarity to reign. This has led me to intentionally use mythic mode because I see the influence of things like it all over the place, and I want to be able to notice and track that, and get practice extracting the parts that are epistemically good. And I need to have a cultivated skill with countering uses of mythic language that turn out to have deceived (or were intentionally used to deceive).
But I think it’s totally a defensible position to say “Nope, this is too fraught and too symmetric, I ain’t touchin’ that” and walk away.
That’s the general problem Charlie is pointing to. If you want to give your argument some extra oomph beyond what the evidence supports, why do you want that? You could be slightly wrong, or (if you’re less lucky than Scott) a lot wrong, and make many other people wrong too. Better spend that extra time making your evidence-based argument better.
My goal is almost always behavior change. I can write all sorts of strong evidence-based arguments but I despair of those arguments actually affecting the behavior of anyone except the rationalists who are best at taking ideas seriously.
Said another way, in addition to writing down arguments there’s the task of debugging emotional blocks preventing people from taking the argument seriously enough for it to change their behavior. I think there’s a role for writing that tries to do both of these things (and that e.g. Eliezer did this a lot in the Sequences and it was good that he did this, and that HPMoR also does this and that was good too, and Meditations on Moloch, etc.).
Meditations on Moloch is not an argument. It’s a type error to analyze it as if it were.
Meditations on Moloch was creative and effective but ultimately “just” a restatement of well-known game theory. This post is a lot more speculative and anecdotal.
Hmm, I don’t really see it that way? This post is trying to describe the category of which Meditations on Moloch is an instance. If Meditations on Moloch is good, surely trying to understand the thing that it’s an instance of could also be good.
I have just recently read Meditations on Moloch and I agree it is a fascinating post, but it also entirely misses the point. Competition does not make you sacrifice your values; competition is how those values came into existence in the first place. There was an analogy with rats who came to live on an island and used their spare time to do art, but stopped when resources were depleted. That’s not how the story goes. When the rats first came to the island they did not care about art or any such nonsense; all they did was eat and fuck all day, and everyone was happy. But one day there was no more food to keep doing just that. Only then did some rats start to be creative. Turns out if you paint your picture with bigger muscles than you actually have, and you put it on rat-Tinder, you get to mate more than if you just posted your real picture. That’s how art came to exist on rat island.
I have just recently read Meditations on Moloch and I agree it is a fascinating post, but it also entirely misses the point. Competition does not make you sacrifice your values[…]
Scott wasn’t suggesting that competition alone makes people sacrifice their values. He was suggesting (as I understand it) that the following configuration tends to suck for everyone pretty systematically:
You have a bunch of agents who are in competition for some resource.
Each agent is given an opportunity to sacrifice something important to them in order to gain competitive advantage over the other agents.
The agents can’t coordinate about who will or won’t take advantage of this opportunity.
The net effect is generally that agents who accept this trade tend to win out over those who don’t. This incentivizes each agent to make the trade so that they can at least stay in competition.
In particular, this means that even if there’s common knowledge of this whole setup, and there’s common knowledge that it sucks, it’s still the case that no one can do anything about it.
That, personified, is Moloch.
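To see the trap in miniature, here is a toy payoff model of that configuration; the proportional-split rule and every number are my invention for illustration, not anything from Scott’s essay.

```python
# A toy model of the multi-polar trap described above (all numbers invented):
# N agents split a fixed resource in proportion to competitive "weight".
# Sacrificing a value doubles your weight but forfeits its intrinsic payoff V.

N, R = 10, 100.0          # number of agents, total resource
W_KEEP, W_SAC = 1.0, 2.0  # competitive weight with / without the value
V = 4.0                   # intrinsic payoff of keeping the value

def payoff(my_choice, n_others_sacrificing):
    """One agent's payoff given their choice and how many others sacrifice."""
    others = n_others_sacrificing * W_SAC + (N - 1 - n_others_sacrificing) * W_KEEP
    mine = W_SAC if my_choice == "sacrifice" else W_KEEP
    bonus = V if my_choice == "keep" else 0.0
    return R * mine / (mine + others) + bonus

# Sacrificing beats keeping no matter how many of the others sacrifice,
# so with these numbers it is a dominant strategy...
assert all(payoff("sacrifice", k) > payoff("keep", k) for k in range(N))

# ...yet universal sacrifice leaves everyone worse off than universal keeping.
# The all-keep outcome is better for every single agent, but it isn't stable.
print("everyone keeps     :", payoff("keep", 0))            # 14.0
print("everyone sacrifices:", payoff("sacrifice", N - 1))   # 10.0
```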
Each agent is given an opportunity to sacrifice something important to them in order to gain competitive advantage over the other agents.
Yes, and what I am asking is: why are those things important to them in the first place? Probably because having those things be important gave those agents a competitive advantage. Love your children? That’s Moloch wanting you to replicate your stomach so you can eat more baby elephants than you alone could. You only sacrifice the things that Moloch himself has given you.
The way I would put it is that agents evolve to make use of the regularities in the environment. If exploiting those regularities leads to increased success, then competition creates complexity that allows for those regularities to be taken advantage of. Whereas complexity which is no longer useful, either because the regularities no longer exist in the new environment or because there are more powerful regularities to exploit instead, will eventually be eaten away by competition.
Thus it’s true that competition gave us those things originally. But on the other hand, if you’re looking from the perspective of what we have now and want to preserve it, then it’s also fair to say that competition is a threat to it.
We might want to preserve those things, but can we? By definition we will be outcompeted by those who do not.
And that problem is exactly what Scott refers to as Moloch.
Let me put it this way—if this is a problem, you would probably want to solve it? Generally, if you want to solve a problem, you would prefer it not to have existed in the first place? If yes, then you would also not have any of the values you want to save. Considering this, does Moloch still qualify as a problem?
This is incorrect and I think only sounds like an argument because of the language you’re choosing; there’s nothing incoherent about 1. preferring evolutionary pressures that look like Moloch to exist so that you end up existing rather than not existing, and 2. wanting to solve Moloch-like problems now that you exist.
Also, there’s nothing incoherent about wanting to solve Moloch-like problems now that you exist regardless of Moloch-like things causing you to come into existence. Our values are not evolution’s values, if that even makes sense.
So, to summarise this whole argument again: Moloch is a problem that made you exist and is impossible to solve by definition. So what are you going to do about it? (I suggest trying to answer this to yourself first, and only then to me.)
Yes.
So… we should respond by removing the things people love?
I suspect I just disagree with your claim. But even if you were right, I don’t think the right answer is to ban beloved things. I think it’s to learn how to have beloved things and still be sane.
By my own personal judgment, rationalist culture developed a lot of epistemic viciousness by gripping hard onto the chant “Politics is the mind-killer!” and thereby banning all development of the Art in that domain. The Trump election in 2016 displayed that communal weakness in force, with rationalists getting sucked into the same internal signaling games as all the other primates, and then being shocked when he won.
I mean, think about that. A whole community that grew out of an attempt to practice an art of clear thinking that supposedly tries to pay rent largely made the same wrong prediction. Yes, I know there are exceptions. I live with one of them. But that just says that some people in that community managed not to get swept up.
This doesn’t bode well for a Calvinist approach to epistemic integrity.
(…and that method is a lot less fun!)
This is a tangent, but I feel like this comment is making the mistake of collapsing predictions into a “predicted Trump”/“predicted Clinton” binary. I predicted about a 20% chance of Trump (my strategy was to agree with Nate Silver; Nate Silver is always right when speaking ex cathedra), and I do not consider myself to have made an error. Things with a 20% chance of happening happen one time out of five. Trump lost the popular vote after an October surprise; that definitely looks like the sort of outcome you get in a world where he was genuinely less likely than Clinton to win.
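As a side note on the scoring here, a quick sketch (my illustration, with an assumed repeated-trials setup) of why a 20% forecast isn’t refuted by the event happening: under a proper scoring rule like the Brier score, the honest 20% forecaster beats one who rounds “unlikely” down to zero.

```python
import random

# Simulate events that genuinely happen one time in five, and score two
# forecasters with the Brier score (squared error; lower is better).
random.seed(0)
trials, p_true = 100_000, 0.2

def brier(forecast, outcome):
    return (forecast - outcome) ** 2

honest = overconfident = 0.0
for _ in range(trials):
    outcome = 1 if random.random() < p_true else 0
    honest += brier(0.2, outcome)         # states the real 20%
    overconfident += brier(0.0, outcome)  # rounds "unlikely" down to 0%

print(f"honest 20% forecaster: {honest / trials:.3f}")        # ~0.160
print(f"overconfident 0%     : {overconfident / trials:.3f}") # ~0.200
```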
So… we should respond by removing the things people love?
I think about it like a memetic ecosystem. Ideas can spread because they’re visibly helping someone else, or because they’re catchy, or because they tap into primal instincts, or because there’s an abstract argument for them, or combinations of such things. Ideas in an ecosystem have properties at different levels: they have appeal that helps them spread, they have effects on people’s actions, and they can also be understood as having effects on the ecosystem. The idea of the scientific method, for example, has some philosophical appeal, it changes people’s actions to involve more testing, and it also changes what thoughts those people think and spread.
In this framing, my claimed problem with the mythic mode is that it pushes people, and to an extent the entire ecosystem, more towards spreading ideas based on how they tap into primal instincts and emotions, at the expense of appeal based on certain sorts of abstract argument about value.
So to be more precise, information that has a lot of appeal not based on its value is dangerous, because I think we need this memetic ecosystem to appeal mostly based on value and knowledge. Hence my example was the dangers of political discussion, not the dangers of chocolate (though my belly might argue for certain dangers of that too). Even if the mythic mode is valuable, or if certain political discussions are valuable, we need to balance this local value with the effect it’s going to have on the global value generated. A ban is one sort of meme that shapes the memetic ecosystem—but it’s not the only way.
Trump
I live in central Illinois and do my interaction with rationalists via the internet these days, deliberately ignoring 99.9% of people talking about politics, so I’m guessing you experienced something pretty different out in Berkeley. Given this, I think I just don’t have the context to interpret your argument. Arguing that we should systematically outperform Nate Silver seems wrong, but I suspect that’s not what you’re arguing.