This seems right to me, as far as it goes. But for the same reason they’re dangerous, they’re powerful. Why should the forces of evil and ignorance be the only ones who get to have powerful weapons?
I would feel pretty comfortable betting that Meditations on Moloch is one of the top 5 most effective posts produced by the LW-sphere, in terms of leading to people pursuing good in the world. That’s a direct result of it choosing to harness myth in a way selected for usefulness.
Meditations on Moloch certainly wasn’t promoting evil, but I think it was (inadvertently) promoting ignorance. For example, it paints the fish farming story as an argument against libertarianism, but economists see the exact same story as an argument for privatization of fisheries, and it works in reality exactly as economists say!
The whole essay suffers from that problem. It leaves readers unaware that there’s a whole profession dedicated to “fighting Moloch” and they have a surprisingly good framework: incentives, public goods, common resources, free rider problem, externalities, Pigovian taxes, Coasian bargains… Unfortunately, dry theory is hard to learn, so people skip learning it if they can more easily get an illusion of understanding—like many readers of the Moloch essay I’ve encountered.
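To make that toolkit concrete, here is a minimal sketch of the fish-farming externality in Python (all numbers are illustrative choices of mine, not Scott's): not filtering saves a farm money while dumping a small cost on everyone, so polluting is individually rational; a Pigovian tax sized to the total damage a polluter causes flips that incentive.

```python
# Toy model of the fish-farming externality. Numbers are illustrative.
# Each of N farms chooses whether to filter its waste; a polluting farm
# imposes DAMAGE on every farm in the lake, itself included.

N = 100            # farms on the lake
FILTER_COST = 30   # private cost of running a filter
DAMAGE = 1         # cost one polluting farm imposes on each farm

def farm_payoff(filters: bool, other_polluters: int, tax: float = 0.0) -> float:
    """One farm's payoff given its own choice and everyone else's."""
    polluters = other_polluters + (0 if filters else 1)
    payoff = -DAMAGE * polluters                 # pollution hits everyone
    payoff -= FILTER_COST if filters else tax    # filter bill or pollution tax
    return payoff

for tax in (0.0, DAMAGE * N):  # no tax vs. tax = total damage one polluter causes
    gain = farm_payoff(False, 0, tax) - farm_payoff(True, 0, tax)
    print(f"tax={tax:5.0f}: polluting instead of filtering pays {gain:+.0f}")
```

Without the tax, polluting is every farm's dominant move even though all-pollute (a payoff of -100 each here) is far worse for everyone than all-filter (-30 each); with the tax, filtering dominates. That is the "fighting Moloch" framework in one knob.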
That’s the general problem Charlie is pointing to. If you want to give your argument some extra oomph beyond what the evidence supports, why do you want that? You could be slightly wrong, or (if you’re less lucky than Scott) a lot wrong, and make many other people wrong too. Better spend that extra time making your evidence-based argument better.
Even shorter: I don’t want powerful weapons to argue for truth. I want asymmetric weapons that only the truth can use. Myth isn’t such a weapon, so I’ll leave it in the cave where it was found.
I’m bad and I feel bad about making this kind of argument:
Register the irony of framing your refusal to use the power of mythical language in a metaphor about a wise and humble hero leaving Excalibur in the cave where it was found.
The issue is that we are all being pulled by Omega’s web into roles, and the choice is not whether or not to partake in some role, but whether or not to use the role we inhabit to our advantage. You don’t get to choose not to play the game, but you do get to pick your position.
Nice! I agree I should’ve left out that last bit :-)
I deeply respect that, and your choice.
I think I want the same end result you do: I want truth and clarity to reign. This has led me to intentionally use mythic mode, because I see the influence of things like it all over the place, and I want to be able to notice and track that, and get practice extracting the parts that are epistemically good. And I need a cultivated skill at countering uses of mythic language that turn out to have deceived (or were used intentionally to deceive).
But I think it’s totally a defensible position to say “Nope, this is too fraught and too symmetric, I ain’t touchin’ that” and walk away.
My goal is almost always behavior change. I can write all sorts of strong evidence-based arguments but I despair of those arguments actually affecting the behavior of anyone except the rationalists who are best at taking ideas seriously.
Said another way, in addition to writing down arguments there’s the task of debugging emotional blocks preventing people from taking the argument seriously enough for it to change their behavior. I think there’s a role for writing that tries to do both of these things (and that e.g. Eliezer did this a lot in the Sequences and it was good that he did this, and that HPMoR also does this and that was good too, and Meditations on Moloch, etc.).
Meditations on Moloch is not an argument. It’s a type error to analyze it as if it were.
Meditations on Moloch was creative and effective but ultimately “just” a restatement of well-known game theory. This post is a lot more speculative and anecdotal.
Hmm, I don’t really see it that way? This post is trying to describe the category of which Meditations on Moloch is an instance. If Meditations on Moloch is good, surely trying to understand the thing that it’s an instance of could also be good.
I have just recently read Meditations on Moloch and I agree it is a fascinating post, but it also entirely misses the point. Competition does not make you sacrifice your values; that’s how those values came into existence in the first place. There was an analogy with rats who came to live on an island and used their spare time to make art, but stopped when resources were depleted. That’s not how the story goes. When rats first came to the island, they did not care about art or any such nonsense; all they did was eat and fuck all day, and everyone was happy. But one day there was no more food to keep doing just that. Only then did some rats start to be creative. It turns out that if you paint your picture with bigger muscles than you actually have and put it on rat-Tinder, you get to mate more than if you had just posted your real picture. That’s how art came to exist on rat island.
Scott wasn’t suggesting that competition alone makes people sacrifice their values. He was suggesting (as I understand it) that the following configuration tends to suck for everyone pretty systematically:
You have a bunch of agents who are in competition for some resource.
Each agent is given an opportunity to sacrifice something important to them in order to gain competitive advantage over the other agents.
The agents can’t coordinate about who will or won’t take advantage of this opportunity.
The net effect is generally that agents who accept this trade tend to win out over those who don’t. This incentivizes each agent to make the trade so that they can at least stay in competition.
In particular, this means that even if there’s common knowledge of this whole setup, and there’s common knowledge that it sucks, it’s still the case that no one can do anything about it.
That, personified, is Moloch.
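For what it's worth, the setup above can be written down as a toy game (the parameters V and E are my own illustrative choices, not Scott's): each agent either keeps a value worth V to itself or sacrifices it for a purely relative edge E over each rival who doesn't.

```python
# Toy version of the trap above: each agent keeps a value worth V or
# sacrifices it for a relative edge E over each rival who doesn't.
# The edge is zero-sum between agents; the sacrifice is a pure loss.

V = 10   # what the sacrificed value is worth to its owner
E = 3    # advantage gained per rival you out-sacrifice

def payoff(sacrifice: bool, rivals_sacrificing: int, n_rivals: int) -> int:
    kept = 0 if sacrifice else V
    edge_over = E * (n_rivals - rivals_sacrificing) if sacrifice else 0
    edge_against = 0 if sacrifice else E * rivals_sacrificing
    return kept + edge_over - edge_against

n = 9
for k in (0, n):  # nobody else sacrifices vs. everybody else does
    print(f"{k}/{n} rivals sacrificing -> "
          f"hold: {payoff(False, k, n):+}, sacrifice: {payoff(True, k, n):+}")
```

Sacrificing is strictly dominant no matter what the others do (it always pays +17 here), yet everyone sacrificing nets 0 each while everyone holding nets +10 each; common knowledge of the payoffs changes nothing about the incentives.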
Yes, and what I am asking is why those things are important to them in the first place. Probably because holding those things as important gave those agents a competitive advantage. Love your children? That’s Moloch wanting you to replicate your stomach so you can eat more baby elephants than you could alone. You only sacrifice those things that Moloch himself has given you.
The way I would put it is that agents evolve to make use of the regularities in the environment. If exploiting those regularities leads to increased success, then competition creates complexity that allows for those regularities to be taken advantage of. Whereas complexity which is no longer useful, either because the regularities no longer exist in the new environment or because there are more powerful regularities to exploit instead, will eventually be eaten away by competition.
Thus it’s true that competition gave us those things originally. But on the other hand, if you’re looking from the perspective of what we have now and want to preserve it, then it’s also fair to say that competition is a threat to it.
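As a hedged illustration of that build-then-erode dynamic (all numbers hypothetical), here is a toy replicator model: a costly trait spreads while the environment rewards it, then gets competed away once the reward disappears.

```python
# Toy replicator dynamics. A fraction p of agents carry a trait that
# costs C to maintain and pays benefit B only while the environment
# rewards it; the environment changes partway through the run.

C, B = 0.1, 0.3      # maintenance cost and (conditional) benefit
p = 0.05             # initial fraction carrying the trait

for gen in range(1, 301):
    rewarded = gen <= 80               # environment shifts after gen 80
    fitness_gap = (B if rewarded else 0.0) - C
    p += p * (1 - p) * fitness_gap     # standard replicator update
    if gen % 60 == 0:
        print(f"gen {gen:3d}: trait frequency = {p:.3f}")
```

Both halves of the claim show up: competition drives the trait toward fixation while it earns its keep, and the same competition strips it out once it is just dead weight.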
We might want to preserve those things, but can we? By definition, we will be outcompeted by those who do not.
And that problem is exactly what Scott refers to as Moloch.
Let me put it this way: if this is a problem, you would presumably want to solve it? And generally, if you want to solve a problem, you would prefer that it had never existed in the first place? But if it had never existed, you would also not have any of the values you want to save. Considering this, does Moloch still qualify as a problem?
This is incorrect and I think only sounds like an argument because of the language you’re choosing; there’s nothing incoherent about 1. preferring evolutionary pressures that look like Moloch to exist so that you end up existing rather than not existing, and 2. wanting to solve Moloch-like problems now that you exist.
Also, there’s nothing incoherent about wanting to solve Moloch-like problems now that you exist regardless of Moloch-like things causing you to come into existence. Our values are not evolution’s values, if that even makes sense.
So, to summarise this whole argument again: Moloch is a problem that made you exist and that is, by definition, impossible to solve. So what are you going to do about it? (I suggest trying to answer this to yourself first, and only then to me.)
Yes.