This is an interesting idea. Note that superforecasters read more news than the average person, and so are online a significant amount of time, yet they seem unaffected (this could be for many reasons, but is weak evidence against your theory). I’d like to know whether highly or moderately successful people, especially in the EA sphere, avoid advertising and other information your theory characterizes as malicious. Elon Musk stands out as very online yet very successful, but the way he spends his money is certainly not optimized to prevent the existential catastrophes he fears.
I’m also uncertain how beneficial it is to model these orgs as agents. Certainly they behave somewhat like agents, but this view also precludes other interventions, such as institutional reform efforts, which take advantage of the fact that an org is actually made up of factions of people and is not a unified whole. For instance, you could try making prediction markets legal.
It also seems unlikely to me that CFAR can scale up enough to make a dent in this problem. But maybe that depends on their track record of turning out successful people, which I don’t know.
Note that superforecasters read more news than the average person, and so are online a significant amount of time, yet they seem unaffected (this could be for many reasons, but is weak evidence against your theory).
I like this example.
Superforecasters are doing something real. If you make a prediction and you can clearly tell whether it comes about or not, this makes the process of evaluating the prediction mostly immune to stupefaction.
Much like being online a lot doesn’t screw with your ability to shoot hoops, other than maybe taking time away from practice. You can still tell whether the ball goes in the basket.
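To make that concrete: once a question resolves, scoring a forecast is plain arithmetic, with nothing left to fog up. A minimal sketch with invented numbers (Brier scoring is one standard rule; nothing here depends on that exact choice):

```python
# Toy sketch (made-up forecasts): scoring probabilistic predictions against
# binary outcomes. Once the outcome is settled, the score is just arithmetic.

def brier_score(forecast_prob: float, outcome: bool) -> float:
    """Squared error between the stated probability and what actually happened."""
    return (forecast_prob - (1.0 if outcome else 0.0)) ** 2

# Invented example: 80% on a question that resolved True, and 30% on another
# question that also resolved True.
predictions = [(0.8, True), (0.3, True)]
scores = [brier_score(p, o) for p, o in predictions]
print([round(s, 2) for s in scores])        # [0.04, 0.49] -- lower is better
print(round(sum(scores) / len(scores), 3))  # 0.265, the forecaster's average
```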
This is why focusing on real things is clarifying. Reality reflects truth. Is truth, really, although I imagine that use of the word “truth” borders on heresy here.
Contrast superforecasters with astrologers. They’re both mastering a skillset, but astrologers are mastering one that has no obvious grounding. Their “predictions” slide all over the place. Absolutely subject to stupefaction. They’re optimizing for something more like buy-in. Actually testing their art against reality would threaten what they’re doing, so they throw up mental fog and invite you to do the same.
Whenever you have this kind of grounding in reality, what you’re working on is much, much more immune to stupefaction, which is the primary weapon of unFriendly egregores.
This was close to the heart of the main original insight of science.
It also seems unlikely to me that CFAR can scale up enough to make a dent in this problem.
Oh hell no. CFAR’s hopes of mattering in this hypercreature war died around 2013. Partly because of my own stupidity, but also because no one involved understood these forces. Anna had some fuzzy intuitions and kind of panicked about this and tried to yank things in a non-terrible direction, but she had no idea which way that was. And stupid people like me dragged our feet while she did this, instead of tuning into what she was actually panicking about and working as a clear unified team to understand and orient to the issue.
So the thing became a lumbering bureaucratic zombie optimizing for exciting rationality-flavored workshops. Pretty website, great ops (because those are real!), fun conversations… but no teeth. Fake.
In this case, it sounds like your theory is (I’d say ‘just’ here, but that would imply the framing serves no purpose, and it may serve one) a different framing of simulacra levels. In particular, most of the adversarial behavior you postulate can be explained by orgs simply discovering that operating on the current simulacra level and lying about their positions is immediately beneficial.
Are there nuances I’m not getting here?
That might be the same thing. I haven’t familiarized myself with the simulacra levels theory.
What I’ve gained by osmosis suggests a tweak though:
It’s more like, a convergent evolutionary strategy for lots of unFriendly egregores is to keep social power and basement-level reality separate. That lets the egregores paint whatever picture they need, which speeds up weapon production as they fight other egregores for resources.
Some things like plumbing and electrician work are too real to do this to, and too necessary, so instead they’re kept powerless over cultural flow.
So it’s not really about lying about which level they’re at per se. It’s specifically avoiding basement truth so as to manufacture memetic weapons at speed.
…which is why when something real like a physical pandemic-causing virus comes storming through, stupefied institutions can’t handle it as a physically real phenomenon.
I’m putting basement-level reality in a special place here. I think that’s simulacrum level 1, yes? It’s not just “operating at different levels”, but specifically about clarity of connection to things like chairs and food and kilowatt-hours.
But hey, maybe I just mean what you said!
That sounds like a different process from simulacra levels. If you want to convince me of your position, you should read up on simulacra-levels theory and find instances where level changes in the real world happen surprisingly faster than the theory would predict, or where there is evidence of malice on the part of orgs. Because from my perspective, all the evidence you’ve presented so far is consistent with the prevalence of simulacra collapse being correlated with an institution’s current simulacra level, or with memes that have high replication rates spreading further than those with low ones. There is no need to postulate surprisingly competent organizations.
For example: if plumbers weren’t operating at simulacra level 1, their clients would become upset with them on the order of days and stop buying their services. But if governments don’t operate at simulacra level 1 with respect to pandemic preparedness, voters become upset with them on the order of decades, and only then vote people out of office. Since the government’s simulacra collapse time is far longer than the plumber’s (ignoring effects like ‘perhaps simulacra levels increase faster on the government scale, or decrease faster on the plumbing-industry scale during a collapse’), governments can reach far higher simulacra levels than the plumbing industry can. Similar effects can be seen in social change.
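To make the toy model explicit (all numbers invented; this is only a sketch of the intuition, not a claim about real institutions): suppose an institution’s simulacra level drifts upward between feedback events and snaps back toward level 1 whenever clients or voters can actually check it against reality. Then the slower the feedback, the higher it can climb before collapsing:

```python
# Hypothetical toy model: simulacra level drifts upward between feedback events
# and collapses back to 1 whenever clients/voters check the org against reality.
# All parameter values are invented for illustration only.

def peak_level(drift_per_day: float, feedback_interval_days: float) -> float:
    """Highest level reached just before each collapse, starting from level 1 (capped at 4)."""
    return min(4.0, 1.0 + drift_per_day * feedback_interval_days)

plumber = peak_level(drift_per_day=0.01, feedback_interval_days=7)        # ~1.07
government = peak_level(drift_per_day=0.01, feedback_interval_days=3650)  # 4.0 (saturates)
print(plumber, government)
```

Under these made-up parameters the plumber barely leaves level 1 while the government saturates at the top level, which is all the argument needs.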
I would try to find this evidence myself, but it seems we very likely live in a world with surprisingly incompetent organizations, so your theory doesn’t seem likely enough for me to expend much willpower looking into it (though I may if I get in the mood).
Nope. Not interested in convincing anyone of anything.
I support Eliezer’s sentiment here:
Your strength as a rationalist is the degree to which it takes “very strong and persuasive argumentation” to convince you of false things, and “weak, unpersuasive-sounding argumentation” to convince you of true things; ideally, in the latter case, the empty string should suffice.
…which means that strong rationalist communication is healthiest and most efficient when practically empty of arguments.
I downvoted this comment. First of all, you are responding to a non-central point I made. My biggest argument was that your theory has no supporting evidence that isn’t explained by far simpler hypotheses, and that it requires some claims (that institutions are ultra-competent) which seem very unlikely. This should make you just as skeptical of your hypothesis as I am. Second, “the most healthy & efficient communication ⇒ practically empty of arguments” does not mean “practically empty of arguments ⇒ the most healthy & efficient communication”, or even “practically empty of arguments ⇒ healthy & efficient communication”. In fact, if there are no arguments, usually no communication is happening.
In this sort of situation I think it’s important to sharply distinguish argument from evidence. If you can think of a clever argument that would change your mind then you might as well update right away, but if you can think of evidence that would change your mind then you should only update insofar as you expect to see that evidence later, and definitely less than you would if someone actually showed it to you. Eliezer is not precise about this in the linked thread: Engines of Creation contains lots of material other than clever arguments!
A request for arguments in this sense is just confused, and I too would hope not to see it in rationalist communication. But requests for evidence should always be honored, even though they often can’t be answered.
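For what it’s worth, the “only update insofar as you expect to see that evidence later” point can be grounded numerically; it is essentially conservation of expected evidence. A small sketch with made-up probabilities (none of these numbers come from the discussion above):

```python
# Conservation of expected evidence, with invented numbers: averaged over the
# possible observations, the expected posterior equals the prior, so a mere
# promise of evidence should move you only as far as you expect it to pan out.

prior_h = 0.3          # P(H)
p_e_given_h = 0.9      # P(E | H)
p_e_given_not_h = 0.2  # P(E | not H)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)   # 0.41
posterior_if_e = p_e_given_h * prior_h / p_e                     # ~0.659
posterior_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)     # ~0.051

expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(round(expected_posterior, 6), prior_h)  # 0.3 0.3 -- they match
```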