I will take the blame for not making it clear that this is an introduction to a much larger body of thought.
I’ll have another essay in a few weeks—I will send it to you and I look forward to your criticism.
This does provide the necessary context, absolving the post of the main blow of my critique for the time being. Looking forward to your next essay!
I would be glad if your future reasoning gave me some novel insight; I’m giving you the benefit of the doubt and will do my best to approach it open-mindedly. But as of now, I’m afraid, your conclusions are going to be based on a false premise. I’m going to highlight what I think that premise is; feel free to address this concern in your future essays.
But first, let’s establish some common ground.
I want to live in a universe where surprising things happen and the aforementioned delusions still have a place. In some sense, I want to turn the world and its ways upside down—I want the weak and the deluded to win—but how?
I agree with the sentiment. That’s why the whole “How to get rid of Moloch” question is important to me.
Then you say:
Not through rational intelligence or “work”, because that is exactly how the null hypothesis becomes fulfilled. Reality is like a Chinese finger trap: struggling only deepens your entrapment.
And this is wrong. The Molochian equilibrium appeared long before any rational intelligences came into existence and thought about it in terms of game theory. Evolution through natural selection—the creator of our brains—is already a facet of Moloch. It’s not rational thought that traps us in the maze of Moloch—as you mention yourself, the rational approach to reality is a relatively late thing. Moloch predates it by millions of years.
All the playfulness and childishness and dramatic dimensions serve Moloch’s goals just as much as industriousness and adultness and a straightforward approach. That’s the horror of the situation to begin with. It’s not that everyone is just doing their best to be a profit maximizer, no one has ever thought about chilling a bit and appreciating the beauty of the sunrise, and as soon as we try, the grip of Moloch will loosen. It’s that those who try are systematically eliminated, and the survivors become even more enslaved by Moloch as a result.
I think Scott did a very good job of explaining this point, so I’m not sure why you are making this mistake. My current best guess is wishful thinking. That’s the impression I got from reading your essay, and it didn’t change after this comment. But, once again, I would be happy to be proven wrong in this regard, so I’ll be waiting for your next essay on the topic.
That has been the default strategy for many years and it failed dramatically.
All the “convinced influential people in tech” started making their own AI start-ups, while coming up with galaxy-brained rationalizations for why everything will be okay with their idea in particular. We tried to be nice to them in order not to lose our influence with them. It turned out we didn’t have any. While we carefully and respectfully pointed out the problems with their reasoning, they likewise respectfully nodded their heads and continued to burn the AI timelines. Who could’ve thought that people with a real chance to become incredibly rich and important, at the cost of dooming human civilization a bit later, would take this awesome opportunity?
Meanwhile, surprisingly enough, it turned out that regular “100 IQ individuals” with no prospect of becoming absurdly rich and powerful actually do not want an apocalypse! Too bad we have already stained our reputation quite a bit by appearing as bootlickers to the tech billionaires all these years, but better late than never.
There is a lesson here about naivety/cynicism and personal bias: it is much more pleasant to persuade influential elites than the common masses. The former feels like respectable intellectual activity, while the latter feels like (gah!) politics, something that crazy activists would do. And it’s good that we’ve managed to learn this lesson, diversifying our activity and trying to appeal to common people more. It would’ve been even better if we had managed to win initially, instead of making this kind of fascinating mistake, but sadly we are not that good at rationality yet.