I’ve also noticed those tendencies, not in the community but in myself.
Selfishness. Classification of people as “normies.” Mental health instability. Machiavellianism.
But...
They get stronger as I look at the world like a rationalist. You read books like Elephant in the Brain and find yourself staring at a truth you don’t want to see. I wish God were real. I wish I were still a Christian with those guardrails erected to prevent me from seeing the true nature of the world.
But the more I look, the more it looks like a non-moral, brutally unfair, unforgiving stochastic game we’re all forced to play for… no real reason?
Obviously I’d like to be mentally healthier, more loving and selfless, etc., but… I don’t know. I can’t lie to myself. I’m stuck between having lost my old flawed, factually inaccurate philosophies and not yet having anything better to replace them with. Just staring into the abyss of an amoral world and getting lost. I suspect most new rationalists are in that space too.
good news on the moral front: prosocial moral intuitions are in fact a winning strategy long term. we’re in a bit of a mess short term, but solidarity and co-protection are good strategies; radical transparency can be an extremely effective move; mutual aid has always been a factor in evolution; and the best real-life game theory strategies tend to look like reputational generous tit-for-tat with semirandom forgiveness, e.g. in evolutionary game theory simulations. Moral realism is probably true but extremely hard to compute. If we had a successful co-protective natural-language program, it would likely be verifiably true and look like well-known moral advice structured in a clear, readable presentation, with its mathematical consequences visualized for all to understand.
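the “generous tit-for-tat with semirandom forgiveness” claim is easy to see for yourself in a toy simulation. here’s a minimal sketch, assuming the standard textbook prisoner’s dilemma payoffs and a made-up 10% forgiveness rate (both are illustrative choices, not from any particular paper):

```python
import random

# Standard iterated prisoner's dilemma payoffs: (my_payoff, their_payoff)
# C = cooperate, D = defect
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def generous_tit_for_tat(opponent_history, forgiveness=0.1):
    """Copy the opponent's last move, but forgive a defection
    with some small probability (the 'semirandom forgiveness')."""
    if not opponent_history:
        return "C"  # start nice
    if opponent_history[-1] == "D" and random.random() < forgiveness:
        return "C"
    return opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200, seed=0):
    random.seed(seed)
    hist_a, hist_b = [], []  # moves each player has made so far
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # A reacts to B's past moves
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two generous players lock into mutual cooperation (3 points each per
# round); against a pure defector, the generous player loses a little,
# but the defector forgoes the much larger cooperative surplus.
print(play(generous_tit_for_tat, generous_tit_for_tat))
print(play(generous_tit_for_tat, always_defect))
```

the forgiveness term matters once you add noise (occasional accidental defections): plain tit-for-tat gets stuck in mutual-retaliation spirals, while the generous variant escapes them, which is roughly why it keeps winning in the evolutionary tournaments.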
I really like https://microsolidarity.cc as an everyday life intro to this, and I tossed this comment into metaphor.systems up to the opening bracket of this link. here are some very interesting results, various middle to high quality manifestos and quick overviews of ethics:
https://ncase.me/trust/ one of the best intros to morality as trustbuilding, with game theory visualizations; the only serious contender for the original description on this list, imo.
https://persistentdemocracy.org/ (an intro to the design of next gen democratic systems)
https://www.machineethics.com/ (an ai formal? ethics group)
https://osf.io/q8bfx/wiki/home/ which appears to be a text abstract for the 13 minute talk https://www.youtube.com/watch?v=OJNQvkpX6Go
http://cooperation.org/ (a short link index of a few more similar links)
https://rationalaltruist.com/ (looks like this person is probably someone who hangs out around these parts, not sure who)
https://spartacus.app/ (assurance contract app)
https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/
some of the results are actual science posts on the science post hubs; if you want to get into the field properly, you might try spidering around related papers, adding them to a folder, and shallowly reading a bunch of the big ones. You could even add them to a Semantic Scholar folder and it’ll give you recommendations for papers you might find interesting. could be very useful if you want to push SOTA on understanding of morality!
https://www.semanticscholar.org/paper/Morality-as-Cooperation%3A-A-Problem-Centred-Approach-Curry/2839f7273e70fa5fe0090024df98b97801d4a7ad#paper-header
some of them get weird, but it is, in my opinion, rather fun and interesting weird:
https://www.metaethical.ai/v20-1/ (this one is pretty spicy, an attempt to exactly formalize meta-ethics; I have seen it several times and I still am not sure I follow what’s going on, but it seems cool)
http://mediangroup.org/research (very funky research project by some folks who hang out around these parts sometimes)
https://polycentriclaw.org/ is a few interesting blog posts rehashing stuff you may already know, but they’re short, only three posts and they all seem cool
https://bigmother.ai/ is a rather galaxy brain “ai alignment problem needs solving so we can build the big one!” agi page, and it looks like it has some institutional backing
https://www.tedagame.com/answersanswers/circle/index.html is a very web1.0 intro to georgism, I think
https://longtermrisk.org/msr multiverse cooperation? superrationality?
https://basisproject.net/ funky “constructive distributed-systems alternatives to broken markets” project
https://jakintosh.com/coalescence/matter-and-concepts.html another slightly galaxy brain manifesto
https://happinesspolitics.org/index.html ea politics site?
https://www.optimalaltruism.com/ another galaxy brain altruism project, looks pretty cool
https://magnova.space/ yet another galaxy brain manifesto
You always post such cool links!!! I bet you’re a cool person. :)
There is a middle path. insert buddha vibes
In fact, I’m a moral realist! And I’ve got the skeleton of a rationalist argument for it. Only the skeleton, mind, and I’m sure people could easily blow holes in it. But making posts on here is… exhausting… so I haven’t written it up.
Well, yes, we live in a hell ruled by a dead (never-born) god. That’s why it’s our responsibility to create a living one (an aligned sovereign ASI) and liberate all sentient beings from suffering. That’s what you ought to be living for.