good news on the moral front: prosocial moral intuitions are in fact a winning strategy in the long term, even if we're in a bit of a mess in the short term. solidarity and co-protection are good strategies; radical transparency can be an extremely effective move; mutual aid has always been a factor in evolution; and the best real-life game-theory strategies tend to look like reputational generous tit-for-tat with semi-random forgiveness, e.g. in evolutionary game theory simulations. Moral realism is probably true but extremely hard to compute. If we had a successful co-protective natural-language program, it would likely be verifiably true, and it would probably look like well-known moral advice structured in a clear, readable presentation, with its mathematical consequences visualized for all to understand.
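to make "generous tit-for-tat with semi-random forgiveness" concrete, here's a tiny iterated prisoner's dilemma sketch. the payoff matrix, the 10% forgiveness rate, and the opponent strategy are my own illustrative choices (not from any of the linked pages), and this toy leaves out the reputational part entirely:

```python
import random

# Illustrative payoffs for one round of the prisoner's dilemma:
# (my move, their move) -> my payoff. C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def gtft(history, rng=random, forgiveness=0.1):
    """Generous tit-for-tat: cooperate first, then copy the opponent's
    last move, but forgive a defection with some small probability."""
    if not history:
        return "C"
    if history[-1] == "D" and rng.random() < forgiveness:
        return "C"  # semi-random forgiveness
    return history[-1]

def always_defect(history, **_):
    return "D"

def play(strategy_a, strategy_b, rounds=200, seed=0):
    """Run a repeated game; each strategy sees the *opponent's* moves."""
    rng = random.Random(seed)
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a, rng=rng)
        move_b = strategy_b(seen_by_b, rng=rng)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# two forgiving reciprocators lock into mutual cooperation
a, b = play(gtft, gtft)
# against a pure defector, gtft mostly defends itself, only bleeding
# points on the rounds where it forgives
c, d = play(gtft, always_defect)
```

the evolutionary-simulation result the comment gestures at is roughly that in a population tournament, strategies like `gtft` outscore pure defectors on average because pairs of them keep earning the mutual-cooperation payoff.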
I really like https://microsolidarity.cc as an everyday-life intro to this. I also tossed this comment (up to the opening bracket of that link) into metaphor.systems; here are some very interesting results, various middle-to-high-quality manifestos and quick overviews of ethics:
https://ncase.me/trust/ one of the best intros to morality as trust-building, with game theory visualizations; the only serious contender on this list for the original description, imo.
https://persistentdemocracy.org/ (an intro to the design of next gen democratic systems)
https://www.machineethics.com/ (an AI ethics group; formal ethics, maybe?)
https://osf.io/q8bfx/wiki/home/ which appears to be a text abstract for the 13 minute talk https://www.youtube.com/watch?v=OJNQvkpX6Go
http://cooperation.org/ (a short link index of a few more similar links)
https://rationalaltruist.com/ (looks like this person is probably someone who hangs out around these parts, not sure who)
https://spartacus.app/ (assurance contract app)
https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/
some of the results are actual science posts on the science post hubs. if you want to get into the field properly, you might try spidering around related papers, adding them to a folder, and shallowly reading a bunch of the big ones. you could even add them to a Semantic Scholar folder and it'll give you recommendations for papers you might find interesting. could be very useful if you want to push the SOTA on understanding of morality!
https://www.semanticscholar.org/paper/Morality-as-Cooperation%3A-A-Problem-Centred-Approach-Curry/2839f7273e70fa5fe0090024df98b97801d4a7ad#paper-header
some of them get weird, but it is, in my opinion, rather fun and interesting weird:
https://www.metaethical.ai/v20-1/ (this one is pretty spicy, an attempt to exactly formalize meta-ethics; I have seen it several times and I still am not sure I follow what’s going on, but it seems cool)
http://mediangroup.org/research (very funky research project by some folks who hang out around these parts sometimes)
https://polycentriclaw.org/ is a few interesting blog posts rehashing stuff you may already know, but they’re short, only three posts and they all seem cool
https://bigmother.ai/ is a rather galaxy brain “ai alignment problem needs solving so we can build the big one!” agi page, and it looks like it has some institutional backing
https://www.tedagame.com/answersanswers/circle/index.html is a very web1.0 intro to georgism, I think
https://longtermrisk.org/msr (multiverse cooperation? superrationality?)
https://basisproject.net/ funky “constructive distributed-systems alternatives to broken markets” project
https://jakintosh.com/coalescence/matter-and-concepts.html another slightly galaxy brain manifesto
https://happinesspolitics.org/index.html (an EA politics site?)
https://www.optimalaltruism.com/ another galaxy brain altruism project, looks pretty cool
https://magnova.space/ yet more galaxy brain manifesto
You always post such cool links!!! I bet you’re a cool person. :)