Hi, as I was tagged here, I will respond to a few points. There are a bunch of smaller points only hinted at that I won’t address. In general, I strongly disagree with the overall conclusion of this post.
There are two main points I would like to address in particular:
1 More information is not more Gooder
There seems to be a deep underlying confusion here: that more information is somehow inherently good, or will inherently result in good things winning out. This is very much the opposite of what I generally claim about memetics. Saying that all information is good is like saying all organic molecules or cells are equally good. No! Adding more biosludge and toxic algal blooms to your rose garden won’t make it better!
Social media is living proof of this. People genuinely thought social media would bring everyone together, resolve conflicts, and create a globally unified culture, peace, and democracy; that autocracy and bigotry couldn’t possibly thrive if only people had enough information. I consider this hypothesis thoroughly invalidated. “Increasing memetic evolutionary pressure” is not a good thing! (all else equal)
Increasing the evolutionary pressure on the flu virus doesn’t make the world better, and viruses mutate a lot faster than nice fluffy mammals. Most mutations in fluffy mammals kill them; mutations in viruses help them far more. Value is fragile. It is asymmetrically easier to destroy than to create.
Raw evolution selects for fitness/reproduction, not Goodness. You are just feeding the Great Replicator.
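As a toy illustration of that claim (a minimal sketch of my own; the population size, trait ranges, and the assumption that “goodness” starts out independent of fitness are all illustrative, not anything from the post), here is a simulation where replicators reproduce in proportion to fitness alone:

```python
import random

POP, GENERATIONS = 1000, 50

# Each replicator ("meme") carries a fitness and an unrelated goodness
# score, drawn independently, so the two start out uncorrelated.
memes = [(random.random(), random.random()) for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection acts on fitness only; goodness is just along for the ride.
    memes = random.choices(memes, weights=[f for f, _ in memes], k=POP)

print("mean fitness:  %.2f" % (sum(f for f, _ in memes) / POP))  # climbs toward 1.0
print("mean goodness: %.2f" % (sum(g for _, g in memes) / POP))  # ~0.5 on average across runs
```

Selection reliably drives fitness up, while goodness only drifts to whatever the fittest lineages happened to carry.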
For an accessible intro to some of this, I recommend the book “Nexus” by Yuval Noah Harari (not that I endorse everything in that book, but the first half is great).
2 “Pivotal Act” style theories of change
You talk about theories of change of the form “we safety people will keep everything secret and create an aligned AI, ship it to big labs and save the world before they destroy it (or directly use the AI to stop them)”. I don’t endorse, and in fact strongly condemn, such theories of change.
But not because of the hiding-information part; because of the “we will not coordinate with others and will use violence unilaterally” part! Such theories of change are fundamentally immoral for the same reasons labs building AGI is immoral. We have a norm in our civilization that we, as private citizens, don’t threaten to harm or greatly upend the lives of our fellow civilians without either their consent or societal/governmental/democratic authority.
The not-sharing-information part is fine! Not all information is good! For example, Canadian researchers a while back figured out how to synthesize horsepox, an extinct relative of smallpox, and then published how to do it. Is it a good thing for the world to have that information out there? I don’t think so. Should we open-source the blueprints of the F-35 fighter jet? I don’t think so; I think it’s good that I don’t have those blueprints!
Information is not inherently good! Not sharing information that would make the world worse is virtuous. Now, you might be wrong about the effects of sharing the information you have, sure, but claiming there is no tradeoff, or denying the possibility that sharing might actually, genuinely, be bad, is just ignoring why coordination is hard.
3 Conclusion
If you ever find yourself thinking something of the shape “we must simply unreservedly increase [conceptually simple variable X], with no tradeoffs”, you’re wrong. It doesn’t matter how clever you think X is; you’re wrong. Any real-life, non-fake complex thing is made of towers upon towers of tradeoffs. If you think there are no tradeoffs in whatever system you are looking at, you don’t understand the system.
Memes are not our friends. Conspiracy theories and lies spread faster than complex, nuanced truth. The printing press didn’t bring the scientific revolution; it brought the witch burnings and the Thirty Years’ War. The scientific revolution came from the Royal Society and its nuanced, patient, complex norms of critical inquiry. Yes, spreading your scientific papers was also important; it was necessary, but not sufficient, for a good outcome.
More mutation/evolution, all else equal, means more cancer, not more health and beauty. Health and beauty can come from cancerous mutation and selection, but it’s not a pretty process, and it requires a lot of bloody, bloody trial and error (and a good selection function). It is the kind of inefficient and morally abominable process I would prefer us not to rely on.
With that being said, I think it’s good that you wrote things down and are thinking about them; please don’t take what I’m saying as some kind of personal disparagement. I wish more people wrote down their ideas and tried to think things through! I think there are indeed a lot of valuable things in this direction, around better norms, tools, processes, and memetic growth, but they’re just really quite non-trivial! You’re on your way to thinking critically about morality, coordination, and epistemology, which is great! That’s where I think real solutions are!
That last paragraph seems important. There’s a type of person who is new to AI discourse and doesn’t have an opinion yet, and who will bounce off whichever “side” appears most hostile to them; which, if they have misguided ideas, might be the truth-seeking side that gently criticizes. (Not saying that’s the case for the author of this post!)
It’s really hard to change the mind of someone who’s already found their side in AI. But it’s not hard to reach them before they’ve joined one in the first place!
Despite being “into” AI safety for a while, I haven’t picked a side. I do believe it’s extremely important and deserves more attention, and I believe that AI actually could kill everyone in less than 5 years.
But any effort spent on pinning down one’s “p(doom)” is effort not spent on things like: how to actually make AI safe, how AI works, how to approach this problem as a civilization/community, how to think about this problem. And, as was my intention with this article, “how to think about things in general, and how to make philosophical progress”.
I’m worried you’re not looking at this on a long enough timescale.
I’m claiming:
1. “Information sharing is good” is an invariant as timeless as “people will sacrifice truth and empathy for power”; you can’t claim Moloch wins based on the available evidence.
2. Both of these are more powerful than the short-term effects we can forecast.
On 1:
Increased information sharing leads to faster iteration. Faster iteration of science and technology leads to increased power derived from technology. Faster iteration of social norms and technologies leads to increased power derived from better coordination.
It is not a coincidence that the USA is simultaneously the most powerful and one of the most tolerant societies in human history.
Suppose you were the inventor of the Gutenberg press, deciding whether or not to release your technology. Maybe you could have foreseen the witch burnings. Maybe you could’ve even foreseen something like the 95 Theses.
You couldn’t have foreseen democracy in France, or that its success would inspire the US (which was again only possible because of information sharing between Europe and the US). You couldn’t have foreseen that Jewish physicists leaving Europe for a more tolerant society would invent an atomic bomb that would ultimately bring peace to Europe. You couldn’t have foreseen the peace among EU nations in 2024, enforced not just by the threat of the bomb but more strongly via the intermixing of their peoples.
If you had decided not to release the Gutenberg press because of forecasted witch burnings, you might have made a colossal mistake.
Information sharing is argued to be good because the argument relies on principles of human behaviour that survive long after you die, long after any specific circumstances.
Information survives the rise and fall of civilisations. As long as 1 of n people preserves some piece of information, it is preserved. A basic desire for truth and empathy is universal amongst human beings across space and time, as it’s encoded in genetics, not culture.
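A minimal way to make that 1-of-n redundancy claim precise (my formalization, not the original’s): if each of n independent holders loses the information with probability p < 1, then

```latex
\Pr[\text{information survives}] = 1 - p^{n} \to 1 \quad \text{as } n \to \infty
```

so even very unreliable individual preservation keeps information alive once enough independent copies exist.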
Yes, people are often forced to sacrifice certain values at the altar of other ones, and we see this throughout history. You could call this Moloch. This too is universal.
Both of these are invariants that could hold long after the point where we can forecast specific events.
On 2:
Witch burnings don’t prove the Gutenberg press bad.
Social media isn’t proven bad on such a short timescale, for the same reason witch burnings don’t prove the Gutenberg press bad.
You haven’t even proven that publishing the smallpox papers in public is bad. Maybe one day bioweapon research is banned, and that is only possible because of public consensus built on publicly available papers such as the smallpox paper.
TLDR: Here’s all the ways in which you’re right, and thanks for pointing these things out!
At a meta-level, I’m *really* excited by just how much I didn’t see your criticism coming. I thought I was thinking carefully, and that iterating on my post with Claude (though it didn’t write a single word of it!) was catching the obvious mistakes, but I missed so much. I have to rethink a lot about my writing process.
I strongly agree that I need a *way* more detailed model of what “memetic evolution” looks like: when it’s good vs bad, and why; whether there’s a better way of phrasing and viewing it; digging into historical examples; etc.
I’m curious whether social media is actually bad beyond the surface, but again, I should’ve anticipated “social media kinda seems bad in a lot of ways” being such an obvious problem in my thinking, and attended to it.
Reading it back, it totally reads as an argument for “more information more Gooder”, which I didn’t see at all. (Generally, viewing the post as “more X is always more good” is also a neat categorization trick that brings clarity.)
I think a good way to summarize my mistake is that I didn’t “go all the way” in my (pretty scattered) lines of thinking.
You’re on your way to thinking critically about morality, coordination and epistemology, which is great!
Thanks :) A big part of why I got into writing ideas explicitly and in big posts (vs off-hand Tweets/personal notes) is that you’ve talked on Discord about this being a coordination mechanism.
So I’ve been thinking more about this...
I think you completely missed the angle of civilizational coordination via people updating on the state of the world and on what others are up to.
(To be fair, I literally wrote “speed up memetic evolution” in The Gist (lol, that’s really dumb), and also explicitly advocated for “memetic evolution” multiple times throughout.)
Communication is not exactly “sharing information”
Communication is about making sure you each know where you stand and that you resolve to some equilibrium, not about telling each other your life story and all the object-level knowledge in your head.
Isn’t this exactly what you’re doing when you go around telling people “hey guys, big labs are literally building gods they don’t understand nor control, this is bad and you should know it”? I should still dig into what that looks like exactly, and when it’s done well vs badly (for example, you don’t tell people how exactly OpenAI is building gods, just that they are).
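To make the “resolve to some equilibrium” point concrete, here’s a toy sketch (the payoffs and the one-bit protocol are my illustrative assumptions, nothing from the thread): in a pure coordination game, a single announced bit is enough to land both players on an equilibrium, with no exchange of any other object-level knowledge.

```python
# 2x2 coordination game: both players just want to match choices.
PAYOFFS = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("A", "A"): (1, 1), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}

announced = "A"                  # one bit of communication: "I'm playing A"
row, col = announced, announced  # the other player best-responds by matching
print(PAYOFFS[(row, col)])       # -> (1, 1): coordinated on an equilibrium
```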
I’d argue that if YouTube had a chatbot window embedded in the UI that could talk about the contents of a video, this would be a very positive thing, because it would generally increase people’s clarity about, and ability to parse, the contents of videos.
Clarity of ideas is not just “pure memetic evolution”
Think of the type of activity that could be described as “doing good philosophy” or “being a good reader”. This process is iterative too: absorb info from the world → share an insight/clarified version of the info → get feedback → iterate again → affect the world state → repeat. It’s still in the class of “unpredictable memetic phenomena”, but it’s very, very different from what happens on the substrate of mindless humans scrolling TikTok, guided by the tentacles of a recommendation algorithm.
Even a guy typing something into a comment box, constantly re-reading and re-editing and re-considering, will land on (evolve towards) unpredictable ideas (memes). That’s the point!
There is a certain type of person who would look at the mountains of skulls that Genghis Khan piled up and, before judging it evil, ask whether it was a state acting or a group of individuals.
Fuck that. States/governments, “democratic” or otherwise, have absolutely no privileged moral status, and to hell with any norm that suggests otherwise, and to hell with any “civilization” that promotes such a norm.
What the state can do is wield violence far more effectively than you, so if you want to level a city, say, Beijing or Moscow, yeah, you should get the US military to do it instead of trying to do it yourself. And it can wield violence against you if you defy its will, so it’s a bad idea to do so publicly, but for purely pragmatic reasons, not moral ones.
Morality is multifaceted and multilevel. If you have a naive form of morality that is just “I do whatever I think is the right thing to do”, you are not coordinating or being moral, you are just selfish.
Coordination is not inherently always good. You can coordinate with one group to more effectively do evil against another. But scalable Good is always built on coordination. If you want to live in a lawful, stable, scalable, just civilization, you will need to coordinate with your civilization and neighbors and make compromises.
As a citizen of a modern country, you are bound by the social contract. Part of the social contract is “individuals are not allowed to use violence against other individuals, except in certain circumstances like self-defense.”[1] Now you might argue that this is a bad contract or whatever, but it is the contract we play by (at least in the countries I have lived in), and I think unilaterally reneging on that contract is immoral. Unilaterally saying “I will expose all of my neighbors to risk of death from AGI because I think I’m a good person” is very different from “we all voted, and the majority decided building AGI is a risk worth taking.”
Now, could it be that in some exceptional circumstances you need to do something immoral to prevent some even greater tragedy? Sure, it can happen. Murder is bad, but self-defense can make it on net OK. But just because it’s self-defense doesn’t make murder moral; it just means there was an exception in this case. War is bad, but sometimes countries need to go to war. That doesn’t mean war isn’t bad.
Civilization is all about commitments, and honoring them. If you can’t honor your commitments to your civilization, even when you disagree with them sometimes, you are not civilized and are flagrantly advertising your defection. If everyone does this, we lose civilization.
Morality is actually hard, and scalable morality/civilization is much, much harder. If an outcome you dislike happened because of some kind of consensus, that has moral implications. If someone put up a shitty statue that you hate in the town square because he’s an asshole, that’s very different morally from “everyone in the village voted, and they like the statue and you don’t, so suck it up.” If you think “many other people want X and I want not-X” has no moral implications whatsoever, your “morality” is just selfishness.[2]
[1] (Building AGI that might kill everyone to try to create your vision of utopia is “using violence”.)
[2] (I expect you don’t actually endorse this, but your post does advocate for it.)
I do in fact believe morality to be entirely orthogonal to “consensus” or what “many other people” want, and since you call this “selfishness,” I shall return the favor and call your view, for all that you frame it as “coordination” or “scalable morality,” abject bootlicking.
A roaming bandit’s “do what I tell you and you get to live” could be thought of as a kind of contract, I suppose, but I wouldn’t consider myself bound by it if I could get away with breaching it. I consider the stationary bandits’ “social contracts” not to be meaningfully different. One clue to how they’re similar is how the more powerful party can go, à la Vader, “Here is a New Deal. Pray I don’t renew it any further.” Unilaterally reneging on such a contract when you are the weaker party would certainly be unwise, for the same reason standing between a lynch mob and its intended victim would be unwise (simple self-preservation), but I condemn the suggestion that it would be immoral.
If everyone does this, we lose civilization.
I see what you call “civilization,” and I’m against it. I vaguely recall reading of a medieval Christian belief that if everyone stopped sinning for a day, Christ would return and restore the Kingdom of Heaven. This reminds me of that: would be nice, but it ain’t gonna happen.
I agree that morality and consensus are in principle not the same; the Nazis, or any evil society, are an easy counterexample. (One could argue the Nazis did not have the consensus of the entire world, but then you can just imagine a fully evil population.)
But for one, simply rejecting civilization and consensus on the grounds of “you have no rigorous definition of them, and also look at Genghis Khan/the Nazis, this proves that governments are evil” basically puts the whole burden of proof on the side that is arguing for civilization and common-sense morality, which is suspicious.
I’m open to alternatives, but just saying “governments can be evil, therefore I reject them, full stop” is not that helpful for discourse. Like, what do you wanna do, just abolish civilization?
So consider a handwavy view of morality and of what a “good civilization” looks like. Let’s assume common-sense morality is correct, and that mostly everyone understands what it means: “don’t steal, don’t hurt people, leave people alone unless they’re doing bad things, don’t be a sex pervert, etc.” And assume most people agree with this and want to live by it. Then, when you have consensus, meaning most people are observing and agreeing that civilization (or, practically speaking, the area or community over which they have a decent amount of influence) is abiding by common-sense morality, everything is basically moral and fine.
(I also want to point out that focusing too much on pinpointing what morality means exactly, and how to put it into words, distracts from solving practical problems where it’s extremely obvious what is morally going wrong, but where you have to sort out the “implementation details”.)