(I’m new here and don’t have enough karma to create a thread, so I am posting this question here. Apologies in advance if this is inappropriate.)
Here is a topic I haven’t seen discussed on this forum: the philosophy of “Cosmicism”. If you’re not familiar with it, check Wikipedia, but the quick summary is that it’s the philosophy invented by H. P. Lovecraft which posits that humanity’s values have no cosmic significance or absolute validity in our vast cosmos; to some alien species we might encounter or AI we might build, our values would be as meaningless as the values of insects are to us. Furthermore, all our creations and efforts are ultimately futile in a universe of increasing entropy and astrophysical annihilation. Lovecraft’s conclusion is: “good, evil, morality, feelings? Pure ‘Victorian fictions’. Only egotism exists.”
Personally I find this point of view difficult to refute – it seems as close to the truth about “life, the universe and everything” as one can get and remain consistent with our current understanding of the universe. At the same time, such a philosophy is rather frightening, in that a world of egomaniacal cosmicists who consider human values to be meaningless would seem to be highly unstable and insane.
I don’t claim to be an exceptionally rational person, so I’m asking the rationalists of this forum: what is your response to Cosmicism?
cousin_it and Vladimir_Nesov’s replies are good answers; at the risk of being redundant, I’ll take this point by point.
to some alien species we might encounter or AI we might build, our values would be as meaningless as the values of insects are to us.
The above is factually correct.
humanity’s values have no cosmic significance or absolute validity in our vast cosmos
The phrases “cosmic significance” and “absolute validity” are confused notions. They don’t actually refer to anything in the world. For more on this kind of thing you will want to read the Reductionism Sequence.
all our creations and efforts are ultimately futile in a universe of increasing entropy and astrophysical annihilation
Our efforts would be “ultimately futile” if we were doomed to never achieve our goals, to never satisfy any of our values. If the only things we valued were things like “living for an infinite amount of time”, then yes, the heat death of the universe would make all our efforts futile. But if we value things that only require finite resources, like “getting a good night’s sleep tonight”, then no, our efforts are not a priori futile.
Only egotism exists.
Egotism is an idea, not a thing, so it’s meaningless to say that it exists or doesn’t exist. You could say “Only egoists exist”, but that would be false. You could also say “In the limit of perfect information and perfect rationality, all humans would be egoists”, and I believe that’s also false. Certainly nothing you’ve said implies that it’s true.
The Metaethics Sequence directly addresses and dissolves the idea that everything seems to be meaningless because there is no objective, universally compelling morality. But the Reductionism Sequence should be read first.
Very well expressed. Especially since it links to the specific sequence that deals with this instead of generally advising to “read the sequences”.
Wow, fantastic, thank you for this excellent reply. Just out of curiosity, is there any question this “cult of rationality” doesn’t have a “sequence” or a ready answer for? ;)
The sequences are designed to dissolve common confusions. By dint of those confusions being common, almost everybody falls into them at one time or another, so it should not be surprising that the sequences come up often in response to new questions.
Why do you all agree on so much? Am I joining a cult?
We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all. Yes, some of the results that fall out of these basics sound weird if you haven’t seen the reasoning behind them, but there’s nothing in the laws of physics that prevents reality from sounding weird.
You’re welcome. The FAQ says:
“[R]eality has a well-known [weird] bias.”
The standard reply here is that duh, values are a property of agents. I’m allowed to have values of my own and strive for things, even if the huge burning blobs of hydrogen in the sky don’t share the same goals as me. The prospect of increasing entropy and astrophysical annihilation isn’t enough to make me melt and die right now. Obligatory quote from HP:MOR:
“There is no justice in the laws of Nature, Headmaster, no term for fairness in the equations of motion. The universe is neither evil, nor good, it simply does not care. The stars don’t care, or the Sun, or the sky. But they don’t have to! We care! There is light in the world, and it is us!”
So in other words you agree with Lovecraft that only egotism exists?
Wha? There’s no law of nature forcing all my goals to be egotistical. If I saw a kitten about to get run over by a train, I’d try to save it. The fact that insectoid aliens may not adore kittens doesn’t change my values one bit.
That’s certainly true, but from the regular human perspective, the real trouble is that in case of a conflict of values and interests, there is no “right,” only naked power. (Which, of course, depending on the game-theoretic aspects of the concrete situation, may or may not escalate into warfare.) This does have some unpleasant implications not just when it comes to insectoid aliens, but also to regular human conflicts.
In fact, I think there is a persistent thread of biased thinking on LW in this regard. People here often write as if sufficiently rational individuals would surely be able to achieve harmony among themselves (this often-cited post, for example, seems to take this for granted). Whereas in reality, even if they are so rational as to leave no possibility of factual disagreement, if their values and interests differ—and they often will—it must be either “good fences make good neighbors” or “who-whom.” Indeed, I find it quite plausible that a no-holds-barred dissolving of socially important beliefs and concepts would exacerbate conflict, since this would only become more obvious.
Negative-sum conflicts happen due to factual disagreements (mostly inaccurate assessments of relative power), not value disagreements. If two parties have accurate beliefs but different values, bargaining will be more beneficial to both than making war, because bargaining can avoid destroying wealth but still take into account the “correct” counterfactual outcome of war.
Though bargaining may still look like “who whom” if one party is much more powerful than the other.
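To make the arithmetic behind this concrete, here is a minimal Python sketch of the comparison; the prize value, war cost, and win probability are made-up numbers for illustration only, not anything taken from the thread:

```python
# Toy comparison of war vs. bargaining when both sides hold accurate beliefs.
# All numbers below are illustrative assumptions.

PRIZE = 100      # value of the disputed resource
WAR_COST = 30    # wealth destroyed if war is fought (split evenly here)
P_A_WINS = 0.7   # both sides accurately estimate A's chance of victory

# Expected payoffs from war: winner takes the prize, both pay half the cost.
war_a = P_A_WINS * PRIZE - WAR_COST / 2
war_b = (1 - P_A_WINS) * PRIZE - WAR_COST / 2

# A bargain that mirrors the counterfactual war outcome, but destroys nothing.
bargain_a = P_A_WINS * PRIZE
bargain_b = (1 - P_A_WINS) * PRIZE

print(f"War:     A = {war_a:.0f}, B = {war_b:.0f}")          # A = 55, B = 15
print(f"Bargain: A = {bargain_a:.0f}, B = {bargain_b:.0f}")  # A = 70, B = 30

# Both sides prefer the bargain, yet the split still tracks relative power,
# which is why it can look like "who whom" when the asymmetry is large.
```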
How strong do the perfect-information assumptions need to be to guarantee that rational decision-making can never lead both sides in a conflict to precommit to escalation, even in a situation where their behavior has signaling implications for other conflicts in the future? (I don’t know the answer to this question, but my hunch is that even if this is possible, the assumptions would have to be unrealistic for anything conceivable in reality.)
And of course, as you note, even if every conflict is resolved by perfect Coasian bargaining, if there is a significant asymmetry of power, the practical outcome can still be little different from defeat and subjugation (or even obliteration) in a war for the weaker side.
Negative-sum conflicts happen due to factual disagreements (mostly inaccurate assessments of relative power), not value disagreements.
By ‘negative-sum’ do you really mean ‘negative for all parties’? Because, taking ‘negative-sum’ literally, we can imagine a variant of the Prisoner’s Dilemma where A defecting gains 1 and costs B 2, and where B defecting gains 3 and costs A 10.
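As a quick check of the numbers in that variant (taking mutual cooperation as a (0, 0) baseline, which is an assumption, since the comment doesn’t specify one):

```python
# Payoffs for the asymmetric Prisoner's Dilemma variant described above,
# measured relative to mutual cooperation at (0, 0).

def payoffs(a_defects, b_defects):
    a, b = 0, 0
    if a_defects:
        a += 1    # A's gain from defecting
        b -= 2    # cost imposed on B
    if b_defects:
        b += 3    # B's gain from defecting
        a -= 10   # cost imposed on A
    return a, b

a, b = payoffs(True, True)
print(a, b, a + b)  # -9 1 -8

# Mutual defection sums to -8, so it is literally negative-sum,
# yet B still comes out ahead: "negative-sum" need not mean
# "negative for all parties".
```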
I suppose I meant “Pareto-suboptimal”. Sorry.
How does that make sense? You are correct that under sufficiently generous Coasian assumptions, any attempt at predation will be negotiated into a zero-sum transfer, thus avoiding a negative-sum conflict. But that is still a violation of Pareto optimality, which requires that nobody ends up worse off.
I don’t understand your comment. There can be many Pareto optimal outcomes. For example, “Alice gives Bob a million dollars” is Pareto optimal, even though it makes Alice worse off than the other Pareto optimal outcome where everyone keeps their money.
Yes, this was a confusion on my part. You are right that starting from a Pareto-optimal state, a pure transfer results in another Pareto-optimal state.
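To make the point of this exchange concrete, here is a small Python sketch; the names, the dollar amounts, and the pareto_optimal helper are purely illustrative:

```python
# With a fixed total of wealth, every division of it is Pareto optimal,
# because improving one person's share requires reducing someone else's.

def pareto_optimal(allocation, alternatives):
    """True if no alternative makes someone better off and nobody worse off."""
    return not any(
        all(alt[i] >= allocation[i] for i in range(len(allocation)))
        and any(alt[i] > allocation[i] for i in range(len(allocation)))
        for alt in alternatives
    )

# Alice and Bob split 2 million dollars (amounts in millions).
allocations = [(2, 0), (1, 1), (0, 2)]  # includes "Alice gives Bob a million"
for alloc in allocations:
    print(alloc, pareto_optimal(alloc, allocations))  # all print True
```

A pure transfer moves between points on this list, so it lands on another Pareto-optimal allocation, which is the point conceded above.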
I expect I’ll keep on doing what I’m doing, which is trying to work out what I actually want. [...] So far I haven’t lapsed into nihilist catatonia or killed everyone or destroyed the economy. This suggests that assuming a morality is not a requirement for not behaving like a sociopath. I have friends and it pleases me to be nice to them, and I have a lovely girlfriend and a lovely three-year-old daughter whom I spend most of my life’s efforts on trying to bring up and on the prerequisites to that.
As I commented on What Would You Do Without Morality?:
Without an intrinsic point to the universe, it seems likely to me that people would go on behaving with the same sort of observable morality they had before. I consider this supported by the observed phenomenon that Christians who turn atheist seem to still behave as ethically as they did before, without a perception of God to direct them.
This may or may not directly answer your question of what’s the correct moral engine to have in one’s mind (if there is a single correct moral engine to have in one’s mind—and even assuming what’s in one’s mind has a tremendous effect on one’s observed ethical behaviour, rather than said ethical behaviour largely being evolved behaviour going back millions of years before the mind), but I don’t actually care about that except insofar as it affects the observed behaviour.
It’s perhaps worth pointing out that just as there is nothing to compel you to accept notions such as “cosmic significance” or “only egotism exists”, by symmetry there is also nothing to compel you to reject those notions (except for your actual values, of course). So it really comes down to your values. For most humans, the concerns you have expressed are probably confusions, as we pretty much share the same values, and we also share the same cognitive flaws which let us elevate what should be mundane facts about the universe into something with moral force.
Also, it’s worth pointing out that there is no need for your values to be “logically consistent”. You use logic to figure out how to go about the world satisfying your values, and unless your values specify a need for a logically consistent value system, there is no need to logically systematize your values.
Read the sequences and you’ll probably learn to not make the epistemic errors that generate this position, in which case I expect you’ll change your mind. I believe it’s a bad idea to argue about ideologies on the object level; they tend to have too many anti-epistemic defenses for that to be efficient or even productive. Rather, one should learn a load of good thinking skills that would add up to eventually fixing the problem. (On the other hand, the metaethics sequence, which is more directly relevant to your problem, is relatively hard to understand, so success is not guaranteed, and you can benefit from a targeted argument at that point.)
You know, I was hoping the gentle admonition to casually read a million words had faded away from the local memepool.
Your usage here also happens to serve as an excellent demonstration of the meaning of the phrase as described on RW. I suggest you try not to do that. Pointing people to a particular post or at worst a particular sequence is much more helpful. (I realise it’s also more work before you hit “comment”, but I suggest that’s a feature of such an approach rather than a bug.)
and you’ll probably learn to not make the epistemic errors that generate this position
Do please consider the possibility that to read the sequences is not, in fact, to cut’n’paste them into your thinking wholesale.
TheCosmist: the sequences are in fact useful for working out what people here think, and for spotting when what appears to be an apposite comment by someone is in fact a callout. ciphergoth has described LW as “a fan site for the sequences”; it’s growing into more than that, but the description is still useful to know as the viewpoint of many long-term readers. It took me a couple of months of casual internet-as-television-time reading to get through them, since I was actively participating here and all.
The sequences are a specific method of addressing this situation, not a general reference. I don’t believe individual references would be helpful; instead I suggest systematic training. I wrote:
I believe it’s a bad idea to argue about ideologies on the object level; they tend to have too many anti-epistemic defenses for that to be efficient or even productive. Rather, one should learn a load of good thinking skills that would add up to eventually fixing the problem.
You’d need to address this argument, not just state a deontological maxim that one shouldn’t send people to read the sequences.
I wasn’t stating a deontological maxim—I was pointing out that you were being bloody rude in a highly unproductive manner that’s bad for the site as a whole. “I suggest you try not to do that.”
Again, you fail to address the actual argument. Maybe the right thing to do is to stay silent; you could argue that. But I don’t believe that pointing out references to individual ideas would be helpful in this case.
Also, consider “read the sequences” as a form of book recommendation. Book recommendations are generally not considered “bloody rude”. If you have never studied topology and want to understand the Smirnov metrization theorem, “study the textbook” is the right kind of advice.
Actually changing your mind is an advanced exercise.