The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.
If our contrarian position was as wrong as we think antinatalism is, would we realize?
Do people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn’t a view that I consider to be silly in the same way that I would consider, say, most religious beliefs to be silly.
But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young-earth creationist who, before he became a YEC proponent, was a productive chemist. He’s also a highly ranked chess master. He’s clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of supporters who aren’t smart. (There’s some evidence to back this up; see for example this breakdown of GSS data and also this analysis. Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust across education levels by some measures, so the first isn’t just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly.
It does, however, seem that on LW there’s a common tendency to label beliefs silly when what’s meant is “I assign a very low probability to this belief being correct” or “I don’t understand how someone’s mind could be so warped as to have this belief.” Both of these are problematic, the second more so than the first, because different humans have different value systems. In this particular example, value systems that weight harm to others more heavily are more likely to yield a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple moral system. It may also be because paperclipping is so far removed from their own moral systems that it becomes easier to map out in a consistent fashion, whereas something like antinatalism is close enough to their own moral systems that people conflate some of their own moral/ethical/value conclusions with those of the antinatalist, and this happens subtly enough for them not to notice.
A data point: I don’t think antinatalism (as defined by Roko above - ‘it is a bad thing to create people’) is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living were phenomenally awful, and I knew my child’s life would be equally bad, it would be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?
That your child might experience a great deal of pain which you could prevent by not having it.
That your child might regret being born and wish you had made the other decision.
That you can be a good parent, raise a kid, and improve someone’s life without having a kid (adopt).
That the world is already overpopulated and our natural resources are not infinite.
Points taken.
Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and a desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child, and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn’t think the antinatalist position has legs.
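To make the shape of that calculation explicit, here is a minimal sketch of the comparison. It is not anything the commenter wrote, and every probability and utility in it is invented purely for illustration.

```python
# Toy expected-utility comparison for "have a child" vs. "don't",
# conditional on stable high living standards, good parenting, and wanting a kid.
# Every number here is a made-up placeholder, purely for illustration.

# Possible outcomes for the potential child: (probability, utility to the child)
child_outcomes = [
    (0.85, 60),    # a good life
    (0.12, 10),    # a mediocre life
    (0.03, -40),   # a genuinely bad life
]

utility_to_parents = 20     # assumed net value to the parents of raising a wanted child
utility_to_strangers = 0    # treated as a wash via the substitution effect discussed below

expected_child_utility = sum(p * u for p, u in child_outcomes)  # 51.0

eu_have_child = expected_child_utility + utility_to_parents + utility_to_strangers
eu_no_child = 0.0           # baseline: the child never exists, the parents forgo the assumed benefit

print(f"EU(have child) = {eu_have_child:.1f}")  # 71.0 with these invented numbers
print(f"EU(no child)   = {eu_no_child:.1f}")
```

Under these made-up numbers the comparison favors having the child; the replies below turn on whether this kind of totting-up is the right test in the first place.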
I’d throw in considering how stable you think those high living standards are.
I’m not sure about this. It’s most likely that anything your kid does in life will get done by someone else instead. There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).
But even if this is true, it’s still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)
True—we might call the expected utility strangers get a wash because of this substitution effect. If we say the expected value most people get from me having a child is nil, it doesn’t contribute to the net expected value, but nor does it make it less positive.
It sounds as though that data’s based on samples of all types of parents, so it may not have much bearing on the subset of parents who (a) have stable (thanks NL!) high living standards, (b) are good at being parents, and (c) wanted their children. (Of course this just means the evidence is weak, not completely irrelevant.)
That’s a good point; I know of nothing in utilitarianism that says whose utility I should care about.
Whether or not someone agrees with this is going to depend on how much they care about risk aversion in addition to expected utility. (Prediction: antinatalists are more risk averse.) I think my personal level of risk aversion is too low for me to agree that I shouldn’t make any entity that has a chance of suffering negative personal utility.
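To illustrate how risk aversion can change the verdict even when the plain expected value is positive, here is a toy sketch, again with invented numbers not drawn from the comment. It simply penalizes negative outcomes by a factor k, a crude stand-in for a risk-averse utility function.

```python
# Same invented outcome distribution as in the earlier sketch.
outcomes = [(0.85, 60), (0.12, 10), (0.03, -40)]

# Risk-neutral evaluation: plain expected value.
expected_value = sum(p * x for p, x in outcomes)

def risk_averse_value(outcomes, k):
    """Crude risk-averse evaluation: weight negative outcomes k times more heavily."""
    return sum(p * (x if x >= 0 else k * x) for p, x in outcomes)

print(expected_value)                   # about 51.0: looks clearly worth it
print(risk_averse_value(outcomes, 5))   # about 46.2: still positive for a mildly risk-averse agent
print(risk_averse_value(outcomes, 50))  # about -7.8: a strongly risk-averse agent rejects the gamble
```

On this toy picture, the disagreement reduces to how heavily one weights the low-probability bad branch, which is one way of cashing out the prediction above.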
I still think that it’s silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.
Yet because of moral antirealism, the mistake is subtle. And I have yet to find a critique of antinatalism that actually gives the correct (in my view) rebuttal. Most people who try to rebut it seem to also offer arguments that are tantamount to sophistry, i.e. they are not the causal reason for the person disagreeing with the view.
And I worry, am I making a similarly subtle mistake? And as a contrarian with few good critics, would anyone present me with the correct counterargument?
I’m curious what you think the causal justification is. I’m not a fan of imputing motives to people I disagree with rather than dealing with their arguments, but one can’t help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide. In that context, antinatalism just in regard to one’s own life seems to make some sense. Thus one might think of antinatalism as arising in part from Other-Optimizing.
I promise that I genuinely did not know that when I wrote “I suspect, not the causal reason for the values it purports to justify” and thought “these people were just born with low happiness set points and they’re rationalizing.”
I don’t think antinatalism is silly, although I have not really tried to find problems with it yet. My current, not-fully-reflected position is that I would have preferred not to have existed (if that’s indeed possible) but, given that I in fact exist, I do not want to die. I don’t, right now, see screaming incoherency here, although I’m suspicious.
I would very much appreciate anyone who can point out faultlines for me to investigate. I may be missing something very obvious.
If there was an argument for antinatalism that was capable of moving us, would we have seen it? Maybe not. A LessWrong post summarizing all of the good arguments for antinatalism would be a good idea.
We have many contrarian positions, but antinatalism is one position. Personally, I think that some of the contrarian positions that some people advocate here are indeed silly.
Such as?
I knew someone would ask. :-) Ok, I’ll list some of my silliness verdicts, but bear in mind that I’m not interested in arguing for my assessments of silliness, because I think they’re too silly for me to bother with, and metadiscussion escalates silliness levels. Life is short (however long it may extend), and there are plenty of non-silly matters to think about. I generally don’t post on matters I’ve consigned to the not-even-wrong category, or vote them down for it.
Non-silly: cryonics, advanced nano, AGI, FAI, Bayesian superintelligence. (“Non-silly” doesn’t mean I agree with all of these, just that I think there are serious arguments in favour, whether or not I’m persuaded of them.)
Silly: we’re living in a simulation, there are infinitely many identical copies of all of us, “status” as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable.
Does anyone else think that some of the recurrent ideas here are silly?
ETA: Non-silly: the mission of LessWrong. Silly: Utilitarianism of all types.
There’s an odd inconsistency in how you labeled these. The last is identified by name and the first seems similarly neutral, but the third and fourth (and maybe the second—there are a lot of things that it could be referring to) are phrased to make it clear what you think is silly about them. This seems tactically poor, if you want to avoid discussion of these issues. (Or maybe the first and last are the mistake, but tactical diversity seems weird to me.)
Moreover, it seems hard for me to imagine that you pay so little attention to these topics that you believe that many people here support them as you’ve phrased them. Not that I have anything to say about how one should act differently in the two situations of encountering people who (1) endorse your silly summary of their position, versus (2) seem to make a silly claim but also claim to distinguish it from your silly summary. Of course, most of the time silly claims are far away and you never find out whether the people endorse your summary.
What probability would you assign, then, to a well-respected, oft-televised, senior scientist and establishment figure arguing in favour of the simulation hypothesis? (And I don’t mean Nick Bostrom. I mean someone who heads government committees and has tea with the queen.)
What probability would you assign to a well respected, oft-televised, senior scientist and establishment figure arguing in favor of an incompatibilist theory of free will?
I don’t think that incompatibilism is so silly it’s not worth talking about. In fact it’s not actually wrong; it is simply a matter of how you define the term “free will”.
Definitions are not a simple matter—I would claim that libertarian free will* is at least as silly as the simulation hypothesis.
But I don’t filter my conversation to ban silliness.
* I change my phrasing to emphasize that I can respect hard incompatibilism—the position that “free will” doesn’t exist.
Close to 1 as makes no difference, since I don’t think you would ask this unless there was such a person. (Tea with the queen? Does that correlate positively or negatively with eccentricity, I wonder?)
Before anyone gets offended at my silliness verdicts (presuming you don’t find them too silly to get offended by), these are my judgements on the ideas, not on the people holding them.
Ok, but the point of the question is to try to arrive at true beliefs. So imagine forgetting that I’d asked the question. What does your model of the world, which says that simulation is silly, say about the probability that a major establishment scientist, who is in no way a transhumanist, believes that we could be in a simulation? If it assigns too low a probability, maybe you should consider assigning some probability to alternative models?
I would not be at all surprised. No speculation is too silly to have been seriously propounded by some philosopher or other, and lofty state gives no immunity to silliness.
[ETA: And of course, I’m talking about ideas that I’ve judged silly despite their being seriously propounded by (some) folks here on LessWrong that I think are really smart, and after reading a whole lot of their stuff before arriving at that conclusion. So one more smart person, however prestigious, isn’t going to make a difference.]
But you changed it to “could be”. Sure, could be, but that’s like Descartes’ speculations about a trickster demon faking all our sensations. It’s unfalsifiable unless you deliberately put something into the speculation to let the denizens discover their true state, but at that point you’re just writing speculative fiction.
But if this person is arguing that we probably are in a simulation, then no, I just tune that out.
So the bottom line of your reasoning is quite safe from any evidential threats?
In one sense, yes, but in another sense... yes.
First sense: I assign a high probability to speculations on whether we are living in a simulation (or any of the other ideas I dismiss) not being worth my while outside of entertaining fictions. As a result, evidence to the contrary is unlikely to reach my notice, and even if it does, it has a lot of convincing to do. In that sense, it is as safe as any confidently held belief is from evidential threats.
Second sense: Any evidential threats at all? Now we’re into unproductive navel-gazing. If, as a proper Bayesian, I make sure that my probabilities are never quite equal to 1, and therefore answer that my belief must be threatened by some sort of evidence, the next thing is you’ll ask what that evidence might be. But why should anyone have to be able to answer that question? If I choose to question some idea I have, then, yes, I must decide what possible observations I might make that would tell either way. This may be a non-trivial task. (Perhaps for reasons relating to the small world/large world controversy in Bayesian reasoning, but I haven’t worked that out.) But I have other things to do—I cannot be questioning everything all the time. The “silly” ideas are the ones I can’t be bothered spending any time on at all even if people are talking about them on my favorite blog, and if that means I miss getting in on the ground floor of the revelation of the age, well, that’s the risk I accept in hitting the Ignore button.
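As a side note on the “never quite equal to 1” point, the arithmetic looks like this (a minimal sketch with arbitrary numbers, not an argument about any particular belief): a near-certain prior barely moves under weak contrary evidence, but a strong enough likelihood ratio does move it.

```python
def update(prior, likelihood_ratio):
    """Bayes update in odds form: likelihood_ratio is
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.999  # a confidently held belief, deliberately short of 1
print(update(p, 1 / 2))     # ~0.998: weak contrary evidence barely dents it
print(update(p, 1 / 1000))  # ~0.500: strong contrary evidence does move it
```

None of which says anything about whether hunting for such evidence is worth anyone’s time, which is closer to the point being made here.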
So in practice, yes, my bottom line on this matter (which was not written down in advance, but reached after having read a bunch of stuff of the sort I don’t read any more) is indeed quite safe. I don’t see anything wrong with that.
Besides that, I am always suspicious of this question, “what would convince you that you are wrong?” It’s the sort of thing that creationists arguing against evolution end up saying. After vigorously debating the evidence and making no headway, the creationist asks, “well, what would convince you?”, to which the answer is that to start with, all of the evidence that has just been gone over would have to go away. But in the creationist’s mind, the greater their failure to convince someone, the greater the proof that they’re right and the other wrong. “Consider it possible that you are mistaken” is the sound of a firing pin clicking on an empty chamber.
But a proponent of evolution can easily answer this: for example, if they went to the fossil record and found that it showed that all and only existing creatures’ skeletons appeared 6000 years ago, and that radiocarbon dating showed that the earth was 6000 years old.
The creationist generally puts his universal question after having unsuccessfully argued that the fossil record and radiocarbon dating support him.
I’m baffled at the idea that the simulation hypothesis is silly. It can be rephrased “We are not at the top level of reality.” Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we’re at the top.
Do you have any evidence that any of those levels have anything remotely approximating observers? (I’ll add the tiny data point that I’ve had dreams where characters have explicitly claimed to be aware. In one dream I and everyone around was aware that it was a dream and that it was my dream. They wanted me to not go on a mission to defeat a villain since if I died I’d wake up and their world would cease to exist. I’m willing to put very high confidence on the hypothesis that no observers actually existed.)
I agree that the simulationist hypothesis is not silly, but this is primarily due to the apparently high probability that we will at some point be able to simulate intelligent beings with great accuracy.
Reality isn’t stratified. A simulated world constitutes a concept of its own, apart from being referenced by the enclosing worlds. Two worlds can simulate each other to an equal degree.
I mostly agree with your list of silly ideas, though I’m not entirely sure what an FRP character sheet is, and I do think status explanations are quite important, so I probably disagree on that one. I’d add utilitarianism to the list of silly ideas as well.
Agreed about utilitarianism.
FRP = fantasy role-playing, i.e. Dungeons & Dragons and the like. A character sheet is a list of the attributes of the character you’re playing, things like Strength=10, Wisdom=8, Charisma=16, etc. (each number obtained by rolling three dice and adding them together). There are rules about what these attributes mean (e.g. on attempting some task requiring especial Charisma, roll a 20-sided die and if the number is less than your Charisma you succeed). Then there are circumstances that will give you additional points for an attribute or take them away, e.g. wearing a certain enchanted ring might give you +2 to Charisma.
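(As a purely illustrative aside: the attribute mechanic described above is simple enough to simulate. The sketch below follows the description in the comment, with 3d6 for an attribute, a roll-under check on a d20, and item bonuses as flat modifiers, rather than any particular edition’s actual rules.)

```python
import random

def roll_attribute():
    """Roll 3d6 for an attribute score, as described above."""
    return sum(random.randint(1, 6) for _ in range(3))

def check(attribute, modifier=0):
    """Roll-under check: roll a d20 and succeed if it comes up
    less than the (possibly modified) attribute score."""
    return random.randint(1, 20) < attribute + modifier

charisma = roll_attribute()
print(charisma, check(charisma))      # an unmodified Charisma check
print(charisma, check(charisma, 2))   # the same check while wearing the +2 ring
```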
Discussions of “status” here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.
RichardKennaway:
Discussions of “status” here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.
Sometimes, yes. However, in many situations, the mere recognition that status considerations play an important role—even if stated in the crudest possible character-sheet sort of way—can be a tremendous first step in dispelling widespread, deeply entrenched naive and misguided views of human behavior and institutions.
Unfortunately, since a precise technical terminology for discussing the details of human status dynamics doesn’t (yet?) exist, it’s often very difficult to do any better.
Could you expand on how those discussions of status here and on OB are different from what you’d see as a more realistic discussion of status?
I never replied to this, but this is an example of what I think is a more realistic discussion.