Does anyone else think that some of the recurrent ideas here are silly?
Such as?
I knew someone would ask. :-) Ok, I’ll list some of my silliness verdicts, but bear in mind that I’m not interested in arguing for my assessments of silliness, because I think they’re too silly for me to bother with, and metadiscussion escalates silliness levels. Life is short (however long it may extend), and there are plenty of non-silly matters to think about. I generally don’t post on matters I’ve consigned to the not-even-wrong category, or vote them down for it.
Non-silly: cryonics, advanced nano, AGI, FAI, Bayesian superintelligence. (“Non-silly” doesn’t mean I agree with all of these, just that I think there are serious arguments in favour, whether or not I’m persuaded of them.)
Silly: we’re living in a simulation, there are infinitely many identical copies of all of us, “status” as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable.
ETA: Non-silly: the mission of LessWrong. Silly: Utilitarianism of all types.
There’s an odd inconsistency in how you labeled these. The last is identified by name and the first seems similarly neutral, but the third and fourth (and maybe the second—there are a lot of things that it could be referring to) are phrased to make it clear what you think is silly about them. This seems tactically poor, if you want to avoid discussion of these issues. (Or maybe the first and last are the mistake, but tactical diversity seems weird to me.)
Moreover, I find it hard to imagine that you pay so little attention to these topics that you believe many people here support them as you’ve phrased them. Not that I have anything to say about the difference in what one should do in the two situations: encountering people who (1) endorse your silly summary of their position, versus (2) seem to make a silly claim but also claim to distinguish it from your silly summary. Of course, most of the time silly claims are far away and you never find out whether the people endorse your summary.
What probability would you assign, then, to a well-respected, oft-televised senior scientist and establishment figure arguing in favour of the simulation hypothesis? (And I don’t mean Nick Bostrom. I mean someone who heads government committees and has tea with the queen.)
What probability would you assign to a well-respected, oft-televised senior scientist and establishment figure arguing in favor of an incompatibilist theory of free will?
I don’t think that incompatibilism is so silly it’s not worth talking about. In fact, it’s not actually wrong; it is simply a matter of how you define the term “free will”.
Definitions are not a simple matter—I would claim that libertarian free will* is at least as silly as the simulation hypothesis.
But I don’t filter my conversation to ban silliness.
* I change my phrasing to emphasize that I can respect hard incompatibilism—the position that “free will” doesn’t exist.
As close to 1 as makes no difference, since I don’t think you would ask this unless there was such a person. (Tea with the queen? Does that correlate positively or negatively with eccentricity, I wonder?)
Before anyone gets offended at my silliness verdicts (presuming you don’t find them too silly to get offended by), these are my judgements on the ideas, not on the people holding them.
Ok, but the point of the question is to try to arrive at true beliefs. So imagine forgetting that I’d asked the question. What probability does your model of the world, which says that simulation is silly, assign to a major establishment scientist, who is in no way a transhumanist, believing that we could be in a simulation? If it assigns too low a probability, maybe you should consider assigning some probability to alternative models?
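(A toy Bayes calculation to make that concrete; every number below is invented purely for illustration and is not a claim about anyone’s actual priors. The point is just that if the “it’s silly” model makes such a scientist’s endorsement much more surprising than a rival model does, then observing the endorsement should shift real weight onto the rival.)

    # Toy Bayesian update; all numbers are made up for illustration only.
    prior_silly = 0.95            # prior weight on "simulation talk is silly"
    prior_rival = 0.05            # prior weight on the rival model

    p_obs_given_silly = 0.02      # chance of the scientist's endorsement under "silly"
    p_obs_given_rival = 0.30      # chance of the same endorsement under the rival model

    evidence = prior_silly * p_obs_given_silly + prior_rival * p_obs_given_rival
    posterior_silly = prior_silly * p_obs_given_silly / evidence

    print(f"posterior on 'silly' after the observation: {posterior_silly:.2f}")  # roughly 0.56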
I would not be at all surprised. No speculation is too silly to have been seriously propounded by some philosopher or other, and lofty state gives no immunity to silliness.
[ETA: And of course, I’m talking about ideas that I’ve judged silly despite their being seriously propounded by (some) folks here on LessWrong that I think are really smart, and after reading a whole lot of their stuff before arriving at that conclusion. So one more smart person, however prestigious, isn’t going to make a difference.]
But you changed it to “could be”. Sure, could be, but that’s like Descartes’ speculations about a trickster demon faking all our sensations. It’s unfalsifiable unless you deliberately put something into the speculation to let the denizens discover their true state, but at that point you’re just writing speculative fiction.
But if this person is arguing that we probably are in a simulation, then no, I just tune that out.
So the bottom line of your reasoning is quite safe from any evidential threats?
In one sense, yes, but in another sense... yes.
First sense: I assign a high probability to speculations about whether we are living in a simulation (or any of the other ideas I dismiss) not being worth my while outside of entertaining fictions. As a result, evidence to the contrary is unlikely to reach my notice, and even if it does, it has a lot of convincing to do. In that sense, it is as safe as any confidently held belief is from evidential threats.
Second sense: Any evidential threats at all? Now we’re into unproductive navel-gazing. If, as a proper Bayesian, I make sure that my probabilities are never quite equal to 1, and therefore answer that my belief must be threatened by some sort of evidence, the next thing is you’ll ask what that evidence might be. But why should anyone have to be able to answer that question? If I choose to question some idea I have, then, yes, I must decide what possible observations I might make that would tell either way. This may be a non-trivial task. (Perhaps for reasons relating to the small world/large world controversy in Bayesian reasoning, but I haven’t worked that out.) But I have other things to do—I cannot be questioning everything all the time. The “silly” ideas are the ones I can’t be bothered spending any time on at all even if people are talking about them on my favorite blog, and if that means I miss getting in on the ground floor of the revelation of the age, well, that’s the risk I accept in hitting the Ignore button.
So in practice, yes, my bottom line on this matter (which was not written down in advance, but reached after having read a bunch of stuff of the sort I don’t read any more) is indeed quite safe. I don’t see anything wrong with that.
Besides that, I am always suspicious of this question, “what would convince you that you are wrong?” It’s the sort of thing that creationists arguing against evolution end up saying. After vigorously debating the evidence and making no headway, the creationist asks, “well, what would convince you?”, to which the answer is that to start with, all of the evidence that has just been gone over would have to go away. But in the creationist’s mind, the greater their failure to convince someone, the greater the proof that they’re right and the other wrong. “Consider it possible that you are mistaken” is the sound of a firing pin clicking on an empty chamber.
But a proponent of evolution can easily answer this, for example if they went to the fossil record and found it showed that all and only existing creatures’ skeletons appeared 6000 years ago, and that radiocarbon dating showed that the earth was 6000 years old.
The creationist generally puts his universal question after having unsuccessfully argued that the fossil record and radiocarbon dating support him.
I’m baffled at the idea that the simulation hypothesis is silly. It can be rephrased “We are not at the top level of reality.” Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we’re at the top.
Do you have any evidence that any of those levels have anything remotely approximating observers? (I’ll add the tiny data point that I’ve had dreams where characters have explicitly claimed to be aware. In one dream I and everyone around was aware that it was a dream and that it was my dream. They wanted me to not go on a mission to defeat a villain since if I died I’d wake up and their world would cease to exist. I’m willing to put very high confidence on the hypothesis that no observers actually existed.)
I agree that the simulationist hypothesis is not silly, but this is primarily due to the apparently high probability that we will at some point be able to simulate intelligent beings with great accuracy.
Reality isn’t stratified. A simulated world constitutes a concept of its own, apart from being referenced by the enclosing worlds. Two worlds can simulate each other to an equal degree.
I mostly agree with your list of silly ideas, though I’m not entirely sure what an FRP character sheet is, and I do think status explanations are quite important, so I probably disagree on that one. I’d add utilitarianism to the list of silly ideas as well.
Agreed about utilitarianism.
FRP = fantasy role-playing, i.e. Dungeons & Dragons and the like. A character sheet is a list of the attributes of the character you’re playing, things like Strength=10, Wisdom=8, Charisma=16, etc. (each number obtained by rolling three dice and adding them together). There are rules about what these attributes mean (e.g. on attempting some task requiring especial Charisma, roll a 20-sided die and if the number is less than your Charisma you succeed). Then there are circumstances that will give you additional points for an attribute or take them away, e.g. wearing a certain enchanted ring might give you +2 to Charisma.
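(For anyone who has never seen one, here is a minimal sketch of those mechanics in Python; the attribute names, the 3d6 attribute rolls, and the roll-under-your-score check are just the conventions described above, not any particular edition’s actual rules.)

    import random

    def roll_attribute():
        # Roll three six-sided dice and add them together.
        return sum(random.randint(1, 6) for _ in range(3))

    def attribute_check(score):
        # Roll a 20-sided die; the attempt succeeds if the roll is under the score.
        return random.randint(1, 20) < score

    # A character sheet is just a mapping from attribute names to numbers.
    character = {name: roll_attribute() for name in ("Strength", "Wisdom", "Charisma")}

    # Circumstances can add or remove points, e.g. an enchanted ring giving +2 to Charisma.
    character["Charisma"] += 2

    print(character)
    print("Charisma check:", "success" if attribute_check(character["Charisma"]) else "failure")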
Discussions of “status” here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.
RichardKennaway:
Discussions of “status” here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.
Sometimes, yes. However, in many situations, the mere recognition that status considerations play an important role—even if stated in the crudest possible character-sheet sort of way—can be a tremendous first step in dispelling widespread, deeply entrenched naive and misguided views of human behavior and institutions.
Unfortunately, since a precise technical terminology for discussing the details of human status dynamics doesn’t (yet?) exist, it’s often very difficult to do any better.
Could you expand on how those discussions of status here and on OB are different from what you’d see as a more realistic discussion of status?
I never replied to this, but this is an example of what I think is a more realistic discussion.