This made my trust in the community and my judgement of its average quality go down a LOT, and my estimate of my own value to the community, SIAI, and the world in general go up a LOT.
Which parts, specifically?
(it didn’t have an effect like that on me, I didn’t see that many surprising things)
I expected almost everyone to agree with Eliezer on most important things, to have been here for a long time, to have read all the sequences, to spend lots of time here… In short, to be like the top posters seem to be (and even with them the halo effect might be involved), except with lower IQ and/or writing skill.
Alicorn (top-poster) doesn’t agree with Eliezer about ethics. PhilGoetz (top-poster) doesn’t agree with Eliezer. Wei_Dai (top-poster) doesn’t agree with Eliezer on AI issues. wedrifid (top-poster) doesn’t agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.
I am pretty sure Yvain doesn’t agree with Eliezer on quite a few things too (too lazy to look it up now).
Generally there are a lot of top-notch people who don’t agree with Eliezer. Robin Hanson for example. But also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez, or Katja Grace, who has been a visiting fellow.
But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI’s top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.
I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples… I’m not sure what’s going on, but I’m sure that whatever it is it’s my fault and extremely unflattering to my ability as a rationalist.
How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I’m too lazy to form my own opinions! :p
I just wait, especially considering that which interpretation of QM is correct doesn’t have urgent practical consequences.
We just learned that neutrinos might be accelerated faster than light in certain circumstances. While this result doesn’t give me too much pause, it certainly made me think about the possible practical consequences of successfully understanding quantum mechanics.
Fair enough. A deeper understanding of quantum mechanics would probably have huge practical consequences.
It isn’t obvious to me that figuring out whether the MWI is right is an especially good way to improve understanding of QM. My impression from LW is that MWI is important here for looking at ethical consequences.
I share that impression :) Plus it’s very fun to think about Everett branches and acausal trade when I pretend we would have a chance against a truly Strong AI in a box.
Sounds like plain old accidental compartmentalization. You didn’t join the dots until someone else pointed out they made a line. (Admittedly this is just a description of your surprise and not an explanation, but hopefully slapping a familiar label on it makes it less opaque.)
Holden Karnofsky has read all of the Sequences?
I wrote him an email to make sure. Here is his reply:

I’ve read a lot of the sequences. Probably the bulk of them. Possibly all of them. I’ve also looked pretty actively for SIAI-related content directly addressing the concerns I’ve outlined (including speaking to different people connected with SIAI).
IIRC Peter Thiel can’t give SIAI more than he currently does without causing some form of tax difficulties, and it has been implied that he would give significantly more if this were not the case.
Right. I remember the fundraising appeals about this: if Thiel donates too much, SIAI begins to fail the 501(c)(3) requirement that it “receives a substantial part of its income, directly or indirectly, from the general public or from the government. The public support must be fairly broad, not limited to a few individuals or families.”
That would have made my trust in the community go down a lot. Echo chambers rarely produce good results.
Surely it depends on which questions are meant by “important things”.
Granted.
The most salient one would be religion.
What surprised you about the survey’s results regarding religion?
That there are theists around?
Okay, but only 3.5%. I wonder how many are newbies who haven’t read many of the sequences yet, and I wonder how many are simulists.
Since you seem to have a sense of the community, your surprise surprises me. Will_Newsome’s contrarian defense of theism springs to mind immediately, and I know we have several people who are theists or were when they joined LW.
Also, many people could have answered the survey who are new here.
It’s also fairly unlikely that all the theists and quasitheists on LW have outed themselves as such.
Nor is there any particular reason they should.
I assumed those were rare exceptions.
Why? Don’t you encounter enough contrarians on LW?
You may think you encounter a lot of contrarians on LW, but I disagree—we’re all sheep.
But seriously, look at that MWI poll result. How many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? (‘Collapse, MWI, uh...’ - much less could discuss why they dislike pilot waves or whatever.) I suspect far fewer could do so than plumped for MWI—because Eliezer is such a fan...
I know I am a sheep and a hero-worshipper, and then the typical mind fallacy happened.
Heh. The original draft of my comment above included just this example.
To be explicit, I don’t believe that anyone with little prior knowledge about QM should update toward MWI by any significant amount after reading the QM sequence.
I disagree. I updated significantly in favour of MWI just because the QM sequence helped me introspect and perceive that much of my prior prejudice against MWI consisted of irrational biases such as “I don’t think I would like it if MWI was true. Plus I find it a worn-out trope in science fiction. Also it feels like we live in a single world.” or misapplications of rational ideas like “Wouldn’t Occam’s razor favor a single world?”
I still don’t know much of the mathematics underpinning QM. I updated in favour of MWI simply by demolishing faulty arguments I had against it.
It seems like doing this would only restore you to a non-informative prior, which still doesn’t cohere with the survey result. What positive evidence is there in the QM sequence for MWI?
The positive evidence for MWI is that it’s already there inside quantum mechanics until you change quantum mechanics in some specific way to get rid of it!
MWI, as beautiful as it is, won’t fully convince me until it can explain the Born probability—other interpretations don’t do any better, so it’s not a point “against” MWI, but it’s still an additional rule you need to make the “jump” between QM and what we actually see. As long as you need that additional rule, I have a deep feeling we haven’t reached the bottom.
I see two ways of resolving this. Both are valid, as far as I can tell. The first assumes nothing, but may not satisfy. The second only assumes that we even expect the theory to speak of probability.
1
Well, QM says what’s real. It’s out there. There are many ways of interpreting this thing. Among those ways is the Born Rule. If you take that way, you may notice our world, and in turn, us. If you don’t look at it that way, you won’t notice us, much as if you use a computer implementing a GAI as a cup holder. Yet, that interpretation can be made, and moreover it’s compact and yields a lot.
So, since that interpretation can be made, apply the generalized anti-zombie principle—if it acts like a sapient being, it’s a sapient being… And it’ll perceive the universe only under interpretations under which it is a sapient being. So the Born Rule isn’t a general property of the universe. It’s a property of our viewpoint.
2
Just from decoherence, without bringing in Born’s rule, we get the notion that sections of configuration space are splitting up and never coming back together again. If we’re willing to take from that the notion that this splitting should map onto probabilities, then there is exactly one way of mapping from relative weights of splits onto probabilities, such that the usual laws of probability apply correctly. In particular:
1) probabilities are not always equal to zero.
2) the probability of a decoherent branch doesn’t change after its initial decoherence (if it could change, it wouldn’t be decoherent), and the rules are the same everywhere, and in every direction, and at every speed, and so on.
The simplest way to achieve this is to go with ‘unitary operations don’t shift probabilities, just change their orientation in Hilbert space’. If we require that the probability rule be simpler than the physical theory it applies to (i.e. quantum mechanics itself), it’s the only one, since all of the other candidates effectively take QM, nullify it, and replace it with something else. Being able to freely apply unitary operations implies that the probability is a function only of component amplitude, not orientation in Hilbert space (sketched numerically after this list).
3) given exclusive possibilities A and B, P(A or B) = P(A) + P(B).
These three are sufficient.
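If it helps to see condition (2) concretely, here is a minimal numerical sketch (the Python, the variable names, and the rival rule are my own additions, not part of the original argument): summing squared amplitudes is untouched by an arbitrary unitary change of basis, whereas a non-quadratic rule such as summing |amplitudes| generally is not, so only the former can coexist with freely applied unitary operations.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# A random normalized state vector with complex amplitudes.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# A random unitary: the Q factor from the QR decomposition of a complex Gaussian matrix.
u, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
rotated = u @ psi  # the same state re-expressed after a unitary operation

born = lambda v: np.sum(np.abs(v) ** 2)   # squared-amplitude rule
rival = lambda v: np.sum(np.abs(v))       # a hypothetical non-quadratic rival

print("Squared amplitudes before/after unitary: ", born(psi), born(rotated))    # equal
print("Sum of |amplitudes| before/after unitary:", rival(psi), rival(rotated))  # generally differ
```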
Given a labeling b on states, we have | psi > = sum(b) [ A(b) |b>]
Define for brevity the capital letters J, K, and M as the vector component of |psi> in a particular dimension j, k, or m. For example, K = A(k) | k >
It is possible (and natural, in the language of decoherence) to choose the labeling b such that each decoherent branch gets exactly one dimension (at some particular moment—it will propagate into some other dimension later, even before it decoheres again). Now, consider two recently decohered components, K’ and M’. By running time backwards to before the split, we get the original K and M. Back at that time, we would have seen this as a different, single coherent component, J = K + M.
P ( J ) = P ( K + M) must be equal to P ( K ) + P ( M )
This could have occurred in any dimension, so we make this requirement general.
So, consider instead the ways of projecting a vector J into two orthogonal vectors, K and M. As seen above, the probability of J must not be changed by this re-projection. Let theta be the angle between J and M.
K = sin(theta) A(j) | k >
M = cos(theta) A(j) | m >
By condition (2), P(x) is a function of amplitude, not the vectors, so we can simplify the P ( J ) statement to:
P(A(j)) = P(sin(theta) A(j)) + P(cos(theta) A(j))
This must be true as a function of theta, and for any A(j). The Pythagorean theorem shows the one way to achieve this:
P(x) = C x* x for some constant C.
Since the probabilities are not identically zero, we know that C is not zero.
This, you may note, is the Born Probability Rule.
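And as a quick sanity check on that last step (again, the Python and the helper name are mine, added for illustration): among candidate rules of the form P(x) = |x|^n, only n = 2 survives the splitting requirement P(A(j)) = P(sin(theta) A(j)) + P(cos(theta) A(j)) for every theta, precisely because sin^2 + cos^2 = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_splitting_error(n, trials=1000):
    """Worst violation of P(A) = P(sin(t)*A) + P(cos(t)*A) under P(x) = |x|**n."""
    worst = 0.0
    for _ in range(trials):
        amp = rng.normal() + 1j * rng.normal()   # a random complex amplitude A(j)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        lhs = abs(amp) ** n
        rhs = abs(np.sin(theta) * amp) ** n + abs(np.cos(theta) * amp) ** n
        worst = max(worst, abs(lhs - rhs))
    return worst

for n in (1, 2, 3):
    print(f"P(x) = |x|^{n}: worst splitting violation = {max_splitting_error(n):.2e}")
# Only n = 2 comes out at ~0, i.e. the Born rule (up to the overall constant C).
```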
1 and 2 together are pretty convincing to me. The intuition runs like this: it seems pretty hard to construct anything like an observer without probabilities, so there are only observers insofar as one is looking at the world according to the Born Rule view. So an easy anthropic argument says that we should not be surprised to find ourselves within that interpretation.
Even better than that—there can be other ways of making observers. Ours happens to be one. It doesn’t need to be the only one. We don’t even need to stake the argument on that difficult problem being impossible.
I still had in my mind the arguments in favour of many-worlds, like “lots of scientists seem to take it seriously”, and the basic argument that favors any ever-increasing size of reality, which is that the more reality there is out there for intelligence to evolve in, the greater the likelihood that intelligence evolves.
Well, it mentions some things like “it’s deterministic and local, like all other laws of physics seem to be”. Does that count?
Its determinism is of a very peculiar kind, not like the determinism other laws of physics seem to have.
Demographically, there is one huge cluster of Less Wrongers: 389 (42%) straight white (including Hispanics) atheist males (including FTM) under 48 who are in STEM. I don’t actually know if that characterizes Eliezer.
It’s slightly comforting to me to know that a majority of LWers are outside that cluster in one way or another.