[...] this shouldn’t change your actual beliefs [...] it does not, by itself, constitute evidence [...] the argument itself is insufficient for drawing conclusions. Even if the hypothesis is itself hard to test.
Is that a conclusion or a hypothesis? I don’t believe there is a fundamental distinction between “actual beliefs”, “conclusions” and “hypotheses”. What should it take to change my beliefs about this?
I’ll think about how this can be phrased differently such that it might sway you. Given that you are not Valentine, is there a difference of opinion between his posts above and your views?
That part you pulled out and quoted is essentially what I was writing about in the OP. There is a philosophy-over-hard-subjects which is pursued here, in the sequences, at FHI, and is exemplified in the conclusions drawn by Bostrom in Superintelligence, and Yudkowsky in the later sequences. Sometimes it works, e.g. the argument in the sequences about the compatibility of determinism and free will works because it essentially shows how non-determinism and free will are incompatible—it exposes a cached thought that free-will == non-deterministic choice which was never grounded in the first place. But over new subjects where you are not confused in the first place—e.g. the nature and risk of superintelligence—people seem to be using thought experiments alone to reach ungrounded conclusions, and not following up with empirical studies.
That is dangerous. If you allow yourself to reason from thought experiments alone, I can get you to believe almost anything. I can’t get you to believe the sky is green—unless you’ve never seen the sky—but on anything you yourself don’t have experimental evidence for or against, I can sway you either way. E.g. that consciousness is in the information being computed and not the computational process itself. That an AI takeoff would be hard, not soft, and basically uncontrollable. That boxing techniques are foredoomed to failure regardless of circumstances. That intelligence and values are orthogonal under all circumstances. That cryonics is an open-and-shut case. On these sorts of questions we need more, not less, experimentation.
When you hear a clever thought experiment that seems to demonstrate the truth of something you previously thought to have low probability, then (1) check if your priors here are inconsistent with each other; then (2) check if there is empirical data here that you have not fully updated on. If neither of those approaches resolves the issue, then (3) notice you are confused, and seek an experimental result to resolve the confusion. If you are truly unable to find an experimental test you can perform now, then (4) operate as if you do not know which of the possible theories is true.
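The four-step checklist above can be sketched as a tiny decision procedure. This is purely illustrative; the function and flag names are my own, not anything from the discussion:

```python
def respond_to_thought_experiment(priors_consistent: bool,
                                  unabsorbed_data: bool,
                                  test_available: bool) -> str:
    """Toy model of the four-step response to a persuasive thought experiment."""
    if not priors_consistent:
        return "1: repair your inconsistent priors"
    if unabsorbed_data:
        return "2: update fully on the empirical data you already have"
    if test_available:
        return "3: notice confusion and run an experiment to resolve it"
    return "4: remain agnostic between the candidate theories"
```

The point the sketch makes explicit is that "update in favor of the thought experiment" is not a branch anywhere: the fallback when no test is available is agnosticism, not belief.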
You do not say “that thought experiment seemed convincing, so until I know otherwise I’ll update in favor of it.” That is the sort of thinking which led the ancients to believe that “All things come to rest eventually, so the natural state is a lack of motion. Planets continue in clockwork motion, so they must be a separate magisterium from earthly objects.” You may think we as rationalists are above that mistake, but history has shown otherwise. Hindsight bias makes the Greeks seem a lot stupider than they actually were.
Take a concrete example: the physical origin of consciousness. We can rule out the naïve my-atoms-constitute-my-consciousness view on biological grounds. However, I have been unable to find or construct for myself an experiment which would definitively rule out either the information-identity or computational-process theory, both of which are supported by the available empirical evidence.
How is this relevant? Some are arguing for brain preservation instead of cryonics. But since brain preservation is destructive of the computational process, it only achieves personal longevity if the information-identity theory is correct. Cryonics, on the other hand, preserves the computational substrate itself, and so achieves both information- and computational-preservation. So unless there is a much larger difference in success likelihood than appears to be the case, my money (and my life) is on cryonics. Not because I think that the computational-process theory is correct (although I do have other weak evidence that makes it more likely), but because I can’t rule it out as a possibility. I must consider the case where destructive brain preservation gets popularized at the cost of fewer cryopreservations, and it turns out that personal longevity is only achieved by preserving computational processes. So I do not support the Brain Preservation Foundation.
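The decision logic here can be restated as a small dominance table. This is a toy sketch; the truth-value assignments simply restate the paragraph’s claims about which intervention preserves personal identity under which theory, and are not established facts:

```python
# Which intervention preserves personal identity under each theory of
# consciousness? (Assignments restate the argument above, nothing more.)
outcomes = {
    "destructive brain preservation": {"information-identity": True,
                                       "computational-process": False},
    "cryonics":                       {"information-identity": True,
                                       "computational-process": True},
}

def dominant_choices(outcomes):
    """Return interventions that do at least as well under every theory."""
    return [name for name, result in outcomes.items()
            if all(result[theory] >= other[theory]
                   for other in outcomes.values()
                   for theory in result)]
```

Under these assignments only cryonics is dominant: it succeeds whichever theory turns out to be true, which is exactly the “operate as if you do not know which theory is true” step applied to a practical choice.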
To be clear, I think that arguing for destructive brain preservation at this point in time is a morally unconscionable thing to do, even though (exactly because!) we don’t know the nature of consciousness and personal identity, and there is an alternative which is likely to work no matter how that problem is resolved.
My point is that the very statements you are making, that we are all making all the time, are also very theory-loaded, “not followed up with empirical studies”. This includes the statements about the need to follow things up with empirical studies. You can’t escape the need for experimentally unverified theoretical judgement, and it does seem to work, even though I can’t give you a well-designed experimental verification of that. Some well-designed studies even prove that ghosts exist.
The degree to which discussion of familiar topics is closer to observations than discussion of more theoretical topics is unclear, and the distinction should be cashed out as uncertainty on a case-by-case basis. Some very theoretical things are crystal clear math, more certain than the measurement of the charge of an electron.
That is dangerous.
Being wrong is dangerous. Not taking theoretical arguments into account can result in error. This statement probably wouldn’t be much affected by further experimental verification. What specifically should be concluded depends on the problem, not on a vague outside measure of the problem like the degree to which it’s removed from empirical study.
[...] anything you yourself don’t have available experimental evidence for or against, I can sway you in either way. E.g. that consciousness is in information being computed and not the computational process itself.
Before considering the truth of a statement, we should first establish its meaning, which describes the conditions for judging its truth. For a vague idea, there are many alternative formulations of its meaning, and it may be unclear which one is interesting, but that’s separate from the issue of thinking about any specific formulation clearly.
Ghosts specifically seem like too complicated a hypothesis to extract from any experimental results I’m aware of. If we didn’t already have a concept of ghosts, I doubt any parapsychology experiments that have taken place would have caused us to develop one.
I’m not aware of ghost studies; Scott talks about telepathy and precognition studies.