AAT has two very important qualifiers that you have to include if you’re discussing it. First, both individuals have to have the same priors. If I have some reason to believe that the other party and I have different priors, then my confidence that AAT applies will decrease as well. In the case of the creationist, I think it is at least likely that we have some different priors (Occam’s razor, the nature of causality, etc.), so I won’t be terribly surprised if AAT doesn’t seem to work in this case. Second, there is that picky clause about “perfect Bayesians”. If a creationist is not accurately updating on the evidence, then I shouldn’t expect AAT to work. If you really want to check yourself with AAT, you could (assuming you have the same priors) determine what an ideal Bayesian would do with the evidence the other person has and see whether AAT works then. That math seems like it would get really complicated really fast.
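Purely as an illustration of those two qualifiers (this is my sketch, not anything from the original comment): two agents who start from an identical prior and correctly update on the pooled evidence land on the same posterior, which is the easy special case of agreement. All the numbers and the helper function below are invented for the example, and the pieces of evidence are treated as conditionally independent given the hypothesis.

```python
# Minimal sketch of "same priors + proper Bayesian updating".
# Two agents share a prior over hypothesis H, each has seen different
# evidence; once they pool likelihoods, their posteriors coincide.

def posterior(prior_h, likelihoods):
    """Update P(H) given (P(e|H), P(e|not H)) pairs, assuming the
    pieces of evidence are independent given H."""
    odds = prior_h / (1.0 - prior_h)           # prior odds for H
    for p_e_given_h, p_e_given_not_h in likelihoods:
        odds *= p_e_given_h / p_e_given_not_h  # multiply in each Bayes factor
    return odds / (1.0 + odds)                 # back to a probability

shared_prior = 0.5                             # qualifier 1: identical priors
alice_evidence = [(0.8, 0.3)]                  # evidence only Alice has seen
bob_evidence   = [(0.6, 0.9)]                  # evidence only Bob has seen

# Qualifier 2: both update correctly on the *pooled* evidence.
alice_view = posterior(shared_prior, alice_evidence + bob_evidence)
bob_view   = posterior(shared_prior, bob_evidence + alice_evidence)
print(alice_view, bob_view)                    # identical, as AAT expects
```

The interesting (and much harder) part of AAT is that the agents only need to exchange posteriors, not the underlying evidence, which is where the math gets complicated fast.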
With regards to the superintelligence/schizophrenics question: we can all agree that the AI would actually get the wrong answer if it inferred, “I think I am a superintelligence, therefore I probably am a schizophrenic”. However, every schizophrenic with delusions of deification would draw the right conclusion. If I were trying to code a superintelligence, I wouldn’t deliberately write the AI to do this, but if anyone ever reads this who thinks that they are a superintelligence, maybe you should try to test that claim, just to be sure. Write down some predictions and test how accurate you actually are.
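If you actually wanted to run that test, one simple scoring rule is the Brier score over your stated probabilities. A minimal, purely illustrative sketch (all the prediction data here is made up):

```python
# Score stated probabilities against 0/1 outcomes with the Brier score.
# Lower is better: 0.0 is perfect, and always saying 0.5 scores 0.25.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (stated probability, what actually happened: 1 = true, 0 = false)
my_predictions = [(0.9, 1), (0.7, 0), (0.99, 1), (0.6, 1)]

print(brier_score(my_predictions))
```

A genuine superintelligence ought to score dramatically better than a well-calibrated human over enough predictions.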
On a side note, I’ve often wondered what would happen if a schizophrenic were taught the methods of rationality, or a rationalist developed schizophrenia. Have any cases like that ever occurred? Is it something we should try to test?
Not schizophrenia, but reading LW has had untypeably immense positive effects on my mental health, which was rotten to the point that I’m very confident there wouldn’t exist a “me”, one way or another, if I hadn’t.
I’d be surprised if it didn’t help with schizophrenia as well.
I am curious about the details of how LW had those immense positive effects, Armok.
The obvious, boring way: by removing delusions and granting the tools and will for gradual self-improvement.
And perhaps most importantly, realizing that being insane was a bad thing and I should do something about it.
I was inspired by the later scenes in A Beautiful Mind, where Nash was still hallucinating as he went about his day but he chose to just ignore his visions of people he knew were not real.
That movie was very interesting. The scene that caught my attention the most was when he realized the little girl couldn’t be real because she never aged.
I wonder what would happen if you went to a psych ward and started teaching a schizophrenic patient the scientific method, not specifically in relation to their visions, but just about natural phenomena. Would they be able to shake off their delusions?
I think a better test would be to teach people prone to developing schizophrenia and then see if it helps those who do develop it. It would be much easier to teach rationality before the onset of schizophrenia, to boot.
Absolutely we should run that test, and I suspect it would help. The experiment I proposed, however, was motivated more by the question, “would it be possible to teach rationality to someone who cannot trust their own perceptions, and in fact may not yet realize that their perceptions are untrustworthy?” Is rationality genuinely not possible in that case? Or is it possible to give them enough rational skill to recover from the most deeply set delusions humans can have?
People affected by Charles Bonnet syndrome, according to Wikipedia, are often sane and able to distinguish their hallucinations as hallucinations.