The fact that your post was upvoted so much makes me take it seriously; I want to understand it better. Currently I read your post as merely a general skeptical worry. Sure, maybe we should never be very confident in our FAI-predictions, but to the extent that we are confident, we can allow that confidence to influence our other beliefs and decisions; and we should be at least somewhat confident in some things (the alternative, complete and paralyzing skepticism, is absurd). Could you explain more what you meant, or explain what you think my mistake is in the above reasoning?
Of course, Bayesians want to be Mr. degrees-of-belief Carneades, not Mr. know-nothing Berkeley. Far be it from me to suggest that we ought to stop making models. It just worried me that you were so willing to adjust your behavior based on inherently untrustworthy predictions.
An acquaintance of mine liked to claim that superhuman intelligence was one of Superman's powers. The idea immediately struck me as contradictory: nothing Superman does will ever be indicative of superhuman intelligence as long as the scriptwriter is human. My point here is the same: your model of an ideal FAI will fall short of accurately simulating the ideal just as much as your own mind falls short of being ideal.