I would venture that a zero prior is often (though not always) applied, in practice if not in theory, to theories that defy the known laws of a given age. Basically, some people will go to their graves before updating their priors about some theory or another, including notable scientists. It seems reasonable to model such cases as ones where someone holds a zero prior, which then leaves them struggling with perceived impossibilities.
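To make the mechanics concrete, here is a minimal sketch of why a literal zero prior can never be revised, whatever the evidence. By Bayes’ rule, if $P(H) = 0$ then

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = \frac{P(E \mid H) \cdot 0}{P(E)} = 0,$$

so no observation, however striking, can move the posterior off zero. That is the formal sense in which a zero prior and an unshakable dogma behave identically.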
Now, I’d like to point out that scientists and philosophers have in the past been placed in the position of needing to “believe the impossible”, for instance when new evidence accumulates that defies a strongly held belief about the known physics of the age. In this sense, the topic of your post is perhaps more relevant than it might appear on the surface. That is to say, “believing the impossible” is an occupational hazard for many people who work with supposedly immutable laws (e.g. physics), and I believe this topic is worth more attention.
For instance, replace the JFK scenario with some real-world examples, and we see the issue is not hypothetical. Top-of-mind examples include disbelief in the atom at the end of the 19th century (e.g. Max Planck), or in spacetime at the start of the 20th (e.g. Henri Bergson). Their stories are less sexy than a JFK conspiracy, but unlike conspiracy crackpots, their persistent disbelief in a theory was highly influential in its day.
Assuming I haven’t lost you with all this philosophy of science, let me bring it back home to AI safety. How many AI researchers have put a zero prior (in practice, if not in theory) on Goertzel’s so-called “Benevolent AI”? How many have put a zero prior on Yudkowsky’s so-called “Friendly AI”? How many have put a zero prior on either one being provably safe? How many have put a zero prior on ethics being solvable in the first place?
I don’t doubt that many people began with non-zero priors on all these issues. But in practice, time has a way of ossifying beliefs, to the point where there may eventually be no distinguishing between a zero prior and an unshakable dogma. So I wonder whether “believing impossible things” might turn out to be an occupational hazard here as well. And it’s in this regard that I read your post with interest, since if your conclusion is correct, then in practice (if not in theory) it might not matter all that much. Indeed, Einstein did get a Nobel despite Bergson’s protests, and atomic physics did become mainstream despite Planck’s misgivings. We may never know what beliefs they actually went to their graves with, but in theory, it doesn’t matter.
That’s slightly different—society reaching the right conclusion, despite some members of it being irredeemably wrong.
A closer analogy would be a believer in psychics or the supernatural who has lots of excuses ready to explain away experiments: their expectations have changed even if they haven’t revised their beliefs.