The actual Bayesian response, for both the AGI case and the Second Coming case, is that both hypotheses are invalid from the start due to underspecification, so any probability estimates or utility-based decision making for these hypotheses are also invalid.
I wouldn’t call either hypothesis invalid. People just use the same words to refer to different things. This is true of all words and hypotheses to some degree. When there is little to no contention that we’re not in New York, or that we don’t have AGI, or that the Second Coming hasn’t happened, those differences are not apparent. But presumably there is some correlation between the different interpretations, such that when the Event does take place, contention rises to a degree that grows as that correlation falls[1]. (Where by “Event” I mean an event that lies within some semantic distance of the average interpretation[2].)
Formally, I say that $P(\text{contention}_{\text{AGI}} \mid \neg\text{AGI}) \ll P(\text{contention}_{\text{AGI}} \mid \text{AGI})$, meaning that

$$V_{\text{AGI}} = \frac{P(\text{contention}_{\text{AGI}} \mid \neg\text{AGI})}{P(\text{contention}_{\text{AGI}} \mid \text{AGI})}$$

is small, where $V_{\text{AGI}}$ can be considered a measure of how vaguely the term AGI is specified.
The more vaguely an event is specified, the more contention there is when the event takes place. Conversely, the more precisely an event is specified, the less contention there is when the event takes place. It’s kind of obvious when you think about it. Using Bayes’ law we can additionally say the following.
$$P(\text{AGI} \mid \text{contention}_{\text{AGI}}) = \frac{1}{1 + V_{\text{AGI}}\,\dfrac{1 - P(\text{AGI})}{P(\text{AGI})}}$$
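For completeness, here is the derivation from Bayes’ law, writing $A$ for AGI and $C$ for $\text{contention}_{\text{AGI}}$ (my shorthand, not notation from the original):

$$P(A \mid C) = \frac{P(C \mid A)\,P(A)}{P(C \mid A)\,P(A) + P(C \mid \neg A)\,P(\neg A)} = \frac{1}{1 + \dfrac{P(C \mid \neg A)}{P(C \mid A)}\cdot\dfrac{1 - P(A)}{P(A)}} = \frac{1}{1 + V_{\text{AGI}}\,\dfrac{1 - P(A)}{P(A)}}$$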
That is, when there is contention about whether a vaguely defined event such as AGI has occurred, your posterior probability should be high, modulated by your prior for AGI (the posterior increases monotonically with the prior). I think it’s also possible to say that the more contentious an event, the higher the probability that it has occurred, but that may require some additional assumptions about the distribution of interpretations in semantic space.
An important difference between AGI and the Second Coming (at least among rationalists and AI researchers) is that the latter generally has a much lower prior probability than the former.
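As a minimal numeric sketch of both points (the function and all numbers below are my own illustrative placeholders, not values from the discussion):

```python
def posterior_given_contention(prior: float, vagueness: float) -> float:
    """P(event | contention) per the formula above:
    1 / (1 + V * (1 - prior) / prior), where
    V = P(contention | no event) / P(contention | event)."""
    return 1.0 / (1.0 + vagueness * (1.0 - prior) / prior)

V = 0.1  # illustrative: contention is 10x more likely if the event occurred

# The posterior increases monotonically with the prior (V held fixed).
for prior in (0.01, 0.1, 0.3, 0.7):
    print(f"prior={prior:.2f} -> posterior={posterior_given_contention(prior, V):.3f}")

# Same vagueness, very different priors: contention about AGI should move
# you much more than contention about the Second Coming.
print(posterior_given_contention(0.3, V))    # placeholder AGI prior
print(posterior_given_contention(1e-6, V))   # placeholder Second Coming prior
```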
[1] Assuming rational actors.
[2] Assuming a unimodal distribution of interpretations in semantic space.