Disagreement on AGI Suggests It’s Near
If I’m planning a holiday to New York (and I live pretty far from New York), it’s quite straightforward to get fellow travellers to agree that we need to buy plane tickets to New York. Which airport? Eh, whichever is more convenient, I guess. Alternatively, some may prefer a road trip to New York, but the general direction is obvious for everyone.
However, as the holiday gets closer in time or space, the question of what we actually mean by a holiday in New York becomes more and more contentious. Did we mean New York State or New York City? Did we mean Brooklyn or Broadway? Which Broadway theater? Which show? Which seats?
By my estimation, the fact that the question of whether AGI has been achieved is so broadly contentious shows that we are so close to it that the term has lost its meaning, in the same way that “Let’s go to New York!” loses its meaning when you’re already standing in Times Square.
It’s time for more precise definitions of the space of possible minds that we are now exploring. I have my own ideas, but I’ll leave those for another post…
Agree. This post captures the fact that, time and again, historical benchmarks in AI once perceived as insurmountable have been surpassed. Those not fully cognizant of the situation have been repeatedly surprised. People, for reasons I cannot fully work out, will continue to engage in motivated reasoning against current and expected near-term AI capabilities and/or economic value, with part of the evidence-downplaying consisting of shifting the goalposts for how AGI is defined or which capability thresholds count as impressive (see: moving goalposts). On a related note, your post also brings to mind the apologue of the boiling frog, lately with respect to scaling curves.
people disagree heavily on what the second coming will look like. this, of course, means that the second coming must be upon us
You’re kind of proving the point; the Second Coming is so vaguely defined that it might as well have happened. Some churches preach this.
If the Lord Himself did float down from Heaven and gave a speech on Capitol Hill, I bet lots of Christians would deride Him as an impostor.
Specifically, as an antichrist, as the Gospels specifically warn that “false messiahs and false prophets will appear and produce great signs and omens”, among other things. (And the position that the second coming has already happened—completely, not merely partially—is hyperpreterism.)
suppose I believe the second coming involves the Lord giving a speech on capitol hill. one thing I might care about is how long until that happens. the fact that lots of people disagree about when the second coming is doesn’t mean the Lord will give His speech soon.
similarly, the thing that I define as AGI involves AIs building Dyson spheres. the fact that other people disagree about when AGI is doesn’t mean I should expect Dyson spheres soon.
The amount of contention says something about whether an event occurred according to the average interpretation. Whether it occurred according to your specific interpretation depends on how close that interpretation is to the average interpretation.
You can’t increase the probability of getting a million dollars by personally choosing to define a contentious event as you getting a million dollars.
My response to this is to focus on when a Dyson Swarm is being built, not AGI, because that term is easier to define without controversy.
And a large portion of the disagreement here fundamentally revolves around being unable to coordinate on what a given word means, which doesn’t matter at all from an epistemic perspective, but does matter from a utility/coordination perspective, since coordination is required for a lot of human feats.
What is the definition of a Dyson Swarm? Is it really easier to define, or just easier to see that we are not there, only because we are not close yet?
Unfortunately, I fear this applies to basically everything I could in principle make a benchmark around, mostly because of my own limited abilities.
The actual Bayesian response, for both the AGI case and the Second Coming case, is that both hypotheses are invalid from the start due to underspecification, so any probability estimates or utility-based decision making over these hypotheses are also invalid.
I wouldn’t call either hypothesis invalid. People just use the same words to refer to different things. This is true for all words and hypotheses to some degree. When there is little to no contention that we’re not in New York, or that we don’t have AGI, or that the Second Coming hasn’t happened, then those differences are not apparent. But presumably there is some correlation between the different interpretations, such that when the Event does take place, contention rises to a degree that increases as that correlation decreases[1]. (Where by Event I mean some event that is semantically within some distance to the average interpretation[2].)
Formally, I say that $P(\text{contention}_{\text{AGI}} \mid \neg\text{AGI}) \ll P(\text{contention}_{\text{AGI}} \mid \text{AGI})$, meaning $V_{\text{AGI}} = \frac{P(\text{contention}_{\text{AGI}} \mid \neg\text{AGI})}{P(\text{contention}_{\text{AGI}} \mid \text{AGI})}$ is small, where $V_{\text{AGI}}$ can be read as an (inverse) measure of how vaguely the term AGI is specified: the vaguer the term, the smaller $V_{\text{AGI}}$.
The more vaguely an event is specified, the more contention there is when the event takes place. Conversely, the more precisely an event is specified, the less contention there is when the event takes place. It’s kind of obvious when you think about it. Using Bayes’ law we can additionally say the following.
$$P(\text{AGI} \mid \text{contention}_{\text{AGI}}) = \frac{1}{1 + V_{\text{AGI}} \cdot \frac{1 - P(\text{AGI})}{P(\text{AGI})}}$$
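Spelling out the step from Bayes’ law, in case it helps (this is just the standard algebra, using the definitions above):

$$P(\text{AGI} \mid \text{contention}_{\text{AGI}}) = \frac{P(\text{contention}_{\text{AGI}} \mid \text{AGI})\,P(\text{AGI})}{P(\text{contention}_{\text{AGI}} \mid \text{AGI})\,P(\text{AGI}) + P(\text{contention}_{\text{AGI}} \mid \neg\text{AGI})\,(1 - P(\text{AGI}))}$$

Dividing numerator and denominator by $P(\text{contention}_{\text{AGI}} \mid \text{AGI})\,P(\text{AGI})$ and substituting the definition of $V_{\text{AGI}}$ gives the expression above.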
That is, when there is contention about whether a vaguely defined event such as AGI has occurred, your posterior probability should be high, modulated by your prior for AGI (the posterior increases monotonically with the prior). I think it’s also possible to say that the more contentious an event, the higher the probability that it has occurred, but that may require some additional assumptions about the distribution of interpretations in semantic space.
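To make the modulation by the prior concrete, here is a minimal numerical sketch of the formula above; the prior of 0.2 and the values of $V_{\text{AGI}}$ are purely illustrative assumptions, not numbers taken from the comment:

```python
# Minimal sketch of the posterior update P(AGI | contention) described above.
# Both the prior and the V values below are illustrative assumptions.

def posterior_given_contention(prior: float, v: float) -> float:
    """Return 1 / (1 + V * (1 - prior) / prior), i.e. P(AGI | contention)."""
    return 1.0 / (1.0 + v * (1.0 - prior) / prior)

prior = 0.2  # assumed prior P(AGI)
for v in (1.0, 0.5, 0.1, 0.01):  # smaller V = vaguer term, per the comment above
    print(f"V = {v:>5} -> posterior = {posterior_given_contention(prior, v):.3f}")
```

With $V_{\text{AGI}} = 1$ contention tells you nothing (the posterior stays at the prior of 0.2), while as $V_{\text{AGI}}$ shrinks the same observation of contention pushes the posterior toward 1 (about 0.71 at $V_{\text{AGI}} = 0.1$).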
An important difference between AGI and the Second Coming (at least among rationalists and AI researchers) is that the latter generally has a much lower prior probability than the former.
[1] Assuming rational actors.
[2] Assuming a unimodal distribution of interpretations in semantic space.
I think that, in your New York example, the increasing disagreement is driven by people spending more time thinking about the concrete details of the trip. They do so because it is obviously more urgent, because they know the trip is happening soon. The disagreements were presumably already there in the form of differing expectations/preferences, and were only surfaced later on as they started discussing things more concretely. So the increasing disagreements are driven by increasing attention to concrete details.
It seems likely to me that the increasing disagreement around AGI is also driven by people spending more time thinking about the concrete details of what constitutes AGI. But whereas in the New York example we can assume people pay more attention to the details because they know the trip is upcoming, with AGI people don’t know when it will happen, so there must be some other reason.
One reason could be “a bunch of people think/feel AGI is near”, but we already knew that before noticing disagreement around AGI. Another reason could be that there’s currently a lot of hype and activity around AI and AGI. But the fact that there’s lots of hype around AI/AGI doesn’t seem like much evidence that AGI is near, and to the extent it is, that can be stated more directly than through a detour via disagreements.