Self-embedded Agent and I agreed on the following bet: they paid me $1000 a few days ago. I will pay them $1100, inflation-adjusted, if there is no AGI in 2030.
Ramana Kumar will serve as the arbiter. In case of unforeseen events we will renegotiate in good faith.
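For concreteness, here is a minimal sketch of how I read the ‘inflation-adjusted’ clause, assuming a simple CPI ratio; the function name and the CPI figures below are hypothetical illustrations, not part of the bet terms.

```python
# Sketch of one plausible reading of "inflation adjusted" (not the bettors'
# official formula): scale the $1,100 payout by the change in a consumer
# price index between the bet date and the settlement date.

def inflation_adjusted_payout(base_usd: float, cpi_at_bet: float, cpi_at_settlement: float) -> float:
    """Return `base_usd` scaled by the CPI ratio between the two dates."""
    return base_usd * (cpi_at_settlement / cpi_at_bet)

# Illustrative numbers only; these CPI values are made up.
payout = inflation_adjusted_payout(1100, cpi_at_bet=290.0, cpi_at_settlement=350.0)
print(f"${payout:.2f}")  # -> $1327.59, i.e. $1,100 in bet-date dollars paid in 2030 dollars
```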
As a guideline for ‘what counts as AGI’ they suggested the following, to which I agreed:
“the Arbiter agrees with the statement ‘there is convincing evidence that there is an operational Artificial General Intelligence’ on 6/7/2030”
Defining an artificial general intelligence is a little hard and has a strong ‘know it when you see it’ vibe, which is why I’d like to leave it up to Ramana’s discretion.
We hold these properties to be self-evident requirements for a true Artificial General Intelligence:
- be able to equal or outperform any human on virtually all relevant domains, at least theoretically
-> there might be e.g. physical tasks that it is artificially constrained from completing because it lacks actuators, for instance, but it should be able to do this ‘in theory’. Again, I leave it up to the arbiter to make the right judgement call here.
- it should be able to asymptotically equal or outperform human performance on a task given the same fixed data, compute, and prior knowledge
- it should autonomously be able to formalize vaguely stated directives into tasks and solve these (where a human could)
- it should be able to solve difficult unsolved maths problems for which there are no similar cases in its dataset (again, ‘difficult’ is a know-it-when-you-see-it call)
- it should be immune to, or at least outperform humans against, an adversarial opponent (e.g. it shouldn’t fail Gary Marcus-style questioning)
- it should equal or outperform humans on causal & counterfactual reasoning
This list is not a complete enumeration but a moving goalpost (importantly, one set by Ramana, not me!)
-> as we understand more about intelligence, we peel off capability layers that turn out not to be essential to, or merely downstream of, ‘true’ intelligence.
Importantly, I expect near-future ML systems to start outperforming humans in virtually all (data-rich) clearly defined tasks (almost) purely through scale, but I feel that an AGI should be able to solve data-poor, vaguely defined tasks, be robust to adversarial actions, correctly perform counterfactual & causal reasoning, and be able to autonomously ‘formalize questions’.
I made an almost identical bet with Tobias Baumann about two years ago.