Daniel Kokotajlo and I agreed on the following bet: I paid Daniel $1000 today. Daniel will pay me $1100, inflation adjusted, if there is no AGI in 2030.
Ramana Kumar will serve as the arbiter. In the event of unforeseen circumstances, we will renegotiate in good faith.
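For concreteness, here is a minimal sketch of the bet’s payoff arithmetic (in Python). The CPI figures are hypothetical placeholders and the time value of the $1000 stake is ignored; this is an illustration, not part of the bet’s terms.

```python
# Minimal sketch of the bet's payoff arithmetic.
# Assumptions: the CPI values below are hypothetical placeholders, and the
# time value / opportunity cost of the $1000 stake is ignored.

def inflation_adjusted(nominal: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a nominal amount by CPI growth to preserve purchasing power."""
    return nominal * (cpi_now / cpi_then)

stake = 1000.0   # paid to Daniel today
payout = 1100.0  # owed back, in today's dollars, if there is no AGI in 2030

# The $1000 side nets +$100 (real) if there is no AGI, and -$1000 if AGI arrives.
# Break-even: p * (-stake) + (1 - p) * (payout - stake) = 0
break_even_p = (payout - stake) / payout
print(f"The $1000 side profits in expectation iff P(AGI by 2030) < {break_even_p:.1%}")

# Example settlement with placeholder CPI values (100 at bet time, 120 in 2030):
print(f"Nominal amount owed in 2030: ${inflation_adjusted(payout, 100.0, 120.0):.2f}")
```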
As a guideline for ‘what counts as AGI’ I suggested the following, to which Daniel agreed:
“the Arbiter agrees with the statement ‘there is convincing evidence that there is an operational Artificial General Intelligence’ on 6/7/2030”
Defining an artificial general intelligence is a little hard and has a strong ‘know it when you see it’ vibe, which is why I’d like to leave it up to Ramana’s discretion.
We hold these properties to be self-evident requirements for a true Artificial General Intelligence:
1. be able to equal or outperform any human on virtually all relevant domains, at least theoretically
-> there might be physical tasks it is artificially constrained from completing because it lacks actuators, for instance, but it should be able to do these ‘in theory’. Again, I leave it up to the arbiter to make the right judgement call here.
2. it should be able to autonomously formalize vaguely stated directives into tasks and solve these (if a human could)
3. it should be able to solve difficult unsolved maths problems for which there are no similar cases in its training data
(again ‘difficult’ is a know-it-when-you-see-it call)
4. it should be immune to, or at least outperform humans against, an adversarial opponent (e.g. it shouldn’t fail Gary Marcus-style questioning)
5. it should equal or outperform humans on causal & counterfactual reasoning
6. this list is not a complete enumeration but a moving goalpost (importantly, set by Ramana, not me!)
-> as we understand more about intelligence, we peel off capability layers that turn out not to be essential to / downstream of ‘true’ intelligence
Importantly, I expect near-future ML systems to start outperforming humans in virtually all (data-rich) clearly defined tasks (almost) purely through scale, but I feel that an AGI should be able to solve data-poor, vaguely defined tasks, be robust to adversarial actions, correctly perform counterfactual & causal reasoning, and be able to autonomously ‘formalize questions’.
I read this and immediately thought about whether I’d want to take that bet. I feel like AGI by 2030 is about… 50/50 for me? My confidence interval has shifted month-to-month over the past couple of years, what with a lot of my mental landmarks being passed sooner than I expected, but I’m still at something like 8 years +/- 6 years. So that leaves me not strongly inclined either way. I previously took a bet about 2030, which I feel good about, that specified a near-but-not-fully-AGI system being extant.