A bet for Samo Burja

I’m listening to Samo Burja talk with Nathan Labenz on the Cognitive Revolution podcast. Samo said that he would bet that AGI is coming, perhaps in the next 20–50 years, but not in the next 5.

I will take that bet. I can’t afford to make an impressively large bet, because my counterfactual income is already tied up in a bet against the universe: I quit my well-paying industry job as a machine learning engineer / data scientist three years ago to focus on AI safety/alignment research. To make the bet interesting, I will therefore offer 10:1 odds. I bet $1000 USD against your $100 USD that AGI will be invented in the next 5 years. There are a lot of possible resolution criteria, but as a reasonable Schelling point I’ll accept this Metaculus market: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
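To spell out what those odds imply: taking my side of the bet is positive expected value only if my probability $p$ of AGI arriving within five years satisfies

$$100\,p - 1000\,(1 - p) > 0 \quad\Longleftrightarrow\quad p > \frac{1000}{1100} \approx 0.91,$$

that is, at these odds the bet is only profitable in expectation for someone with better than roughly 90% confidence in AGI within five years.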

I’ll describe my rationale here, in case I change your mind and make you not want the bet. ;-)

I agree with your premise that AGI will require fundamental scientific advances beyond currently deployed tech like transformer LLMs.

I agree that scientific progress is hard: usually slow and erratic, and fundamentally different from engineering or bringing a product to market.

I agree with your estimate that the current hype around chat LLMs, and the focus on bringing better versions to market, is slowing fundamental scientific progress by distracting top AI scientists from the pursuit of theoretical advances.

My cruxes are these:

  1. I believe LLMs will scale close enough to AGI to become central parts of very useful tools. I believe that these tools will enable human AI scientists to make rapid theoretical progress. I expect that these AI research systems (I won’t say researchers, since in this scenario they are still sub-AGI) will enable massively parallel testing of hypotheses derived as permutations of a handful of initial ideas supplied by the human scientists (see the toy sketch after this list). I also foresee these AI research systems mining the existing scientific literature for hypotheses to test. I believe the result of this technology will be the rapid discovery of algorithms that can actually scale to true AGI.

  2. I have been following advances in neuroscience relevant to brain-inspired AI for over 20 years now. I believe that the neuroscience community has made some key breakthroughs in the past five years which have yet to be effectively exported to machine learning and tested at scale. I also believe there’s a backlog of older neuroscience findings that haven’t been fully tested either. Thus, I believe the existing neuroscience literature provides a rich source of testable, under-explored hypotheses. These could be tackled rapidly by the AI research systems from point 1, or will eventually be digested by eager young scientists looking for an academic ML paper to kickstart their careers. Thus the two cruxes are independent but potentially highly synergistic.
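To make crux 1 concrete, here is a toy sketch of the kind of loop I have in mind: derive candidate hypotheses as combinations of a few seed ideas, evaluate them all in parallel, and keep the winners. Everything in it (the seed ideas, the scoring stub) is hypothetical illustration, not a description of any existing system; in reality each evaluation would be an expensive training run, and the point is only that the search itself is embarrassingly parallel.

```python
# Hypothetical illustration of crux 1: a sub-AGI "research system" that
# tests many permutations of human-supplied seed ideas in parallel.
# None of this describes a real system; the scoring is a placeholder.
import random
import zlib
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

# A handful of initial ideas supplied by human scientists (made up here).
SEED_IDEAS = [
    "sparse activations",
    "local learning rules",
    "dendritic gating",
    "predictive coding",
    "fast-weight memory",
]

def generate_hypotheses(seeds: list[str], k: int = 2) -> list[str]:
    """Derive candidate hypotheses as k-way combinations of the seed ideas."""
    return [" + ".join(combo) for combo in combinations(seeds, k)]

def run_experiment(hypothesis: str) -> float:
    """Stand-in for a real training run: returns a deterministic fake score."""
    rng = random.Random(zlib.crc32(hypothesis.encode()))
    return rng.random()

if __name__ == "__main__":
    hypotheses = generate_hypotheses(SEED_IDEAS)
    # Massively parallel evaluation: one worker per candidate experiment.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(run_experiment, hypotheses))
    best_score, best_hypothesis = max(zip(scores, hypotheses))
    print(f"most promising candidate: {best_hypothesis!r} (score {best_score:.3f})")
```

Crux 2 just changes where the seed hypotheses come from: mined out of the neuroscience literature instead of (or in addition to) human brainstorming.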

I look forward to your response!

Regards,
Nathan Helm-Burger