This is for a person with no ML background. He is 55, he liked the Sequences, and I recently managed to convince him that AI risk is serious by recommending a bunch of LessWrong posts on it, but he still thinks it's astronomically unlikely that AGI is <80 years away.
There are a lot of other people like this, so I think knowing what the best explainer is would be valuable beyond just my case.
Has he personally tried interacting with GPT-4? I can't think of a better way. It convinced even Bryan Caplan, who had publicly bet against it.
I don't even bring up AI at all; they can figure that part out on their own easily enough. Much more important is Modeling the Human Trajectory.
AI Timelines: Where the Arguments, and the “Experts,” Stand would be my best bet, as well as mukashi’s recommendation of playing with ChatGPT / GPT-4.
I might recommend the Most Important Century series on Cold Takes. It’s long, but it’s accessible and comprehensive.
This is difficult for people with no ML background. One first has to explain what timelines are, then what averages and ranges most researchers in the field hold, and then why some discount those in favor of short AI timelines. That is a long arc for a skeptical person.
Aren't we all skeptical people? Carl Sagan said that extraordinary claims require extraordinary evidence, and explaining a short timeline is a heavy lift by its very nature.