Well done! I’m very happy to see this; I think this sort of scenario-forecasting exercise is underrated, and more people should do it more often. (I’ve continued to do it, privately, since writing What 2026 Looks Like.)
I encourage you to think about why, in your story, the singularity hasn’t happened yet by EOY 2025. I guess GPT-6 is good at lots of things but not particularly good at accelerating AI R&D? Why not? For example, perhaps peruse takeoffspeeds.com and see if you can fiddle with the inputs to get an output graph that looks roughly like what you expect.
Similarly, think about whether GPT-6 is agentic / strategic / goal-directed / coherent. Why or why not?
If it is, is it optimizing towards aligned or misaligned goals? Why or why not?
Thanks! Yeah, those are definitely questions I’d like to find some time to think about harder (and encourage others to think about too). Writing towards the end, I did feel some tension at not having explained why things weren’t game over one way or another, given that the GPT-6 I describe is pretty damn capable.
My bet is that GPT-5 will be capable of recursive self-improvement. Not the strong FOOM-within-days that some have proposed as a possibility, but more like a 100x improvement within a year. Nevertheless, I’m still hopeful that the dangers will become more evident, and that even self-interested actors will take precautionary action.