I am an engineer and entrepreneur trying to make sure AI is developed without killing everybody. I founded and was CEO of the startup Ripe Robotics from 2019 to 2024. | hunterjay.com
HunterJay
That’s a good point, I’ll write up a brief explanation/disclaimer and put it in as a footnote.
Typo corrected, thanks for that.
I agree, it’s more likely for the first AGI to begin on a supercomputer at a well-funded institution. If you like, you can imagine that this AGI is not the first, but simply the first not effectively boxed. Maybe its programmer simply implemented a leaked algorithm that was developed and previously run by a large project, but changed the goal and tweaked the safeties.
In any case, it’s a story, not a prediction, and I’d defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce its probability to infinitesimal. I’m just trying to give a sense of what a takeoff could be like when there is a large hardware overhang and no safety—both of which have only a small-ish chance of occurring. With that in mind, do you have an alternative suggestion for the title?
Thanks!