I am an engineer and entrepreneur trying to make sure AI is developed without killing everybody. I founded and was CEO of the startup Ripe Robotics from 2019 to 2024. | hunterjay.com
HunterJay
I took the original sentence to mean something like “we use things external to the brain to compute things too”, which is clearly true. Writing stuff down to work through a problem is clearly doing some computation outside of the brain, for example. The confusion comes from where you draw the line—if I’m just wiggling my fingers without holding a pen, does that still count as computing stuff outside the brain? Do you count the spinal cord as part of the brain? What about the peripheral nervous system? What about information that’s computed by the outside environment and presented to my eyes? I think it’s kind of an arbitrary line, but reading this charitably their statement can still be correct, I think.
(No response from me on the rest of your points, just wanted to back the author up a bit on this one.)
I really enjoyed this writeup! I’d probably even go a little bit on the pessimistic (optimistic?) side, and bet that almost all of this technology would be possible with only a few years of development from today—though I suppose it might be 20 if development doesn’t start/ramp up in earnest.
Thanks!
That’s a good point, I’ll write up a brief explanation/disclaimer and put it in as a footnote.
Typo corrected, thanks for that.
I agree, it’s more likely for the first AGI to begin on a supercomputer at a well-funded institution. If you like, you can imagine that this AGI is not the first, but simply the first not effectively boxed. Maybe its programmer simply implemented a leaked algorithm that was developed and previously run by a large project, but changed the goal and tweaked the safeties.
In any case, it’s a story, not a prediction, and I’d defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce its probability to something infinitesimal. I’m just trying to give a sense of what a takeoff could be like when there is a large hardware overhang and no safety—both of which have only a small-ish chance of occurring. With that in mind, do you have an alternative suggestion for the title?
I am curious whether this has changed in the 6 years since you posted this comment. Do you get the feeling that high-profile researchers have shifted even further towards x-risk concern, or do they hold the same views as in 2016? Thanks!