On my picture, I think a key variable is the length of time between when-we-understand-the-basic-shape-of-things-that-will-get-to-AGI and when-it-reaches-strong-superintelligence. Each doubling of that length of time feels to me like it could be worth on the order of 0.5-1% of the future.
Amusingly, I expect that each doubling of that time is negative EV. Because that time is very likely negative.
Question: are you assuming time travel or acausality will be a thing in the next 20-30 years due to FTL work? Because that's the only way the time from AGI understanding to superintelligence could be negative at all.
No, I expect (absent agent foundations advances) people will build superintelligence before they understand the basic shape of the things of which that AGI will consist. An illustrative example (though I don't think this exact thing will happen): if the first superintelligence popped out of a genetic algorithm, then people would probably have no idea what pieces went into the thing by the time it existed.