I just listened to Ege and Tamay’s 3-hour interview with Dwarkesh. They make some excellent points that are worth hearing, but those points do not add up to anything like a 25-year-plus timeline. They are not a safety org now, if they ever were.
Their good points concern bottlenecks in turning intelligence into useful action: these are primarily sensorimotor limitations and the need for real-world experimentation to do much science and engineering. They also address bottlenecks to achieving strong AGI, mostly compute.
To my mind, this all adds up to them convincing themselves that timelines are long so they can work on the exciting project of creating systems capable of doing valuable work. Their long timelines also let them believe that adoption will be slow, so job replacement won’t cause a disastrous economic collapse.