Why do you think TAI is decades away?
That is the world I’ve decided to optimise for, regardless of what I actually believe timelines are.

I don’t really feel like rehashing the arguments for longer timelines here (they’re not all that relevant to my question), but it’s not that I place a < 10% probability on pre-2040 timelines; rather, I think I can have a much larger impact on post-2040 timelines than on pre-2030 ones, so most of my attention is directed there.

That said, computational/biological anchors are a good reason for longer timelines absent foundational breakthroughs in our understanding of intelligence. Furthermore, I suspect that intelligence is hard: that incremental progress will become harder as systems become more capable, that returns to cumulative investment in cognitive capabilities are sublinear (i.e. marginal returns to cognitive investment decay at a superlinear rate), etc.