I was surprised to find that I had misremembered this post significantly. Over the past two years my brain somehow summarized it as "discontinuities barely happen at all, maybe nukes, and even that's questionable." I'm not sure where I got that impression.
Looking back here, I am surprised at the number of discontinuities discovered, even granting the weird sampling issues in which trendlines got selected for investigation.
Rereading this, I'm excited by… the sheer amount of detail here. I like that a bunch of different domains are being explored, which helps fill in a mosaic of how the broader world fits together.
It's an interesting question how much any of this should directly bear on AI timeline forecasts. The more recent debates between Eliezer and Paul dig into some differences in how to apply this. Will AI be like past technological jumps, or something entirely new?
I appreciate Katja et al. flagging various potential issues with the methodology in the original post, and noting some other questions one could research. If I had infinite researchers I'd probably still want those questions explored, but I'm not sure how many current researchers I'd be excited to have delve into those followup questions. I feel like the approach of "investigate past trends" has passed the 80/20 point of informing our AI timelines, and I'd probably prefer those researchers to orient toward new questions that illuminate different facets of the AI strategic landscape.
I feel like the approach of "investigate past trends" has passed the 80/20 point of informing our AI timelines, and I'd probably prefer those researchers to orient toward new questions that illuminate different facets of the AI strategic landscape.
I specialise in researching this topic. My impression is that barely anyone has looked at past technological trends, in academia or in the LW/EA community. I am generally quite excited about more people looking into this space, because it seems neglected and is the kind of topic where EA/LW-type people have a significant edge.