My point of view is different. About 4 months ago, I was worried that this spring-summer of 2023 would potentially be FOOM-time. My experience has been one of gratefully feeling like, ‘oh good, GPT-4 wasn’t as scarily competent as I was worried it might be. We’re safe for another year or two. Time to calm down and get back to work on alignment at a regular pace.’
Also, I’m one of those weird people who started taking this all seriously like 20 years ago, and I’ve been planning my life around a scary, tricky transition time somewhere around 2025–2030. And in the past few years, I got better at ML, read more papers, and researched enough to form my own inside view on timelines, and then realized I no longer thought we had until 2030 before AGI. I don’t think we’ll be doomed right after inventing it, like some do, but I do think it’s going to change our world in scary ways, and that if we don’t deal with it well within a few years, it’ll get out of control and then we’ll be doomed.