I think there’s a little too much focus on the concept of AGI, an “Artificial General Intelligence”, that is, an AI that can do basically anything a human can. I feel it’s better to focus simply on the fact that the number of things AI can accomplish well is rapidly increasing, and the list of things that only humans can do is getting shorter and shorter. Most of the implications of AGI are also implied by this trend, and it is cleaner to reason about the pace of the trend than about precisely when AGI will arrive. (I hear many people say they think AGI is several decades off. If we worry about edge cases, that is perhaps a reasonable belief. But the implications of AGI will likely follow from this trend within the next two or three decades, if not sooner, regardless of whether AGI itself has been achieved by then.)