I personally suspect we’ll perpetually keep moving the goalposts, so whatever AI we currently have is obviously not AGI, because AGI is by definition better than what we’ve got in some way. I think AI is already here and performing to standards I would’ve called AGI, or even magic, if you’d shown it to me a decade ago, but we keep coming up with reasons it isn’t “really” AGI yet. I see no reason we would culturally drop that habit of insisting that silicon-based minds are less real than carbon-based ones, at least as long as we keep using “belongs to the same species as me” as a load-bearing proxy for “is a person”. (Load-bearing because if you stop using species as a personhood constraint, it opens up the possibility of human non-people, and we all know bad things happen when we promote ideologies where that’s possible.)
However, I’m doing your point (6) anyway, because everybody I care about is aging. If I believed AGI were around the corner, I’d probably spend less time with them, because “real AGI”, as it’s often mythologized, could solve mortality and give me a lot more time with them.
I’m also doing your point (8) to some degree: if I expect that new tooling will obviate a skill soon, I’m less likely to invest in developing it. While I don’t think AI will get to a point where we widely recognize it as AGI, I do think we’re building a lot of very powerful new tools right now with what we’ve already got.
We’re mighty close by my standards. I think GPT-4 is pretty obviously “mid-level AGI with near zero street smarts”. But as a result, it’s missing some core capabilities that are pretty critical to the worries about AI agency. Usually when people talk about AGI they mean ASI; that’s been a frustration of mine for a while, because yeah, obviously a big language model would be an AGI, and ta-da, here one is.