The Metaculus question's definition is very weak, even for weak AGI. It seems quite easy to game in the same way all the previous AI questions were gamed. I wouldn't be especially impressed if it were satisfied tomorrow; it does not demand enough generality to count as an AGI.
The Loebner silver prize is the only even vaguely interesting requirement, but I have deep doubts about the validity of the Turing test (and this particular variant of the test apparently isn't even running anymore). I'd have to trust the judges for it to mean anything, and Metaculus would have to trust completely unknown judges for the question to resolve when it actually should, rather than whenever the hype peaks. Fooling what are likely to be cooperative judges just isn't impressive. (People have been willingly fooling themselves with chatbots since at least ELIZA.) People fooling themselves about AI has been in the news a lot ever since GPT-2, whose claimed capabilities, as described even by the more reasonable commentariat, were in my opinion only finally matched this year with Chinchilla.
I know that I already extend a great deal of charity when interpreting things through text: I give the benefit of the doubt to anyone who can string words together (which is what AIs are currently best at) and never look for proof that the other side isn't thinking. Judges like that will likely be fooled many years before a chatbot has significant reasoning ability (which is highly correlated with the ability to string words together in humans, but not in machines), unless the judges are very careful; and no one knows who would be judging, and we have no reason to trust that they would be selected for the right reasons.
Separately, there is little evidence that Metaculus produces correct results about even vaguely unknowable things. It may or may not reflect current knowledge well depending on the individual question, but its results have never seemed to perform much above chance. I don't engage with it much personally, but everything I've seen from it has been very wrong.
Third, even if this were an accurate timeline for weak AGI, the actual indications are that a human-level AGI would not suddenly become a god-level AI. There is no reason to believe that rapid recursive self-improvement could happen purely in software in something that wasn't already a superintelligence, so improvement would depend on very slow physical processes out in the real world, assuming it took an even vaguely significant level of resources to build said AI in the first place. We are near many physical limits, and new kinds of technology will have to be developed to get even vaguely quick increases in how big the models can be, and that will be very slow for a human-level being that has to wait on physical experiments.
Fourth, the design of the currently successful paradigm does not allow for the really interesting capabilities anyway (lack of memory, lack of grounding by design, etc.). The RL agents that I believe are used for things like Go and video games are more interesting, but there doesn't seem to be any near-future path where those become the basis of a unified model that can also write well, rather than just a component used to steer a language model. (We could probably build a non-unified model today that would do these things if we just spent a lot of money, but it wouldn't be useful.)
Because of all this, you will likely die of the same things you would be expected to die of in a world where we didn't pursue AI, and at similar times. AI might affect things at the margin, but only moderately.
Note: I only realized the post was a couple of months old after writing a large portion of this.