There are so many considerations in the design of AI. AGI was always far too general a term, and when people use it, I often ask what they mean; usually it's "human-like or better-than-human chatbot". Other people say it's the "technological singularity", i.e. a system that can improve itself. These are obviously two very different things, or at least two very different design features.
Saying "My company is going to build AGI" is like saying "My company is going to build computer software". The best software for what, exactly? What kind of software, to solve what problem? With what features? Usually the answer from AGI fans is "all of them", so perhaps the term is just inherently vague.
When talking about AI, I think it's more useful to talk about what features a particular implementation will or won't have. You have actually already listed a few.
Here are some AI feature ideas of my own (see the sketch after the list):
Ability to manipulate the physical world
Ability to operate without human prompting
Be “always on”
Have its own goals
Be able to access large additional computing resources, e.g. for running "world simulations", conducting virtual research experiments, or spawning sub-processes or additional agents
Be able to improve/train "itself" (really there is no "itself", since as many copies can be made as needed, and it's then unclear which one is the original "it")
Be able to change its own beliefs and goals through training or some other means (scary one)
Ability to do any or some of the above completely unsupervised and/or unmonitored
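To make this concrete, here's a minimal sketch of what "qualified terminology" might look like in practice: a checklist of explicit capability flags instead of one catch-all word. This is just an illustration; the class and field names are my own invention, not any established taxonomy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityProfile:
    """Illustrative only: one flag per feature from the list above."""
    physical_actuation: bool   # can manipulate the physical world
    self_prompting: bool       # operates without human prompting
    always_on: bool            # runs continuously
    own_goals: bool            # pursues goals of its own
    elastic_compute: bool      # can grab large extra compute / spawn agents
    self_training: bool        # can improve/train "itself"
    self_modification: bool    # can change its own beliefs and goals
    unmonitored: bool          # does any of the above unsupervised

# A plain chatbot claims almost none of these, yet still gets called "AGI":
chatbot = CapabilityProfile(
    physical_actuation=False, self_prompting=False, always_on=True,
    own_goals=False, elastic_compute=False, self_training=False,
    self_modification=False, unmonitored=False,
)
```

Two systems could both be marketed as "AGI" while having completely different profiles, which is exactly why the bare term tells you so little.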
I think any useful terminology will probably involve some sort of qualification. But it needs to be much narrower than the specifications above to be useful.
Spelling out everything you mean in every discussion is sort of the opposite of having generally-understood terminology.