How many teams are working on AGI in the world now? Do we have a list? (I already asked on Facebook, but maybe I can get more input here.) https://www.facebook.com/groups/aisafety/permalink/849566641874118/
I would say not many at all! Some teams might be working on something they call AGI, but I think we need a change in viewpoint before we can start making progress towards the important general aspect of it.
I think the closest people are the transfer learning people; they are at least trying something different. I think we need to solve the resource allocation problem first; then we can layer ML/language inside it. Nothing is truly general: general intelligences can devote resources to solving different problems at different times, and can get knowledge of how to solve problems from other general intelligences.
Sometimes I think that there are fewer people who explicitly work on universal AGI than people who work on AI safety.
I’ve got an article brewing on the incentives for people not to work on AGI.
Company incentives:
Companies are making plenty of money with normal AI/dumb computers; there is no need to go fancy.
AGI is hard to monetise in the way companies are used to. There is no upgrade cycle: the system maintains and upgrades itself. No expensive training is required either: it trains itself to understand its users. Sell a person an AGI and you never sell them software again, the opposite of the SaaS model.
Internally, software companies optimise for simple software that their people can understand and that many people can maintain. There is a high activation energy required to go from simple software that people maintain to a complex system that can maintain itself.
Legal minefield: who has responsibility for an AGI’s actions, the company or the user? This is solved if the AGI can be sold as intelligence augmentation: shipped in a very raw state with little knowledge, then trained and given more responsibility by the user.
Programmer incentives:
Programmers don’t want to program themselves out of a job.
Programmers also optimise for simple things that they can maintain/understand.
I’m guessing that if it ever stops being easy to make money as a software company, the other incentives might get overridden.
The only real reason to make AGI is if you want to take over the world (or solve other big problems). And if you want that, you will not put it on your web page, not if you are serious. So we will almost never see credible claims of work on AGI, and especially not on self-improving superintelligence.
Exception: Schmidhuber
Exception: Goertzel, and just about every founder of the AI field, who worked on AI mainly as a way of understanding thought and of building things like us.
Almost every flying machine innovator was quite public about his goal. And there were a lot of them. Still, a dark horse won.
Here, the situation is quite similar, except that a dark horse victory is not very likely.
If Google is unable to improve its Deep/Alpha product line into an effective AGI machine in less than 10 years, they are either utterly incompetent (which they aren’t) or this NN paradigm isn’t strong enough, which sounds unlikely too.
Others have an opportunity window of less than 10 years.
I am not too excited about the CPU/RAM requirements of this NN/ML style of racing, but it might be just good enough.
I think NNs are strong enough for ML; I just think that ML is the wrong paradigm. It is at best a partial answer: it does not capture a class of things that humans do that I think is important.
Mathematically, ML is trying to find a function from input to output; I’ll sketch that picture below. There are things we do in our language processing that do not fit it. A couple of examples follow the sketch.
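In standard supervised-learning notation, the picture is something like the following (the formalisation is mine, a sketch of the mainstream framing rather than a quote from anywhere):

```latex
% Supervised ML: search for one fixed map from inputs to outputs
\hat{f} \;=\; \operatorname*{arg\,min}_{f \colon X \to Y}\;
  \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr)
```

Both examples below are awkward in this frame because they act on the learning process and on the map itself, not just on the outputs.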
Attentional phrases: “This is important, pay attention” means that you should devote more mental energy to processing and learning whatever is happening around you. To learn to process this kind of phrase, you would have to be able to create a map from input to some form of attention control. This form of attention control has not been practised in ML; it is assumed that if data is being presented to the algorithm, it is important data.
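To make that concrete, here is a minimal toy sketch in Python. Everything in it (the class, the gain mechanism, the numbers) is my own hypothetical illustration, not an existing ML system; the point is only that the control phrase maps to how much is learned, not to any output.

```python
import numpy as np

class AttentiveLearner:
    """Toy online learner whose learning is gated by an attention signal."""

    def __init__(self, n_features, base_lr=0.01):
        self.w = np.zeros(n_features)  # ordinary input-to-output weights
        self.base_lr = base_lr
        self.gain = 1.0                # current attention level

    def control(self, signal):
        # An attentional phrase maps to attention control, not to an output:
        # "this is important, pay attention" raises the gain on later learning.
        if signal == "pay attention":
            self.gain = 10.0
        elif signal == "relax":
            self.gain = 1.0

    def update(self, x, y):
        # The ordinary ML step fits input to output...
        error = y - self.w @ x
        # ...but how much is learned depends on the attention state.
        self.w += self.base_lr * self.gain * error * x

learner = AttentiveLearner(n_features=3)
learner.update(np.array([1.0, 0.0, 0.0]), y=1.0)  # weakly absorbed
learner.control("pay attention")
learner.update(np.array([0.0, 1.0, 0.0]), y=1.0)  # absorbed ten times more strongly
```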
Language about language: “The word for word in French is mot” changes not only the internal state but also the mapping of input to internal state (and the mapping of input to that mapping). Processing it and phrases like it would then let you process “le mot à mot en allemand est wort” (“the word for word in German is Wort”). It is akin to compiler bootstrapping: using the compiler you have to compile a new one.
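Again a purely hypothetical Python sketch of what I mean: the lexicon, the system’s mapping from input to internal state, is itself writable by the input, so one meta-phrase makes the next one parseable.

```python
import re

# The mapping from surface tokens to the internal vocabulary.
# Crucially, this mapping is itself writable by the input stream.
lexicon = {}

def interpret(sentence):
    # Map each token through the current lexicon (unknown tokens pass through).
    normalized = " ".join(lexicon.get(t, t) for t in sentence.lower().split())
    # Meta-language: a phrase of this shape edits the lexicon itself,
    # changing how all later input is mapped to internal state.
    m = re.match(r"the word for (\w+) in (\w+) is (\w+)", normalized)
    if m:
        meaning, language, foreign = m.groups()
        lexicon[foreign] = meaning
        return f"learned: in {language}, '{foreign}' means '{meaning}'"
    return normalized

print(interpret("The word for word in French is mot"))
# The lexicon now maps 'mot' to 'word', so a sentence that uses the
# newly learned word can itself teach the next word:
print(interpret("the mot for mot in German is wort"))
```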
You could maybe approximate both of these tasks with a crazy hotchpotch of ML systems, but I think that way is a blind alley.
Learning both of these abilities will have some ML involved. However, language is weird, and we have not scratched the surface of how it interacts with learning.
I’d put some money on AGI being pretty different to current ML.
Me too. It’s possible to go the NN/ML (a lot of acronyms and no good name) way, and I don’t think it’s a blind alley, but it’s a long way. Not the most efficient use of computing resources, by far.
And yes, there are important problems where the NN approach is particularly clumsy.
Just give those NN guys a shot. Reality will decide.
I would say about 1000.
950 of them have no chance at all.
But at least 20 of those tirelessly exercising CNNs, ML, or some other neural network thing have some chance of success.
And about 20 other teams may be out there with other, also decent, ideas.
I am just speculating, but this looks plausible to me.
I have approximately the same priors:
10 large companies, which get most of the probability of creating something, with a 50 percent chance it will be Google.
100 university professors and startups; if they create something meaningful, they will be acquired by Google.
1000 freaks.
10 large companies seems to be an understatement.
Off the top of my head:
Baidu
Alibaba
Salesforce
Facebook
Amazon
Palantir
IBM
Google
Apple
Samsung
Microsoft
Bridgewater Associates
Infosys
The Chinese Government
Toyota
Tencent Holdings
Oracle
Are you saying that all of those are working on AGI? That would be enormously surprising to me.
Currently there’s competition between Google, IBM, Amazon, Oracle, and Microsoft over who has the best cloud. Those companies seem to believe that a successful cloud platform is one with APIs that can easily be used for a wide variety of use cases.
I think this kind of AI research is equivalent to AGI research.
Facebook’s internal AI research is broad enough that they pursued Go as a toy problem, similar to how DeepMind did. After DeepMind’s success, Tencent Holdings didn’t take long to debut an engine that’s on par with professional players, even though it isn’t yet at the AlphaGo level.
Apple has money lying around. It knows that Siri underperforms at the moment, so it makes total sense to invest money into long-term AI/AGI research. Strategically, Apple doesn’t want to be in a situation where Google’s DeepMind/Google Brain initiatives continue to put Google’s assistant well ahead of Apple’s.
Samsung wants Bixby to be a success and not be outperformed by competing assistants. Samsung also needs AI in a variety of other fields, from military tech to Internet of Things applications.
Bridgewater Associates is working on its AI/human hybrid to replace Ray Dalio. Using humans as subroutines might mean that the result gets dangerous much faster.
Palantir wants the US military’s money for doing various analysis tasks. Given that it’s a broad spectrum of tasks, it pays to have quite general AI capabilities, and the US military wants to buy AI. With the CIA now in the Amazon cloud, Palantir wants to stay competitive and not lose projects to Amazon, and that requires it to do basic research.
Salesforce has the money, and it will need to do a lot of AI to keep up with the times.
I think Baidu and Alibaba will face similar pressures to Google and Amazon. Both need to invest in basic AI, and they have the capability to do so.
Given the possible consequences of AGI for geopolitical power, I think it’s very likely that the Chinese Government has an AGI project.
Okay, you have a much broader definition of AGI research, then. I usually interpret the term to mean only research that has making AGI as an explicit objective, especially since most researchers would (IME) disagree that “APIs that can easily be used for a wide variety of use cases” is equivalent to AGI research.
Thanks for the update. Some of the names are surprising to me, but I will check.