1. When you talk about specializing AI powers, you talk about an AI with high intellectual power but limited informational power and limited mental (social) power. I think this idea is similar to what Max Tegmark said in an article:
If you’d summarize the conventional past wisdom on how to avoid an intelligence explosion in a “Don’t-do-list” for powerful AI, it might start like this:
☐ Don’t teach it to code: this facilitates recursive self-improvement
☐ Don’t connect it to the internet: let it learn only the minimum needed to help us, not how to manipulate us or gain power
☐ Don’t give it a public API: prevent nefarious actors from using it within their code
☐ Don’t start an arms race: this incentivizes everyone to prioritize development speed over safety
Industry has collectively proven itself incapable of self-regulating, by violating all of these rules.
He disagrees that “the market will automatically develop in this direction” and is strongly pushing for regulation.
Another thing Max Tegmark talks about is focusing on Tool AI instead of building a single AGI that can do everything better than humans (see 4:48 to 6:30 in his video). This slightly resembles specializing AI intelligence, but I feel his Tool AI regulation is too restrictive to be a permanent solution. He also argues for cooperation between the US and China to push for international regulation (12:03 to 14:28 of the same video).
Of course, there are tons of ideas in your paper that he hasn’t talked about yet.
2. The problem with AGI is that at first it has no destructive power at all, and then it suddenly has great destructive power. By the time people see its destructive power, it's too late. Maybe the ASI has already taken over the world, or maybe the AGI has already invented a new deadly technology which can never be "uninvented," and bad actors can do harm far more efficiently.
1. The industry is currently not violating the rules mentioned in my paper, because all current AIs are weak AIs, so none of them has reached the upper limits of the 7 types of AI I described. In the future, it is possible for an AI to break through those limits, but I think doing so would be uneconomical. For example, an AI psychiatrist does not need superhuman intelligence to perform well. An AI mathematician may be very intelligent in mathematics, but it does not need to learn how to manipulate humans or how to design DNA sequences. Of course, having regulations is still better, because some careless AI developers may grant AIs too many unnecessary capabilities or permissions, even though doing so does not improve the AIs' performance on actual tasks.
The difference between my view and Max Tegmark’s is that he seems to assume there will be only one type of superintelligent AI in the world, while I think there will be many different types of AIs. Different types of AIs should be subject to different rules, rather than the same rule. Can you imagine a person who is simultaneously a Nobel Prize-winning scientist, the president, the richest man, and an Olympic champion? That would be very strange, right? Our society doesn’t need such an all-round person. Similarly, we don’t need such an all-round AI either.
The development of a technology usually has two stages: first achieving capabilities, then reducing costs. AI technology is currently in the first stage. When it reaches the second stage, specialization will occur.
I feel your points are very intelligent. I also agree that specializing AI is a worthwhile direction.
It’s very uncertain if it works, but all approaches are very uncertain, so humanity’s best chance is to work on many uncertain approaches.
Unfortunately, I disagree that it will happen automatically. Gemini 1.5 (and probably Gemini 2.0 and GPT-4) are Mixture of Experts models. I’m no expert, but I understand that means that for each token of text, a gating (“weighting”) function decides which of the sub-models (“experts”) should help produce the next token, and how much weight to give each one’s output.
So maybe there is an AI psychiatrist, an AI mathematician, and an AI biologist inside Gemini and o1. Which one is doing the talking depends on what question is asked, or which part of the question the overall model is answering.
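To make the routing idea concrete, here is a toy numpy sketch. The expert count, dimensions, and the linear “experts” are invented purely for illustration; in real MoE models the routing happens inside each transformer layer, per token, with learned parameters rather than random ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 4, 8, 2

# Toy "experts": each is just a random linear map here.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
# Router (the "weighting function"): scores each expert for a given token.
router = rng.standard_normal((d_model, n_experts))

def moe_layer(token_vec):
    """Route one token through the top-k experts and mix their outputs."""
    scores = token_vec @ router                 # one score per expert
    top = np.argsort(scores)[-top_k:]           # keep only the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                    # softmax over the chosen experts
    # Weighted sum of the chosen experts' outputs.
    return sum(w * (token_vec @ experts[i]) for w, i in zip(weights, top))

out = moe_layer(rng.standard_normal(d_model))
print(out.shape)  # the mixed output has the same shape as the input token
```

The point of the sketch is that only a couple of experts are active for any one token, yet everything is blended back into one output stream, which is exactly why the experts do not behave as separate agents.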
The problem is that they all output words to the same stream of consciousness, and refer to past sentences with the words “I said this,” rather than “the biologist said this.” They think that they are one agent, and so they behave like one agent.
My idea—which I only thought of thanks to your paper—is to do the opposite. The experts within the Mixture of Experts model, or even the same AI on different days, do not refer to themselves with “I” but “he,” so they behave like many agents.
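As a toy illustration of the relabeling idea (the transcript format and the naive string replacement are my own invention; a real system would change how the model refers to itself during training or decoding, not post-process text like this):

```python
def attribute(transcript):
    """Rewrite self-references in a shared transcript so each utterance
    names its speaker: "I said this" becomes "the biologist said this".
    `transcript` is a list of (speaker, text) pairs."""
    return [(speaker, text.replace("I said", f"{speaker} said"))
            for speaker, text in transcript]

log = [
    ("the biologist", "Earlier I said the sample looked contaminated."),
    ("the mathematician", "I said the estimate needs a larger sample."),
]
for _, line in attribute(log):
    print(line)
# Prints:
#   Earlier the biologist said the sample looked contaminated.
#   the mathematician said the estimate needs a larger sample.
```

The relabeled transcript reads as a conversation between named specialists rather than one agent’s monologue, which is the behavioral shift the idea is after.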
:) thank you for your work!
I’m not disagreeing with your work, I’m just a little less optimistic than you and don’t think things will go well unless effort is made. You wrote the 100-page paper, so you probably understand the effort involved better than I do :)
That is very thoughtful.
You should read about the Future of Life Institute, which is headed by Max Tegmark and is said to have a budget of $30 million.
2. Agree.
Happy holidays!