1. The industry is currently not violating the rules mentioned in my paper, because all current AIs are weak AIs, so no current AI has reached the capability ceiling of any of the 7 types of AIs I described. In the future, it is possible for an AI to break through that ceiling, but I think doing so would be uneconomical. For example, an AI psychiatrist does not need superhuman intelligence to perform well. An AI mathematician may be very intelligent in mathematics, but it does not need to learn how to manipulate humans or how to design DNA sequences. Of course, having regulations is still better, because some careless AI developers may grant AIs unnecessary capabilities or permissions, even though this does not improve the AIs' performance on actual tasks.
The difference between my view and Max Tegmark's is that he seems to assume there will be only one type of superintelligent AI in the world, while I think there will be many different types of AIs. Different types of AIs should be subject to different rules, rather than a single uniform rule. Can you imagine a person who is simultaneously a Nobel Prize-winning scientist, the president, the richest person in the world, and an Olympic champion? That would be very strange, right? Our society doesn't need such an all-round person. Similarly, we don't need such an all-round AI either.
The development of a technology usually has two stages: first, achieving capabilities, and second, reducing costs. AI technology is currently in the first stage. When AI develops to the second stage, specialization will occur.
2. Agree.
I feel your points are very intelligent. I also agree that specializing AI is a worthwhile direction.
It’s very uncertain if it works, but all approaches are very uncertain, so humanity’s best chance is to work on many uncertain approaches.
Unfortunately, I disagree that it will happen automatically. Gemini 1.5 (and probably Gemini 2.0 and GPT-4) are Mixture of Experts models. I'm no expert, but I believe that means that for each token of text, a "gating" (weighting) function decides which of the sub-models should process it, or how much weight to give each sub-model's output.
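To make the gating idea concrete, here is a toy sketch of per-token expert routing in a Mixture of Experts layer. This is not how Gemini or GPT-4 actually implement it; the expert count, sizes, and top-k routing here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 4, 2  # hidden size, number of experts, experts used per token

# Each "expert" is a small sub-network (here reduced to a single matrix).
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]
# The gating (weighting) function is itself a small learned layer.
gate_weights = rng.normal(size=(D, N_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts and mix their outputs."""
    scores = softmax(token @ gate_weights)        # one weight per expert
    top = np.argsort(scores)[-TOP_K:]             # keep only the k highest-scoring experts
    mix = scores[top] / scores[top].sum()         # renormalize their weights to sum to 1
    return sum(w * (token @ experts[i]) for i, w in zip(top, mix))

out = moe_layer(rng.normal(size=D))
print(out.shape)  # (8,)
```

The point of the sketch: which experts "speak" depends on the token being processed, but all of their outputs are blended into one stream, which is why the model presents as a single agent.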
So maybe there is an AI psychiatrist, an AI mathematician, and an AI biologist inside Gemini and o1. Which one is doing the talking depends on what question is asked, or which part of the question the overall model is answering.
The problem is that they all output words to the same stream of consciousness, and refer to past sentences with the words "I said this," rather than "the biologist said this." They believe they are one agent, and so they behave like one agent.
My idea—which I only thought of thanks to your paper—is to do the opposite. The experts within the Mixture of Experts model, or even the same AI on different days, would refer to themselves not as "I" but as "he," so that they behave like many agents.
:) thank you for your work!
I'm not disagreeing with your work, I'm just a little less optimistic than you, and I don't think things will go well unless effort is made. You wrote the 100-page paper, so you probably understand effort better than I do :)
Happy holidays!