Help clear something up for me: I am extremely confused (theoretically) how we can simultaneously have:
1. An Artificial Superintelligence
2. It be controlled by humans (thereby creating misuse and concentration-of-power issues)
My intuition is that once it reaches a particular level of power it will be uncontrollable. Unless people are saying that we can have models 100x more powerful than GPT-4 without their having any agency??
You could have a Q&A superintelligence that is passive and reactive—it gives the best answer to a question on the basis of what it already knows, but it takes no steps to acquire more information, and when it's not asked a question, it just sits there… But any agent that uses it would de facto become a superintelligence with agency.
This is one of the key reasons the term "alignment" was coined and used instead of "control": I can be aligned with the interests of my infant, or my pet, without any control on their part.