I wonder if this is true in general. Have you read a good discussion on detecting superintelligence?
I can't recall one offhand; but if a superintelligence can do anything, it could easily pretend to be less intelligent than it is. Maybe only a "super-superintelligence" could catch it. It may also depend on the length of the conversation: if the AI says just "yes" or "no" once, we can't decide anything; from longer sequences we could conclude something; but for any given length of output there is a maximum level of intelligence that can be inferred from it.
The opportunities for detecting superintelligence would definitely be rarer if the superintelligence were actively trying to conceal its status.
What about the case where there is no attempted concealment? Or, even weaker, where the AI voluntarily submits to arbitrary tests? What tests would we use?
Presumably we would have a successful model of human intelligence by that point. It’s interesting to think about what dimensions of intelligence to measure. Number of variables simultaneously optimized? Optimization speed? Ability to apply nonlinear relationships? Search speed in a high dimensional, nonlinear solution space? I guess it is more the ability to generate appropriate search spaces in the first place. Something much simpler?
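One crude way to operationalize a couple of these dimensions, purely as a toy sketch, is to score an optimizer by the best objective value it reaches in a high-dimensional, nonlinear landscape within a fixed evaluation budget. The Rastrigin test function and the two baseline optimizers below are my own illustrative choices, not anything proposed in the discussion above.

```python
import math
import random

def rastrigin(x):
    """Highly multimodal nonlinear test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def score_optimizer(optimizer, dims=20, budget=2000, seed=0):
    """Best objective value found within a fixed evaluation budget (lower is better).

    Fixing the budget puts optimizers of very different strength on the same
    footing -- a limited proxy, in the sense discussed above.
    """
    rng = random.Random(seed)
    return optimizer(rastrigin, dims, budget, rng)

def random_search(f, dims, budget, rng):
    """Baseline: sample uniformly at random, keep the best value seen."""
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-5.12, 5.12) for _ in range(dims)]
        best = min(best, f(x))
    return best

def hill_climb(f, dims, budget, rng):
    """Baseline: greedy local search with Gaussian perturbations."""
    x = [rng.uniform(-5.12, 5.12) for _ in range(dims)]
    best = f(x)
    for _ in range(budget - 1):
        cand = [xi + rng.gauss(0, 0.3) for xi in x]
        val = f(cand)
        if val < best:
            best, x = val, cand
    return best
```

Under this proxy, a stronger optimizer simply reaches a lower score at the same budget. The interesting (and unresolved) point in the comment above is that a superintelligence would presumably excel at constructing the search space itself, which a fixed-benchmark sketch like this does not capture at all.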
Probably beating humans in ALL known domains, including philosophy, poetry, love, and power.
Although we use limited proxies (e.g., IQ test questions) to estimate human intelligence.
Limited proxies: yes, well said. I would also add solving problems which humans have been unable to solve for a long time: aging, cancer, interstellar travel, world peace, resurrection of the dead.
I mean, the ability to estimate the abilities of superintelligences appears to be an aspect of reliable Vingean reflection.
Or we could ask such an AI to create the scale itself. We could also use its size to estimate its power, such as the number of neurons. But a real test needs to be a powerful and universal optimization problem, something like the ability to crack complex encryption, or the game of Go.
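The encryption-cracking idea lends itself to a simple, if toy, capability number: the largest key space an agent can exhaust within a time limit. The sketch below uses a deliberately weak repeating-key XOR "cipher" and a known-plaintext crib; everything here (the cipher, the crib approach, the function names) is my own illustration, not a real cryptanalysis benchmark.

```python
import itertools
import string

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR 'cipher' (trivially weak, for illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def brute_force(ciphertext: bytes, key_len: int, alphabet: bytes, known_prefix: bytes):
    """Exhaustively search the key space; return (key, plaintext) once the
    decryption starts with a known plaintext prefix (a 'crib')."""
    for candidate in itertools.product(alphabet, repeat=key_len):
        key = bytes(candidate)
        plain = xor_encrypt(ciphertext, key)  # XOR is its own inverse
        if plain.startswith(known_prefix):
            return key, plain
    return None, None
```

A 2-character lowercase key gives a space of 26^2 = 676 keys, and each extra character multiplies the space by 26, so "cracking power" can be summarized as the largest key space searched per unit time. Real encryption obviously cannot be attacked this way; the point is only that exhaustive-search throughput is one crude, universal number.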
I created a list of steps, or milestones, of future AI, and we could use a similar list to estimate the level of a current super AI.
1. AI autopilot. Tesla already has it.
2. AI home robot. All the prerequisites to build it will be available by 2020 at the latest. This robot will be able to understand and fulfill an order like "Bring my slippers from the other room." On its basis, something like a "mind-brick" may be created: a universal robot brain able to navigate in natural space and recognize speech. This mind-brick could then be used to create more sophisticated systems.
3. AI intellectual assistant. Searching through personal documentation, with the ability to ask questions in natural language and receive wise answers. 2020-2030.
4. AI human model. Still very vague. It could be realized by adapting a robot brain, and will probably be able to simulate 99% of usual human behavior, except for solving problems of consciousness, complicated creative tasks, and generating innovations. 2030.
5. AI as powerful as an entire research institution, able to create scientific knowledge and upgrade itself. It could be made of numerous human models: 100 simulated people, each working 100 times faster than a human being, would probably be able to create an AI capable of self-improving faster than humans in other laboratories can. 2030-2100.
5a. Self-improvement threshold. AI becomes able to self-improve independently, and more quickly than all of humanity.
5b. Consciousness and qualia threshold. AI is able not only to pass the Turing test in all cases, but also has experiences and understands why and what it is.
6. Mankind-level AI. AI possessing intelligence comparable to that of all mankind. 2040-2100.
7. AI with intelligence 10-100 times greater than that of all mankind. It will be able to solve the problems of aging and cancer, explore the solar system, build nanorobots, and radically improve the lives of all people. 2050-2100.
8. Jupiter brain: a huge AI using an entire planet's mass for computation. It could reconstruct dead people, create complex simulations of the past, and dispatch von Neumann probes. 2100-3000.
9. Galactic Kardashev level 3 AI. Several million years from now.
10. All-Universe AI. Several billion years from now.
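Treated as an ordinal scale, the milestone list above could be turned into a crude scoring function: given the set of milestones a system has demonstrably reached, report its level as the highest consecutive milestone. The labels below just mirror the list (including the 5a/5b sub-thresholds); which milestones count as "achieved" would of course be the hard, contested input.

```python
# Milestones from the list above, in order; labels mirror the list,
# including the 5a/5b sub-thresholds.
MILESTONES = [
    ("1", "AI autopilot"),
    ("2", "AI home robot"),
    ("3", "AI intellectual assistant"),
    ("4", "AI human model"),
    ("5", "Research-institution-level AI"),
    ("5a", "Self-improvement threshold"),
    ("5b", "Consciousness and qualia threshold"),
    ("6", "Mankind-level AI"),
    ("7", "10-100x mankind-level AI"),
    ("8", "Jupiter brain"),
    ("9", "Galactic Kardashev level 3 AI"),
    ("10", "All-Universe AI"),
]

def level(achieved: set) -> str:
    """Return the highest consecutive milestone reached, scanning the scale
    in order and stopping at the first milestone not yet demonstrated."""
    reached = "none"
    for label, _name in MILESTONES:
        if label not in achieved:
            break
        reached = label
    return reached
```

For example, a system credited with milestones 1-3 sits at level "3", while a system credited with 1 and 3 but not 2 only rates "1": gaps do not count, which matches reading the list as a cumulative ladder.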