We can’t judge, from behaviour alone, whether someone is superintelligent or not.
Yes we can. Superintelligences have abilities that normal intelligences do not.
Imagine a game of chess. A good AI will make vastly different moves from a bad AI or a human; more skilled players would be easily detectable by their very different moves.
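To make the chess point concrete: skill gaps show up statistically after very few games. A toy sketch of this (my own illustration; the expected-score formula is the standard Elo one, while the detection heuristic is a rough normal approximation, not an established test):

```python
import math

def expected_score(elo_a, elo_b):
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))

def games_to_detect(gap, z=1.96):
    """Rough number of games after which the stronger player's observed
    score differs from 50% by more than z binomial standard errors."""
    p = expected_score(gap, 0)
    # detectable when |p - 0.5| > z * sqrt(p * (1 - p) / n), solved for n
    return math.ceil((z / (p - 0.5)) ** 2 * p * (1 - p))

print(expected_score(2800, 2000))  # ~0.99: an 800-point gap is nearly a sure win
print(games_to_detect(200))        # a 200-point gap shows up within ~a dozen games
```

The point being argued above falls out of the numbers: even modest skill differences become visible quickly, so a much larger one is hard to hide unless the player deliberately chooses weaker moves.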
But in some games it is better to look more stupid at the beginning, as in poker, espionage, and the AI box experiment.
An AI that is indistinguishable from a human (to an even greater superintelligent AI) is not dangerous, because humans are not dangerous. Just like a chess master that is indistinguishable from a regular player wouldn’t win many games.
It may be indistinguishable only until it gets out of the building. The recent movie Ex Machina had such a plot.
The AI doesn’t want to escape from the building. Its utility function is basically to mimic humans. That’s a terminal value, not a subgoal.
But most humans would want to escape from any confinement.
I wonder if this is true in general. Have you read a good discussion on detecting superintelligence?
I can’t recall one offhand; but if a superintelligence is able to do anything, it could easily pretend to be more stupid than it is. Maybe only a “super-superintelligence” could unmask it. It may also depend on the length of the conversation: if it says just “yes” or “no” once, we can’t decide anything, while longer output lets us conclude more. But for any given length of output there is a maximum level of intelligence that can be inferred from it.
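The intuition that output length bounds inferable intelligence has a simple information-theoretic form, sketched below (my framing, not from the thread): n answers drawn from an alphabet of size k carry at most n·log2(k) bits, so they can distinguish at most k^n hypotheses about the speaker.

```python
import math

def max_distinguishable_levels(n_answers, alphabet_size=2):
    """Upper bound on how many hypotheses (e.g. skill levels) a sequence
    of n answers can single out: each answer carries at most
    log2(alphabet_size) bits of evidence."""
    return alphabet_size ** n_answers

def answers_needed(n_levels, alphabet_size=2):
    """Minimum number of answers needed to single out one of n_levels."""
    return math.ceil(math.log(n_levels, alphabet_size))

print(max_distinguishable_levels(1))  # one yes/no answer separates at most 2 hypotheses
print(answers_needed(100))            # 100 skill levels need at least 7 yes/no answers
```

This is only an upper bound: an uncooperative or deceptive speaker can convey far fewer bits about its true ability than the channel allows.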
The opportunities for detecting superintelligence would definitely be rarer if the superintelligence is actively trying to conceal its status.
What about the case where there is no attempted concealment? Or, even weaker, where the AI voluntarily submits to arbitrary tests? What tests would we use?
Presumably we would have a successful model of human intelligence by that point. It’s interesting to think about what dimensions of intelligence to measure. Number of variables simultaneously optimized? Optimization speed? Ability to apply nonlinear relationships? Search speed in a high dimensional, nonlinear solution space? I guess it is more the ability to generate appropriate search spaces in the first place. Something much simpler?
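One candidate measure along these lines is "optimization power" in bits: how small a fraction of the search space is at least as good as the solution the agent found. A hedged toy sketch follows (the function names and the example objective are my own assumptions, and the fraction is estimated by sampling rather than exact counting):

```python
import math
import random

def optimization_power_bits(objective, found_value, space, samples=100_000):
    """Estimate optimization power in bits: -log2 of the fraction of the
    search space whose objective value is at least as good as found_value.
    The fraction is estimated by random sampling (seeded for repeatability)."""
    rng = random.Random(0)
    at_least_as_good = sum(
        objective(rng.choice(space)) >= found_value for _ in range(samples)
    )
    frac = max(at_least_as_good, 1) / samples  # avoid log of zero
    return -math.log2(frac)

# Illustrative problem: maximize -(x - 700)^2 over the integers 0..999.
space = list(range(1000))
objective = lambda x: -(x - 700) ** 2

weak = optimization_power_bits(objective, objective(650), space)   # within 50 of optimum
strong = optimization_power_bits(objective, objective(699), space) # within 1 of optimum
print(weak, strong)  # the stronger optimizer scores noticeably more bits
```

A measure like this is attractive because it abstracts over the domain, but it still presumes we can enumerate or sample the search space, which is exactly what becomes hard for the open-ended problems discussed below.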
Probably beating humans in ALL known domains, including philosophy, poetry, love, and power.
Although we use limited proxies (e.g., IQ test questions) to estimate human intelligence.
“Limited proxies”: yes, well said. I would also add the solving of problems which humans have long been unable to solve: aging, cancer, interstellar travel, world peace, and the resurrection of the dead.
I mean, the ability to estimate the abilities of superintelligences appears to be an aspect of reliable Vingean reflection.
Or we could ask the AI itself to create the scale. We could also use its size to estimate its power, such as the number of neurons. But a real test needs to be a powerful and universal optimization problem, something like the ability to crack complex encryption or to play Go.
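The "crack complex encryption" idea can at least be prototyped as a smoothly scalable puzzle benchmark. A hedged toy sketch (a hash-preimage puzzle stands in for real cryptanalysis here, and all names are illustrative): the task is to find an input whose SHA-256 digest starts with a given number of zero bits, so difficulty grows exponentially and the hardest level solved within a fixed time budget becomes a crude capability score.

```python
import hashlib
import itertools
import time

def solve_puzzle(n_bits):
    """Find an integer whose decimal string hashes (SHA-256) to a digest
    starting with n_bits zero bits (rounded down to whole hex digits)."""
    target_nibbles = n_bits // 4
    for i in itertools.count():
        digest = hashlib.sha256(str(i).encode()).hexdigest()
        if digest.startswith("0" * target_nibbles):
            return i

def capability_score(time_budget_s=1.0):
    """Largest difficulty level fully solved before the budget runs out.
    (The final in-progress puzzle may overshoot the budget; this is a sketch.)"""
    score, n_bits = 0, 4
    start = time.monotonic()
    while time.monotonic() - start < time_budget_s:
        solve_puzzle(n_bits)
        score, n_bits = n_bits, n_bits + 4
    return score

print(capability_score())  # hardest level cleared within the default budget
```

The appeal of such a test is that it is objective and has essentially unbounded headroom; the drawback, as noted earlier in the thread, is that it measures raw search power rather than the open-ended abilities (science, persuasion, planning) that make superintelligence dangerous.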
I created a list of steps or milestones for future AI, and we could use a similar list to estimate the level of a current super-AI.
1. AI autopilot. Tesla already has it.
2. AI home robot. All prerequisites for building it are available, by 2020 at the latest. This robot will be able to understand and fulfil an order like “Bring my slippers from the other room”. On its basis, something like a “mind-brick” may be created: a universal robot brain able to navigate in natural space and recognize speech. This mind-brick could then be used to create more sophisticated systems.
3. AI intellectual assistant. Search through personal documentation, with the ability to ask questions in natural language and receive wise answers. 2020-2030.
4. AI human model. Still very vague. Could be realized by adapting a robot brain. It will be able to simulate 99% of ordinary human behavior, probably except for solving problems of consciousness, complicated creative tasks, and generating innovations. 2030.
5. AI as powerful as an entire research institution, able to create scientific knowledge and upgrade itself. It could be built from numerous human models: 100 simulated people, each working 100 times faster than a human being, would probably be able to create an AI that self-improves faster than humans in other laboratories can. 2030-2100.
5a. Self-improvement threshold. AI becomes able to self-improve independently, and more quickly than all of humanity.
5b. Consciousness and qualia threshold. AI is able not only to pass the Turing test in all cases, but has experiences and understands why and what it is.
6. Mankind-level AI. AI possessing intelligence comparable to that of all mankind. 2040-2100.
7. AI with intelligence 10-100 times greater than that of all mankind. It will be able to solve the problems of aging and cancer, explore the solar system, build nanorobots, and radically improve the lives of all people. 2050-2100.
8. Jupiter brain: a huge AI using an entire planet’s mass for computation. It could reconstruct dead people, create complex simulations of the past, and dispatch von Neumann probes. 2100-3000.
9. Galactic Kardashev level 3 AI. Several million years from now.
10. All-Universe AI. Several billion years from now.
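A milestone list like the one above is easy to turn into a machine-checkable scale. A hypothetical sketch (the data structure, the `achieved` flags, and the scoring rule are my own illustrative assumptions, not claims from the list):

```python
from dataclasses import dataclass

@dataclass
class Milestone:
    level: str
    description: str
    achieved: bool = False  # illustrative flags only

SCALE = [
    Milestone("1", "AI autopilot", achieved=True),
    Milestone("2", "AI home robot"),
    Milestone("3", "AI intellectual assistant"),
    Milestone("4", "AI human model"),
    Milestone("5", "AI as powerful as a research institution"),
    Milestone("5a", "Self-improvement threshold"),
    Milestone("5b", "Consciousness and qualia threshold"),
    Milestone("6", "Mankind-level AI"),
    Milestone("7", "AI 10-100x all mankind"),
    Milestone("8", "Jupiter brain"),
    Milestone("9", "Galactic Kardashev level 3 AI"),
    Milestone("10", "All-Universe AI"),
]

def current_level(scale):
    """Highest milestone achieved consecutively from the bottom of the scale.
    Requiring consecutiveness reflects the list's assumption that each stage
    builds on the previous ones."""
    level = None
    for m in scale:
        if not m.achieved:
            break
        level = m.level
    return level

print(current_level(SCALE))  # -> 1
```

Grading an actual system against such a scale would, of course, reintroduce every measurement problem discussed earlier in the thread; the structure only makes the estimate explicit, not easy.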