If AI were developed in this way, one upside might be that you would have a decent understanding of how smart it was at any given point (since that is how the AI was selected), so you would be unlikely to accidentally end up with a system that was much more capable than you expected. I’m not sure how probable that is in other cases, but people sometimes express concern about it.
If the simulation is running at computer speed, that would only be true in its subjective time. By the time we notice that variation number 34 on backtracking and arc consistency actually speeds up evolution by the final four orders of magnitude, it may be too late to stop the process from creating simulants that are smarter than we are and can achieve takeoff acceleration.
Keep in mind, though, that if this were to happen, then we’d be overwhelmingly likely to be living in a simulation already, for the reasons given in the Simulation Argument paper.
You could have a process that automatically stopped when it produced a system with a certain level of tested intelligence. But perhaps I’m misunderstanding you.
With respect to the simulation argument, would you mind reminding us of why we would be overwhelmingly likely to be in a simulation if we build AI which ‘achieves takeoff acceleration’, if it’s quick to summarize? I expect many people haven’t read the simulation argument paper.
Having a system that stops when you reach a level of tested intelligence sounds appealing, but I’d be afraid of the measure of intelligence at hand being too fuzzy.
So the system would not detect that it had achieved that level of intelligence, and it would bootstrap and take off in time to destroy the control system that was supposed to halt it. This would happen if we failed to solve any of many distinct problems that we don’t know how to solve yet, like symbol grounding, analogical and metaphorical processing, and three more complex ones that I can think of but don’t want to spend resources in this thread mentioning. The same goes for the simulation argument.
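To make the automatic-halting idea above concrete, here is a minimal sketch of what such a check might look like inside an evolutionary search loop. Everything here is hypothetical: the measured_intelligence scoring function, the INTELLIGENCE_THRESHOLD cutoff, and the toy mutation step are stand-ins, and the noise term in the score is only there to illustrate the worry that a fuzzy measure may fail to trigger the halt in time.

```python
import random

INTELLIGENCE_THRESHOLD = 100.0  # hypothetical "tested intelligence" cutoff


def measured_intelligence(agent):
    """Hypothetical benchmark score for an evolved agent.

    This is the fuzzy part: a noisy or incomplete test can
    under-report capability, so the loop below may fail to halt
    before the population is already past the intended level.
    """
    return agent["true_capability"] + random.gauss(0, 10)  # noisy measurement


def mutate(agent):
    """Toy variation step standing in for the evolutionary search."""
    return {"true_capability": agent["true_capability"] + random.uniform(0, 5)}


def evolve_with_halting(generations=10_000):
    population = [{"true_capability": 0.0} for _ in range(50)]
    for gen in range(generations):
        # Rank agents by their measured (not true) intelligence.
        population.sort(key=measured_intelligence, reverse=True)
        best = population[0]
        # Automatic stop: halt once the tested score crosses the cutoff.
        if measured_intelligence(best) >= INTELLIGENCE_THRESHOLD:
            print(f"Halting at generation {gen}")
            return best
        # Otherwise keep the top half and refill with mutated copies.
        survivors = population[: len(population) // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return population[0]


if __name__ == "__main__":
    evolve_with_halting()
```

The design point the sketch is meant to show: the halting condition is only as reliable as the test behind it, which is exactly the fuzziness concern raised in the exchange above.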