EDIT: Sorry, I’m specifically not talking about seed AIs. I’m talking about the (non-)possibility of commercial programs designed for specific applications “going rogue”.
To adopt self-modification as a strategy, a system would first have to have knowledge of itself. Then, in order to pursue the strategy, it would have to decide that the cost of discovering self-improvements was an efficient use of its resources, assuming it could even estimate how long it would take to discover an actual improvement to its system.
Intelligence can’t just instantly come up with the right answer by applying heuristics. It has to cycle through heuristics (to narrow the search space), search within the narrowed space, and testing (or proving) of the candidates it finds.
Self-improvement is very costly in terms of these cycles. To even confirm that a modification is a self-improvement, a system has to simulate its modified performance on a variety of test problems. If the system is designed to solve problems that each take time X, then every test problem in that sample costs at least X, so empirically evaluating a single candidate modification costs some multiple of X (and a proof would likely take longer still). And with no prior knowledge, most proposed modifications would not be improvements.
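As a rough back-of-the-envelope illustration, here is a sketch of that expected cost. All numbers are made up purely for the sake of the arithmetic, not taken from any real system:

```python
# Sketch of the expected cost of confirming one self-improvement, under the
# assumptions above: evaluating a candidate modification means re-running a
# test suite of ordinary problems, and most candidates are not improvements.

base_solve_time = 1.0    # X: time to solve one ordinary problem (arbitrary units)
test_suite_size = 20     # problems needed for a meaningful empirical sample
p_improvement = 0.01     # assumed prior that a random candidate actually helps

cost_per_candidate = test_suite_size * base_solve_time
expected_cost_per_improvement = cost_per_candidate / p_improvement

print(f"Evaluating one candidate: {cost_per_candidate:.0f} problem-solving times")
print(f"Expected cost per confirmed improvement: {expected_cost_per_improvement:.0f}")
# -> 20 and 2000: each confirmed improvement costs thousands of ordinary solves,
#    before any attempt at a proof of correctness.
```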
AI ethics is not necessary to constrain such systems. A non-lenient pruning process, which would be required anyway for efficiency on ordinary problems, is enough; something like the sketch below.
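A minimal sketch of the kind of pruning I mean, assuming the system assigns a rough cost estimate to each candidate strategy and works within a fixed time budget (the strategy names and numbers are hypothetical):

```python
# Prune candidate strategies whose estimated cost exceeds the time budget.
# This is ordinary cost accounting, not an ethics module.

def prune(candidates, budget):
    """Keep only strategies whose estimated cost fits within the budget."""
    return [c for c in candidates if c["estimated_cost"] <= budget]

candidates = [
    {"name": "apply known heuristic",     "estimated_cost": 1},
    {"name": "brute-force search",        "estimated_cost": 50},
    {"name": "attempt self-modification", "estimated_cost": 2000},  # from the estimate above
]

print(prune(candidates, budget=100))
# Self-modification never survives the cut, for the same reason any other
# wildly over-budget strategy wouldn't.
```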