A programmer in a basement writes some code. That code is picked up and sent to you at the computer monitoring station. You read it and can’t understand it. Now what? You don’t know the nature of intelligence. It might be possible, sometimes, for a team of very smart people to unravel an arbitrary piece of spaghetti code and prove that it’s safe. (Rice’s theorem says you can’t decide every nontrivial property of arbitrary code.) But incompetent coders are producing buckets of the stuff, and they expect it to run the moment they press go.
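To see why that parenthetical bites, here is a minimal sketch, in Python, of the standard reduction behind Rice’s theorem. The names is_safe, run_program and do_unsafe_thing are hypothetical placeholders invented for the example; the point is only that a perfect checker for any nontrivial behavioural property would let you decide the halting problem.

```python
# Sketch only: assume, for contradiction, that a perfect checker exists.

def is_safe(source: str) -> bool:
    """Hypothetical perfect decider for a nontrivial behavioural property."""
    raise NotImplementedError  # cannot exist, as the reduction below shows

def would_halt(program: str, data: str) -> bool:
    """If is_safe existed, we could decide the halting problem: wrap the
    target program so the 'unsafe' behaviour is reached exactly when the
    target halts on the given input."""
    wrapper = (
        "def main():\n"
        f"    run_program({program!r}, {data!r})  # simulate the target\n"
        "    do_unsafe_thing()  # reached only if the simulation halts\n"
    )
    # Wrapper is 'safe' iff the target never halts, so:
    return not is_safe(wrapper)

# Halting is undecidable, so no such is_safe can exist for arbitrary code.
```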
An algorithm that can understand arbitrary code, to the level where it can test for intelligence, and can run in a split second on the dev’s laptop (so they don’t notice a delay), is well into foom territory. A typical programmer, forced to quickly scan other people’s code to see if it’s “safe”, will see little more than suggestively named variables and how many if statements are used.
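For contrast, here is roughly the level of scan that actually fits in a split second. This is an illustrative sketch, not a real safety test; the watch-list of “suggestive” names is an assumption made up for the example, and it tells you almost nothing about what the code does.

```python
import ast

# Names a hurried reviewer might find "alarming" -- purely illustrative.
SUSPECT_NAMES = {"self_improve", "utility", "reward", "world_model", "recurse"}

def shallow_scan(source: str) -> dict:
    """Count if statements and flag suggestively named variables."""
    tree = ast.parse(source)
    if_count = sum(isinstance(node, ast.If) for node in ast.walk(tree))
    names = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return {
        "if_statements": if_count,
        "suggestive_names": sorted(names & SUSPECT_NAMES),
    }

print(shallow_scan("if reward > 0:\n    self_improve = True"))
# {'if_statements': 1, 'suggestive_names': ['reward', 'self_improve']}
```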
One can’t always understand the code, but predicting the goals of the programmer may be a simpler task. If he has read “Superintelligence”, googled “self-improving AI”, and is an expert in ML, the fact that he has locked himself in a basement may be alarming.