Good article. I also agree with the comment that framing AI as a “second species” is likely incorrect.
A comment about the “agentic tool” situation. Most of the time, people operate like that too: when you are “in the moment,” you are not questioning whether you should be doing something else, getting distracted, or consulting your ethics about whether the task is good for the world. I expect this to be the default state for AI, i.e. always less of a “unitary agent” than people are. The crux is how much less, and in what proportion.
However, in an extremely fast takeoff combined with an arms race, you could of course imagine someone simply telling the system to get ahead as fast as possible, especially if, say, a superpower believed it was behind and would do anything to catch up. A unitary agent would probably be the fastest way to achieve that: “improve yourself ASAP so we don’t lose the war” requires situational awareness, power seeking, and so on.