Why is it that most people seem to stubbornly think of AI systems as passive tools, and strongly believe they cannot become agentic any time soon in the way humans or animals are?
It could be because they are thinking of AI in terms of the kind of conventional computer they are familiar with...and over-generalising. Ordinary PCs are passive, waiting for the user to tell them what to do, because that's what the mass market wants...the market for agentive software is smaller and more esoteric.
It doesn’t have to be the result of explicit metaphysical beliefs...it could be the result of vague guesswork, and analogical thinking.
The closer you are to the system you study, the clearer it is that the system is an automaton.
Now you are defining “agentic” as “possessing spooky metaphysical free will” rather than “not passive”. It’s perfectly possible to build an agent-in-the-sense-of-active out of mechanical parts.
Yeah, I could be wrong, but my claim is that implicit metaphysical beliefs play a big role here.
I was just noting that people who are aware of the internal workings of AI will have to acutely face cognitive dissonance if they admit it can have “spooky” agency. They can’t compartmentalize it the way others can.
I don’t think Yudkowsky has.
Yes. That’s why it’s a bad idea to treat any metaphysical claim as certain. Including the one above.