I don’t think humans have such real-world volition, regardless of whether the simulation hypothesis is true or false. Humans seem to have a blacklist of solutions that are deemed wrong, and that’s it. The blacklist gets selected by the world (those using bad blacklists don’t reproduce a whole lot), but it isn’t really a product of reasoning. Meanwhile, the effective approach to reproduction relies on entirely fake ultimate goals (religion), and it seems to work only for the lower part of the intelligence range.
“Agents” includes humans by definition, but that doesn’t mean humans will have the attributes you think agents should have.
If not even humans satisfy your definition of an agent (which was, at least a couple of comments ago, a tool possessing “real world intentionality”), then why is your version of the tool/agent distinction worthwhile?
My impression is that the tool/agent distinction is really about whether we use the social-modeling parts of our brain. It’s less a question about the world than about which outlook is fruitful. Modeling humans as humans works well; we are wired for it. Anthropomorphizing the desires of software or robots is only sometimes useful.
Does “real world intentionality” amount to both desiring X and desiring that X hold in the “real world”?
If the simulation hypothesis is true, are humans still agents?