Understanding this definition would require me to understand your distinction between tools and agents, which already depends upon the definition of “real world intentionality”.
Can you tell me what it means in simpler terms?
Hmm, did you see the edit?
I think framing it as a distinction between tools and agents is perhaps not very clear, but there aren’t many words to choose from. A good way to understand it would be to understand how the software we know how to make, and do make, actually works, what it does, and how it is fundamentally different from a compressed English statement like ‘paperclip maximizing’.
In simpler terms: say you have children. You want your children to be safe in the real world; you do not want a mere belief that they are safe while they are not. That is real world intentionality. It seems very simple, but it gets very elusive once you note that ‘your children’ is defined within your mind, that their safety is defined within your mind, and that you can’t do anything but work towards a state of belief.
Most interestingly, you can update those definitions, so that e.g. you can opt to destructively mind-upload your children into a remote backup if you anticipate a high likelihood of imminent destruction of your household by a meteorite strike and have no non-destructive scanning available. At the same time, the same ability to update makes other people kill their children and themselves to go to heaven. This flexibility is insecure against your own problem solver.
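To make the last point concrete, here is a minimal sketch (Python; `world_model`, `children_safe`, and the plan names are all invented for illustration). The objective is a predicate over the agent’s own model, so a plan that merely edits the belief scores exactly as well as a plan that changes the world:

```python
# Toy sketch: the "goal" is a predicate evaluated on the agent's internal
# model, not on the world itself. All names are invented for illustration.

world_model = {"children_location": "home", "meteor_incoming": True}

def children_safe(model):
    # The definition of "safe" lives inside the agent, over its model.
    return (not model["meteor_incoming"]) or model["children_location"] != "home"

candidate_plans = {
    # Changes the world (and, through perception, the model).
    "evacuate": lambda m: {**m, "children_location": "shelter"},
    # Becomes acceptable only after the agent updates its definitions.
    "destructive_upload": lambda m: {**m, "children_location": "remote_backup"},
    # Touches nothing in the world: only the belief is edited.
    "decide_meteor_is_fake": lambda m: {**m, "meteor_incoming": False},
}

# Scored purely on the resulting belief state, all three plans tie.
for name, plan in candidate_plans.items():
    print(name, children_safe(plan(world_model)))  # True for each
```

Nothing in the scoring distinguishes the plan that edits only the belief from the plans that change the world; that is the sense in which the flexibility is insecure against your own problem solver.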
Does “real world intentionality” amount to both desiring X and desiring that X hold in the “real world”?
If the simulation hypothesis is true, are humans still agents?
I don’t think humans have such real world volition, regardless of whether the simulation hypothesis is true or false. Humans seem to have a blacklist of solutions that are deemed wrong, and that’s it. The blacklist gets selected by the world (those using bad blacklists don’t reproduce a whole lot), but it isn’t really a product of reasoning; the effective approach to reproduction relies on entirely fake ultimate goals (religion), and seems to work only for a low part of the intelligence range.
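A crude sketch of that ‘blacklist, not derived goals’ picture (Python again, all names invented): solutions come from some generator, and the only normative machinery is a filter.

```python
# Sketch: generate candidate solutions, then filter through a blacklist.
# There is no utility function over world states anywhere in this picture.
# All names are invented for illustration.

BLACKLIST = {"steal", "abandon_children"}  # selected by the world over
                                           # generations, not derived by reasoning

def propose_solutions(problem):
    # Stand-in for whatever the problem solver produces.
    return ["negotiate", "steal", "work_overtime", "abandon_children"]

def acceptable_solutions(problem):
    return [s for s in propose_solutions(problem) if s not in BLACKLIST]

print(acceptable_solutions("feed the family"))
# -> ['negotiate', 'work_overtime']
```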
‘Agents’ includes humans by definition, but that doesn’t mean humans will have the attributes that you think agents should have.
If not even humans satisfy your definition of an agent (which was, at least a couple of comments ago, a tool possessing “real world intentionality”), then why is your version of the tool/agent distinction worthwhile?
My impression is that the tool/agent distinction is really about whether we use the social-modeling parts of our brain. It’s a question not about the world as much as about what’s a fruitful outlook. Modeling humans as humans works well—we are wired for this. Anthropomorphizing the desires of software or robots is only sometimes useful.