Unsolved in the sense that we don’t know how to give computer intelligences intentional states in a way that everyone would be all like “wow that AI clearly has original intentionality and isn’t just coasting off of humans sitting at the end of the chain interpreting their otherwise entirely meaningless symbols”. Maybe this problem is just stupid and will solve itself but we don’t know that yet, hence e.g. Peter’s (unpublished?) paper on goal stability under ontological shifts. (ETA: I likely don’t understand how you’re thinking about the problem.)
Being able to do this would also be a step towards the related goal of trying to give computer intelligences intelligence that we cannot construe as ‘intentionality’ in any morally salient sense, so as to satisfy any “house-elf-like” qualms that we may have.
e.g. Peter’s (unpublished?) paper on goal stability under ontological shifts.
I assume you mean Ontological Crises in Artificial Agents’ Value Systems? I just finished republishing that one: originally published form, and a new SingInst-style form. A good read.