Right. That said, wireheading, aka the grounding problem, is a huge unsolved philosophical problem, so I’m not sure Schmidhuber is obligated to answer wireheading objections to his theory.
Unsolved philosophical problem? Huh? No additional philosophical breakthroughs are required for wireheading to not be a problem.
If I want (all things considered, etc.) to wirehead, I'll wirehead. If I don't want to wirehead, I will not wirehead. Wireheading introduces no special additional problems and is handled the same way all other preferences about future states of the universe can be handled.
(Note: It is likely that you have some more specific point regarding in what sense you consider wireheading ‘unsolved’. I welcome explanations or sources.)
Unsolved in the sense that we don’t know how to give computer intelligences intentional states in a way that everyone would be all like “wow that AI clearly has original intentionality and isn’t just coasting off of humans sitting at the end of the chain interpreting their otherwise entirely meaningless symbols”. Maybe this problem is just stupid and will solve itself but we don’t know that yet, hence e.g. Peter’s (unpublished?) paper on goal stability under ontological shifts. (ETA: I likely don’t understand how you’re thinking about the problem.)
Unsolved in the sense that we don’t know how to give computer intelligences intentional states in a way that everyone would be all like “wow that AI clearly has original intentionality and isn’t just coasting off of humans sitting at the end of the chain interpreting their otherwise entirely meaningless symbols”.
Being able to do this would also be a step towards the related goal of giving computer intelligences a kind of intelligence that we cannot construe as 'intentionality' in any morally salient sense, so as to satisfy any "house-elf-like" qualms we may have.
e.g. Peter’s (unpublished?) paper on goal stability under ontological shifts.
I assume you mean Ontological Crises in Artificial Agents' Value Systems? I just finished republishing that one; it's available in its originally published form and in the new SingInst style. A good read.