Largely agree. I think you’re exploring what I’d call the deep implications of the fact that agents are embedded rather than Cartesian.
Interesting. Is it that if we were Cartesian, you’d expect to be able to look at the agent-outside-the-world to find answers to questions about what even is the right way to go about building AI?
Not really. If we were Cartesian, then to fit the way we find the world, it seems it would have to be that agentiness is created outside the observable universe, possibly somewhere hypercomputation is possible, and that might only admit an answer about how to build AI that looks roughly like “put a soul in it”, i.e. link it up to this other place agentiness comes from. Although I guess if the world really did look like that, maybe the way to do the “soul linkage” part would be visible to us, and since it isn’t, that seems unlikely.
Well ok, agreed, but even if we were Cartesian, we would still have questions about what is the right way to link up our machines with this place where agentiness is coming from, how we discern whether we are in fact Cartesian or embedded, and so on down to the problem of the criterion as you described it.
One common response to difficult philosophical problems like these seems to be to just build AI that uses some form of indirect normativity, such as CEV, HCH, or AI debate, to work out what wise humans would do about them. But I don’t think it’s so easy to sidestep the problem of the criterion.
Oh, I don’t think those things exactly sidestep the problem of the criterion so much as commit to a response to it without necessarily realizing that’s what they’re doing. All of them punt by saying “let humans figure out that part”, which, at the end of the day, is what any solution is going to do, since we’re the ones building the AI and making the decisions. But we can be more or less deliberate about how we do that part.
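As an aside, the “punt to humans” structure is easy to see in miniature. Below is a hypothetical sketch of an HCH-style recursion in Python; the `Human` oracle type, the `hch` function, and the decomposition prompts are all illustrative assumptions, not any real system’s API. The point is structural: every branch of the recursion bottoms out in a human judgment, so the scheme doesn’t answer the problem of the criterion, it relocates it to the humans in the loop.

```python
from typing import Callable

# A human is modeled as an oracle mapping a question to an answer.
# (This is an illustrative assumption, not a real API.)
Human = Callable[[str], str]

def hch(human: Human, question: str, depth: int) -> str:
    """Answer `question` by consulting a human who may delegate one
    subquestion to another copy of this same procedure."""
    if depth == 0:
        # Base case: a human just answers, with no further delegation.
        return human(question)
    # A human picks a helpful subquestion...
    subquestion = human(f"What subquestion would help answer: {question}")
    # ...a (simulated) copy of the setup answers it...
    subanswer = hch(human, subquestion, depth - 1)
    # ...and a human combines the pieces into a final answer.
    return human(f"Given {subanswer!r}, answer: {question}")

# Toy usage with a stub human that just echoes, to show that every
# step of the recursion is a human judgment call.
if __name__ == "__main__":
    def stub_human(q: str) -> str:
        return f"<human judgment on: {q}>"
    print(hch(stub_human, "Is this AI design good?", depth=2))
```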