So the first thing to do is to group the partial preferences together according to similarity (for example, preferences for concepts closely related in terms of webs of connotations should generally be grouped together), and generalise them in some regularised way. Generalise means, here, that they are transformed into full preferences, comparing all possible universes. [...] It seems that standard machine learning techniques should already be up to this task (with all the usual current problems).
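For concreteness, the "group by similarity" step could be sketched as a simple clustering over vector encodings of preferences. This is a minimal, hypothetical illustration: the preference names and toy vectors below are invented, and producing faithful numeric encodings of real partial preferences is itself the hard, unsolved part of the proposal.

```python
# Hypothetical sketch of grouping partial preferences by similarity.
# Assumes each partial preference already comes with a numeric
# encoding (toy hand-made vectors here); obtaining such encodings
# for real human preferences is the hard, open problem.

def dist(a, b):
    """Euclidean distance between two encoding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def group_by_similarity(prefs, threshold):
    """Greedy single-link grouping: a preference joins the first
    existing group containing a member within `threshold`."""
    groups = []
    for name, vec in prefs.items():
        for g in groups:
            if any(dist(vec, v) <= threshold for _, v in g):
                g.append((name, vec))
                break
        else:
            groups.append([(name, vec)])
    return [[n for n, _ in g] for g in groups]

# Invented example preferences with made-up encodings.
prefs = {
    "more happy conscious beings": (1.0, 0.9, 0.0),
    "fewer suffering minds":       (0.9, 1.0, 0.1),
    "preserve natural beauty":     (0.0, 0.1, 1.0),
}
print(group_by_similarity(prefs, threshold=0.5))
# The two consciousness-related preferences end up in one group.
```

In practice one would use a learned embedding and a standard clustering algorithm rather than a fixed threshold, but the sketch shows the shape of the step being proposed, not a solution to the grounding problem the reply below raises.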
I don’t understand how this is even close to being possible today. For example, I have some partial preferences that could broadly be described as valuing the existence of positive conscious experiences, but I have no idea how to generalize this to full preferences, since I do not have a way to determine, given an arbitrary physical system, whether it contains a mind that is having a positive conscious experience. This seems like a very hard philosophical problem to solve, and I don’t see how “standard machine learning techniques” could possibly be up to this task.
The way I would approach this problem is to note that humans seem to have a way of trying to generalize (e.g., figure out what we really mean by “positive conscious experience”) by “doing philosophy” or “applying philosophical reasoning”, and if we better understood what we are doing when we “do philosophy”, then maybe we could program or teach an AI to do that. See Some Thoughts on Metaphilosophy, where I wrote down some recent thoughts along these lines.
I’m curious to know what your thinking is here, in more detail.
I’d say that this problem doesn’t belong in sections 2.3–2.4 (collecting and generalising preferences), but in section 1.2 (symbol grounding, and especially the web of connotations). That’s where these questions should be solved, in my view.
So yeah, I agree that standard machine learning is not up to the task yet, at all.
(As a minor aside, I’m also unsure whether partial preferences really need to be made total before combining them; that step may be unnecessary.)