Can someone explain what problem metaethics is supposed to solve?
The problem to which you believe you have a solution, a solution so obvious to you that you no longer see the problem and cannot even describe the solution.
Your solution seems to consist of no more than a description of your subjective experience of moral intuition and a couple of speculations about the mechanism, speculations you have done no more than imagine, and then imagine that you could implement them. All you’ve done is imagine a black box with an output labelled “moral intuitions”. Or a collection of black boxes.
Either perspective can be implemented as a computer program pretty easily
It seems I fail at expressing myself clearly. Sometimes I write posts formulated as questions (e.g. this one, complete with “please help!”) but they come across as position statements instead. I’m not proposing or defending any solution (to what?), I’m asking where the philosophical problem of “metaethics” lies, modulo the (obvious) understanding that our moral intuitions come from some mechanical source. I’m asking what sense lukeprog’s questions can make, when reformulated in such mechanical terms. Why oh why can’t people just answer directly?
I’m not proposing or defending any solution (to what?), I’m asking where the philosophical problem of “metaethics” lies, modulo the (obvious) understanding that our moral intuitions come from some mechanical source.
That final clause is your proposed solution to the questions you quoted from lukeprog, and it fails to dissolve them. It’s like answering “how do living things work?” with “atoms!”.
He’s asking something like: ‘given that we know living things are built from atoms, what specific questions are you trying to answer?’. He wants answers (to the question of living things) that specifically mention atoms, like ‘what specific configurations of atoms are commonly used in living things?’, which would have a corresponding answer of ‘well, there are 20 amino acids in common use; here are their structures’.
Then that would be the wrong way to go about it, and part of (I suspect) why anti-reductionist ideologies become popular among human minds. From the fact that atoms (or quarks) govern social interaction and preferences, it does not follow that the best explanation/model directed at a human will speak at the level of atoms, or explicitly reference them.
Rather, it need only use higher level regularities such as emotions, their historical basis, their chemical mechanisms, etc. The mechanisms for moral intuitions almost certainly act in a way that is not dependent on the particulars of atoms, in the same way that the mechanisms behind a heat engine do not depend on any particular atom having any particular velocity—just that, in the aggregate, they produce a certain pressure, temperature, etc.
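The heat-engine point can be made concrete with a toy Python sketch (everything here is illustrative: idealized particles, arbitrary units, and a stand-in “temperature”). Shuffling which particle carries which velocity changes every micro-level fact while leaving the aggregate quantity untouched:

```python
import random

# Toy illustration: macroscopic quantities depend only on aggregates,
# not on which particular particle has which particular velocity.
random.seed(0)

# Assign arbitrary speeds to 10,000 idealized particles.
speeds = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def mean_square_speed(vs):
    # "Temperature" in this toy model is proportional to the mean
    # squared speed (kinetic energy per particle).
    return sum(v * v for v in vs) / len(vs)

t1 = mean_square_speed(speeds)

# Shuffle which particle has which velocity: every individual
# assignment changes, but the aggregate does not.
shuffled = speeds[:]
random.shuffle(shuffled)
t2 = mean_square_speed(shuffled)

print(abs(t1 - t2) < 1e-9)  # prints True: the macrostate ignores the permutation
```

The same holds for any explanation pitched at the level of emotions or cognitive mechanisms: it quantifies over aggregates and regularities, not over particular atoms.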
The constraint of reductionism (which correct reasoning quickly converges to in this universe) is not that every explanation must reference atoms, but rather, that it could ultimately be connected to an atom-level model, even if that adds no further insight on the particular problem under investigation.
So asking for an atom-level explanation is asking for far too fine-grained a model.
Sorry, I was unclear. I didn’t mean that cousin_it was looking for atom-level explanations specifically; I meant that cousin_it wanted the questions explained in terms of questions involving already-understood phenomena, or at least questions that are obviously in-principle reducible (like ‘what is the cognitive algorithm that makes humans think and then experience X?’).
Your solution seems to consist of no more than a description of your subjective experience of moral intuition and a couple of speculations about the mechanism, speculations you have done no more than imagine, and then imagine that you could implement them.
That is not correct; at the very least, his explanation is the simpler one.
If we want to figure out why something like moral philosophy exists, we’ll have to reduce it to underlying phenomena rather than talk about it in terms of itself.
Can you be more explicit in saying what the problem is? This answer isn’t very helpful.
He has said that (1) morality is a term which denotes some subset of cognitive algorithms, which are very probably deterministic and thus implementable as a program (of course we don’t know what exactly those algorithms are, but this is an empirical, not a philosophical problem) and that (2) the origins of morality can be explained by evolutionary psychology. Do you disagree with either (1) or (2)?
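Claim (1) can be illustrated with a deliberately hollow sketch (all names and table entries below are hypothetical placeholders, not a proposed theory of morality): a deterministic mapping from situations to judgments is trivially a program, and the empirical question is what actually fills the box.

```python
# Toy sketch of claim (1): "moral intuition" as an as-yet-unknown
# deterministic function from situations to judgments.
from typing import NamedTuple

class Judgment(NamedTuple):
    verdict: str      # e.g. "wrong", "permissible"
    strength: float   # felt intensity, from 0.0 to 1.0

def moral_intuition(situation: str) -> Judgment:
    # A black box with an output labelled "moral intuitions":
    # deterministic, hence trivially implementable -- but this lookup
    # table is exactly the part nobody yet knows how to derive.
    table = {
        "gratuitous cruelty": Judgment("wrong", 0.95),
        "keeping a promise": Judgment("permissible", 0.7),
    }
    return table.get(situation, Judgment("unclear", 0.0))

print(moral_intuition("gratuitous cruelty").verdict)  # prints "wrong"
```

Writing such a stub is easy; specifying the actual algorithm behind human moral cognition is the hard, empirical part.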
Either perspective can be implemented as a computer program pretty easily
You’ve solved AGI? Tell us more!
Why oh why can’t people just answer directly?
They are confused by the same words they use to dissolve their confusion about those words.
Oh, OK.