For instance, I think the classic heuristics and biases view holds that many of our default intuitions are predictably inaccurate.
The classic heuristics and biases literature is about things like the planning fallacy; it has very little to say about intuitions about human value, which is more in the domain of experimental moral philosophy. “Intuitions” is also a pretty broad term and arguably doesn’t cleave reality at the joints; it’s easy for some intuitions to fall prey to cultural blindspots (along the lines of Paul Graham’s “moral fashions”) in exactly the way I’m worried about, and I think finding ways of getting past blindspots in a broad sense is an important problem. I trust some of my intuitions, whatever that means, much more than others for doing this.
I believe there are many both in broader society and in this community for whom human sexuality is outright negative on net, and yet I see extremely few willing to entertain this notion, even as a hypothesis.
I’m willing to entertain this as a hypothesis, although I’d be extremely sad to live in this world. I appreciate your willingness to stick up for this belief; I think this is exactly the kind of getting-past-blindspots thing we need on the meta level even if I currently disagree on the object level.
I’d be interested in hearing examples of scenarios where you believe important policies seem misaligned with embodied values; I find this claim very bold and would like to figure out whether there’s a good way to test it. If true, it could necessitate a serious realignment of many groups.
So as I mentioned in another comment, I think basically all of the weird positions described in the SSC post on EAG 2017 are wrong. People who are worrying about insect suffering or particle suffering seem to me to be making philosophical mistakes and to the extent that those people are setting agendas I think they’re wasting everyone’s time and attention.
Personally I wonder how much of this disagreement can be attributed to prematurely settling on specific fundamental positions, or to some hidden metaphysics that certain organizations have (perhaps unknowingly) committed to, such as dualism or panpsychism. One of the most salient paragraphs from Scott’s article reads:
Morality wasn’t supposed to be like this. Most of the effective altruists I met were nonrealist utilitarians. They don’t believe in some objective moral law imposed by an outside Power. They just think that we should pursue our own human-parochial moral values effectively. If there was ever a recipe for a safe and milquetoast ethical system, that should be it. And yet once you start thinking about what morality is – really thinking, the kind where you try to use mathematical models and formal logic – it opens up into these dark eldritch vistas of infinities and contradictions. The effective altruists started out wanting to do good. And they did: whole nine-digit-sums worth of good, spreadsheets full of lives saved and diseases cured and disasters averted. But if you really want to understand what you’re doing – get past the point where you can catch falling apples, to the point where you have a complete theory of gravitation – you end up with something as remote from normal human tenderheartedness as the conference lunches were from normal human food.
I feel like this quote has some extremely deep but subtly stated insight that is in alignment with some of the points you made. Even if we all start from the position that there is no universal or ultimately real morality, when we apply all of our theorizing, modeling, debate, measurement, thinking, etc., we somehow end up making absolutist conclusions about what the “truly most important thing” is. And I wonder if this is primarily a social phenomenon: in the process of debating and organizing groups of people to accomplish things, it’s easier if we all converge to agreement on specific, easy-to-state questions.
A possible explanation for Scott’s observed duality between the “suits,” who just want to do the most easily measurable good, and the “weirdos,” who want to converge on rigorous answers to the toughest philosophical questions (answers that tend to look pretty bizarre), and for the fact that these are often the same people: my guess is that both come from converging to agreement on relatively formalizable questions. Such questions come in two forms: the “easy to measure” kind (how many people are dying from malaria, how poor is this group of people, etc.) and the “easy to model or theorize about” kind (what do we mean by suffering, what counts as a conscious being, etc.), and so activity and effort end up divided between those two forms of question.
“Easy” is meant in a relative sense, of course. Unfortunately, it seems that the kinds of questions that interest you (and which I agree are of crucial importance) fall into the “relatively hard” category, and are therefore much more difficult to organize concerted efforts around.
The classic heuristics and biases literature is about things like the planning fallacy; it has very little to say about intuitions about human value, which is more in the domain of experimental moral philosophy.
Fair point, though I do think it provides at least weak evidence in this domain as well. That said, there are other cases where intuitions about human value can be very wrong in the moment that are perhaps more salient; addictions and buyer’s remorse come to mind.
I’m willing to entertain this as a hypothesis, although I’d be extremely sad to live in this world. I appreciate your willingness to stick up for this belief; I think this is exactly the kind of getting-past-blindspots thing we need on the meta level even if I currently disagree on the object level.
Thanks!
So as I mentioned in another comment, I think basically all of the weird positions described in the SSC post on EAG 2017 are wrong. People who are worrying about insect suffering or particle suffering seem to me to be making philosophical mistakes and to the extent that those people are setting agendas I think they’re wasting everyone’s time and attention.
I agree that these positions are mistakes. That said, I have three replies:
1. I don’t think the people who are making these sorts of mistakes are setting agendas or important policies. There are a few small organizations that are concerned with these matters, but they are (as far as I can tell) not taken particularly seriously aside from a small contingent of hardcore supporters.
2. I worry that similar arguments can very easily be applied to all weird areas, even ones that may be valid. I personally think AI alignment considerations are quite significant, but I’ve often seen people saying things that I would parse as “being worried about AI alignment is a philosophical mistake”, for instance.
3. It is not clear to me that the “embodied” perspective you describe offers especially useful clarification on these issues. Perhaps it does in a way that I am too unskilled with this approach to understand? I (like you) think insect suffering and particle suffering are mistaken concepts and shouldn’t be taken seriously, but I don’t necessarily feel like I need an embodied perspective to realize that.