Well, that’s true, but I think it’s less a problem for me than it is for a lot of people here, because I don’t think there’s any respectable moral/ethical metric that you can maximize to begin with.
Ethics as a philosophical subject is on very shaky ground because it basically deals with creating pretty, consistent frameworks to systematize intuitions… but nobody ever told the intuitions that they had to be amenable to that. All forms of utilitarianism, specifically, have horrible problems with the lack of any defensible way to aggregate utilities. There are also issues about whose utility should count. Some people would include imaginary people, some would include animals, etc. But the alternatives to utilitarianism have their own problems.
So I, at least, am free to go for a lot of possible futures and take a lot of things into consideration. I can feel OK about using raw intuitions, or even aesthetic preferences, to choose between courses of action, including creating more-but-somewhat-sadder people versus fewer-but-somewhat-happier people. I don’t have any rigid system to narrow what I can care about, so I can choose to look at things like the diversity or complexity of their experiences, or how good the best ones are, or how bad the worst ones are, or in fact things that aren’t related to human experience at all. Or I can mix and match those at any moment. I’d feel uncomfortable with creating a bunch of people whose experiences were uniformly totally miserable, but that leaves me a lot of room to maneuver.
… and I’m even at least a little bit insulated from feeling like I have to actually do the absolute most I possibly can to predict every possible consequence of every action I take all of the time. I get the sense that that impossible self-demand really eats at a lot of people on Less Wrong.
Anyhow, if you’re like me, and you’ve somehow counterfactually been given the godlike power to knowingly choose between hunter-gatherers and agriculturalists as people to “make real”, even given total knowledge of the consequences, you can take into account not only their experiences, but your own aesthetic view of the kinds of worlds they generate. And I don’t find numbers aesthetically compelling.
It’d be the same for more-globally-warmed and less-globally-warmed people, although in that case you know that there will also be major consequences for people who are actually alive right now.
Sure, I find that take on moral intuitions plausible. But if society has to make a real choice on the order of “how much to tax carbon”, I think that collectively we would not want to make the decision based on people saying “meh, no strong opinions here, future world X just seems kinda prettier”. We need some kind of principled framework, and for that… well, I guess you need moral philosophy!
Sorry, missed this somehow.
I don’t think it’s plausible that there’ll ever be widespread agreement on any philosophical framework to be used to make policy decisions. In fact, I think that it’s much easier to make public policy decisions without trying to have a framework, precisely because the intuitions tend to be more shared than the systematizations.
I’ve never seen an actual political process that spent much time on a specific framework, and I’ve surely never heard of a constitution or other fundamental law or political consensus, anywhere, that said, let alone enforced, anything like “we’re a utilitarian society and will choose policies accordingly” or “we’re a virtue ethics society and will choose policies accordingly” or whatever.
The curious thing about your wording is that you go from ‘we would not want to make’ to ‘we need some kind of principled framework’. The former does not automatically imply the latter.
Additionally, you presuppose the possibility of discovering a ‘principled framework’ without first establishing that such a thing even exists. I think the parent comment was trying to get at this core issue.