We are not evaluating ethical systems but intuitions about abortion.
joaolkf
It’s a nice post with sound argumentation towards a conclusion many EA/rationalists will find uncomfortable. We certainly need more of this. However, this isn’t the first time someone has tried to sketch a probability calculus in order to account for moral uncertainty when analysing abortion. Like the previous attempts, yours seems to surreptitiously sneak some controversial assumptions into the probability estimates and numbers. This is further evidence to me that trying to do the math in cases where we still need more conceptual clarification isn’t really as useful as it would seem. Here are a few points you have sneaked in/ignored:
You are accepting some sort of Repugnant Conclusion, as mentioned here
You are ignoring the real-life circumstances in which abortion takes place. Firstly, putting your kid up for adoption isn’t always an available option. Additionally, I believe that in practice people are mostly choosing between having an abortion and raising an unwanted child with scarce resources (which probably has a negative moral value).
You are not accounting for the fact that even if adoption successfully takes place, adopted children have very low quality of life.
Overall, I think you are completely ignoring the fact that abortion can (perhaps more correctly) be characterized as the choice between creating a new life of suffering (negative value) or creating nothing (0 value). At the very least there is a big uncertainty there as well, so not aborting would perhaps have a value ranging from −77 to +77 QALYs. The moral value of aborting would then depend on the expected quality of life of the new life being created (and on the probability that not aborting would preclude having a wanted child later on). Therefore, it would have to be determined case by case. I would expect wealth and the moral value of abortion to be inversely correlated. This would mean abortion is permissible in countries where it shouldn’t be, and impermissible in countries where it should be.
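To make the case-by-case point concrete, here is a minimal sketch of the kind of expected-value calculation I have in mind. The ±77 QALY bound is the figure discussed above; the outcome distribution is entirely made up for illustration:

```python
# Illustrative only: a made-up outcome distribution for the expected moral
# value (in QALYs) of *not* aborting in a particular case. The +/-77 bound
# comes from the discussion above; the probabilities are hypothetical.

outcomes = [
    (-77, 0.15),  # unwanted child raised with scarce resources, life of suffering
    (0,   0.10),  # not aborting precludes an equally good wanted child later on
    (30,  0.45),  # mediocre but net-positive life
    (77,  0.30),  # flourishing life
]

expected_value = sum(value * prob for value, prob in outcomes)
print(f"Expected value of not aborting: {expected_value:+.1f} QALYs")

# With these made-up numbers the expectation is positive; shifting probability
# mass toward the negative outcome (e.g. in poorer circumstances) flips the
# sign, which is the case-by-case point above.
```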
Pretty much what I was going to comment. I would add that even if he somehow were able to avoid accepting the more general Repugnant Conclusion, he would certainly have to at least accept that if abortion is wrong on these grounds, not having a child is (nearly) equally wrong on the same grounds.
Have you found any good solutions besides the ones already mentioned?
It’s not just people in general who feel that way, but also some moral philosophers. Here are two related links about the demandingness objection to utilitarianism:
http://en.wikipedia.org/wiki/Demandingness_objection
http://blog.practicalethics.ox.ac.uk/2014/11/why-i-am-not-a-utilitarian/
Haven’t seen a deal so sweet since I was Pascal mugged last year!
On October 18, 1987, what sort of model of uncertainty over models would one have to have in order to say that the uncertainty over the 20-sigma estimate was enough to allow it to be 3-sigma? 20-sigma, give 120 or take 17? Seems a bit extreme, and maybe not useful.
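As a back-of-the-envelope sketch, assuming a plain Gaussian model of daily moves (which is, of course, exactly the assumption under dispute), here is the size of the gap any such model uncertainty would have to bridge:

```python
# Back-of-the-envelope under a plain Gaussian model of daily moves (the very
# assumption under dispute): how rare a k-sigma day "should" be.
import math

def tail_probability(k: float) -> float:
    """One-sided P(X > k*sigma) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

for k in (3, 20):
    p = tail_probability(k)
    print(f"{k:>2}-sigma tail probability: {p:.3e} "
          f"(about 1 day in 10^{math.log10(1 / p):.0f})")

# The gap between roughly 10^3 and 10^89 days is what any "uncertainty over
# the model" would have to absorb to turn a 20-sigma event into a 3-sigma one.
```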
At least now, when I cite Eliezer’s stuff in my doctoral thesis, people who don’t know him (there are a lot of them in philosophy) will not say to me “I’ve googled him and some crazy quotes eventually came up, so maybe you should avoid mentioning his name altogether”. This was a much bigger problem for me than it sounds. I had to do all sorts of workarounds to use Eliezer’s ideas as if someone else had said them, because I was advised not to cite him (and the main, often the only, argument was in fact the crazy-quotes thing).
There might be some very small level of uncertainty as to whether Alex’s behaviour had a positive or negative overall impact (maybe it made MIRI update slightly in the right direction, somehow). But I can say with near certainty that it made my life objectively worse in very quantifiable ways (i.e. I lost a month or two on the workarounds, and would have continued to lose time to this).
It seems you have just closed the middle road.
Not sure if directly related, but some people (e.g. Alan Carter) suggest having indifference curves. These consist of isovalue curves on a plane with average happiness and number of happy people as axes, each curve corresponding to the same amount of total utility. The Repugnant Conclusion scenario would lie nearly flat along the number-of-happy-people axis, and a fully satisfied Utility Monster nearly flat along the average-happiness axis. It seems this framework produces results similar to yours. Every time you create a being slightly less happy than the average, you gain in the number of happy people but lose in average happiness, and might end up with the exact same total utility.
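A tiny sketch of the bookkeeping, under the simplifying assumption that total utility is just average happiness times population (Carter’s actual curves need not be this simple; the numbers are illustrative):

```python
# Simplifying assumption: total utility = average happiness x population, so an
# isovalue ("indifference") curve is the set of (population, average) pairs
# sharing the same product. Carter's actual curves need not be this simple.

def total_utility(population: int, avg_happiness: float) -> float:
    return population * avg_happiness

# Repugnant Conclusion corner: a huge population of barely-positive lives.
repugnant = total_utility(population=10_000_000, avg_happiness=0.001)

# Utility Monster corner: a single being with enormous happiness.
monster = total_utility(population=1, avg_happiness=10_000.0)

print(repugnant, monster)  # 10000.0 10000.0: the same isovalue curve

# Adding a being slightly less happy than the average trades a gain on the
# population axis against a loss on the average-happiness axis.
```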
You do not stand to Eliezer as you stand to Sarah Palin (as far as public figures go). The equivalent would be a minor congressman consistently devoting his speaking time to highlighting all the stupid things Sarah Palin has said (and retracted). I’m pretty sure such a congressman would face far worse consequences than you have been facing.
EDIT: Not sure why this comment is being downvoted, but as a clarification I merely meant the difference in social status between Alex and Sarah is bigger than between Eliezer and him. When the gap is big enough, it doesn’t matter what one says about the other, but this is not the case here. Why is that offensive/such a bad idea?
Although you try to correct this in the footnote, your post still gives the impression that Alan Carter created this concept. Value pluralism has been around for quite some time and is recognized as a defensible position in ethics, to the extent of having its own SEP entry: http://plato.stanford.edu/entries/value-pluralism/ . More importantly, if you are aiming to bring Eliezer’s ideas closer to mainstream philosophy, then I don’t think Alan should be your choice. Not because he is not part of mainstream philosophy, but because there are much bigger names defending some sort of value pluralism: Isaiah Berlin, Bernard Williams, Thomas Nagel, Larry Temkin and so on. In fact, some people argue that John Stuart Mill was not a value monist. There is also the position called moral particularism, which claims that morality does not consist solely of general principles or guidelines but is extremely context-dependent (which seems to mean it would be hard to compress), a position the US Supreme Court seems to adopt.
In cases where it might not suffice, the quote and your comment do. I suggest deleting it and sending a private message. I wouldn’t have found the page if not for your comment.
Cambridge’s Endowment: £4.9 billion
Oxford’s Endowment: £4.03 billion
The discontinuity would increase as subjective time per instant decreased and as subjective change per instant increased. If you have a lot of changes but also a lot of subjective time, it’s fine. So running at faster speeds is fine, as long as the number of subjective steps is the same.
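One way to make that trade-off explicit (a toy formalization of my own, not anything from the original discussion) is to treat discontinuity as the ratio of subjective change per instant to subjective time per instant:

```python
# Toy formalization (my own, purely illustrative): discontinuity in a given
# instant as the ratio of subjective change to subjective time experienced.

def discontinuity(subjective_change: float, subjective_time: float) -> float:
    return subjective_change / subjective_time

# Lots of change but also lots of subjective time: fine.
print(discontinuity(subjective_change=10.0, subjective_time=10.0))  # 1.0

# The same change crammed into little subjective time: far more discontinuous.
print(discontinuity(subjective_change=10.0, subjective_time=0.1))   # 100.0

# Running at a faster wall-clock speed leaves the ratio unchanged, as long as
# the number of subjective steps (and the change per step) stays the same.
```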
Yes, the idea, simply put, is that eventually the universe will be filled with utilitronium, and I’m asking whether anything that goes on before that can impact the value of the utilitronium, i.e. the maximal amount of value.
Given that the most plausible cross-temporal dependencies we know of are cases of continuity, the transition from humans to a superintelligence is the best candidate here. Which means they would occur before the creation of a superintelligence.
Cross-temporal dependency, value bounds and superintelligence
Ok. If I ever get to work with this I will let you know, perhaps you can help/join.
I have been saying this for quite some time; I regret not posting it first. It would be nice to have a more formal proof of all of this, with utility functions, deontics and whatnot. If you are up for it, let me know. I could help, give feedback, or we could work together. Perhaps someone else has done it already. It has always struck me as pretty obvious, but this is the first time I’ve seen it stated like this.
Humans’ lateral visual search is considerably more efficient than vertical. 414 spread more laterally beats a regular 650. There are huge ultra-wide screens, of course, but they weren’t cheaper per inch than two monitors when I did my research 6 months ago.
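As a rough sketch of the per-area price comparison I mean, with made-up diagonals and prices (only the diagonal-to-area geometry is standard; every number below is hypothetical):

```python
# Hypothetical numbers for illustration: compare price per square inch of a
# dual-monitor setup vs. a single ultra-wide. Only the diagonal-to-area
# geometry is standard; the sizes and prices are made up.
import math

def screen_area(diagonal_in: float, aspect_w: float, aspect_h: float) -> float:
    """Viewable area (square inches) from a screen's diagonal and aspect ratio."""
    scale = diagonal_in / math.hypot(aspect_w, aspect_h)
    return (aspect_w * scale) * (aspect_h * scale)

dual_area = 2 * screen_area(24, 16, 9)    # two 24" 16:9 monitors (hypothetical)
wide_area = screen_area(34, 21, 9)        # one 34" 21:9 ultra-wide (hypothetical)

dual_price, wide_price = 300.0, 450.0     # hypothetical prices

print(f"dual:       {dual_price / dual_area:.2f} $/sq.in over {dual_area:.0f} sq.in")
print(f"ultra-wide: {wide_price / wide_area:.2f} $/sq.in over {wide_area:.0f} sq.in")
```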
http://archpedi.jamanetwork.com/article.aspx?articleid=379446
This paper is already a major update from the long-standing belief that adoptees have a lower quality of life, i.e. this is as optimistic as it gets.
Given that stress during early childhood has a dramatic impact on an individual’s adult life, I think this is quite uncontroversial.