It seems like moral problems get a negative phrasing more often than not in general, not just when Yudkowsky is writing them. You have the trolley problem, the violinist, and so on; the list goes on. Have you ever looked at the morality subsections of any philosophy forums? Everything is about rape, torture, murder, etc. I just assumed that fear is a bigger motivator than potential pleasantness and is a common feature of rhetoric in general. I think that at least on some level it’s just the name of the game: moral dilemma → reasoning over hard decisions in very negative situations, not because ethicists are autistic, but because that is the hard part of morality for most humans. When I overhear people arguing over moral issues, I hear them talking about whether torture is ever justified or whether murder is ever okay.
Arguing about whether the tradeoff of killing one fat man to save five people is justified is more meaningful to us as humans than debating whether, say, we should give children bigger lollipops if it means there can’t be as much raw material for puppy chow (and therefore fewer puppies, since we are all responsible owners who feed our puppies plenty, but we want as many puppies as possible because puppies are cute, though so are happy children).
This isn’t to say that the way moral dialogue is currently carried on is the most rational way to do it, only that you seem to be committing a fundamental attribution error out of a lack of general exposure to moral dilemmas and the people who argue over them.
Besides, it’s not as though I’m thinking about torture all the time just because I consider moral dilemmas in the abstract. I think most people can differentiate between an illustration meant to set up a certain sort of puzzle and reality. I don’t get depressed or anxious after reading Less Wrong; if anything, I’m happier, more excited, and revitalized. So I’m just not picking up on the neurosis angle at all; it seems like it might be a mind projection fallacy?
Considering this style of thinking has led Less Wrong to redact whole sets of posts out of (arguably quite delusional) cosmic horror, I think there’s plenty of neurosis to go around, and that it runs all the way to the top.
I can certainly believe not everybody here is part of it, but even then, it seems in poor taste. The moral problems you link to don’t strike me as philosophically illuminating; they just seem like something to talk about at a bad party.
I catch your drift about the post deletion, and I think there is a bit of neurosis in the form of secrecy and of sometimes keeping order in questionable ways, but that wasn’t what you brought up initially; you brought up the tendency to reason about moral dilemmas that are generally quite dark. I was merely pointing out that this seems to be the norm in moral thought experiments, not just the norm on Less Wrong. I might concede your point if you provided at least a few convincing counterexamples; I just haven’t really seen any.
If anything, I worry more about the tendency to call deviations from Less Wrong standards insane; it seems to reflect more in-group/out-group bias than is usually admitted, though it might be improving.
Yeah, what I find to be the ugliest thing about Less Wrong by far is the sense of self-importance, which contributed quite a bit to the post deletion as well.
Maybe it’s the combination of these factors that’s the problem. When I read mainstream philosophical discourse about pushing a fat man in front of a trolley, it just seems like a goofy hypothetical example.
But Less Wrong seems to believe that it carries the world on its shoulders, and when people here talk about deciding between torture and dust specks, or torture and alien invasion, or torture and more torture, I get the impression they are treating it, at least in part, as though they actually expect to have to make this kind of decision.
If all the situations you think about involve horrible things, regardless of the reason, you will find your intuitions gradually drifting toward paranoia. There’s a certain logic to “hope for the best, prepare for the worst”, but I get the impression that for a lot of people, thinking about horrible things is simply instinctual and the reasons they give for it are rationalizations.
Do you think it could also be tied up with this sort of thing? Most of the ethical content of this site seems heavily related to the approach Eliezer takes to FAI, which isn’t surprising.
Part of the mission of this site is to proselytize the idea that FAI is a dire issue that isn’t getting anywhere near enough attention. I tend to agree with that idea.
Existential risk aversion is really the backbone of this site. The flow of conversation is driven by it, and you see its influence everywhere. The point of being rational in the Less Wrong sense is to avoid rationalizing away the problems we face every day, and to escape the human tendency to put off difficult problems until we are forced to confront them.
In any event, my main interest in this site is inextricably tied up with existential risk aversion. I want to work on AGI, but I’m now convinced that FAI is a necessity. Even if you disagree with that, it is still the case that many ethical dilemmas are coming down the pipeline as we gain more and more power to change our environment and ourselves through technology. There are many more ways to screw up than there are to get it right.
This is all there is to it: someone is going to be making some very hard decisions in the relatively near future, and there are going to be serious roadblocks to progress if we do not equip people with the tools they need to sort out new, bizarre, and disorienting ethical dilemmas. This, I believe, is likely to be the case. We have extreme anti-aging, nanotech, and AGI to look forward to, to name only a few, and the ethical issues that come hand in hand with these technologies are immense and difficult to sort out. Very few people take these issues seriously; even fewer are trying to actually tackle them, and those who are don’t seem to be doing a good enough job. It is my understanding that changing this state of affairs is a big motive behind Less Wrong. Maybe Less Wrong isn’t all that it should be, but it’s a valiant attempt, in my estimation.
There’s a certain logic to “hope for the best, prepare for the worst”, but I get the impression that for a lot of people, thinking about horrible things is simply instinctual and the reasons they give for it are rationalizations.
I resent the suggestion that I instinctively think of 3^^^3 dust specks! I have to twist my cortex in all sorts of heritage-violating, imaginative ways to come up with the horrible things I like to propose in goofy hypotheticals! I further assert that executing the kind of playfully ridiculous-but-literal conversation patterns that involve bizarre horrible things did not help my ancestors get laid.