Considering this style of thinking has led lesswrong to redact whole sets of posts out of (arguably quite delusional) cosmic horror, I think there’s plenty of neurosis to go around, and that it runs all the way to the top.
I can certainly believe not everybody here is part of it, but even then, it seems in poor taste. The moral problems you link to don’t strike me as philosophically illuminating; they just seem like something to talk about at a bad party.
I catch your drift about the post deletion, and I think there is a bit of neurosis in the secrecy and in the sometimes questionable ways of keeping order, but that wasn’t what you brought up initially; you brought up the tendency to reason about moral dilemmas that are generally quite dark. I was merely pointing out that this seems to be the norm in moral thought experiments, not just the norm on lesswrong. I might concede your point if you provided at least a few convincing counterexamples; I just haven’t really seen any.
If anything, I worry more about the tendency to call deviations from lesswrong standards insane, as it seems to involve more in-group/out-group bias than is usually admitted, though it might be improving.
Yeah, really, what I find to be the ugliest thing about lesswrong by far is the sense of self-importance, which contributed quite a bit to the post deletion as well.
Maybe it’s the combination of these factors that’s the problem. When I read mainstream philosophical discourse about pushing a fat man in front of a trolley, it just seems like a goofy hypothetical example.
But lesswrong seems to believe that it carries the world on its shoulders, and when people here talk about deciding between torture and dust specks, or torture and alien invasion, or torture and more torture, I get the impression they are treating it, at least in part, as though they actually expect to have to make this kind of decision.
If all the situations you think about involve horrible things, regardless of the reason for it, you will find your intuitions gradually drifting into paranoia. There’s a certain logic to “hope for the best, prepare for the worst”, but I get the impression that for a lot of people, thinking about horrible things is simply instinctual and the reasons they give for it are rationalizations.
Do you think that maybe it could also be tied up with this sort of thing? Most of the ethical content of this site seems to be heavily related to the sort of approach Eliezer takes to FAI. This isn’t surprising.
Part of the mission of this site is to proselytize the idea that FAI is a dire issue that isn’t getting anywhere near enough attention. I tend to agree with that idea.
Existential risk aversion is really the backbone of this site. The flow of conversation is driven by it, and you see its influence everywhere. The point of being rational in the Lesswrongian sense is to avoid rationalizing away the problems we face each and every day, to escape the human tendency to avoid difficult problems until we are forced to face them.
In any event, my main interest in this site is inextricably tied up with existential risk aversion. I want to work on AGI, but I’m now convinced that FAI is a necessity. Even if you disagree with that, it is still the case that many ethical dilemmas are coming down the pike as we gain more and more power to change our environment and ourselves through technology. There are many more ways to screw up than there are to get it right.
This is all there is to it: someone is going to be making some very hard decisions in the relatively near future, and there are going to be some serious roadblocks to progress if we do not equip people with the tools they need to sort out new, bizarre and disorienting ethical dilemmas. I believe this is likely to be the case. We have extreme anti-aging, nanotech and AGI to look forward to, to name only a few. The ethical issues that come hand in hand with these sorts of technologies are immense and difficult to sort out. Very few people take these issues seriously; even fewer are trying to actually tackle them, and those who are don’t seem to be doing a good enough job. It is my understanding that changing this state of affairs is a big motive behind lesswrong. Maybe lesswrong isn’t all that it should be, but it’s a valiant attempt, in my estimation.
“There’s a certain logic to ‘hope for the best, prepare for the worst’, but I get the impression that for a lot of people, thinking about horrible things is simply instinctual and the reasons they give for it are rationalizations.”
I resent the suggestion that I instinctively think of 3^^^3 dust specks! I have to twist my cortex in all sorts of heritage-violating, imaginative ways to come up with the horrible things I like to propose in goofy hypotheticals! I further assert that executing the kind of playfully ridiculous-but-literal conversation patterns that involve bizarre horrible things did not help my ancestors get laid.
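(An aside for anyone who hasn’t run into the notation: 3^^^3 is Knuth’s up-arrow notation, which is how the original dust specks post specifies the number. Below is a minimal Python sketch of the standard recursion, just to convey how quickly it explodes; the function name and the tiny test values are purely illustrative, and 3^^^3 itself is far beyond anything computable.)

    # A minimal sketch of Knuth's up-arrow notation (hyperoperation),
    # which is what "3^^^3" abbreviates. Only tiny inputs are computable.
    def up_arrow(a, n, b):
        """Compute a with n up-arrows applied to b, by the usual recursion."""
        if n == 1:
            return a ** b      # one arrow is ordinary exponentiation
        if b == 0:
            return 1           # base case of the recursion
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))   # 3^3   = 27
    print(up_arrow(3, 2, 2))   # 3^^2  = 3^3 = 27
    print(up_arrow(2, 3, 3))   # 2^^^3 = 2^^4 = 65536
    # 3^^3 is already 3^27 = 7,625,597,484,987, and 3^^^3 is a tower of
    # exponents that many threes tall -- far past anything that will ever fit in memory.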