Donated!
dthunt
If you have some sort of decision-making process you do a lot that you expect is going to become a thing you build intuition around later, make sure you have the right feedback loops in place, so that you have something to help keep that intuition calibrated. (This also applies to processes you engineer for others.)
dthunt's comment on Open thread, Dec. 29, 2014 - Jan 04, 2015 (31 Dec 2014 9:13 UTC; 3 points)
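(Not from the original comment, but to make "the right feedback loops" concrete: one simple version is a prediction log - write the judgment call down with a confidence level, then score it once the outcome is known. Below is a minimal Python sketch; the names here, like PredictionLog and predictions.json, are illustrative assumptions rather than any existing tool.)

```python
# Hypothetical sketch: log decisions/predictions with a confidence level,
# then score them once outcomes are known, so intuition gets real feedback.
import json
from datetime import date


class PredictionLog:
    def __init__(self, path="predictions.json"):
        self.path = path
        try:
            with open(self.path) as f:
                self.records = json.load(f)
        except FileNotFoundError:
            self.records = []

    def predict(self, claim, confidence):
        """Record a claim with a confidence in [0, 1]; outcome filled in later."""
        self.records.append({
            "claim": claim,
            "confidence": confidence,
            "made_on": date.today().isoformat(),
            "outcome": None,
        })
        self._save()

    def resolve(self, claim, came_true):
        """Close the loop: mark how an open prediction actually turned out."""
        for record in self.records:
            if record["claim"] == claim and record["outcome"] is None:
                record["outcome"] = bool(came_true)
        self._save()

    def summary(self):
        """Compare average stated confidence with the actual hit rate."""
        resolved = [r for r in self.records if r["outcome"] is not None]
        if not resolved:
            return None
        mean_confidence = sum(r["confidence"] for r in resolved) / len(resolved)
        hit_rate = sum(r["outcome"] for r in resolved) / len(resolved)
        return {"mean_confidence": mean_confidence, "hit_rate": hit_rate}

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.records, f, indent=2)
```

If the hit rate consistently comes in below the average stated confidence, that's the signal that the intuition is drifting out of calibration and the process needs adjusting.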
I’m kind of curious: what do you think CFAR’s objective will be 5 years from now (assuming they get the data they want and it strongly supports the value of the workshops)?
You might check IRC - #lesswrong, maybe #slatestarcodex, someone is probably willing to help, and you might make a friend.
Out of curiosity, thoughts on the Againstness class?
I REALLY like this question, because I don’t know how to approach it, and that’s where learning happens.
So it’s definitely less bad to grow cows with good life experiences than with bad life experiences, even if their ultimate destiny is being killed for food. It’s kind of like asking if you’d prefer a punch in the face and a sandwich, or just a sandwich. Really easy decisions.
I think it’d be pretty suspicious if my moral calculus worked out in such a way that there was no version of a maximally hedonistic existence for a cow about which I could say the cow had a damned awesome life, and that instead we should feel like monsters for allowing it to have existed at all.
That having been said, if you give me a choice between cows that have been re-engineered such that their meat is delicious even after they die of natural causes, and humans don’t artificially shorten their lives, and they stand around having cowgasms all day - and a world where cows grow without brains - and a world where you grew steaks on bushes -
I think I’ll pick the bush-world, or the brainless cow world, over the cowgasm one, but I’d almost certainly eat cow meat in all of them. My preference there doesn’t have to do with cow-suffering. I suspect it has something to do with my incomplete evolution from one moral philosophy to another.
I’m kind of curious how others approach that question.
So, there’s a heuristic that I think is a decent one, which is that less-conscious things have less potential suffering. I feel that if you had a suffer-o-meter and strapped it to the heads of paramecia, ants, centipedes, birds, mice, and people, they’d probably rank in approximately that order. I have some uncertainty in there, and I could be swayed to a different belief with evidence or an angle I had failed to consider, but I have a hard time imagining what those might be.
I think I buy into the notion that most-conscious doesn’t strictly mean most-suffering, though—if there were a slightly less conscious, but much more anxious branch of humanoids out there, I think they’d almost certainly be capable of more suffering than humans.
Well, how comparable are they, in your view?
Like, if you’d kill a cow for 10,000 dollars (which could save a number of human lives), but not fifty million cows for 10,000 dollars, you evidently see some cost associated with cow-termination. If you, when choosing methods, could pick between methods that induced lots of pain versus methods that instantly terminated the cow-brain, and had a strong preference toward the less-painful methods (assuming they’re just as effective), then you clearly value cow-suffering to some degree.
The reason I went basically vegan is I realized I didn’t have enough knowledge to run that calculation, but I was fairly confident that I was ethically okay with eating plants, sludges, and manufactured powders, and most probably the incidental suffering they create, while I learned about those topics.
I am basically with you on the notion that hurting a cow is better than hurting a person, and I think horse is the most delicious meat. I just don’t eat it anymore. (I’d also personally kill some cows, even in relatively painful ways, in order to save a few people I don’t know.)
You can always shoot someone an email and ask about the financial aid thing, and plan a trip stateside around a workshop if, with financial aid, it looks doable, and if after talking to someone, it looks like the workshop would predictably have enough value that you should do it now rather than when you have more time and money.
Noticing confusion is the first skill I tried to train up last year, and it’s definitely a big one: knowing what your models predict and noticing when they fail is a very valuable feedback loop, and you can’t learn from failures you don’t even notice.
Picturing what sort of evidence would unconvince you of something you actively believe is a good exercise to pair with picturing what sort of evidence would convince you of something that seems super unlikely. Noticing when you’re being unfair between the two is a big one.
Realizing when you are trying to “win” at truthfinding, which is… ugh.
Not feeling connected with people - or, increasingly, feeling less connection with people.
I push myself to socialize, and this helps, but the latter maybe suggests to me I’m doing something wrong.
(Edit: to clarify, my empathy thingy works as well as (maybe better than) it ever has; I just feel like the things I crave from social interactions are getting harder to acquire. Like, people “getting” you, or having enough things in common that you can effectively talk about the stuff that interests you. So, like, obviously, one of the solutions there is to hang out with more bright-and-happy CFAR-ish/LW-ish/EA-ish people.)
Hey, does anyone else struggle with feelings of loneliness?
What strategies have you found for either dealing with the negative feelings, or addressing the cause of loneliness, and have they worked?
By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s: “A ‘supernatural’ explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.” (http://lesswrong.com/lw/tv/excluding_the_supernatural/)
I have made a prosecutor turn pale by suggesting that courthouses should be places where people with plea bargains shop their offers around with each other, so that they know what’s a good deal and what’s a bad deal.
I don’t think it’s going to matter very much. 3 digits after the dot, with the understanding that the third digit is probably not very good, but the second probably is pretty good.
Faith in Humanity moment: LW will not submit garbage poll responses using other LW-users as public keys.
I definitely don’t have a strong identity in this sense; like, I suspect I’d be pretty okay if an alien teenager swooped by and pushed the “swap sex!” button on me, and the result was substantially functional and not horrible to the eye. Like, obviously I’d be upset about having been abused by an outside force, but I don’t think the result itself is inherently distasteful or anything like that.
I’m really curious to see how this and related stuff (male/female traits, fingers) relate.
Definitely had a thought along those lines; I went with “don’t die at any point and still reach age 1000”, though I also don’t count solutions that involve abandoning bodies.
At the very least, I suspect one of the analyses will be ‘bucket responses by stated certainty, then plot “what % of responses in each bucket were right?”’ - something that was done last year (see the 2013 LessWrong Survey Results).
Last year it was broken down into “elite” and “typical” LW-er groups, which presumably would tell you whether hanging out here makes you less overconfident, or something similar in that general vicinity.
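(For what it’s worth, here is a rough sketch of that bucketing analysis; the input format - a list of (stated_probability, was_correct) pairs - is an assumption for illustration, not the actual survey data format.)

```python
# Hypothetical sketch of the analysis described above: bucket responses by
# stated certainty, then compute what fraction in each bucket were right.
from collections import defaultdict


def calibration_buckets(responses, bucket_width=0.1):
    """responses: iterable of (stated_probability, was_correct) pairs."""
    n_buckets = int(round(1 / bucket_width))
    buckets = defaultdict(list)
    for prob, correct in responses:
        # Small epsilon keeps boundary values like 0.9 in the expected bucket.
        index = min(int(prob / bucket_width + 1e-9), n_buckets - 1)
        buckets[index].append(bool(correct))

    results = {}
    for index in sorted(buckets):
        outcomes = buckets[index]
        low, high = index * bucket_width, (index + 1) * bucket_width
        results[(low, high)] = (sum(outcomes) / len(outcomes), len(outcomes))
    return results


if __name__ == "__main__":
    # Made-up responses, purely to show the output shape.
    fake = [(0.65, True), (0.65, False), (0.8, True), (0.95, True)]
    for (low, high), (accuracy, n) in calibration_buckets(fake).items():
        print(f"{low:.1f}-{high:.1f}: {accuracy:.0%} right (n={n})")
```

A well-calibrated group’s accuracy in each bucket should land near that bucket’s stated certainty range.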
So I had one of those typical mind fallacy things explode on me recently, and it’s caused me to re-evaluate a whole lot of stuff.
Is there a list of high-impact questions people tend to fail to ask about themselves somewhere?