January 1-14, 2012 Open Thread
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.
Poster’s Note: omg, it felt so weird typing “2012” up there.
Experiment: The open threads are always used a lot when they are first posted, but they are quickly forgotten, because they get hidden as the month progresses.
This month I will post two open threads: one for January 1-14, and another for January 15-31. I predict, at P=.8, that there will be significantly more open thread posts in the second half of the month using this 2-post method.
To test my hypothesis, I will average the number of open thread comments in the second half of the month across the past few open thread posts, and compare that to the number of open thread comments on the January 15-31st post.
This test WILL be biased, as it is neither blind on my side nor on the commenters’ side (i.e., people who post will read this and see the test I am doing). This will be somewhat ameliorated by me NOT posting this experiment note in the second thread; in two weeks’ time it will not be foremost in people’s minds.
Repeating my suggestion from the June 2010 Open Thread:
Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.
I think it would also be good for people who have interesting links but no real comments on them. It is annoying when someone complains on link-only posts that the person should not have posted a link without their own commentary.
What’s “significantly more”?
I predict:
P(twice as many subthreads started) = .4
P(50% more) = .7
P(5% more) = .9
Why would you test just the second half of the month, versus total posts in a month?
It seems that you would actually want to measure total participation in a month. I see no reason not to measure it directly, rather than indirectly.
Good idea! Thanks. It requires more counting, but I think it’s the better way. Experiment updated!
Two Open Threads will be posted in the Discussion Section of Less Wrong in the month of January 2012 and the second thread will have at least double the number of posts contained in the first thread before February 2012.
I’m not sure this is even worth a comment.
But the “ETA” acronym. Can we stop using this? To normal people, “ETA” means “estimated time of arrival”, not “edited to add”. Just using “Edit:” instead of “ETA:” will work fine.
I like “Addendum:”, but I don’t think I’ve seen anyone else use this.
On the subject of confusing acronyms, I get thrown every time someone uses NT to abbreviate neurotypical; my first take is always that they mean a Myers-Briggs NT type, until I realize that there is an alternative which makes the post actually make sense.
Also, P.S. is a fine replacement.
Why doesn’t Microsoft put an ad-blocker in Internet Explorer to reduce Google’s revenue?
EDIT: And that of Facebook.
They might not want escalation?
It would likely leave them open to an anti-trust suit and might even violate their existing settlement agreement with the DOJ.
They are more interested in supplanting Google with Bing than in destroying that business model? And they might have been afraid of accusations of abusing their dominant position in the browser market, and not have reevaluated in light of their shrinking market share.
Because Microsoft’s primary customers are not end users, but integration companies such as Dell and HP, and these companies maximize profits by loading PCs up with lots of advertisements.
When buying items I often do quite a bit of research; I suspect many people on LW are also the type to do this. I think this is a potential goldmine of information, as I would trust the research/reviews of people from LW at least an order of magnitude more than random reviews on the internet.
Something as simple as “what do LWers buy on amazon” would be highly interesting to me. Is there a feasible way of collating this kind of info?
Isn’t there a lesswrong referral set up for everything bought via a link from the site? I try to always use this. If it keeps any info about what was purchased that would definitely be interesting.
I note that the 2012 Welcome thread accumulated ~100 comments in very short order—it looks like there was pent-up demand. It may be worth doing these more often than every year or two as well.
In particular, the about page needs to link to the current welcome thread, not the 2010 one!
This sounds like the sort of thing that just needs someone to be pro-active about. If you think it’s time for a new Intro thread, post one!
(Ditto on open threads, media threads, social interaction threads, etc)
The catch being that someone has to link it from “about” :-) But yes.
Is it ethical to eat a cow with human level intelligence that wanted to be eaten? To avoid convenient worlds, assume the cow not only wants to be eaten, but also likes and approves of being eaten. [Edited to clarify that this is an intelligent cow we’re talking about.]
Well. The default answer is yes, because we like fulfilling preferences.
If the cow was genetically engineered to want to be eaten by a farmer who wanted to sell meat to vegetarians, then we may want to not buy the meat, just to discourage the farmer from doing that in the first place.
That’s what I think, anyway.
We can already “grow” meat in a lab. I can’t imagine that we would have the technology to genetically engineer an intelligent cow that wants to be eaten, but NOT be able to grow whatever meat we want of an extremely high quality.
Even more than that, I would bet the grown meat is more economical in terms of resource usage; i.e., with grown meat we are only growing the parts we want, whereas with a cow, all the less-edible pieces are probably being wasted.
Isn’t that kind of missing the point of the question?
There was a recent discussion, Rationality of sometimes missing the point of the stated question, and of certain type of defensive reasoning, that contemplated the idea that sometimes it is useful to miss the point of the question. When I read the Eat Smart Cow question, it seemed like the type of question that requires said “missing of point”. Quote below:
I am, in fact, rather happy that I read something on LW and applied the thought to a different question a couple days later. It makes me feel like a growing aspiring rationalist who Learns Things.
As I said in that thread, fighting the hypo is not polite behavior.
On the first day of a physics class, you can ask the professor to justify learning physics given the problem of Cartesian skepticism. The professor might have an interesting answer if she decided to engage you. But what would actually happen is that the professor will ask you to leave, because the conversation will not be a physics conversation, and the social norm is that physics classes are for physics conversations.
In short, the practical conversation you are trying to start is not the same as the theoretical one that was started.
The problem is that it doesn’t answer the point of the question. In the least convenient possible world, where we can make the cows but can’t just grow their meat, how would you answer the question?
I eat cows that emphatically don’t want to be eaten all the time. I don’t have an ethical problem with it.
Sorry, I was talking about the cow that wanted to be eaten in the Hitchhiker’s Guide to the Galaxy. That cow was as intelligent as a human adult.
At a first guess I’d say it is ethical to eat that cow but potentially not ethical to create him in the first place.
The Ameglian Major Cow — which not only wants to be eaten, but is capable of saying so, clearly and distinctly — seems to be in the same family of ethically problematic artificial intelligences as the house-elf — which wants to serve you, suffers if it cannot do so, and has no defenses against mistreatment.
In both cases, if the creature already exists, you may as well exploit it, since doing so fulfills the creature’s own intentions. But it seems to have been created for just the purpose of turning a vice into a virtue: of creating an artificial setup in which doing something that would normally be wrong (killing and eating other sapient beings; keeping slaves and benefiting from their labor) is rendered not-wrong by exceptional circumstances.
And this, in turn, seems calculated to degrade our moral intuitions. I suspect I would not want to meet a person who had grown up around house-elves and Ameglian Major Cows, and therefore expected that all intelligences were similarly eager for exploitation.
I assume that it’s not the “eating” that’s the ethical problem, but the killing?
So, can’t we boil the question to its essentials of “is it ethical to kill creatures that want to be killed (and like and approve it)”?
Oops.
(here’s another one of my crap confessions)
This reminds me of how, when I was about 18 and craved sexual excitement (more than I do now), I correctly guessed that shock, shame and guilt would make great aphrodisiacs for a mind like mine (although they wear off quickly). So I started looking for extreme and transgressive porn on the net. What I found to my liking was guro hentai and written extreme BDSM stories, up to the point of lethality (“snuff”). However, I realized that descriptions of blatant non-consent turned on some defense mechanisms that gave me a measure of disgust and anger, not arousal, so I turned to “consensual” extreme material.
I was, back then, amazed to discover that there was a large number of stories featuring entirely willing torture, sexualized killings, and, yes, cannibalism. So, yes, I’ve read a fair amount of descriptions of humans desiring, eroticizing and actively seeking out being cooked and eaten. Some were quite competently written.
Soon enough, however, the novelty and the psychological effects wore off, and now I prefer much more mild BDSM porn; I feel neither shocked by nor drawn to the stuff I described.
Not really sure if all that is acceptable to disclose on LW, but I suspect that most of you folks won’t be outraged or creeped out.
I don’t have much beef with that guy who killed and ate a few kilograms of a human volunteer he found on the Internet; eating a cow with human smarts that wants to be eaten doesn’t seem qualitatively worse.
I would not want that guy in my neighborhood. I want to live around people who will not eat me, even if I go crazy.
Yes.
Probably not. I take it as uncontroversial that some people are insane or mentally unstable and their wants/desires should not be fulfilled. The way to probe this possibility is to ask for a justification of the want/desire. So I’d ask the cow to give reasons for wanting to be eaten. It’s hard for me to see how those reasons could be convincing. Certainly “because I’m a cow” wouldn’t convince me. I can imagine that an intelligent cow might long for death and seek assisted suicide, since being an intelligent cow would be rather like being a severely disabled human being, but the part where he or she wants to be eaten is alarming.
“Why do you want to be eaten?”
“It seems nice. Why do you want to have sex with attractive members of your species?”
“Because it gives me pleasure.”
“But you’re not having that pleasure right now. You’re just anticipating it. Your anticipation is something happening in your brain now, irrespective of whether a particular sex act would actually turn out to be pleasurable. Similarly, my desire to be eaten is happening in my brain now, fully aware of the fact that I won’t be around to notice it if it happens. Not that different.”
“So? I know where my desire to have sex comes from — it comes from my evolutionary past; members of my ancestors’ generations who didn’t want sex are much less likely to have had kids. They died alone, or became monks or something.”
“And I know where my desire to be eaten comes from — it comes from my genetically-engineered past; members of my ancestors’ generations who didn’t want to be eaten were discarded. Their bodies ended up being destroyed without being eaten! (And, of course, without having their DNA cloned and propagated.)”
“But that’s artificial! You were manipulated!”
“No, I wasn’t. My ancestral environment and my ancestors’ genes were manipulated; just as yours were manipulated by sexual selection. I’ve personally never met a genetic engineer in my life! My experience of it is just that ever since puberty, I’ve really wanted someone to eat me. It’s just the sort of organism I am. Human gets horny; cow gets tasty.”
“So what you’re saying is, you want to be eaten because …”
“… because I’m a cow, pretty much. So … steak?”
This whole thread puts me in mind of this for some reason.
Do you think consensual BDSM is unethical?
I think that in some extreme forms the ability to consent is called into question. That’s what I’m claiming with the cow: The cow’s desire is extreme enough to call into question its sanity, which would render it unable to consent, which would make the act unethical. I would say the same about any form of BDSM that results in death.
The cow goes to a psychiatrist. The psychiatrist notes that she shows none of the typical signs of insanity: delusional beliefs, poor self-control, emotional distress. The cow simply values being eaten.
If that wouldn’t convince you that the cow was sane, what would?
Let me make sure I understand this: the fact that the cow consents to death is sufficient evidence to justify the conclusion that the cow is unable to meaningfully consent to death?
No, absolutely not. The fact that the cow consents to being eaten is potentially evidence that the cow is unable to meaningfully consent to death. Again, the cow might have good reasons to want to die—it might even have good reasons to not care about whether you eat it or not after it’s dead—but what I’m disputing is whether it can have good reasons to want to be eaten. These are all extremely different things. Likewise, there may be good reasons for a person to want to die, but sexual gratification is not a good reason, and it’s highly likely to signify mental derangement.
So, I’ve asked this elsewhere, but… why is “well, geez, it’s more useful than just having me rot in the ground” not a good enough reason to prefer (and not just be indifferent to) being eaten after I’m dead?
Conversely, what makes wanting to be buried underground after I die not evidence that I’m unable to consent? (Many people in the real world seem to have this desire.)
(I don’t mean to collide with the cryonics conversation here; we can assume my brain has been cryopreserved in all of these cases if we like. Or not. It has nothing to do with my question.)
There’s a difference between wanting to be eaten and wanting to die while either being indifferent to being eaten afterwards or preferring it. In the former case, dying is a consequence of the desire to be eaten; in the latter case, the cow would presumably have a reason to want to die in addition to its preference to be eaten afterwards.
The cow that wants to be eaten does not necessarily want to die at all. Death is a consequence of fulfilling its desire to be eaten and to want to be eaten implies that it finds dying an acceptable consequence of being eaten but no more. The cow could say “I don’t want to die, I love living, but I want to be eaten and I’m willing to accept the consequences.” It could simply value being eaten over living without necessarily wanting to die.
Likewise, I can say, “I don’t want to die, but if I do, I’d like to be buried afterwards,” and this is obviously a very different thing than saying, “I want to be buried, and if I have to die in order to be buried, I’m willing to accept that consequence.”
Ah, OK. When you said “it might even have good reasons to not care about whether you eat it or not after it’s dead—but what I’m disputing is whether it can have good reasons to want to be eaten” I thought you were contrasting indifference with active desire.
Sure, I agree that there’s a relevant difference between wanting X after I die and wanting X now, especially when X will kill me.
So, OK, revising… is the fact that the cow desires being eaten enough to accept death as a consequence of satisfying that desire sufficient evidence to justify the conclusion that the cow is unable to meaningfully consent to death?
So, any suicide attempt must be prevented?
That last part seems backward to me. If I’m going to die anyway, why shouldn’t I want to be eaten? My corpse has nutritional value; I generally prefer that valuable things be used rather than discarded.
Understand, I don’t want to be eaten when I die, but it seems clear to me that I’m the irrational one here, not the cow. It’s just that my irrationality on the matter is conventional.
The UK supposedly has a rule where you can take home and eat roadkill that you find, but not roadkill that you yourself were responsible for killing. The theoretical incentive problem is fairly obvious, even though enforceability is not.
The blanket prohibition on eating people whether or not they want to be eaten after they die may make sense in terms of not incentivizing other people to terminate them ahead of schedule.
Sure, I agree that it may be in our collective best interests to prevent individuals from eating one another, whether they want to be eaten or not. It may even be in our best interests to force individuals to assert that they don’t want to be eaten, and if so, it’s probably best for them to do so sincerely rather than lie, since sincere belief is a much more reliable source of such assertions.
I just deny that their desire to be eaten, should they have it, is irrational.
So, you’d ask the cow to derive ought from is?
I would definitely be OK with it if the cow was also uploaded before death (though that’s far from the least convenient world).
Yes, provided it was ethical to breed it.
I’m not sure there’s a causal relationship. If you met a cow in the wild that wanted to be eaten, I think it would be ethical to eat it.
But I strongly believe that creating such a cow would be unethical.
I agree. I think there is no difference between creating an intelligent cow that wants to be eaten vs. creating a human that tastes good, and wants to be eaten.
Depends on what you mean by “ethical,” as many things do.
For me personally, it doesn’t seem that bad. However, I don’t think it’s the optimal outcome—maybe we could encourage the cow to take up some other interests?
In the spirit of dissolving questions (like Yvain did very well for disease), I wanted to give an off-the-cuff breakdown of a similar contentious issue: use of the term “design”, as in “cats are designed to be good hunters [of small animals]” or “knives are designed to cut”.
Generally, people find both of those intuitive, which leads to a lot of unnecessary dispute between reductionists and anti-reductionists, with the latter claiming that the former implicitly bring teleology into biology.
So, here’s what I think is going on when people make a “design” evaluation, or otherwise find the term intuitively applicable: there are a few criteria that make something seem “designed”, and if enough of them are satisfied, it “feels” designed. Further, I think there are three criteria, and people call something designed if it meets two of them. Here’s how it works.
“X is designed to Y” if at least two of these are met:
1) Goodness: X is good at Y.
2) Narrowness: The things that make X better at Y make it worse at other things normally similar to Y.
3) Human intent: A human crafted X with the intent that it be used for Y. (Alternately: replace human with “an intelligent being”.)
(You can think of it as a “hub-and-spoke” neural net model where each criterion’s being activated makes the “design” judgment stronger.)
“A knife is designed to cut” meets all three, and we have no problem calling it designed for that. Likewise for “A computer is designed to do computations quickly”.
Now for some harder cases that create fake disagreements:
“A cat is designed to hunt small animals.” Cats are good at hunting mice, etc., so it meets 1. They weren’t human-designed to catch small animals, so they fail 3. Finally, a lot of the things that make them good at catching mice make them unsuited for other purposes. For example, their (less-)social mentality and desire to keep themselves clean make them harder for mice to detect when hunting, but prevent them from using hunting tactics that dogs use on larger animals (e.g. “split the pack up and have one of them chase the prey downwind to the rest of the pack”).
Thus, the cat example meets 2, and this “narrow optimality” gives us the “feel” of it being designed, and has historically led humans to equate this with 3.
How about this one: “a stone is designed to hurt people”. Stones are good at hurting people, so they pass 1. However, they are not narrowly good, nor were they specifically crafted with the intent of hurting people. Thus, we generally don’t get the feeling of stones being designed to hurt people.
Try this test out for yourself on things that “feel” designed or non-designed.
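A minimal sketch of this two-of-three test, in Python for concreteness; the function name and the Boolean encoding of the criteria are my own illustration rather than anything from the comment above:

```python
# Sketch of the two-of-three "design" judgment described above.
# The function name and boolean encoding are illustrative assumptions.

def feels_designed(goodness: bool, narrowness: bool, human_intent: bool) -> bool:
    """Return True if at least two of the three criteria are met."""
    return goodness + narrowness + human_intent >= 2

# The examples from the comment:
print(feels_designed(True, True, True))    # knife, "designed to cut"           -> True
print(feels_designed(True, True, False))   # cat, "designed to hunt small game" -> True
print(feels_designed(True, False, False))  # stone, "designed to hurt people"   -> False
```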
That’s the most literally dissolved question I’ve ever seen.
Sorry, fixed. I was trying to remove the google crap from the URL when I got the search results but overdid it to the point that even the anchor text disappeared.
I’m interested in seeing statistics about what LWers do in their daily lives. In that vein, I’m thinking of a survey about what lifehacks the average LWer uses. This could help people decide which lifehacks to try out.
For instance, most readers of this comment probably know what SRS, nootropics, n-back, self-tracking, polyphasic sleep and so on are, but I’d guess less than 10% of you currently practice each. To reduce the barriers to entry to people actually trying to improve their lives with these tools, we need to get some quick clues about what’s worthwhile.
The survey could also measure the reasons why people don’t employ certain lifehacks. Is it because they’re lazy or because they doubt the lifehack’s efficacy or because they’ve never heard of the hack?
I plan to make a post for the second half of this month because daenerys hasn’t made one yet. Can I have a couple of karma points so I can make it happen?
You literally are open thread guy; I must admit I didn’t expect that.
Are this video (Thomas Sowell on Intellectuals and Society) and the ideas, concepts and arguments in it worth discussing on LW in a separate discussion thread? By which I mean: am I underestimating the mind-killing triggered by some of the political criticisms made by Thomas Sowell in it? Obviously he’s politically biased in his own direction, but the fundamental idea that public intellectuals are basically rent seekers seems alarmingly plausible.
Also, since Thomas Sowell is obviously an intellectual himself, his criticism should reduce our trust in his criticism. ;)
The Sowell I’ve read has been fascinating and useful, but I agree with much of his politics, and his insights seem mostly confined to politics (and economics). I’m not sure if it’s worth discussing videos as much as books: they introduce a handful of ideas, but without the extensive justifications and clarifications that exist in the book. That suggests to me that bias will play a larger role in interpretation than it would when considering the full argument.
Maybe a thread about the book he mentions in the video. I’ll need to actually read it then. :)
Timelapse video of the Milky Way. Stunningly beautiful. I knew I lived on a planet spinning through space, but I’d never felt it viscerally.
One more item for the FAI Critical Failure Table (humor/theory of lawful magic):
37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn’t enough, so crazy people don’t make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn’t need to be understood in detail by anyone, the expectation only has to be close enough to the real effect, so the details of expectation-caused phenomena can lawfully exist independently of the content of people’s expectations about them. Since a (justified) expectation is sufficient for something to happen, all sorts of miracles can happen. Since to happen, a miracle has to be expected to happen, it’s necessary for someone to know about the miracle and to expect it to happen. Learning about a miracle from an untrustworthy (or mistakenly trusted) source doesn’t make it happen, it’s necessary for the knowledge of possibility (and sufficiently clear description) of a miracle to be communicated reliably (within the tolerance of what counts for an effect to have been correctly anticipated). The path of a powerful wizard is to study the world and its history, in order to make correct inferences about what’s possible, thereby making it possible.
Cracked delivers: 6 Small Math Errors That Caused Huge Disasters.
The first paragraph hook is perfect:
Good article, but they missed a great opportunity to talk about hindsight bias. Those mistakes are only “laughably simple” after the fact.
This correction from the NYT seems almost designed as LW-linkbait for miscellaneous humor.
My grandmother died tonight. If you are on the fence, or procrastinating, please sign up for cryonics. It costs less than signing up for high speed internet for a year, and opens up the possibility of not ceasing to exist. On a side note, I learned again tonight that atheism is no sure sign of rationality: http://www.reddit.com/r/atheism/comments/o4x5u/think_you_are_rational_my_grandmother_died/
Then again, I was not at my most elegant.
Rejoice, all you who read these words, for tonight we live, and so have the continued hope of escaping death.
Life would’ve been a hell of a lot more fun if it worked on hentai logic. Of course, the grass being greener on the other side, people would’ve probably consumed anti-porn in such a universe. Like, about celibate nuns who actually stay celibate and to whom nothing sexual ever happens. This’d be as over-the-top ridiculous as tentacle sex is to us.
(My taste is heavily slanted in favor of yaoi, but I am told the trend is only slightly weaker in straight porn.) An interesting feature of hentai logic is that some people (women and ukes, or in some works everyone) do not know if they want sex (and usually hold the false belief they do not), whereas their partners have perfect knowledge of their wishes. The connection can be universal, conditional on the active partner loving or lusting after the passive one, or the other way around.
It would be pleasant, for people who enjoy surprises, to really have such a thing as “surprise sex you didn’t know you wanted”. People who don’t can top, and people who don’t want that all the time can switch; two tops together produces the current situation and two bottoms together produces the mirrored version of the trope where both partners initiate sex they were initially reluctant to have because they magically sense their partner’s desire (though it requires ignorance about one’s preference, not false belief that I WOULD NEVER). Having your desires known to your partners is obviously necessary as a replacement[*] if any sex is going to happen; while some may enjoy the intimacy of having the connection depend on mutual feelings, it may be more convenient to have it be universal so that the knowledge is available at all times.
Also, no work would ever get done.
[*] Edit: Actually, one could also have a central authority that looks into the hearts and pants of everyone, and decrees “You two. Get it on.”.
(Reasonable enough, you’d better forward this to Eliezer so that he can think through that possibility deeper and harder for his planned story)
I kept looking for the parent of this comment to figure out what the hell the context was.
You’ll be delighted, I’m sure, to know that the unimpeachably robust scientific researchers of the Discovery Institute have been writing about transhumanism.
He invokes standard rhetoric to decry transhumanists’ desire to increase their intelligence, but not their compassion. I think this raises an interesting question: are there any drugs that are at least somewhat likely to increase compassion and who is experimenting with them?
I’m not a biochemist by any means, but I was looking into this a few months back. I was able to find some people experimenting with intranasal oxytocin, who reported increases in subjective trust and empathy; the biological half-life of that delivery route seems very short, though, on the order of minutes.
Then there’s the well-known empathogen family, of course.
I vaguely recall some results that indicated greater likelihood of experiencing an emotional reaction in response to seeing someone else modeling that emotion (e.g., reporting being sad when someone else in the same room was crying) after taking some drug or another… serotonin, maybe?… but it was many years ago and I may well be misremembering.
Which isn’t quite the same thing as compassion, of course, but it wouldn’t surprise me to discover they were correlated.
People at the longecity forum seem to be taking serotonin precursors, including 5-HTP and tryptophan. I’m not sure exactly what benefits they expect to receive, but improved mood is at least one of them.
And, of course, a conspiracy theorist analysis of the sinister transhumanist cult. I bet you didn’t know ems have already been achieved. (courtesy ciphergoth)
In accordance with this comment and the high number of upvotes it received, here is the place to list further threads worthy of preservation, advertising, or rejuvenation; also to list short-term/long-term solutions to this problem.
List of threads:
the mentoring-network thread
The thread where people offered listening-time to depressed people.
The thread with the ongoing survey for zip codes of LWers
gate-keeper/AI-box-experiment-volunteering-thread
Possible super-easy solutions to better ensure visibility of these threads:
make a wiki-page
reposting in every other “monthly open thread”
include a link in “welcome to lesswrong”-thread
Any and all suggestions are welcome, especially long-term solutions; and I hereby precommit to make a wiki-page with the suggestions a week from now and post it in at least the next two open threads.
Real Names Don’t Make for Better Commenters, but Pseudonyms Do
Interesting that pseudonymous commentators get both more positive and more negative feedback than real-named commentators; granted, the difference on at least the latter is pretty small, and from the Slate article I have no idea if it’s statistically significant. (Suspect not, actually; if only four percent of the data set came from people posting under real names, the data set would have to be very large for a 3% spread between that and the pseudonymous baseline to be significant.)
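As a rough illustration of that back-of-the-envelope significance reasoning, here is a sketch of a two-proportion z-test with made-up numbers (a 3% spread and a 4% real-name share; the article’s actual rates and totals aren’t given here), showing how large the data set would have to be before such a spread stops looking like noise:

```python
# Rough sketch of the sample-size intuition above (two-proportion z-test).
# The rates and totals are hypothetical placeholders, not the article's data.
from math import sqrt

def z_two_proportions(p1, n1, p2, n2):
    """Z statistic for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

for total in (10_000, 100_000, 1_000_000):
    n_real = int(0.04 * total)               # 4% of comments under real names
    n_pseudo = total - n_real
    z = z_two_proportions(0.20, n_pseudo, 0.17, n_real)  # assumed 3% spread
    print(total, round(z, 2))                # |z| > 1.96 is roughly p < .05
```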
Assuming it is, the first possibility that comes to mind is that pseudonyms lend themselves to constructed identity in a way that real names don’t, and that constructed identities tend to be amplified in some sense relative to natural ones, leading to stronger reactions. That’s pretty speculative, though.
I think it would be cool to have some sort of asterisk next to the usernames of people who have declared “Crocker’s Rules”. Would that be hard to implement?
I think it was generally assumed that people posting on this site are looking for information rather than social niceties.
I’m not totally sure what you’re saying here, but if it’s that “it’s safe to assume everyone here is operating by Crocker’s Rules”, then I don’t think you’re right.
Funny comic = a straw vulcan of a rationalist relationship.
http://www.smbc-comics.com/index.php?db=comics&id=2474#comic
And a good example of the true rejection problem.
What are the warmest gloves known to humanity? I have poor circulation in my hands, and none of the store-bought gloves ever keep my hands warm outside.
you can get a battery-operated hand-warmer insert
I have many friends with bad circulation who swear by the various available USB warming gloves. Not so useful outside, but the obvious Google search turns up many leads.
I have poor circulation (a touch of Raynaud’s syndrome) as well, and I’ve tried a great many products in the context of cycling, ice climbing, and just general being outside in the cold. The short answer is that there are no gloves that will reliably keep your hands warm and allow you to retain dexterity if you’re not getting your heart rate up to promote circulation. Mittens work better, by far. In no particular order, here are some more long-winded tips:
1) Use mittens whenever possible. Ones that allow skin-to-skin contact between your fingers work best.
2) Keep gloves in your pockets and switch from the mittens to the gloves when you need dexterity.
3) Cut off a pair of small wool socks to make wrist warmers. This helps but isn’t a panacea.
4) Use chemical handwarmers when necessary.
5) If you have to use gloves, some relatively cheap options that work well include, in order of warmth: a) freezer gloves, b) lined elk skin gloves (available at large hardware stores), c) Gore Windstopper gloves, available in outdoor shops.
6) Try to keep your heart rate up when outside, with your hands below your heart. This helps a lot.
7) Never wear wet gloves. If you’re going to get wet, alternate two or more pairs of gloves and keep the extras inside your jacket where they will stay warm and dry out a bit.
8) Consider vapor barrier gloves or mittens from RBH Designs if you want to spend some real money. I have not personally tried their handwear, but their vapor barrier socks are impressively warm and perform as advertised.
Better insulating your torso will increase circulation to your extremities. It’s probably easier.
Next time we have a real winter here, I’m getting me a pair of thermal gloves from an outdoor sports shop. Haven’t tried them yet, but I’m very satisfied with my pair of thermal underwear I bought last winter.
I got a kickass coat at a sports shop last month, literally half as thick and heavy as my other one yet better insulated cause it’s made out of technobabble. Didn’t know that winter clothing has such a gradient of quality.
Stealing this phrase.
Would you be willing to check and see what the material/name/brand is?
I would be interested in purchasing such a coat, or one which is similar.
It’s Outventure.
I have had to work outside in cold weather, often with thin gloves. Try getting a coat with better insulated sleeves, or even adding insulation (think leg warmers) to your existing coat. Also, knit wristlets will help an amazing amount, too—not only is there often a variable gap between cuff and glove, but that is where our circulation is closest to the surface. Also, mittens are always warmer than gloves of similar, or even quite a bit heavier, weight. If you don’t need to use your fingers, try them; and despite the other comment, they are usually warmer without an inner glove.
Maybe try light gloves (like knit) inside a big pair of water-resistant mittens.
I was wondering, why aren’t articles that are part of a sequence tagged as such?
For example: fun theory sequence
I often get linked to one of the older articles and then have to do a search first to figure out which sequence it is a part of and then to find the other parts of the sequence.
I wasn’t sure this was worth its own discussion thread, so I rather dumped it here.
Manna
A neat science fiction story set in the period of transition to an automated society and later in a post-scarcity world. There are several problems with it, however; the greatest is that so far the author seems to have assumed that “want” and “envy” are primarily tied to material needs. This is simply not true.
I would love to live in a society with material equality at a sufficiently high standard; I’d however hate to live in a society with enforced social equality, simply because that would override my preferences and freedom to interact or not interact with whomever I wish.
Also, since things like the willpower to work out (to stay in top athletic condition, even!) or not having the resources to fulfil even basic plans are made irrelevant, things like genetic inequality, how comfortable you are messing with your own hardware to upgrade your capabilities, or how much time you dedicate to self-improvement would be more important than ever.
I predict social inequality would be pretty high in this society, and mostly involuntary. Even for a decision about something like how much of your time you spend on self-improvement, which you could presumably change later, there wouldn’t be a good way to catch up with anyone (think opportunity cost and compound interest), unless technological progress hit diminishing returns and slowed down. Social inequality would, however, be more limited than pure financial inequality, I would guess, because of things like Dunbar’s number. There would still be tragedy (that may be a feature rather than a bug of utopia). I guess people would be comfortable with gods above and beasts below them that don’t really figure in the “my social status compared to others” part of the brain, but even in the narrow band where you do care, inequality would grow rapidly. Eventually you might find yourself alone in your specific spot.
To get back to my previous point about probable (to me) unacceptable limitations on freedom: it may seem silly that a society with material equality would legislate intrusive and micromanaging rules forcing social equality to prevent this, but the hunter-gatherer instincts in us are strong. We demand equality. We enjoy bringing about “equality”. We look good demanding equality. Once material needs are met, this powerful urge will still be there and will bring about signalling races. And ever-new ways to avoid the edicts produced by such races (because also strong in us is the desire to be personally unequal, superior to someone, to distinguish and discriminate in our personal lives). This would play out in interesting and potentially dystopian ways.
I’m pretty sure the vast majority of people in the Australia Project would end up wireheading. Why bother to go to the Moon when you can have a perfect virtual reality replica of it, why bother with the status of building a real fusion reactor when you can just play a gamified, simplified version and simulate the same social reward, why bother with a real relationship, etc.? Dedicating resources to something like a real-life space elevator simply wouldn’t cross their minds. People, I think, systematically overestimate how much something being “real” matters to them. Better and better also means better and better virtual super-stimuli. Among the tiny faction of remaining “peas” (those choosing to spend most of their time in physical existence), very few would choose to have children, but they would dominate the future. Also, I see no reason why the US couldn’t buy technology from the Australia Project to use for its own welfare-dependent citizens. Instead of the cheap mega-shelters, just hook them up to virtual reality, with no choice in the matter. Which would make a tiny fraction of them deeply unhappy.
I maintain that the human brain’s default response to unlimited control of its own sensory input and reasonable security of continued existence is solipsism. And the default of a society of human brains with such technology is first social fragmentation, then value fragmentation, and eventually a return to living under the yoke of essentially Darwinian processes. Speaking of which, the society of the US as described in the story would probably outpace Australia, since it would have machines do its research and development.
It would take some time for the value this creates to run out, though. Much as Robin Hanson finds glorious a future with a dream time of utopia followed by trillions of slaves, I still find a few subjective millennia of a golden age followed by non-human and inhuman minds to be worth it.
It is not like we have to choose between infinity and something finite; the universe seems to have an expiration date as it is. A few thousand or million years doesn’t seem like something fleas on an insignificant speck should sneer at.
Who do I talk to to change my Less Wrong username?
Are you hesitant to test the default hypothesis of “Eliezer Yudkowsky” because of the vast disutility of interrupting him with such insignificant earthly matters?
Yes. And if SI is letting Eliezer be webmaster as well as AI Researcher, then something is totally wrong.
It’s certainly letting him be a fanfiction writer. And even a fanfiction reader too.
Yes, that’s true. Perhaps I’d be more disturbed by this if I was more sane and enjoyed the fan fic less.
That fanfiction has probably done more to raise the sanity waterline than lesswrong.com. I’m basing this assertion on the fact that five of my friends have read HP:MOR and all seemed to learn from it, but none have the patience or free time to invest in LW.
(I just realized that this kind of thought is why we have open threads. It’s an observation I’ve been kicking around for a while but it never seems appropriate to bring up on this site.)
Probably Matt, although he might tell you to just create a new account.
What is everybody’s Twitter handle? I want to follow you.
Mine is @michaelcurzi.
@davidgerard
By the way—if you use the web interface, you’ll notice it’s been sucking real bad lately. I’ve been finding the mobile site far more usable.
@TaelorMcClurg
@arundelo. (I’m not very active.)
@Rongorg. I see you also follow Steven Kaas, who is the best tweeter.
@dearerstill
More people should reply to this. I tweet for my audience so if Less Wrong people follow me I will tweet a) more and b) about things you will probably find interesting.
Our brains are paranoid. The feeling illustrated by this comic is, I must unfortunately admit, pretty familiar.
It seems uncontroversial that a substantial amount of behavior that society labels as altruistic (i.e. self-sacrificing) can be justified by decision-theoretic concepts like reputation and such. For example, the “altruistic” behavior of bonobos is strong evidence to me that decision theory can justify more altruism than I myself know how to derive from it. (Obviously, this assumes that bonobo behavior is as de Waal describes.)
Still, I have an intuition that human morality cannot be completely justified on the basis of decision theory. Yes, superrationality and such, but that’s not mathematically rigorous AFAIK and thus is susceptible to being used as a just-so story.
Does anyone else have this intuition? Can the sense that morality is more than game theory be justified by evidence or formal logic?
Morality is a goal, like making paperclips. That doesn’t follow from game-theoretic considerations.
Fair enough. But I still have the intuition that a common property of moral theories is a commitment to instrumental values that require decisions different from those recommended by game theory.
One response is to assert that game theory is about maximizing utility, so any apparent contradiction between game theory and your values arises solely out of your confusion about the correct calculation of your utility function (i.e. the value should adjust the utility pay-out so that game theory recommends the decision that is consistent with your values). I find this answer unsatisfying, but I’m not sure if the dissatisfaction is rational.
Yes, lots of other people have the intuition that human morality requires more than decision theory to justify it. For example, it’s a common belief among several sorts of theists that one cannot have morality without some form of divine intervention.
I wasn’t clear. My question wasn’t about the justifications so much as the implications of morality.
In other words, is it a common property of moral theories that they call for different decisions than called for by decision theory?
I suspect we’re still talking past each other. Perhaps it will help to be concrete.
Can you give me an illustrative example of a situation where decision theory calls for a decision, where your intuition is that moral theories should/might/can call for a different decision?
I’m thinking of a variant of Parfit’s Hitchhiker. Suppose the driver lets you in the car. When you get to the city, decision theory says not to pay.
To avoid that result, you can posit reputation-based justifications (protecting your own reputation, creating an incentive to rescue, etc). Or you can invoke third-party coercion (i.e. lawsuit for breach of contract). But I think it’s very plausible to assert that these mechanisms wouldn’t be relevant (it’s a big, anonymous city, rescuing hitchhikers from peril is sufficiently uncommon, how do you start a lawsuit against someone who just walks away and disappears from your life).
Yet I think most moral theories currently in practice say to pay despite being able to get away with not paying.
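For concreteness, here is a toy sketch of the ex-post calculation being described, with made-up payoff numbers; it only illustrates why a purely causal, after-the-rescue comparison favors not paying once reputation and lawsuits are stipulated away:

```python
# Toy payoffs for the variant above, evaluated after you are already safely
# in the city. The numbers are illustrative assumptions only.
payoffs = {
    "pay":     -100,  # you lose the fee; by stipulation, nothing else changes
    "not pay":    0,  # no reputational, legal, or incentive consequences
}

best_option = max(payoffs, key=payoffs.get)
print(best_option)  # "not pay" -- the choice the comment attributes to decision theory
```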
OK, I think I understand what you’re saying a little better… thanks for clarifying.
It seems to me that decision theory simply tells me that if I estimate that paying the driver improves the state of the world (including the driver) by some amount that I value more than I value the loss to me, then I should pay the driver, and if not I shouldn’t. And in principle it gives me some tools for estimating the effect on the world of paying or not-paying the driver, which in practice often boil down to “answer hazy, try again later”.
Whereas most moral theories tell me whether I should pay the driver or not, and the most popularly articulated real-world moral theories tell me to pay the driver without bothering to estimate the effect of that action on the world in the first place. Which makes sense, if I can’t reliably estimate that effect anyway.
So I guess I’d say that detailed human morality in principle can be justified by decision theory and a small number of value choices (e.g., how does value-to-me compare to value-to-the-world-other-than-me), but in practice humans can’t do that, so instead we justify it by decision theory and a large number of value choices (e.g., how does fulfilling-my-commitments compare to blowing-off-my-commitments), and there’s a big middle ground of cases where we probably could do that but we’re not necessarily in the habit of doing so, so we end up making more value choices than we strictly speaking need to. (And our formal moral structures are therefore larger than they strictly speaking need to be, even given human limitations.)
And of course, the more distinct value choices I make, the greater the chance of finding some situation in which my values conflict.
Pleased to finally meet you, agent.
I’m A BOMB.