(Trigger warnings: mention of rape, harassment, and hostile criticism of Less Wrong.)

A lesson on politics as mindkiller —

There’s a thread on Greta Christina’s FTB blog about standards of evidence in discussions of rape and harassment. One of her arguments:

Extraordinary claims require extraordinary evidence. But claims of sexual harassment, abuse, assault, and rape are not extraordinary. They are depressingly ordinary. So the level of evidence we should need to believe a claim about sexual harassment, abuse, assault, or rape is substantially lower than the level of evidence we should need to believe a claim about, say, Bigfoot.
This is straight Bayes — since the prior for rape is higher than the prior for Bigfoot, it requires less evidence to raise our credence above 0.5 in any given case of a claimed occurrence. In the comments, one person points out the connection to Bayes, in part remarking:
“Bayesian updating” is a good method for using evidence rationally to change your mind. If someone requires extraordinary evidence to believe a depressingly common event, they are not being rational.
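As an aside, here is a minimal worked illustration of the prior-dependence point (the numbers are invented purely for illustration and are not from the thread):

```typescript
// Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
// With identical evidence (same likelihood ratio), a higher prior is all it
// takes to push the posterior past 0.5.
function posterior(prior: number, likelihoodRatio: number): number {
  const priorOdds = prior / (1 - prior);
  const posteriorOdds = priorOdds * likelihoodRatio;
  return posteriorOdds / (1 + posteriorOdds);
}

const evidenceStrength = 10; // the same "someone claims it happened" evidence in both cases

console.log(posterior(0.1, evidenceStrength));    // depressingly common event: ~0.53, past 0.5
console.log(posterior(0.0001, evidenceStrength)); // Bigfoot-grade prior: ~0.001, nowhere close
```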
In response, another commenter, apparently triggered by the mention of Bayes, goes on a tirade about Michael Anissimov and Less Wrong being misogynistic. This commenter selectively quotes Anissimov regarding IQ, Larry Summers, and “political correctness” — a quote that (at least, out of context) sounds pretty damning, as silly as it would be to infer from Anissimov to Less Wrong. When I read this comment, I winced; my reaction could be stated something like this: “Aw, jeez. LW does not need a squabble with the FTB folks and the progressive-feminist end of the skeptic movement. Hardly anyone can speak both groups’ languages. If a conflict happened, both groups would be worsened by the polarization.”
That is, I was (for just a moment) ① willing to take the tirade-poster as representative of “the FTB folks” and ② predicting the tirade-poster to be a catalyst of an intertribal conflict between two groups I’d prefer to see reconciled.
But I kept looking … and it turned out that the tirade-poster was a troll, or at least a crank, on FTB and was readily recognized as such by the folks there. In other words, my initial expectation of a brewing political clash was flat wrong — and I had (albeit momentarily) taken the words of a deviant, undesired member of a group as indicative of that group!

Congratulations, you avoided stepping on a landmine!

Is there a name for the bias “if a person A is commenting on a forum X, then person A is a representative of the forum X”?
Given all the concerns about replication in psychology, it’s good to see that at least the most important studies get replicated: [1][2][3][4][5][6][7]. ;)
Before reading these, I recommend making predictions and then seeing how well-calibrated you were. I learned that V arneyl pubxrq ba zl sbbq ynhtuvat jura V “ernq” gurfr.
I’ve decided to live less on the internet (a.k.a. the world’s most popular superstimulus) and more in real life. I pledge to give $75 to MIRI if I make any more posts on this account or on my reddit account before the date of October 13 (two months from now).
On a related note, I was thinking about how to solve the problem of the constant temptation to waste time on the internet. For most superstimuli, the correct action is to cut yourself off completely, but that’s not really an option at all here. Even disregarding the fact that it would be devastatingly impractical in today’s world, the internet is an instant connection to all the information in the world, making it incredibly useful. Ideally one would use the internet purely instrumentally—you would have an idea of what you want to do, open up the browser, do it, then close the browser.
To that end, I have an idea for a Chrome extension. You would open up the browser, and a pop-up would appear prompting you to type in your reason for using the internet today. Then, your reason would be written in big black letters at the top of the page while you’re browsing, and only go away when you close Chrome. This would force you to remain focused on whatever you were doing, and when you notice that you’ve fulfilled that purpose and are now just checking your email for no reason, that would be your clue to close the browser and do something else.
I don’t think anything like this exists yet. I might try to make it myself—I don’t have that much coding experience, but it seems like it could be relatively easy.
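For what it's worth, here is a minimal sketch of the content-script half of such an extension (the storage key, prompt wording, and styling are invented; this is an untested illustration of the idea, not a finished extension):

```typescript
// content-script.ts: would be injected into every page by the extension's manifest.
// Ask for a purpose once per site session, then pin it to the top of every page.
const KEY = "browsing-purpose"; // hypothetical storage key

let purpose = sessionStorage.getItem(KEY);
if (!purpose) {
  purpose = window.prompt("What is your reason for using the internet right now?") ?? "";
  sessionStorage.setItem(KEY, purpose);
}

if (purpose) {
  const banner = document.createElement("div");
  banner.textContent = purpose;
  banner.style.cssText =
    "position:fixed;top:0;left:0;right:0;z-index:99999;" +
    "background:white;color:black;font-size:24px;font-weight:bold;" +
    "text-align:center;padding:8px;border-bottom:2px solid black;";
  document.body.appendChild(banner);
}
// Note: sessionStorage is per-origin, so the prompt would reappear on each new site;
// a real extension would keep the purpose in chrome.storage so it is shared across tabs.
```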
Perhaps a stupid question, or, more accurately, not even a question—but I don’t understand this attitude. If you enjoy going on the Internet, why would you want to stop? If you don’t enjoy it, why would it tempt you? It reminds me, and I mean no offense by this, of the attitude addicts have towards drugs. But it really stretches plausibility to say that the Internet could be something like a drug.
Perhaps a stupid question, or, more accurately, not even a question—but I don’t understand this attitude. If you enjoy going on the Internet, why would you want to stop? If you don’t enjoy it, why would it tempt you?
Wanting is mediated by dopamine. Liking is mostly about opioids. The two features are (unfortunately) not always in sync.
It reminds me, and I mean no offence by this, of the attitude addicts have towards drugs. But it really stretches plausibility to say that the Internet could be something like a drug.
It really doesn’t stretch plausibility. The key feature here is “has addictive potential”. It doesn’t matter to the brain whether the reward is endogenous dopamine released in response to a stimulus or something that came in a pill.
This is confusing to me. Intuitively, reward that is not wireheading is a good thing, and the Internet’s rewarding-ness is in complex and meaningful information, which is the exact opposite of wireheading. For the same reason, I’m confused about why tasty foods are not seen as a dangerous evil that needs to be escaped.
There are things that can too easily expand to fill all of your time while only being a certain level of better than baseline. If you want to feel even better than just browsing the internet, you need to not allow it to fill all your time. I also value doing DIFFERENT things, though not everyone does. It’s easier to do different activities (i.e., the threshold cost to starting them, which is usually the biggest emotional price you pay) if you’re NOT doing something fairly engrossing already.
If your base state is 0 hedons (neutral) an hour, internet is 5 hedons an hour, and going out dancing is maybe 1 hedon during travel time and 20 while doing it, it’s easier to go dancing if you’re deliberately cutting off your internet time, because you don’t have to take the −4 hedon hit to get out of the house.
Another concern is when people care about things other than direct hedons. If you have goals other than enjoying your time, then allowing internet to take up all your time sabotages those goals.
it really stretches plausibility to say that the Internet could be something like a drug.
The brain appears to have separable capabilities for wanting something and enjoying something. There are definitely some things that I feel urges to do but don’t particularly enjoy at any point. A common example is lashing out at someone verbally—sometimes, especially on the internet, I have urges to be a jerk, but when I act on those urges it isn’t rewarding to me.

Aaanyhow, your sentence is also the worst argument :P
I guess I can’t identify with that feeling. I don’t think I’ve ever felt that way—I’ve never wanted something that I could have identified as “not rewarding” at the time that I wanted it (regardless of how long I reflected on it). The only times I wanted something but didn’t enjoy it were because of lack of information.
I pledge to give $75 to MIRI if I make any more posts on this account or on my reddit account before the date of October 13 (two months from now).
Quick, everyone! If we can do it for less than $75, then let’s make LW super extra interesting to gothgirl420666 for the next two months. :D
Joking aside, perhaps an effective strategy for making yourself spend less time online is to reduce your involvement with online communities—for me at least, flashing inbox icons and commitments made to people on various forums (such as promising you’ll upload a certain file) are a big part of what makes me keep coming back to certain places I want to spend less time at. If it weren’t for that nagging feeling in the back of my mind, that I’ll lose social cred in some place if I don’t come back and act on my promises, or vanish for a few months and leave PMs unanswered, I’d be tempted to make a “vow of online silence” too.
I can imagine a site-blocking tool where you could select a browsing “mode”. Each mode would block different websites. When you open an unknown website, it would ask you to classify it.
Typical modes are “work” (you block everything not work-related) and “free time” (you may still want to block the largest time sinks), but maybe there could be something like “a break from the work” that would allow some fun but keep within some limits, for example only allow programming-related blogs and debates.
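A rough sketch of the core lookup such a tool might perform (the mode names follow the comment above; the sites and data shape are invented for illustration):

```typescript
// Hypothetical site-blocking "modes": each known site gets a per-mode decision,
// and unknown sites trigger a classification prompt on first visit.
type Mode = "work" | "free time" | "break from the work";
type Decision = "allow" | "block" | "ask";

const classified: Record<string, Partial<Record<Mode, Decision>>> = {
  "docs.example.com":    { "work": "allow", "free time": "allow", "break from the work": "allow" },
  "reddit.com":          { "work": "block", "free time": "allow", "break from the work": "block" },
  "coding-blog.example": { "work": "block", "free time": "allow", "break from the work": "allow" },
};

function decide(site: string, mode: Mode): Decision {
  return classified[site]?.[mode] ?? "ask"; // unknown site: ask the user to classify it
}

console.log(decide("reddit.com", "work"));       // "block"
console.log(decide("news.example.org", "work")); // "ask"
```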
The comments are full of deathism: many people apparently sincerely coming out in favour of not just death (limited lifespan) but aging and deterioration.
Everyone who doesn’t feel in their gut that many (most?) normal people truly believe aging and death are good, and will really try to stop you from curing it if they can, should go and read through all the comments there. It’s good rationality training if (like me) you haven’t ever discussed this in person with your friends (or if they all happened to agree). It’s similar to how someone brought up by and among atheists (again, me) may not understand religion emotionally without some interaction with it.
Someone marked the appeal to worse problems article on Wikipedia for prospective deletion, for lack of sourcing—it appears to mostly have been written from the TVTropes page. I’ve given it its proper name and added “whataboutery” as another name for it—but it needs more, and preferably from a suitably high-quality source.
A fact about industrial organization that recently surprised me:
Antimonopoly rules prevent competitors from coordinating. One exemption in the US is political lobbying: executives can meet at their political action committee. Joint projects in some industries are organized as for-profit companies owned by (nonprofit) political action committees.
My girlfriend taught me how to dive this past weekend. I’m 26. I had fully expected to go my entire life without learning how to dive, I guess because I unconsciously thought it was “too late” to learn, somehow. Now I’m wondering what other skills I never learned at the typical age and could just as easily learn now.
(if you’re looking for object-level takeaways, just start out with kneeling dives—they’re way easier and far less intimidating—then gradually try standing up more and more)
Two roads diverged in a woods, and I
Stepped on the one less traveled by
Yet stopped, and pulled back with a cry
For all those other passers-by
Who had this road declined to try
Might have a cause I knew not why
What dire truths might that imply?
I feared that road might make me die.
And so with caution to comply
I wrung my hands and paced nearby
My questions finding no reply
Until a traveller passed nigh
With stronger step and focused eye
I bid the untouched road goodbye
And followed fast my new ally.
The difference made I'll never know
'Till down that other path you go.
I am impressed at how you managed to do a reasonable variation on that poem using almost solely rhymes on i/y (even if you had to reuse some words like ‘by’).
Did that really change in the last 3 days? If so, impressive turnaround! And surprising that it’d change without any sort of discussion. Now I’m confused. Where was the search box showing up before?
What are the relative merits of using one’s real name vs. a pseudonym here?
When I first started reading LessWrong, I was working in an industry obsessed with maintaining very mainstream appearances, so I chose to go with a pseudonym. I have since changed industries and have no intention of going back, so my original reason for using a pseudonym is probably irrelevant now.
Master Palaemon’s hand, dry and wrinkled as a mummy’s, groped until it found mine. “Among the initiates of religion it is said, ‘You are an epopt always.’ The reference is not only to knowledge but to their chrism, whose mark, being invisible, is ineradicable. You know our chrism.”
I nodded again.
“Less even than theirs can it be washed away. Should you leave now, men will only say, ‘He was nurtured by the torturers.’ But when you have been anointed they will say, ‘He is a torturer.’ You may follow the plow or the drum, but still you will hear, ‘He is a torturer.’ Do you understand that?”
I continue running into obstacles (largely-but-not-exclusively of an accessibility nature) when it comes to the major crowdfunding websites. It seems not to be just me; the major platforms (Kickstarter/Indiegogo) could stand to be much more screen reader-friendly, and the need for images (and strong urging to use videos) is an obstacle to any blind person seeking funding who doesn’t have easy access to sighted allies/minions.
My present thoughts are that I’d rather outsource setting up crowdfunding campaigns to someone for whom these would not be serious obstacles (said manager would probably be compensated with a cut of the funds).
What I don’t know is:
how to find/recruit someone willing to do this,
how likely it is they’d be satisfied with what I’d consider a reasonable cut of the funds from any given campaign, and
what sorts of legal arrangements would need to be made to protect against said manager just walking away with everything.
Can anyone hereabouts answer one or all of the above? (I am also curious whether the demand among blind developers/startups/etc. might be high enough that “crowdfunding manager for the blind” could be a profitable side-job for someone with halfway decent marketing skill, but that’s not an easy thing to estimate.)
Here’s an interesting article that argues for using (GPL-protected) open source strategies to develop strong AI, and lays out reasons why AI design and opsec should be pursued more at the modular implementation level (where mistakes can be corrected based on empirical feedback) rather than attempted at the algorithmic level. I would be curious to see MIRI’s response.
I searched and it doesn’t look like anyone has discussed this criticism of LW yet. It’s rather condescending but might still be of interest to some: http://plover.net/~bonds/cultofbayes.html
I don’t think “condescending” accurately captures what is going on here. This seems to be politics being the mindkiller pretty heavily (ironically, one of the things they apparently think is stupid or hypocritical). They’ve apparently taken some of the, for lack of a better term, “right-wing” posts and used those as a general portrayal of LW. Heck, I’m in many ways in the same political/tribal group as this author and still think most of what they said is junk. Examples include:
Members of Lesswrong are adept at rationalising away any threats to their privilege with a few quick “Bayesian Judo” chops. The sufferings caused by today’s elites — the billions of people who are forced to endure lives of slavery, misery, poverty, famine, fear, abuse and disease for their benefit — are treated at best as an abstract problem, of slightly lesser importance than nailing down the priors of a Bayesian formula. While the theories of right-wing economists are accepted without argument, the theories of socialists, feminists, anti-racists, environmentalists, conservationists or anyone who might upset the Bayesian worldview are subjected to extended empty “rationalist” bloviating. On the subject of feminism, Muehlhauser adopts the tactics of an MRA concern troll, claiming to be a feminist but demanding a “rational” account of why objectification is a problem. Frankly, the Lesswrong brand of “rationality” is bigotry in disguise.
A variety of interesting links are included in that paragraph. Most noteworthy, every word in ‘extended empty “rationalist” bloviating’ links to a different essay, with “rationalist” linking to this, which criticizes rhetorical arguments made throughout the standard political spectrum.
A number of essays are quoted in ways that look like they are either being quoted out of context or in a way that is consistent with maximally uncharitable interpretations. The section about race and LW easily falls into this category (and is, as far as I can tell, particularly ironic, given that there has been more explicit racism on LW before).
Similarly, while I stand fairly strongly as one of the people here who really don’t like PUA, it is clear that calling it a “de facto rape methodology” is simply inaccurate.
At least a few points bordered on satire of a certain sort of argument. One obvious paragraph in that regard is:
Yudkowsky believes that “the world is stratified by genuine competence” and that today’s elites have found their deserved place in the hierarchy. This is a comforting message for a cult that draws its membership from a social base of Web entrepreneurs, startup CEOs, STEM PhDs, Ivy leaguers, and assorted computer-savvy rich kids. Yudkowsky so thoroughly identifies himself with this milieu of one-percenters that even when discussing Bayesianism, he slips into the language of a heartless rentier. A belief should “pay the rent”, he says, or be made to suffer: “If it turns deadbeat, evict it.”
I’ll let others who want to spend the time analyze everything that’s off about that paragraph.
Another fun bit:
The main reason to pay attention to the Lesswrong cult is that it has a lot of rich and powerful backers. The Singularity Institute is primarily bankrolled by Yudkowsky’s billionaire friend Peter Thiel, the hedge fund operator and co-founder of PayPal, who has donated over a million dollars to the Institute throughout its existence [4]. Thiel, who was one of the principal backers of Ron Paul’s 2012 presidential campaign, is a staunch libertarian and lifelong activist for right-wing causes. Back in his undergrad days, he co-founded Stanford University’s pro-Reagan rag The Stanford Review, which became notorious for its anti-PC stance and its defences of hate speech. The Stanford experience seems to have marked Thiel with a lasting dislike of PC types and feminists and minorities and other people who tend to remind him what a shit he is. In 1995, he co-wrote a book called The Diversity Myth: ‘Multiculturalism’ and the Politics of Intolerance at Stanford, which was too breathtakingly right-wing even for Condi Rice; one of his projects today is the Thiel’s Little Achievers Fellowship, which encourages students to drop out of university and start their own businesses, free from the corrupting influence of left-wing academics and activists.
Apparently Thiel is to certain groups the same sort of boogeyman that the Koch brothers are to much of the left and George Soros is to some on the right. I find it interesting to see one of the rare examples of someone actually using “PC” as a positive term, which briefly made me wonder if this was satire.
There are a handful of marginally valid points here, but they get completely lost in the noise, and they aren’t by and large original points. I do think, however, that some aspects of the essay might raise interesting thought exercises, such as explaining everything that’s wrong with footnote 2.
Perhaps by “which became notorious for its anti-PC stance and its defences of hate speech” he means “notorious for being so anti-PC that it defended hate speech”? I think that’s pretty accurate. (Bond’s weak tea 2011 link doesn’t defend hate speech, but argues that it is often a false label.)
I’d take the author’s “anti-PC” to mean something like “seeing ‘political correctness’ everywhere, and hating it.”
For instance, there are folks who respond to requests for civil and respectful behavior on certain subjects — delivered with no force but the force of persuasion — as if those requests were threats of violence, and as if resistance to those requests were the act of a bold fighter for freedom of speech.
one of the rare examples of someone actually using “PC” as a positive term
My English teacher used “Political Correctness” as a positive term, which surprised me too, though I guess in the context of a teacher who’s supposed to avoid discussing politics in class it does make sense to use it as an explicit norm.
I searched and it doesn’t look like anyone has discussed this criticism of LW yet. It’s rather condescending but might still be of interest to some: http://plover.net/~bonds/cultofbayes.html
I’d more go with “incoherent ranting” than “condescending”.
I once read a chunk of Bond’s site after running into that page; after noting its many flaws (including a number of errors of fact, like claiming Bayes tried to prove God using his theorem when, IIRC, that was Richard Price and he didn’t use a version of Bayes’ theorem), I was curious what the rest was like.
I have to say, I have never read video game reviews which were quite so… politicized.
It’s written by a mindkilled idiot whose only purpose in life seems to be finding the least charitable interpretation of people he hates, which probably means everyone except his friends, assuming he has any. There are millions of such idiots out there, and the only difference is that this one mentioned LW in one of his articles. We shouldn’t feed the trolls just because they decided to pay attention to us.
There are people who believe that one of the best works of English literature is an unfinished Harry Potter fanfic by someone who can barely write a comprehensible English sentence.
Starting with the very first paragraph… uhm, strawmanning mixed with plain lies… why exactly should anyone spend their limited time reading this?
It is a proof of Bell’s Inequality using counterfactual language. The idea is to explore links between counterfactual causal reasoning and quantum mechanics. Since these are both central topics on Less Wrong, I’m guessing there are people on this website who might be interested.
I don’t have any background in Quantum Mechanics, so I cannot evaluate the paper myself, but I know two of the authors and have very high regard for their intelligence.
Does anybody think that there might be another common metaethical theory to go along with deontology, consequentialism, and virtue? I think it’s only rarely codified, usually used implicitly or as a folk theory, in which morality consists of bettering one’s own faction and defeating opposing factions, and as far as I can see it’s most common in radical politics of all stripes. Is this distinguishable from very myopic consequentialism or mere selfishness?
It depends on the reasons why one considers it right to benefit one’s own faction and defeat opposing ones, I guess. Or are you proposing that this is just taken as a basic premise of the moral theory? If so, I’m not sure you can justifiably attribute it to many political groups. I doubt a significant number of them want to defeat opposing factions simply because they consider that the right thing to do (irrespective of what those factions believe or do).
Also, deontology, consequentialism and virtue ethics count as object-level ethical theories, I think, not meta-ethical theories. Examples of meta-ethical theories would be intuitionism (we know what is right or wrong through some faculty of moral intuition), naturalism (moral facts reduce to natural facts) and moral skepticism (there are no moral facts).
Okay… wow. I somehow managed to get that wrong for all this time? Oh dear.
This one isn’t ever formal and rarely meta-ed about, and it’s far from universal in highly combative political groups. But it seems distinct from deontologists who think it right to defeat your enemies, and from consequentialists who think it beneficial to defeat their enemies.
Maybe you’re talking about moral relativism, which can be a meta-ethical position (what’s right or wrong depends on the context) as well as a normative theory.
Are you thinking of a situation where, for example, the bank robbers think it’s okay to pull heists, but they concede that it’s okay for the police to try to stop heists? And that they would do the same thing if they were police? Kind of like in Heat? Such a great movie.
Yeah, sort of. That’s basically the case for which faction membership is not in question and is not mutable.
The only time I’ve really heard it formalized is in Plato’s Republic where one of the naive interlocutors suggests that morality consists of “doing good to one’s friends and harm to one’s enemies”.
I don’t think it’s often explicitly stated or even identified as a premise; the only case in which I see it stated by people who understand what it means is when restrictionists bring it up in debates about immigration. Its opponents call it tribalism; what its proponents call it differs depending on what the in-group is.
I would classify it as a form of moral intuitionism. By the way, there are other ethical theories in addition to the three you mentioned. For example: contractarianism (though perhaps it’s a form of consequentialism), contractualism (maybe consequentialist or deontological), and various forms of moral intuitionism.
I often write things out to make them clear in my own mind. This works particularly well for detailed planning. Just as some people “don’t know what they think until they hear themselves say it”, I don’t know what I think until I write it down. (Fast typing is an invaluable skill.)
Sometimes I use the same approach to work out what I think, know or believe about a subject. I write a sort of evolving essay laying out what I think or know.
And now I wonder: how much of that is true for other people? For instance, when Eliezer set out to write the Sequences, did he already know or believe everything that is written in them? Or did he gradually discover what is in them as he wrote them? If he hadn’t known some of what is written, could he have discovered it via the process of trying to write? Or is intellectual reflection and working-out premises into conclusions experienced differently by other people?
Which part is “that”? The fact that you write things out to make them clearer in your mind or the fact that writing things out makes them clearer in your mind? I think the latter is true for many people but the former is an uncommon habit. I didn’t explicitly pick it up until after attending the January CFAR workshop.
It’s very much how I operate as well. Talking it out also works, but it needs to be the right kind of person at the right time, whereas writing pretty much always works.
Idle curiosity / possibility of post being deleted:
At one point in LessWrong’s past (some time in the last year, I think), I seem to recall replying to a post regarding matters of a basilisk nature. I believe that the post I replied to was along these lines:
Given that the information has been leaked, what is the point of continuing to post discussions of this matter?
I believe my response was along the lines of:
I hate to use silly reflective humor, but given that the information has been leaked, what is the point of censoring discussions of this matter?
At this time, I am unable to find these posts. Am I being paranoid, or was perhaps this thread deleted?
My tactic when trying to find this kind of reference is to use a user page search. If you can recall a suitable keyword then you should be able to find the discussion here. I couldn’t find anything based on ‘basilisk’ or ‘censor’, unfortunately.
What is EY thinking hiding this? Unless… he thinks it’s right or might be, but only if we… no, even then, it’s best dealt with as quietly as it would be if it were never touched. No one would be thinking about this if it were left open.
It was not hidden because of the basilisk, but because it was a reply to a −4 post. It is no longer invisible on the user page. You can test my claim by downvoting the parent to −4 and reloading that user page.
Please don’t. This feature is a heuristic for reducing low-quality clutter (in the global comments feed and on the post pages). Assuming it usually works, precommitting to upvoting hidden comments amounts to precommitting to reducing the average quality of the visible comments.
(The hidden comments are visible under the “comments” tab of user pages, just not under “overview”. Wei’s tool can be easily fixed to look at that page instead of “overview”.)
Thanks for telling me about this undocumented feature. Was there any way for me to learn about it other than yelling my head off? Is this the feedback you want to give?
PS—I’m not changing my actions until Wei’s tool stops invisibly failing.
PPS—here is the corresponding comments page on which the visibility of the particular comment does not seem to depend on the visibility of the parent.
I’m not changing my actions until Wei’s tool stops invisibly failing.
These things don’t seem related. Don’t express your frustration by randomly punishing the community.
(For example, if you believed that the hiding feature makes things worse, that might be a motivation to oppose it, although the method is anarchic, something like personally destroying draconian speed limit signs; but so far you haven’t indicated that there is any motivation at all.)
This is the supposed modus operandi of the admins (or maybe only EY): making such comments hard to find without deleting them. It has been mentioned here and there, and I am fairly sure I experienced a version of this recently, when the Open Thread feature on the sidebar stopped showing the latest comment for the duration of this (it could’ve been a coincidence, and it is a decent way to lessen the Streisand effect, so I don’t blame EY for it).
It can be found from your user page. Click the Comments tab, go to the bottom and click Next, and (currently) it will be on that page.
As far as I can tell, the Comments tab shows you all of your comments, but the Overview tab omits anything with an ancestor downvoted to −4 or below (and maybe also anything with a banned ancestor).
Deletion by the admins does not hide comments from either “overview” or “comments,” at least not today.
Please don’t use the word “ban” to refer to deletion of comments. It very often confuses people and makes them think users are being banned. Admins do it because their UI uses it, but that’s a terrible reason.
Attractive commentary is insightful and pithy, but forums do not accumulate pith. Forums bloat with redundant observations, misread remarks, and misunderstanding replies unless the community aggressively culls those comments.
Having your comment dismissed is unwelcoming and hurtful. Even if we know that downvotes shouldn’t be hurtful, they are.
Bob writes a comment that doesn’t carry its weight. Alice, a LW reader, can choose to up-vote, down-vote, or Dismiss Bob’s comment. Dismiss advises the community that a comment may not be worth reading, but does not notify the comment’s author (Bob).
Anyone who Dismissed Bob’s comment would see it as folded (or even removed).
Bob would not see his comment as folded.
Anyone else who didn’t Dismiss Bob’s comment might or might not see it folded, depending on their (“Don’t show me comments with a score less than x”) preferences.
For the purposes of karma folding, Alice’s Dismiss would count as if it were −1 karma. If Bob’s comment had a karma score of 1 and was Dismissed once, users who fold comments with scores of 0 would see Bob’s comment as folded.
If Bob’s comment were on Alice’s article, Alice’s Dismiss would count as if it were −3 karma for the purposes of comment folding. This allows article authors to quickly nip off-topic commentary.
If Bob Dismissed his own comment, it would also count as if it were −3 karma.
The total number of times Bob’s comment was Dismissed would be invisible to all users. If he really wanted to, Bob could set up a puppet account, vary the (“Don’t show me comments with a score less than x”) field until his comment was folded, and from that infer the number of dismisses on that comment. If Bob does that, I don’t particularly care if his feelings are hurt.
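To make the folding arithmetic concrete, here is a small sketch of how the effective score used for folding could be computed under this proposal (the weights come from the description above; the function and field names are invented):

```typescript
// Effective score used only to decide whether a comment is folded for a reader;
// displayed karma is untouched, and dismissal counts stay invisible to everyone.
interface DismissalInfo {
  ordinaryDismissals: number; // Dismisses by ordinary readers: -1 each
  authorDismissed: boolean;   // the article's author Dismissed it: -3
  selfDismissed: boolean;     // Bob Dismissed his own comment: -3
}

function foldingScore(karma: number, d: DismissalInfo): number {
  return karma
    - d.ordinaryDismissals
    - (d.authorDismissed ? 3 : 0)
    - (d.selfDismissed ? 3 : 0);
}

// The example from the proposal: karma 1 and one ordinary Dismiss gives an effective
// score of 0, so readers whose preferences fold score-0 comments would see it folded.
console.log(foldingScore(1, { ordinaryDismissals: 1, authorDismissed: false, selfDismissed: false })); // 0
```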
Upvoted for a good analysis of the problem, but I think the proposed solution would make the forum worse, not better—it makes the system more complex (more buttons, more states a comment can be in), more prone to abuse (dismissing as censorship), and more likely to generate drama and complaints about people abusing the feature even if they are not.
Comments don’t have to be “bad” to be worth hiding—they can just be “not very good” or “not very good anymore”. The fastest way to improve a document is to remove the least good parts, even if those parts aren’t “bad”. Many comments are necessary at the time, but fluffy afterwards (“By foo do you mean bar?”, “No, I meant baz, and have edited my original post to make that clear”, “OK, then I withdraw my objection”). If two people independently offer the same exact brilliant insight, we should still hide one of them. There is no shortage of times I’d like to hide a comment without discouraging or punishing the author.
That, in effect, sets up a parallel karma system. There is the normal karma, visible and both up- and downvotable. And then there is the karma of dismissal which is unseen and can only go down but never can go up.
Besides that, the system implies personal “hide-this” flags for all comments. The Dismiss button, then, does two things simultaneously: sets the hide-this flag for the comment and decreases the comment’s dismissal karma.
Researchers have found that people experiencing Nietzschean angst tend to cling to austere ethical codes, in the hopes of reorienting themselves.
That quote is from this Slate article—the article is mostly about social stigma surrounding mental illness.
The quote is plausible, in an untrustworthy common-sense kind of way. It also seems to align with my internal perspective of my moral life. Does anyone know if it is actually true? What research is out there?
EDIT: In case it isn’t clear, I’m asking if anyone knows anything about the (uncited) research mentioned in the quote. My intuition leans towards the quote being right. But I don’t know if I should trust that intuition, since intuitions are often unreliable and I have many reasons to distrust my intuitions specifically. So I’m looking for some amount of external verification.
I’m a CFAR alumnus looking to learn how to code for the very first time. When I met Luke Muehlhauser, he said that as far as skills go, coding is very good for learning quickly whether one is good at it or not. He said that Less Wrong has some resources for learning and assessing my own natural talent or skill for coding, and he told me to come here to find it.
So, where or what is this resource which will assess my own coding skills with tight feedback loops? Please and thanks.
I’ve set up a prediction tracking system for personal use. I’m assigning confidence levels to each prediction so I can check for areas of under- or over-confidence.
My question: If I predicted X, and my confidence in X changes, will it distort the assessment of my overall calibration curve if I make a new prediction about X at the new confidence level, keep the old prediction, and score both predictions later? Is that the “right” way to do this?
More generally, if my confidence in X fluctuates over time, does it matter at all what criterion I use for deciding when and how many predictions to make about X, if my purpose is to see if my confidence levels are well calibrated? (Assuming I’ve predetermined which X’s I want to eventually make predictions about)
My thinking is that a confidence level properly considers its own future volatility, and so it shouldn’t matter when I “sample” by making a prediction. But if I imagine a rule like: “Whenever your confidence level about X is greater than 90%, make two identical predictions instead of one”, it feels like I’m making some mistake.
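For concreteness, a minimal sketch of the bookkeeping involved (bucketing by nearest 10% is just one common choice, not a claim about the right way to score repeated predictions about the same X):

```typescript
// One record per prediction actually logged; if confidence in X changes and a new
// prediction is logged, X simply contributes two records to the tally.
interface Prediction {
  statement: string;
  confidence: number; // e.g. 0.9 for "90% sure this happens"
  cameTrue: boolean;  // filled in when the prediction resolves
}

function calibrationReport(predictions: Prediction[]): void {
  // Group predictions by confidence rounded to the nearest 10%, then compare the
  // claimed confidence with the observed frequency of being right in each bucket.
  const buckets = new Map<number, { total: number; correct: number }>();
  for (const p of predictions) {
    const key = Math.round(p.confidence * 10) / 10;
    const b = buckets.get(key) ?? { total: 0, correct: 0 };
    b.total += 1;
    if (p.cameTrue) b.correct += 1;
    buckets.set(key, b);
  }
  for (const [key, b] of [...buckets.entries()].sort((x, y) => x[0] - y[0])) {
    console.log(`claimed ~${(key * 100).toFixed(0)}%: right ${b.correct}/${b.total} times`);
  }
}
```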
If you ask “Does it matter?” the answer is probably: Yes.
How you query yourself and when has effects. The effects are likely to be complicated and you are unlikely to be fully aware of all of them.
When it comes to polling, the way you ask a question frequently affects the answers you get.
This has probably been mentioned before, but I didn’t feel like searching the entire comment archive of Less Wrong to find discussion on it: Can functionality be programmed into the website to sort the comments from posts from Overcoming Bias days by “Best” or at least “Top” (“New” would be nice as well!!)? Those posts are still open for commenting, and sometimes I find comments from years later more insightful. Plus, I’m sick and tired of scrolling through arguments with trolls.
And, given that this probably has been discussed before—why hasn’t it been done yet?
Running simulations with sentient beings is generally accepted as bad here at LW; yes or no?
If you assign a high probability of reality being simulated, does it follow that most people with our experiences are simulated sentient beings?
I don’t have an opinion yet, but I find the combination of answering yes to both questions extremely unsettling. It’s like the whole universe conspires against your values. Surprisingly, each idea encountered by itself doesn’t seem too bad. It’s being simultaneously against simulating sentient beings and believing that most sentient beings are probably simulated that really makes it disturbing.
What’s bad about running simulations with sentient beings? (Nonperson Predicates is about inadvertently running simulations with sentient beings and then killing them because you’re done with the simulation.)
There’s nothing inherently wrong with simulating intelligent beings, so long as you don’t make them suffer. If you simulate an intelligent being and give it a life significantly worse than you could, well, that’s a bit ethically questionable. If we had the power to simulate someone, and we chose to simulate him in a world much like our own, including all the strife, trouble, and pain of this world, when we could have just as easily simulated him in a strictly better world, then I think it would be reasonable to say that we, the simulators, are morally responsible for all that additional suffering.
What’s bad about running simulations with sentient beings?
Considering the avoidance of inadvertently running simulations and then killing them because we’re done, I suppose you are right in that it doesn’t necessarily have to be a bad thing. But now how about this question:
If one believes there is a high probability of living in a simulated reality, must it mean that those running our simulation do not care about Nonperson Predicates, since there is clearly suffering and we are sentient? If so, that is slightly disturbing.
Why? I don’t feel like I have a good grasp of the space of hypotheses about why other people might want to simulate us, and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much more additional information.
...and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much more additional information.
It seems that our simulators are at the very least indifferent, if not negligent, in terms of our values; there have been 100 billion people that have lived before us and some have lived truly cruel and tortured lives. If one is concerned about Nonperson Predicates, in which an AI models a sentient you trillions of times over just to kill you when it is done, wouldn’t you also be concerned about simulations that model universes of sentient people that die and suffer?
I suppose we can’t do much about it anyway, but it’s still an interesting thought that if one has values that reflect either ygert’s comments or Nonperson Predicates, and they wish to always want to want these values, then the people running our simulation are indifferent to our values.
Interestingly, all this thought has changed my credence ever so slightly towards Nick Bostrom’s second of three possibilities regarding the simulation argument, that is:
… (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;…
In this video Bostrom states ethical concerns as a possible reason why a human-level civilization would not carry out simulations. These are the same kinds of concerns as that of Nonperson Predicates and ygert’s comments.
I think you need to differentiate between “physical” simulations and “VR” simulations. In a physical simulation, the only way of arriving at a universe state is to compute all the states that precede it.
1 - Depends what you mean by simulation—maintaining ems who think they’re in meat bodies? That’s dishonest at least, but I could see cases for certain special cases being a net good. Creating a digital pocket universe? That’s inefficient, but that inefficiency could end up being irrelevant. Any way you come at it, the same usual ethics regarding creating people apply, and those generally boil down to ‘it’s a big responsibility’ (cf. pregnancy)
2 - I don’t, but if you think so, then obviously yes. I mean, unless you think reality contains even more copies of us than the simulation. That seems a bit of a stretch.
I’ve decided to start a blog, and I kind of like the name “Tin Vulcan”, but I suspect that would be bad PR. Thoughts? (I don’t intend it to be themed, but I would expect most of the posts to be LW-relevant.)
(Name origins: Fbzr pbzovangvba bs “orggre guna n fgenj ihypna” naq gur Gva Jbbqzna.)
At least personally, I don’t pay very much attention to the titles of blogs: what matters is the content of the articles. So as long as your title isn’t something like “Adolf Hitler is my idol”, it probably doesn’t matter very much. (But I’m generalizing from my own experience, so if someone feels otherwise, please say so.)
I assume prominent Star Trek terms used in a nonfiction context will connote bad superficial pop philosophy and lazy science journalism, so I’d prefer something different.
Hm. I feel like I’m not particularly worried about those connotations, though maybe I should be. I’m more worried about connoting “thinks Vulcans have the right idea” and/or “thinks he is as logical as a Vulcan”.
It also occurs to me that having watched essentially no Star Trek, my model of a straw Vulcan is really more of a straw straw Vulcan, and that seems bad.
Currently leaning towards “picking a different title if I come up with one soon-ish”.
I would be very hesitant to invoke a fictional philosophical concept I wasn’t familiar with. You are invoking related concepts and ideas, and your unfamiliarity with the source material could easily cause readers who are familiar with that material to misread your message.
I’ve heard the idea of adaptive screen brightness mentioned here a few times. I know fluxgui does this on Linux, and it seems that Windows 7 and 8 come with something equivalent built in.
One of my computers runs Windows XP; how do I get it to lower its brightness automatically during late hours?
Socks: Traditionally I’ve worn holeproof explorers. Last time I went shopping for new socks, I wanted to try something new but was overwhelmed by choice and ended up picking some that turned out to be rather bad. My holeproofs and the newer ones are both coming to the end of their lives, and I’ll need to replace them all soon. Where should I go to learn about what types of sock would be best?
A quick google for best socks or optimal socks leads me to lots of shops, and pages for sports socks, and pages for sock fashion, but nothing about picking out a comfortable, not-smelly sock that doesn’t develop holes quickly or lose fit. I suppose, failing all else, I could just pick my way through amazon reviews, but I thought someone here might be able to give me some pointers in the right direction?
That the sweat your skin produces adheres to the fibers of the fabric and is redistributed throughout the fabric. It’s a real effect, but the term is often used imprecisely.
It doesn’t mean that the sweat is all drawn through and out via evaporation (though it may evaporate) and it doesn’t mean that you won’t feel moisture on your skin, though you may feel less than you would otherwise.
You do understand that “optimality” for socks can differ a great deal, right? It depends on the intended usage (e.g. backpacking socks are rather different from dress socks), your particular idiosyncrasies (e.g. how strongly do your feet sweat), personal preferences (e.g. do you care how soft your socks are), etc.
My approach to socks is a highly sophisticated simulated annealing-type algorithm for efficient search in sock-space:
(1) Pick a pair of socks which looks okay
(2) Wear them for a bit
(3a) If you don’t like them, discard and goto (1)
(3b) If you do like them, buy more (or close equivalents) until you’re bored with them, then goto (1)
I’m happy with goldtoe cotton socks for durability (easy to measure) and comfort, but I’m not especially picky about socks. What makes a sock comfortable for you?
I would appreciate some advice. I’ve been trying to decide what degree to get. I’ve already taken a bunch of general classes and now I need to decide what to focus on. There are many fields that I think I would enjoy working in, such as biotechnology, neuroscience, computer science, molecular manufacturing, alternative energy, etc. Since I’m not sure what I want to go into, I was thinking of getting a degree with a wide range of applications, such as physics or math. I plan on improving my programming skills in my spare time, which should widen my prospects.
One goal I have is to donate to various organizations. One area of study I was considering was gerontology, working at SENS, but I learned that they already have a large pool of researchers, and their main bottleneck is funding. Other areas I want to donate to are MIRI and GiveWell.
Deciding to stop “punishing” behavior (which usually isn’t much fun for either of you, though the urge to punish is ingrained). It’s certainly a useful thing to be able to do.
Does anyone have a working definition of “forgiveness”?
What the (emotional) decision to refrain from further vengeance (often) feels like from the inside.
Given that definition, do you find it to be a useful thing to do?
Sometimes. Certainly not all the time. Tit-for-tat with a small amount of forgiveness often performs well. Note that tit-for-tat (the part where, after the other defects and then cooperates, you proceed to cooperate) also sometimes counts as ‘forgiveness’ in common usage. As in many cases where game theory meets the instinctive emotional adaptations intended to handle some common games (like what feels like ‘blackmail’), the edges between the concepts are blurry.
That’s interesting, because I think I usually refrain from vengeance by default, but I do try to like … limit further interaction and stuff. Maybe that’s similar.
The way I was thinking about it is that there’s an internal feelings component—like, do you still feel sad and hurt and angry? Then there’s the updating on evidence component—are they likely to do that or similar things again? And then there’s also a behavioral piece, where you change something in the way you act towards/around them (and I’m not sure if vengeance or just running awaaay both count?) So I wasn’t sure which combination of those were part of “forgiveness” in common usage. It sounds like you’re saying internal + behavioral, right?
So, I do, and it’s informed by religion, but I’ll try to phrase it as LW-friendly as possible: to free somebody else of claims I have against them.
It’s not an emotional state I enter or something self-centered (the “I refuse to ruminate about what you did to me” pop song thing), though sometimes it produces the same effects. The psychological benefits are secondary, even though they’re very strong for me. I usually feel much more free and much more peaceful when I’ve forgiven someone, but forgiveness causes my state of mind, not vice versa. It’s like exercise: you did it and it was good even if you didn’t get your runner’s high.
Other useful aspects, from the most blandly general perspective: it’s allowed me to salvage relationships, and it’s increased the well-being of people I’ve forgiven. I’ve been the beneficiary of forgiveness from others, and it’s increased my subjective well-being enormously.
From a very specific, personal perspective: every time I experience or give forgiveness, it reminds me of divine forgiveness, and that reminder makes me happier.
There was a recent post or comment about making scientific journal articles more interesting by imagining the descriptions (of chemical interactions?) as being GIGANTIC SPECIAL EFFECTS. Anyone remember it well enough to give a link?
Does anyone else have problems with the appearance of LessWrong? My account is somehow at the bottom of the site and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn’t change my browser (Safari) or anything else.
Here are 2 screenshots:
Testing with safari 5.1.9, I find that it behaves nicely for me at all times, even if I squinch the window down really narrowly. What safari version are you using?
Rhodiola is apparently the bomb, but I’ve read somewhere that poor-quality supplements are a common problem with it. Here in CEE, the brand name sold in pharmacies is Vitango. Any experiences? http://www.vitango-stress.com/
I had an idea for Wei Dai’s “What is Probability, Anyway?,” but after actually typing it up I became rather unsure that I was actually saying anything new. Is this something that hasn’t been brought up before, or did I just write up a “durr”? (If it’s not, I’ll probably expand it into a full Discussion post later.)
The fundamental idea is, imagining a multiverse of parallel universes, define all identical conscious entities as a single cross-universal entity, and define probability of an observation E as (number of successors to the entity which observed E) / (total number of successors to the entity). Observations constrain the entity to particular universes, as do decisions, but in different ways; so that we occasionally find ourselves on either side of an observation, but never see ourselves move counter to a decision (except in the sense that what we decide as a brain is not always what we consciously decide.)
Fair warning: I attempted to formalize the concept, but as an undergrad non-math major, the result may look less than impressive to trained eyes. My apologies if this is the case.
The idea is as follows:
Define a conscious observer as some algorithm P(0). P(0) computes on available data and returns a new observer P(1) to act on new available data. Note that it is possible to generate a set of all possible outputs P(n); on human timescales and under the limitation of a human lifetime, it is plausible that such a set would match with the intuitive concept of a “character” who undergoes development.
Assume many-worlds. There are now a very large number of identical algorithms P(n) scattered across the many worlds. Since P(n)=P(n), no local experiment can distinguish between algorithms; therefore scratch the concept of them being separate entirely, and consider them all to be a single conscious entity P.
P does not know which universe it is in (by definition) to start. It can change this by making an observation: it updates itself on sensory data. Regardless of which result is recorded, P(n+1) has lesser measure than P(n): P(n+1) occupies precisely half of the universes P does. P(n+1) has learned more about the universe it is in, so its space of possible universes has diminished.
An example: consider the example of observing a fair coin—for example, observing the spin of an electron. All of P(n) runs the same algorithm: read the single bit corresponding to the spin, add bit to memory with a suitable wrapper: “Result of experiment: 0/1”. This is the new P(n+1), which regardless of result is a new entity. Let us designate successors to P(n) which observed a positive spin Q+, and those which observed a negative spin Q-. Since Q+ and Q- are not equal—they differ in one bit—they are not the same entity, even though they are both successors to (and are part of the same “character” as) P(n). Thus each of Q+/- observe only one version of the experiment.
As a lead-in to decision-making: consider what would happen if P(n) had precommitted to producing Q+, and never produced a Q-. Then the universe “Character P observes a negative spin” is inconsistent, and does not exist (barring, say, a random cosmic ray changing the algorithm.) Such a mind would never observe a spin-down event. This is distinct from quantum immortality/suicide—whereas a quantum suicide leaves behind a “world without you,” precommitting in this way means that a given world is inconsistent and never existed in the first place. Barring improbability, no successor of P(n) observes a spin-down event.
In this sense, we can define a decision as a “false observation.” P(n) decides to cause event E by choosing to only output successor functions in which event E is observed. (Note that this wording is excessively confusing; a brain which outputs a “move arm” signal is highly unlikely to be in a state where the arm does not move, and so can be said to have “decided” to move the arm.) A decision, then, as expected, also narrows the field of possible universes—but, at least hypothetically, in a purposeful manner.
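A toy rendering of the definition above in code rather than prose (representing observers as memory strings is invented purely to make the successor-counting concrete; it is not meant as a serious model of many-worlds):

```typescript
// An "observer state" is just an immutable memory record; identical records are
// the same entity P(n). Observing a bit yields one successor per possible result.
type Observer = string;

function observe(p: Observer, possibleResults: string[]): Observer[] {
  // Successors that recorded different results differ by at least one bit, so they
  // are distinct entities (Q+ vs Q- in the text above).
  return possibleResults.map(r => `${p} | observed: ${r}`);
}

// Probability of an observation E, per the definition in the text:
// (number of successors that observed E) / (total number of successors).
function probabilityOf(successors: Observer[], e: string): number {
  const matching = successors.filter(s => s.endsWith(`observed: ${e}`)).length;
  return matching / successors.length;
}

const p0: Observer = "P(0)";
console.log(probabilityOf(observe(p0, ["spin up", "spin down"]), "spin up")); // 0.5, the fair-coin case

// A "decision" in the text's sense: precommit to emitting only successors that
// observed E, so the complementary branch simply never exists for this entity.
console.log(probabilityOf(observe(p0, ["spin up"]), "spin down"));            // 0
```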
Given all the concerns about replication in psychology, it’s good to see that at least the most important studies get replicated: [1] [2] [3] [4] [5] [6] [7]. ;)
Before reading these, I recommend making predictions and then seeing how well-calibrated you were. I learned that V arneyl pubxrq ba zl sbbq ynhtuvat jura V “ernq” gurfr.
I’ve decided to live less on the internet (a.k.a. the world’s most popular superstimulus) and more in real life. I pledge to give $75 to MIRI if I make any more posts on this account or on my reddit account before the date of October 13 (two months from now).
On a related note, I was thinking about how to solve the problem of the constant temptation to waste time on the internet. For most superstimuli, the correct action is to cut yourself off completely, but that’s not really an option at all here. Even disregarding the fact that it would be devastatingly impractical in today’s world, the internet is an instant connection to all the information in the world, making it incredibly useful. Ideally one would use the internet purely instrumentally—you would have an idea of what you want to do, open up the browser, do it, then close the browser.
To that end, I have an idea for a Chrome extension. You would open up the browser, and a pop-up would appear prompting you to type in your reason for using the internet today. Then, your reason would be written in big black letters at the top of the page while you’re browsing, and only go away when you close Chrome. This would force you to remain focused on whatever you were doing, and when you notice that you’ve fulfilled that purpose and are now just checking your email for no reason, that would be your clue to close the browser and do something else.
I don’t think anything like this exists yet. I might try to make it myself—I don’t have that much coding experience, but it seems like it could be relatively easy.
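For what it’s worth, a minimal sketch of the content-script half might look something like this (all names are mine; it assumes a Manifest V3 extension whose manifest injects the script on all pages and requests the “storage” permission):

```typescript
// content-script.ts: a rough sketch of the "state your purpose" extension idea.
// Assumptions (not from the original post): Manifest V3, script injected on
// <all_urls>, "storage" permission granted, @types/chrome available.

const BANNER_ID = "browsing-purpose-banner";

function showBanner(reason: string): void {
  if (document.getElementById(BANNER_ID)) return; // already shown on this page
  const banner = document.createElement("div");
  banner.id = BANNER_ID;
  banner.textContent = `Purpose: ${reason}`;
  banner.style.cssText =
    "position:fixed;top:0;left:0;width:100%;z-index:999999;" +
    "background:#fff;color:#000;font-size:22px;font-weight:bold;" +
    "text-align:center;padding:4px;border-bottom:2px solid #000;";
  document.body.prepend(banner);
}

// Ask once for the purpose of this browsing session, store it, and show it
// in big letters on every page until the stored value is cleared.
// (Clearing it automatically when the browser closes would need something
// like chrome.storage.session instead; this sketch keeps it simple.)
chrome.storage.local.get("reason", (items) => {
  let reason: string = items.reason ?? "";
  if (!reason) {
    reason =
      window.prompt("What is your reason for using the internet right now?") ?? "";
    chrome.storage.local.set({ reason });
  }
  if (reason) showBanner(reason);
});
```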
Perhaps a stupid question, or, more accurately, not even a question—but I don’t understand this attitude. If you enjoy going on the Internet, why would you want to stop? If you don’t enjoy it, why would it tempt you? It reminds me, and I mean no offense by this, of the attitude addicts have towards drugs. But it really stretches plausibility to say that the Internet could be something like a drug.
Wanting is mediated by dopamine. Liking is mostly about opioids. The two features are (unfortunately) not always in sync.
It really doesn’t stretch plausibility. The key feature here is “has addictive potential”. It doesn’t matter to the brain whether the reward is endogenous dopamine released in response to a stimulus or something that came in a pill.
This is confusing to me. Intuitively, reward that is not wireheading is a good thing, and the Internet’s rewarding-ness is in complex and meaningful information, which is the exact opposite of wireheading. For the same reason, I’m confused about why tasty foods are not seen as a dangerous evil that needs to be escaped.
There are things that can too easily expand to fill all of your time while only being a certain level better than baseline. If you want to feel even better than you do when just browsing the internet, you need to not allow it to fill all your time. I also value doing DIFFERENT things, though not everyone does. It’s easier to start different activities (and the threshold cost of starting them is usually the biggest emotional price you pay) if you’re NOT already doing something fairly engrossing.
If your base state is 0 hedons (neutral) an hour, the internet is 5 hedons an hour, and going out dancing is maybe 1 hedon during travel time and 20 while doing it, it’s easier to go dancing if you’re deliberately cutting off your internet time, because you don’t have to take a 4-hedon-an-hour hit (dropping from 5 to 1) just to get out of the house.
Another concern is when people care about things other than direct hedons. If you have goals other than enjoying your time, then allowing internet to take up all your time sabotages those goals.
The brain appears to have separable capabilities for wanting something and enjoying something. There are definitely some things that I feel urges to do but don’t particularly enjoy at any point. A common example is lashing out at someone verbally—sometimes, especially on the internet, I have urges to be a jerk, but when I act on those urges it isn’t rewarding to me.
Aaanyhow, your sentence is also the worst argument :P
I guess I can’t identify with that feeling. I don’t think I’ve ever felt that way—I’ve never wanted something that I could have identified as “not rewarding” at the time that I wanted it (regardless of how long I reflected on it). The only times I wanted something but didn’t enjoy it were because of a lack of information.
Quick, everyone! If we can do it for less than $75, then let’s make LW super extra interesting to gothgirl420666 for the next two months. :D
Joking aside, perhaps an effective strategy for making yourself spend less time online is to reduce your involvement with online communities—for me at least, flashing inbox icons and commitments made to people on various forums (such as promising you’ll upload a certain file) are a big part of what makes me keep coming back to certain places I want to spend less time at. If it weren’t for that nagging feeling in the back of my mind, that I’ll lose social cred in some place if I don’t come back and act on my promises, or vanish for a few months and leave PMs unanswered, I’d be tempted to make a “vow of online silence” too.
I use AdBlock to block the “new messages” icon on LessWrong at my work.
I can imagine a site-blocking tool where you could select a browsing “mode”. Each mode would block different websites. When you open an unknown website, it would ask you to classify it.
Typical modes are “work” (you block everything not work-related) and “free time” (you may still want to block the largest time sinks), but maybe there could also be something like “a break from work” that would allow some fun but keep it within limits, for example only allowing programming-related blogs and debates.
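A rough sketch of the classification logic, with invented mode names, categories, and example sites (purely illustrative):

```typescript
// Hypothetical modes and site categories; the real tool would let users edit these.
type Mode = "work" | "free-time" | "break-from-work";

// Which site categories each mode allows; everything else gets blocked.
const allowedCategories: Record<Mode, string[]> = {
  "work": ["work", "reference"],
  "free-time": ["work", "reference", "programming-blogs", "fun"], // still omit the biggest sinks
  "break-from-work": ["programming-blogs"], // some fun, but within limits
};

// User-built classification of known sites; unknown sites trigger a prompt.
const siteCategory = new Map<string, string>([
  ["news.ycombinator.com", "programming-blogs"],
  ["reddit.com", "fun"],
]);

function checkSite(mode: Mode, hostname: string): "allow" | "block" | "ask-user" {
  const category = siteCategory.get(hostname);
  if (category === undefined) return "ask-user"; // ask the user to classify it
  return allowedCategories[mode].includes(category) ? "allow" : "block";
}
```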
checks
Congratulations!
Should we take Rhodiola rosea, which “extends the lifespans of fruit flies 24% and delays age-related loss in physical performance”?
There was a post on Slashdot today arguing that “Aging is a disease and we should try to defeat it or at least slow it down”.
The comments are full of deathism: many people apparently sincerely coming out in favour of not just death (limited lifespan) but aging and deterioration.
Everyone who doesn’t feel in their gut that many (most?) normal people truly believe aging and death are good, and will really try to stop you from curing it if they can, should go and read through all the comments there. It’s good rationality training if (like me) you haven’t ever discussed this in person with your friends (or if they all happened to agree). It’s similar to how someone brought up by and among atheists (again, me) may not understand religion emotionally without some interaction with it.
Someone marked the appeal to worse problems article on Wikipedia for prospective deletion, for lack of sourcing—it appears to mostly have been written from the TVTropes page. I’ve given it its proper name and added “whataboutery” as another name for it—but it needs more, and preferably from a suitably high-quality source.
A fact about industrial organization that recently surprised me:
Antimonopoly rules prevent competitors from coordinating. One exemption in the US is political lobbying: executives can meet at their political action committee. Joint projects in some industries are organized as for-profit companies owned by (nonprofit) political action committees.
My girlfriend taught me how to dive this past weekend. I’m 26. I had fully expected to go my entire life without learning how to dive, I guess because I unconsciously thought it was “too late” to learn, somehow. Now I’m wondering what other skills I never learned at the typical age and could just as easily learn now.
(if you’re looking for object-level takeaways, just start out with kneeling dives—they’re way easier and far less intimidating—then gradually try standing up more and more)
http://lesswrong.com/lw/m3/politics_and_awful_art/
It wasn’t necessarily supposed to be non-awful.
I am impressed how you managed to do a reasonable variation on that poem using almost solely rhymes on i/y (even if you had to reuse some words like ‘by’).
I found this much more amusing than it should have been.
I couldn’t find a place to mention this sort of thing at the wiki, so I’m mentioning it here.
The search box should be near the top of the page.
It’s one of the most valuable things on a lot of websites, especially wikis, and I don’t want to have to look for it.
Did that really change in the last 3 days? If so, impressive turnaround! And surprising that it’d change without any sort of discussion. Now I’m confused. Where was the search box showing up before?
What are the relative merits of using one’s real name vs. a pseudonym here?
When I first started reading LessWrong, I was working in an industry obsessed with maintaining very mainstream appearances, so I chose to go with a pseudonym. I have since changed industries and have no intention of going back, so my original reason for using a pseudonym is probably irrelevant now.
--Gene Wolfe, The Shadow of the Torturer
I haven’t read that book, but I hope the hero did not choose dust specks instead!
Don’t worry, he didn’t choose TBotNS’s version of dust specks.
I continue running into obstacles (largely-but-not-exclusively of an accessibility nature) when it comes to the major crowdfunding websites. It seems not to be just me; the major platforms (Kickstarter/Indiegogo) could stand to be much more screen reader-friendly, and the need for images (and strong urging to use videos) is an obstacle to any blind person seeking funding who doesn’t have easy access to sighted allies/minions.
My present thoughts are that I’d rather outsource setting up crowdfunding campaigns to someone for whom these would not be serious obstacles (said manager would probably be compensated with a cut of the funds).
What I don’t know is:
how to find/recruit someone willing to do this,
how likely it is they’d be satisfied with what I’d consider a reasonable cut of the funds from any given campaign, and
what sorts of legal arrangements would need to be made to protect against said manager just walking away with everything.
Can anyone hereabouts answer one or all of the above? (I am also curious whether the demand among blind developers/startups/etc. might be high enough that “crowdfunding manager for the blind” could be a profitable side job for someone with halfway decent marketing skill, but that’s not an easy thing to estimate.)
Here’s an interesting article that argues for using (GPL-protected) open source strategies to develop strong AI, and lays out reasons why AI design and opsec should be pursued more at the modular implementation level (where mistakes can be corrected based on empirical feedback) rather than attempted at the algorithmic level. I would be curious to see MIRI’s response.
I searched and it doesn’t look like anyone has discussed this criticism of LW yet. It’s rather condescending but might still be of interest to some: http://plover.net/~bonds/cultofbayes.html
I don’t think “condescending” accurately captures what is going on here. This seems to be politics being the mindkiller pretty heavily (ironically, one of the things they apparently think is stupid or hypocritical). They’ve apparently taken some of the, for lack of a better term, “right-wing” posts and used them as a general portrayal of LW. Heck, I’m in many ways in the same political/tribal group as this author and think most of what they said is junk. Examples include:
A variety of interesting links are included in that paragraph. Most noteworthy, every word in “extended empty ‘rationalist’ bloviating” links to a different essay, with “rationalist” linking to this, which criticizes rhetorical arguments made throughout the standard political spectrum.
A number of essays are quoted in ways that look either out of context or consistent with maximally uncharitable interpretations. The section about race and LW easily falls into this category (and is, as far as I can tell, particularly ironic given that there has been more explicit racism on LW before).
Similarly, while I stand fairly strongly as one of the people here who really don’t like PUA, it is clear that calling it a “de facto rape methodology” is simply inaccurate.
At least a few points bordered on almost satire of a certain sort of argument. One obvious paragraph in that regard is:
I’ll let others who want to spend the time analyze everything that’s off about that paragraph.
Another fun bit:
Apparently Thiel is to certain groups the same sort of boogeyman that the Koch brothers are to much of the left and George Soros is to some on the right. I find it interesting to see one of the rare examples of someone actually using “PC” as a positive term, and it actually made me briefly wonder if this was satire.
There are a handful of marginally valid points here, but they get completely lost in the noise, and they aren’t by and large original points. I do think, however, that some aspects of the essay might make for interesting thought exercises, such as explaining everything that’s wrong with footnote 2.
Someone using ‘Political Correctness’ as a positive term?
(Warning: Political comedy)
Perhaps by “which became notorious for its anti-PC stance and its defences of hate speech” he means “notorious for being so anti-PC that it defended hate speech”? I think that’s pretty accurate. (Bond’s weak tea 2011 link doesn’t defend hate speech, but argues that it is often a false label.)
I’d take the author’s “anti-PC” to mean something like “seeing ‘political correctness’ everywhere, and hating it.”
For instance, there are folks who respond to requests for civil and respectful behavior on certain subjects — delivered with no force but the force of persuasion — as if those requests were threats of violence, and as if resistance to those requests were the act of a bold fighter for freedom of speech.
My English teacher used “Political Correctness” as a positive term, which surprised me too, though I guess in the context of a teacher who’s supposed to avoid discussing politics in class it does make sense to use it as an explicit norm.
I’d more go with “incoherent ranting” than “condescending”.
Worthless ranting.
His footnote 3 is particularly telling:
In other words, this is soup of the soup.
Looking at the other articles on his site, they’re all like that. I would say that this is someone who does not know how to learn.
I once read a chunk of Bond’s site after running into that page; after noting its many flaws (including a number of errors of fact, like claiming Bayes tried to prove God using his theorem when, IIRC, that was Richard Price, and he didn’t use a version of Bayes’ theorem), I was curious what the rest was like.
I have to say, I have never read video game reviews which were quite so… politicized.
It’s written by a mindkilled idiot whose only purpose in life seems to be finding the least charitable interpretation of people he hates, which probably means everyone except his friends, assuming he has any. There are millions of such idiots out there, and the only difference is that this one mentioned LW in one of his articles. We shouldn’t feed the trolls just because they decided to pay attention to us.
Starting with the very first paragraph… uhm, strawmanning mixed with plain lies… why exactly should anyone spend their limited time reading this?
Does anyone have any opinions on this paper? [http://arxiv.org/pdf/1207.4913.pdf]
It is a proof of Bell’s Inequality using counterfactual language. The idea is to explore links between counterfactual causal reasoning and quantum mechanics. Since these are both central topics on Less Wrong, I’m guessing there are people on this website who might be interested.
I don’t have any background in Quantum Mechanics, so I cannot evaluate the paper myself, but I know two of the authors and have very high regard for their intelligence.
Seems solid to me. Not exactly surprising, but very clean.
Does anybody think that there might be another common metaethical theory to go along with deontology, consequentialism, and virtue ethics? I think it’s only rarely codified, and usually used implicitly or as a folk theory: morality consists of bettering one’s own faction and defeating opposing factions. As far as I can see it’s most common in radical politics of all stripes. Is this distinguishable from very myopic consequentialism or mere selfishness?
It depends on the reasons why one considers it right to benefit one’s own faction and defeat opposing ones, I guess. Or are you proposing that this is just taken as a basic premise of the moral theory? If so, I’m not sure you can justifiably attribute it to many political groups. I doubt a significant number of them want to defeat opposing factions simply because they consider that the right thing to do (irrespective of what those factions believe or do).
Also, deontology, consequentialism and virtue ethics count as object-level ethical theories, I think, not meta-ethical theories. Examples of meta-ethical theories would be intuitionism (we know what is right or wrong through some faculty of moral intuition), naturalism (moral facts reduce to natural facts) and moral skepticism (there are no moral facts).
Okay… wow. I somehow managed to get that wrong for all this time? Oh dear.
This one isn’t ever formal and rarely meta-ed about, and it’s far from universal in highly combative political groups. But it seems distinct from deontologists who think it right to defeat your enemies, and from consequentialists who think it beneficial to defeat their enemies.
Maybe you’re talking about moral relativism, which can be a meta-ethical position (what’s right or wrong depends on the context) as well as a normative theory.
Are you thinking of a situation where, for example, the bank robbers think it’s okay to pull heists, but they concede that it’s okay for the police to try to stop heists? And that they would do the same thing if they were police? Kind of like in Heat? Such a great movie.
Yeah, sort of. That’s basically the case for which faction membership is not in question and is not mutable.
The only time I’ve really heard it formalized is in Plato’s Republic where one of the naive interlocutors suggests that morality consists of “doing good to one’s friends and harm to one’s enemies”.
I don’t think it’s often explicitly stated or even identified as a premise: the only case in which I see it stated by people who understand what it means is when restrictionists bring it up in debates about immigration. Its opponents call it tribalism; what its proponents call it differs depending on what the in-group is. I would classify it as a form of moral intuitionism. By the way, there are other ethical theories in addition to the three you mentioned. For example: contractarianism (though perhaps it’s a form of consequentialism), contractualism (maybe consequentialist or deontological), and various forms of moral intuitionism.
I often write things out to make them clear in my own mind. This works particularly well for detailed planning. Just as some people “don’t know what they think until they hear themselves say it”, I don’t know what I think until I write it down. (Fast typing is an invaluable skill.)
Sometimes I use the same approach to work out what I think, know or believe about a subject. I write a sort of evolving essay laying out what I think or know.
And now I wonder: how much of that is true for other people? For instance, when Eliezer set out to write the Sequences, did he already know or believe everything that is written in them? Or did he gradually discover what is in them as he wrote them? If he hadn’t known some of what is written, could he have discovered it via the process of trying to write? Or is intellectual reflection and working-out premises into conclusions experienced differently by other people?
Which part is “that”? The fact that you write things out to make them clearer in your mind or the fact that writing things out makes them clearer in your mind? I think the latter is true for many people but the former is an uncommon habit. I didn’t explicitly pick it up until after attending the January CFAR workshop.
I do this by talking to myself. It attracts odd looks from loved ones, but it works for me so I’m going to keep doing it, dammit.
It’s very much how I operate as well. Talking it out also works, but it needs to be the right kind of person at the right time, whereas writing pretty much always works.
Idle curiosity / possibility of post being deleted:
At one point in LessWrong’s past (some time in the last year, I think), I seem to recall replying to a post regarding matters of a basilisk nature. I believe that the post I replied to was along these lines:
I believe my response was along the lines of:
At this time, I am unable to find these posts. Am I being paranoid, or was perhaps this thread deleted?
My tactic when trying to find this kind of reference is to use a user page search. If you can recall a suitable keyword, then you should be able to find the discussion here. I couldn’t find anything based on ‘basilisk’ or ‘censor’, unfortunately.
After more work than I would honestly prefer to put into such an effort, I eventually found this post:
http://lesswrong.com/lw/goe/open_thread_february_1528_2013/8iuo
As a curiosity, this post cannot be found from my user-page, nor can it be found via Wei Dai’s app. Fascinating.
What is EY thinking hiding this? Unless… he thinks it’s right or might be, but only if we… no, even then, it’s best dealt with as quietly as it would be if it were never touched. No one would be thinking about this if it were left open.
It was not hidden because of the basilisk, but because it was a reply to a −4 post. It is no longer invisible on the user page. You can test my claim by downvoting the parent to −4 and reloading that user page.
I COMMIT TO UPVOTING EVERY HIDDEN COMMENT.
Please don’t. This feature is a heuristic for reducing low-quality clutter (in the global comments feed and on the post pages). Assuming it usually works, precommitting to upvoting hidden comments amounts to precommitting to reducing the average quality of the visible comments.
(The hidden comments are visible under the “comments” tab of user pages, just not under “overview”. Wei’s tool can be easily fixed to look at that page instead of “overview”.)
Thanks for telling me about this undocumented feature. Was there any way for me to learn about it other than yelling my head off? Is this the feedback you want to give?
PS—I’m not changing my actions until Wei’s tool stops invisibly failing.
PPS—here is the corresponding comments page on which the visibility of the particular comment does not seem to depend on the visibility of the parent.
These things don’t seem related. Don’t express your frustration by randomly punishing the community.
(For example, if you believed that the hiding feature makes things worse, that might be a motivation to oppose it, although the method is anarchic, something like personally destroying draconian speed limit signs; but so far you haven’t indicated that there is any motivation at all.)
This is the supposed modus operandi of the admins (or maybe only EY): making such comments hard to find without deleting them. It has been mentioned here and there, and I am fairly sure I experienced a version of this recently, when the latest-comment feature for the Open Thread on the sidebar stopped updating for the duration of this. (It could have been a coincidence, and it is a decent way to lessen the Streisand effect, so I don’t blame EY for it.)
It can be found from your user page. Click the Comments tab, go to the bottom and click Next, and (currently) it will be on that page.
As far as I can tell, the Comments tab shows you all of your comments, but the Overview tab omits anything with an ancestor downvoted to −4 or below (and maybe also anything with a banned ancestor).
Deletion by the admins does not hide comments from either “overview” or “comments,” at least not today.
Please don’t use the word “ban” to refer to deletion of comments. It very often confuses people and makes them think users are being banned. Admins do it because their UI uses it, but that’s a terrible reason.
Problem:
Attractive commentary is insightful and pithy, but forums do not accumulate pith. Forums bloat with redundant observations, misread remarks, and replies based on misunderstandings unless the community aggressively culls those comments.
Having your comment dismissed is unwelcoming and hurtful. Even if we know that downvotes shouldn’t be hurtful, they are.
Inspiration:
This thread.
Proposal:
Dismiss comment button
Bob writes a comment that doesn’t carry its weight. Alice, a LW reader, can choose to up-vote, down-vote, or Dismiss Bob’s comment. Dismiss advises the community that a comment may not be worth reading, but does not notify the comment’s author (Bob).
Anyone who Dismissed Bob’s comment would see it as folded (or even removed).
Bob would not see his comment as folded.
Anyone else who didn’t dismiss Bob’s comment might or might not see it folded, depending on their (“Don’t show me comments with a score less than x”) preferences.
For the purposes of karma folding, Alice’s dismiss would count as if it were −1 karma. If Bob’s comment had a karma score of 1 and was Dismissed once, users who fold comments with scores of 0 would see Bob’s comment as folded.
If Bob’s comment were on Alice’s article, Alice’s Dismiss would count as if it were −3 karma for the purposes of comment folding. This allows article authors to quickly nip off-topic commentary in the bud.
If Bob dismissed his own comment, it would also count as if it were −3 karma.
The total number of times Bob’s comment was Dismissed would be invisible to all users. If he really wanted to, Bob could set up a puppet account, vary the (“Don’t show me comments with a score less than x”) field until his comment was folded, and from that infer the number of dismisses on that comment. If Bob does that, I don’t particularly care if his feelings are hurt.
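For concreteness, the folding arithmetic described above might look something like this (field names are mine, and ordinary readers’ dismissals are tracked separately from the article author’s and the self-dismissal):

```typescript
// A sketch of the proposed folding arithmetic; the displayed karma stays untouched.
interface CommentVotes {
  karma: number;                      // ordinary up/down votes, shown as usual
  dismissals: number;                 // ordinary readers' dismissals, -1 each
  dismissedByArticleAuthor: boolean;  // counts as -3
  dismissedBySelf: boolean;           // counts as -3
}

// Effective score used only for each reader's "fold comments below x" setting.
function foldingScore(c: CommentVotes): number {
  return (
    c.karma -
    c.dismissals -
    (c.dismissedByArticleAuthor ? 3 : 0) -
    (c.dismissedBySelf ? 3 : 0)
  );
}

// Example from the proposal: karma 1 and one dismissal gives a folding score
// of 0, which readers whose folding threshold is 0 would see as folded.
console.log(foldingScore({
  karma: 1,
  dismissals: 1,
  dismissedByArticleAuthor: false,
  dismissedBySelf: false,
})); // 0
```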
Upvoted for a good analysis of the problem, but I think the proposed solution would make the forum worse, not better—it makes the system more complex (more buttons, more states a comment can be in), and more prone to abuses (dismissing as censorship), and drama and complaints about people abusing the feature even if they are not.
negative karma that doesn’t discourage the poster from making further similar comments is almost pointless.
Comments don’t have to be “bad” to be worth hiding—they can just be “not very good” or “not very good anymore”. The fastest way to improve a document is to remove the least good parts, even if those parts aren’t “bad”. Many comments are necessary at the time, but fluffy afterwards (“By foo do you mean bar?”, “No, I meant baz, and have edited my original post to make that clear”, “OK, then I withdraw my objection”). If two people independently offer the same exact brilliant insight, we should still hide one of them. There is no shortage of times I’d like to hide a comment without discouraging or punishing the author.
That, in effect, sets up a parallel karma system. There is the normal karma, visible and both up- and downvotable. And then there is the karma of dismissal which is unseen and can only go down but never can go up.
Besides that, the system implies personal “hide-this” flags for all comments. The Dismiss button, then, does two things simultaneously: sets the hide-this flag for the comment and decreases the comment’s dismissal karma.
That would be the little minimise button in the corner.
That quote is from this Slate article—the article is mostly about social stigma surrounding mental illness.
The quote is plausible, in an untrustworthy common-sense kind of way. It also seems to align with my internal perspective of my moral life. Does anyone know if it is actually true? What research is out there?
EDIT: In case it isn’t clear, I’m asking if anyone knows anything about the (uncited) research mentioned in the quote. My intuition leans towards the quote being right. But I don’t know if I should trust that intuition, since intuitions are often unreliable and I have many reasons to distrust my intuitions specifically. So I’m looking for some amount of external verification.
I’m a CFAR alumnus looking to learn how to code for the very first time. When I met Luke Muehlhauser, he said that as far as skills go, coding is very good for learning quickly whether one is good at it or not. He said that Less Wrong has some resources for learning and assessing my own natural talent or skill for coding, and he told me to come here to find it.
So, where or what is this resource which will assess my own coding skills with tight feedback loops? Please and thanks.
Checking for the Programming Gear contains a discussion of one really strong version of Luke’s claim. The comments on I Want to Learn Programming point to several good ways to start learning and assessing your talent. Also, the programming thread compiles a bunch of programming resources that may be useful.
Here.
I’ve set up a prediction tracking system for personal use. I’m assigning confidence levels to each prediction so I can check for areas of under- or over-confidence.
My question: If I predicted X, and my confidence in X changes, will it distort the assessment of my overall calibration curve if I make a new prediction about X at the new confidence level, keep the old prediction, and score both predictions later? Is that the “right” way to do this?
More generally, if my confidence in X fluctuates over time, does it matter at all what criterion I use for deciding when and how many predictions to make about X, if my purpose is to see if my confidence levels are well calibrated? (Assuming I’ve predetermined which X’s I want to eventually make predictions about)
My thinking is that a confidence level properly considers its own future volatility, and so it shouldn’t matter when I “sample” by making a prediction. But if I imagine a rule like: “Whenever your confidence level about X is greater than 90%, make two identical predictions instead of one”, it feels like I’m making some mistake.
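For concreteness, the kind of bucketing I have in mind looks something like this (a sketch with my own names; not a claim about the “right” scoring rule):

```typescript
// Group resolved predictions into confidence buckets and compare the stated
// confidence with the observed frequency of being right in each bucket.
interface Prediction {
  claim: string;
  confidence: number;   // stated probability, 0..1
  cameTrue?: boolean;   // filled in once the prediction resolves
}

function calibrationTable(predictions: Prediction[], bucketWidth = 0.1) {
  const buckets = new Map<number, { total: number; correct: number }>();
  for (const p of predictions) {
    if (p.cameTrue === undefined) continue;               // skip unresolved
    const bucket = Math.floor(p.confidence / bucketWidth); // e.g. 0.92 -> bucket 9
    const b = buckets.get(bucket) ?? { total: 0, correct: 0 };
    b.total += 1;
    if (p.cameTrue) b.correct += 1;
    buckets.set(bucket, b);
  }
  return [...buckets.entries()].map(([bucket, b]) => ({
    statedAround: (bucket + 0.5) * bucketWidth, // bucket midpoint
    observed: b.correct / b.total,
    n: b.total,
  }));
}
```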
If you ask “Does it matter?” the answer is probably: Yes.
How and when you query yourself has effects. The effects are likely to be complicated, and you are unlikely to be fully aware of all of them. In polling, it frequently happens that the way you ask a question changes the answers you get.
This has probably been mentioned before, but I didn’t feel like searching the entire comment archive of Less Wrong to find discussion on it: Can functionality be programmed into the website to sort the comments from posts from Overcoming Bias days by “Best” or at least “Top” (“New” would be nice as well!!)? Those posts are still open for commenting, and sometimes I find comments from years later more insightful. Plus, I’m sick and tired of scrolling through arguments with trolls.
And, given that this probably has been discussed before—why hasn’t it been done yet?
Just a few questions for some of you:
Running simulations with sentient beings is generally accepted as bad here at LW; yes or no?
If you assign a high probability of reality being simulated, does it follow that most people with our experiences are simulated sentient beings?
I don’t have an opinion yet, but I find the combination of answering yes to both questions extremely unsettling. It’s like the whole universe conspires against your values. Surprisingly, each idea encountered by itself doesn’t seem too bad. It’s simultaneously being against simulating sentient beings and believing that most sentient beings are probably simulated that really makes it disturbing.
What’s bad about running simulations with sentient beings? (Nonperson Predicates is about inadvertently running simulations with sentient beings and then killing them because you’re done with the simulation.)
There’s nothing inherently wrong with simulating intelligent beings, so long as you don’t make them suffer. If you simulate an intelligent being and give it a life significantly worse than you could, well, that’s a bit ethically questionable. If we had the power to simulate someone, and we chose to simulate him in a world much like our own, including all the strife, trouble, and pain of this world, when we could have just as easily simulated him in a strictly better world, then I think it would be reasonable to say that you, the simulator, are morally responsible for all that additional suffering.
Agree, but I’d like to point out that “just as easily” hides some subtlety in this claim.
Considering the avoidance of inadvertently running simulations and then killing them because we’re done, I suppose you are right that it doesn’t necessarily have to be a bad thing. But now how about this question:
If one believes there is a high probability of living in a simulated reality, must it mean that those running our simulation do not care about Nonperson Predicates, since there is clearly suffering and we are sentient? If so, that is slightly disturbing.
Why? I don’t feel like I have a good grasp of the space of hypotheses about why other people might want to simulate us, and I see no particular reason to promote hypotheses involving those people being negligent rather than otherwise without much more additional information.
It seems that our simulators are at the very least indifferent, if not negligent, in terms of our values; there have been 100 billion people who lived before us, and some have lived truly cruel and tortured lives. If one is concerned about Nonperson Predicates, in which an AI models a sentient you trillions of times over just to kill you when it is done, wouldn’t you also be concerned about simulations that model universes of sentient people who die and suffer?
I suppose we can’t do much about it anyway, but it’s still an interesting thought that if one has values that reflect either ygert’s comments or Nonperson Predicates, and one wishes to always want to want these values, then the people running our simulation are indifferent to our values.
Interestingly, all this thought has changed my credence ever so slightly towards Nick Bostrom’s second of three possibilities regarding the simulation argument, that is:
In this video Bostrom states ethical concerns as a possible reason why a human-level civilization would not carry out simulations. These are the same kinds of concerns as that of Nonperson Predicates and ygert’s comments.
If we are, in fact, running in a simulation, there’s little reason to think this is true.
I think you need to differentiate between “physical” simulations and “VR” simulations. In a physical simulation, the only way of arriving at a universe state is to compute all the states that precede it.
1 - Depends what you mean by simulation—maintaining ems who think they’re in meat bodies? That’s dishonest at least, but I could see cases for certain special cases being a net good. Creating a digital pocket universe? That’s inefficient, but that inefficiency could end up being irrelevant. Any way you come at it, the same usual ethics regarding creating people apply, and those generally boil down to ‘it’s a big responsibility’ (cf. pregnancy)
2 - I don’t, but if you think so, then obviously yes. I mean, unless you think reality contains even more copies of us than the simulation. That seems a bit of a stretch.
I’ve decided to start a blog, and I kind of like the name “Tin Vulcan”, but I suspect that would be bad PR. Thoughts? (I don’t intend it to be themed, but I would expect most of the posts to be LW-relevant.)
(Name origins: Fbzr pbzovangvba bs “orggre guna n fgenj ihypna” naq gur Gva Jbbqzna.)
At least personally, I don’t pay very much attention to the titles of blogs: what matters is the content of the articles. So as long as your title isn’t something like “Adolf Hitler is my idol”, it probably doesn’t matter very much. (But I’m generalizing from my own experience, so if someone feels otherwise, please say so.)
I assume prominent Star Trek terms used in a nonfiction context will connote bad superficial pop philosophy and lazy science journalism, so I’d prefer something different.
Hm. I feel like I’m not particularly worried about those connotations, though maybe I should be. I’m more worried about connoting “thinks Vulcans have the right idea” and/or “thinks he is as logical as a Vulcan”.
It also occurs to me that having watched essentially no Star Trek, my model of a straw Vulcan is really more of a straw straw Vulcan, and that seems bad.
Currently leaning towards “picking a different title if I come up with one soon-ish”.
I would be very hesitant to invoke a fictional philosophical concept I wasn’t familiar with. You are invoking related concepts and ideas, and your unfamiliarity with the source material could easily cause readers who are familiar with that material to misread your message.
In short, you are setting yourself up for long inferential distance, which I would not recommend.
I’ve heard the idea of adaptive screen brightness mentioned here a few times. I know fluxgui does it on Linux, and it seems that Windows 7 and 8 come with it built in. One of my computers runs Windows XP; how do I get it to lose brightness automatically during late hours?
f.lux exists for Windows as well.
This may help.
Socks: Traditionally I’ve worn holeproof explorers. Last time I went shopping for new socks, I wanted to try something new but was overwhelmed by choice and ended up picking some that turned out to be rather bad. My holeproofs and the newer ones are both coming to the end of their lives, and I’ll need to replace them all soon. Where should I go to learn about what types of sock would be best?
A quick google for best socks or optimal socks leads me to lots of shops, and pages for sports socks, and pages for sock fashion, but nothing about picking out a comfortable, not-smelly sock that doesn’t develop holes quickly or lose fit. I suppose, failing all else, I could just pick my way through amazon reviews, but I thought someone here might be able to give me some pointers in the right direction?
All you’d ever want to know about socks.
Perfect, just what I was looking for. Thanks.
On this topic, can anyone explain to me what “moisture wicking” means in concrete physical terms?
Capillary action.
That the sweat your skin produces adheres to the fibers of the fabric and is redistributed throughout the fabric. It’s a real effect, but the term is often used imprecisely.
It doesn’t mean that the sweat is all drawn through and out via evaporation (though it may evaporate) and it doesn’t mean that you won’t feel moisture on your skin, though you may feel less than you would otherwise.
You do understand that “optimality” for socks can differ a great deal, right? It depends on the intended usage (e.g. backpacking socks are rather different from dress socks), your particular idiosyncrasies (e.g. how strongly do your feet sweat), personal preferences (e.g. do you care how soft your socks are), etc.
My approach to socks is a highly sophisticated simulated annealing-type algorithm for efficient search in sock-space:
(1) Pick a pair of socks which looks okay
(2) Wear them for a bit
(3a) If you don’t like them, discard and goto (1)
(3b) If you do like them, buy more (or close equivalents) until you’re bored with them, then goto (1)
I’m happy with goldtoe cotton socks for durability (easy to measure) and comfort, but I’m not especially picky about socks. What makes a sock comfortable for you?
If you want warm socks, I like smartwools.
I would appreciate some advice. I’ve been trying to decide what degree to get. I’ve already taken a bunch of general classes and now I need to decide what to focus on. There are many fields that I think I would enjoy working in, such as biotechnology, neuroscience, computer science, molecular manufacturing, alternative energy, etc. Since I’m not sure what I want to go into, I was thinking of getting a degree with a wide range of applications, such as physics or math. I plan on improving my programming skills in my spare time, which should widen my prospects.
One goal I have is to donate to various organizations. One area of study I was considering was gerontology, working at SENS, but I learned that they already have a large pool of researchers, and their main bottleneck is funding. Other areas I want to donate to are MIRI and GiveWell.
Do any programmers or web developers have an opinion about getting training on Team Treehouse? Has anyone else done this?
Does anyone have a working definition of “forgiveness”? Given that definition, do you find it to be a useful thing to do?
Deciding to stop “punishing” behavior (which usually isn’t much fun for either of you, though the urge to punish is ingrained). It’s certainly a useful thing to be able to do.
What the (emotional) decision to refrain from further vengeance (often) feels like from the inside.
Sometimes. Certainly not all the time. Tit-for-tat with a small amount of forgiveness often performs well. Note that tit-for-tat (the part where, after the other defects and then cooperates, you proceed to cooperate) also sometimes counts as ‘forgiveness’ in common usage. As in many cases where game theory meets instinctive emotional adaptations intended to handle some common games (like what feels like ‘blackmail’), the edges between the concepts are blurry.
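For illustration, “tit-for-tat with a small amount of forgiveness” is roughly the following (the 10% forgiveness rate is arbitrary, just for the sketch):

```typescript
// Generous tit-for-tat: mirror the opponent's last move, but occasionally
// forgive a defection to escape mutual-defection spirals.
type Move = "cooperate" | "defect";

function generousTitForTat(opponentLastMove: Move | null, forgiveProb = 0.1): Move {
  if (opponentLastMove === null) return "cooperate"; // open by cooperating
  if (opponentLastMove === "defect") {
    return Math.random() < forgiveProb ? "cooperate" : "defect";
  }
  return "cooperate"; // mirror cooperation
}
```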
That’s interesting, because I think I usually refrain from vengeance by default, but I do try to like … limit further interaction and stuff. Maybe that’s similar.
The way I was thinking about it is that there’s an internal feelings component—like, do you still feel sad and hurt and angry? Then there’s the updating on evidence component—are they likely to do that or similar things again? And then there’s also a behavioral piece, where you change something in the way you act towards/around them (and I’m not sure if vengeance or just running awaaay both count?) So I wasn’t sure which combination of those were part of “forgiveness” in common usage. It sounds like you’re saying internal + behavioral, right?
So, I do, and it’s informed by religion, but I’ll try to phrase it as LW-friendly as possible: to free somebody else of claims I have against them.
It’s not an emotional state I enter or something self-centered (the “I refuse to ruminate about what you did to me” pop song thing), though sometimes it produces the same effects. The psychological benefits are secondary, even though they’re very strong for me. I usually feel much more free and much more peaceful when I’ve forgiven someone, but forgiveness causes my state of mind, not vice versa. It’s like exercise: you did it and it was good even if you didn’t get your runner’s high.
Other useful aspects, from the most blandly general perspective: it’s allowed me to salvage relationships, and it’s increased the well-being of people I’ve forgiven. I’ve been the beneficiary of forgiveness from others, and it’s increased my subjective well-being enormously.
From a very specific, personal perspective: every time I experience or give forgiveness, it reminds me of divine forgiveness, and that reminder makes me happier.
There was a recent post or comment about making scientific journal articles more interesting by imagining the descriptions (of chemical interactions?) as being GIGANTIC SPECIAL EFFECTS. Anyone remember it well enough to give a link?
here
Thanks very much. I’ve posted the link as a comment to Extreme Mnemonics.
In some fields you don’t even need to imagine...
http://www.youtube.com/watch?v=jgJKaP0Sj5U
http://www.youtube.com/watch?v=3IY5ZjcwakE
http://www.youtube.com/watch?v=Hm03rCUODqg
Though imagining can help: https://www.youtube.com/watch?v=u3jQuY0URyg
Does anyone else have problems with the appearance of LessWrong? My account is somehow displayed at the bottom of the site, and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn’t change my browser (Safari) or anything else. Here are 2 screenshots:
http://i.imgur.com/OO5UHPX.png http://i.imgur.com/0Il8TeJ.png
Testing with safari 5.1.9, I find that it behaves nicely for me at all times, even if I squinch the window down really narrowly. What safari version are you using?
Version 6.0.5
Rhodiola is apparently the bomb, but I’ve read somewhere that it suffers from poor-quality supplements. Here in CEE, the brand name sold in pharmacies is Vitango. Any experiences? http://www.vitango-stress.com/
In programming, you can “call” an argumentless function and get a value. But in mathematics, you can’t. WTF?
I had an idea for Wei Dai’s “What is Probability, Anyway?,” but after actually typing it up I became rather unsure that I was actually saying anything new. Is this something that hasn’t been brought up before, or did I just write up a “durr”? (If it’s not, I’ll probably expand it into a full Discussion post later.)
The fundamental idea is, imagining a multiverse of parallel universes, define all identical conscious entities as a single cross-universal entity, and define probability of an observation E as (number of successors to the entity which observed E) / (total number of successors to the entity). Observations constrain the entity to particular universes, as do decisions, but in different ways; so that we occasionally find ourselves on either side of an observation, but never see ourselves move counter to a decision (except in the sense that what we decide as a brain is not always what we consciously decide.)
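In symbols, the intended definition is roughly:

$$P(E) \;=\; \frac{\bigl|\{\text{successors of the entity that observe } E\}\bigr|}{\bigl|\{\text{all successors of the entity}\}\bigr|}$$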
Fair warning: I attempted to formalize the concept, but as an undergrad non-math major, the result may look less than impressive to trained eyes. My apologies if this is the case.
The idea is as follows:
Define a conscious observer as some algorithm P(0). P(0) computes on available data and returns a new observer P(1) to act on new available data. Note that it is possible to generate a set of all possible outputs P(n); on human timescales and under the limitation of a human lifetime, it is plausible that such a set would match with the intuitive concept of a “character” who undergoes development.
Assume many-worlds. There are now a very large number of identical algorithms P(n) scattered across the many worlds. Since P(n)=P(n), no local experiment can distinguish between algorithms; therefore scratch the concept of them being separate entirely, and consider them all to be a single conscious entity P.
P does not know which universe it is in (by definition) to start. It can change this by making an observation: it updates itself on sensory data. Regardless of which result is recorded, P(n+1) has lesser measure than P(n): for a binary observation, P(n+1) occupies precisely half of the universes P does. P(n+1) has learned more about the universe it is in, so its space of possible universes has diminished.
An example: consider observing a fair coin—for example, observing the spin of an electron. All of P(n) runs the same algorithm: read the single bit corresponding to the spin, and add that bit to memory with a suitable wrapper: “Result of experiment: 0/1”. This is the new P(n+1), which regardless of result is a new entity. Let us designate the successors of P(n) which observed a positive spin Q+, and those which observed a negative spin Q-. Since Q+ and Q- are not equal—they differ in one bit—they are not the same entity, even though they are both successors to (and part of the same “character” as) P(n). Thus each of Q+/- observes only one version of the experiment.
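To make the counting concrete, here is a toy rendering of that example (the names and data layout are mine, and the two-way split is of course a cartoon of the actual branching):

```typescript
// Each branch of the split is a successor; the probability of an observation
// is the fraction of successors that recorded it.
interface Observer {
  memory: string[];
}

// P(n) reads one bit and outputs one successor per possible result.
function observeSpin(p: Observer): Observer[] {
  return [0, 1].map((bit) => ({
    memory: [...p.memory, `Result of experiment: ${bit}`],
  }));
}

const successors = observeSpin({ memory: [] }); // [Q-, Q+]
const spinUp = successors.filter((q) =>
  q.memory.includes("Result of experiment: 1")
);
console.log(spinUp.length / successors.length); // 0.5, the probability of observing spin-up
```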
As a lead-in to decision-making: consider what would happen if P(n) had precommitted to producing Q+, and never produced a Q-. Then the universe “Character P observes a negative spin” is inconsistent, and does not exist (barring, say, a random cosmic ray changing the algorithm.) Such a mind would never observe a spin-down event. This is distinct from quantum immortality/suicide—whereas a quantum suicide leaves behind a “world without you,” precommitting in this way means that a given world is inconsistent and never existed in the first place. Barring improbability, no successor of P(n) observes a spin-down event.
In this sense, we can define a decision as a “false observation.” P(n) decides to cause event E by choosing to only output successor functions in which event E is observed. (Note that this wording is excessively confusing; a brain which outputs a “move arm” signal is highly unlikely to be in a state where the arm does not move, and so can be said to have “decided” to move the arm.) A decision, then, as expected, also narrows the field of possible universes—but, at least hypothetically, in a purposeful manner.
Little Wayne
Paul Donovan