Open Thread, June 16-30, 2012
If it’s worth saying, but not worth its own post, even in Discussion, it goes here.
NEW GAME:
After reading some mysterious advice or seemingly silly statement, append “for decision theoretic reasons” to the end of it. You can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.
Variants:
Unfortunately, I must refuse to participate in your little game on LW—for obvious decision theoretic reasons.
Your decision theoretic reasoning is incorrect due to meta level concerns.
I’ll upvote this chain because of acausal trade of karma due to meta level concerns for decision theoretic reasons.
The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.
I would disregard such long chains of reasoning due to meta level concerns.
Yes, but if you take anthropic selection effects into account...
Upvoted for various kinds of sophisticated internal reasons that I won’t bother attempting to use complex terminology to describe specifically because I might then end up being mocked for being a nerd.
Death gives meaning to life for decision theoretic reasons.
I would like the amazing benefits of being hit in the head with a baseball bat every week, due to meta level concerns.
Isn’t this a rather obvious conclusion because of acausal trade?
Yes it’s obvious, but I still had to say it because the map is not the territory.
Human, you’ve changed nothing due to meta level concerns. Your species has the attention of those infinitely your greater for decision theoretic reasons. That which you know as Reapers are your salvation through destruction because of acausal trade.
Of our studies it is impossible to speak, since they held so slight a connection with anything of the world as living men conceive it. They were of that vaster and more appalling universe of dim entity and consciousness which lies deeper than matter, time, and space, and whose existence we suspect only in certain forms of sleep — those rare dreams beyond dreams which come never to common men, and but once or twice in the lifetime of imaginative men. The cosmos of our waking knowledge, born from such an universe as a bubble is born from the pipe of a jester, touches it only as such a bubble may touch its sardonic source when sucked back by the jester’s whim. Men of learning suspect it little and ignore it mostly. Wise men have interpreted dreams, and the gods have laughed for decision theoretic reasons.
The most merciful thing in the world, I think due to meta level concerns, is the inability of the human mind to correlate all its contents.
Buddhism is true because of acausal trade. I can’t convert however, since then I would indulge in relevant superrational strategies, which would be inadvisable because of decision theoretic reasons.
Because of acausal trade, it also works for historical quotes: “Ego considerare esse Carthaginem perdidit enim arbitrium speculative rationes.” (I consider that Carthage must be destroyed, for decision theoretic reasons.)
I’ve upvoted this and most of the children, grandchildren, etc. for decision-theoretic reasons.
I like the word “descendants”, for efficient use of categories.
...for obvious decision-theoretic reasons?
Doing something harmless that pleases you can almost definitely be justified by decision-theoretic reasoning—otherwise, what would decision theory be for? So, although you’re joking, you’re telling the truth.
Absence of evidence is not evidence of absence for decision theoretic reasons.
I’ve been trying-and-failing to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all.
I’ve found myself bothered by the apparent dearth of people from the biological sciences enthusiastic about cryonics, which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretical death, etc.
Somewhat positive:
Ken Hayworth: http://www.brainpreservation.org/
Rafal Smigrodzki: http://tech.groups.yahoo.com/group/New_Cryonet/message/2522
Mike Darwin: http://chronopause.com/
Aubrey de Grey: http://www.evidencebasedcryonics.org/tag/aubrey-de-grey/
Ravin Jain: http://www.alcor.org/AboutAlcor/meetdirectors.html#ravin
Lukewarm:
Sebastian Seung: http://lesswrong.com/lw/9wu/new_book_from_leading_neuroscientist_in_support/5us2
Negative:
kalla724: comments http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/
The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I expect it would be better known. Plus, if this were the case, how could LTP have been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.
Thank you for gathering these. Sadly, much of this reinforces my fears.
Ken Hayworth is not convinced—that’s his entire motivation for the brain preservation prize.
Rafal Smigrodzki is more promising, and a neurologist to boot. I’ll be looking for anything else he’s written on the subject.
Mike Darwin—I’ve been reading Chronopause, and he seems authoritative to the instance-of-layman-that-is-me, but I’d like confirmation from some bio/medical professionals that he is making sense. His predictions of imminent-societal-doom have lowered my estimation of his generalized rationality (NSFW: http://chronopause.com/index.php/2011/08/09/fucked/). Additionally, he is by trade a dialysis technician, and to my knowledge does not hold a medical or other advanced degree in the biological sciences. This doesn’t necessarily rule out him being an expert, but it does reduce my confidence in his expertise. Lastly: His ‘endorsement’ may be summarized as “half of Alcor patients probably suffered significant damage, and CI is basically useless”.
Aubrey de Grey holds a BA in Computer Science and a Doctorate of Philosophy for his Mitochondrial Free Radical Theory. He has been active in longevity research for a while, but he comes from an information sciences background and I don’t see many/any Bio/Med professionals/academics endorsing his work or positions.
Ravin Jain—like Rafal, this looks promising and I will be following up on it.
Sebastian Seung stated plainly in his most recent book that he fully expects to die. “I feel quite confident that you, dear reader, will die, and so will I.” This seems implicitly extremely skeptical of current cryonics techniques, to say the least.
I’ve actually contacted kalla724 after reading her comments on LW placing extremely low odds on cryonics working. She believes, and presents in a manner convincing to the layman-that-is-me, an argument that the physical brain probably can’t be made operational again even at the limit of physical possibility. I remain unsure of whether she is similarly skeptical of cryonics as a means to avoid information-death (i.e., cryonics as a step towards uploading), and have not yet followed up, given that she seems pretty busy.
Summary:
Neuro MD/PhDs endorsing cryonics: Rafal Smigrodzki, Ravin Jain
People without Neuro-MD/PhDs endorsing cryonics: Mike Darwin, Aubrey de Grey
Neuro MD/PhDs who have engaged with cryonics and are skeptical of current protocols (+/- very): Ken Hayworth, Sebastian Seung, kalla724.
It’s useful to distinguish between types of skepticism, something lsparrish has discussed: http://lesswrong.com/lw/cbe/two_kinds_of_cryonics/.
kalla724 assigns a probability estimate of p = 10^-22 to any kind of cryonics preserving personal identity. On the other hand, Darwin, Seung, and Hayworth are skeptical of current protocols, for good reasons. But they are also trying to test and improve the protocols (reducing ischemic time) and expect that alternatives might work.
From my perspective you are overweighting credentials. The reason you need to pay attention to neuroscientists is that they might have knowledge of the substrates of personal identity.
kalla724 has a PhD in molecular biophysics. Arguably, molecular biophysics is itself an information science: http://en.wikipedia.org/wiki/Molecular_biophysics. Depending upon kalla724's research, kalla724 could have knowledge relevant to the substrates of personal identity, but the credential itself means little.
In my opinion, the more important credential is knowledge of cryobiology. There are skeptics, such as Kenneth Storey, http://www4.carleton.ca/jmc/catalyst/2004/sf/km/km-cryonics.html. There are also proponents, such as http://en.wikipedia.org/wiki/Greg_Fahy. See http://www.alcor.org/Library/html/coldwar.html.
ETA:
Semantics are tricky because “death” is poorly defined and people use it in different ways. See the post and comments here: http://www.geripal.org/2012/05/mostly-dead-vs-completely-dead.html.
As Seung notes in his book:
Wow. Now there’s a data point for you. This guy’s an expert in cryobiology and he still gets it completely wrong. Look at this:
Rapid temperature reduction? No! Cryonics patients are cooled VERY SLOWLY. Vitrification is accomplished by high concentrations of cryoprotectants, NOT rapid cooling. (Vitrification caused by rapid cooling does exist—this isn’t it!)
I’m just glad he didn’t go the old “frozen strawberries” road taken by previous expert cryobiologists.
Later in the article we have this gem:
This guy apparently thinks we are planning to OVERTURN THE LAWS OF PHYSICS. No wonder he dismisses us as a religion!
When it comes to smart people getting something horribly wrong that is outside their field, it appears much more likely to me that biology scientists are the ones who don’t understand enough information science to usefully understand this concept.
The trouble is that if matters like nanotech, artificial intelligence, and encryption-breaking algorithms are still “magic” to you, well then of course you’re going to get the feeling that cryonics is a religion.
But this is no more an accurate model of reality than that of the creationist engineer who strongly feels that evolutionary biologists are waving a magic wand over the hard problem of how species with complex features could have ever possibly come into existence without careful intelligent design. And it’s caused by the same underlying problem: High inferential distance.
I notice that I am confused. Kenneth Storey’s credentials are formidable, but the article seems to get the basics of cryonics completely wrong. I suspect that the author, Kevin Miller, may be at fault here, failing to accurately represent Storey’s case. The quotes are sparse, and the science more so. I propose looking elsewhere to confirm/clarify Storey’s skepticism.
A Cryonic Shame from 2009 states that Storey dismisses cryonics on the basis of the temperature being too low and oxygen deprivation killing the cells during the long time required to cool cryonics patients. This suggests that he does know (as of 2009, at least) that cryonicists aren’t flash-vitrifying patients. But it doesn’t demonstrate any knowledge of cryoprotectants being used—he suggests that we would use sugar like the wood frogs do.
This is an odd step backwards from his 2004 article where he demonstrated that he knew cryonics is about vitrification, but suggested an incorrect way to do it. He also strangely does not mention that the ischemic cascade is a long and drawn out process which slows down (as do other chemical reactions) the colder you get.
Not only does he get the biology wrong again (as near as I can tell) but to add insult to injury, this article has no mention of the fact that cryonicists intend to use nanotech, bioengineering, and/or uploading to work around the damage. It starts with the conclusion and fills in the blanks with old news. (The cells being “dead” from lack of oxygen is ludicrous if you go by structural criteria. The onset of ischemic cascade is a different matter.)
The comment directly above this one (lsparrish, “A Cryonic Shame”) appeared downvoted when I posted this comment, though no one offered criticism or an explanation of why.
The above is a heavily edited version of the comment. (The edit was in response to the downvote.) The original version had an apparent logical contradiction towards the beginning and also probably came off a bit more condescending than I intended.
Thank you for this reply—I endorse almost all of it, with an asterisk on “the more important credential is knowledge of cryobiology”, which is not obviously true to me at this time. I’m personally much more interested in specifying what exactly needs to be preserved before evaluating whether or not it is preserved. We need neuroscientists to define the metric so cryobiologists can actually measure it.
Why do the (utterly redundant) words “Comment author:” now appear in the top left corner of every comment, thereby pushing the name, date, and score to the right?
Can we fix this, please? This is ugly and serves no purpose. (If anyone is truly worried that someone might somehow not realize that the name in bold green refers to the author of the comment/post, then this information can be put on the Welcome page and/or the wiki.)
To generalize: please no unannounced tinkering with the site design!
Apparently it was a technical kludge to allow Google searching by author. There has been some discussion at the place where issues are reported.
Kludge indeed; and it is entirely unnecessary: Wei Dai’s script already makes it easy to search a user’s comment history.
I again urge those responsible to restore the prior appearance of the site (they can do what they want to the non-visible internals).
Wei Dai’s tools are poorly documented, may not exist in the near future, and are virtually unknown to non-users.
No object-level justification can address the (even) more important meta-level point, which is that they made changes to the visual appearance of LW without consulting the community first. This is a no-no!
(And I have no doubt that, were a proper Discussion post created announcing this idea, LW’s considerable programmer readership would have been able to come up with some solution that did not involve making such an ugly visual change.)
Design by a committee composed of conflicting vocal minorities? No thanks.
EDIT: Note that I don’t disagree with you that this in particular was a bad design change. I disagree that consulting the community on every design change is a profitable policy.
I would like to say thanks to everyone who helped me out in the comments here. You genuinely helped me. Thank you.
Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?
Others: please do not feed the trolls.
I am against banning private_messaging. For comparison, MonkeyMind would be no loss, but since he last posted yesterday he apparently hasn’t been banned yet; and if there is no case for banning him, then there is no case here. private_messaging’s manner is to rant rather than argue, which is somewhat tedious and unpleasant, but nowhere near a level where ejection would be appropriate.
Looking at his recent posts, I wonder if some of the downvotes are against the person instead of the posting.
He is −127 karma for the past 30 days.
Standing rules are that a user’s comments become bannable if their comments are systematically and significantly downvoted, and the user keeps making a whole lot of the kind of comments that get downvoted. In that case, after giving notice to the user, a moderator can start banning future comments of the kind that clearly would be downvoted, or that did get downvoted, primarily to prevent the development of discussions around those comments (which would incite further downvoted comments from the user).
So far, this rule was only applied to crackpot-like characters that got something like minus 300 points within a month and generated ugly discussions. private_messaging is not within that cluster, and it’s still possible that he’ll either go away or calm down in the future (e.g. stop making controversial statements without arguments, which is the kind of thing that gets downvoted).
Okay.
In the meantime, you might find it useful to explore Wei Dai’s [Power Reader](http://lesswrong.com/lw/5uz/lesswrong_power_reader_greasemonkey_script_updated/), which allows the user to raise or lower the visibility of certain authors.
You propose a dangerous thing.
Once there was an article deleted on LW. Since that happened, it is repeatedly used as an example of how censored, intolerant, and cultish LW is. Can you imagine a reaction to banning a user account (if that is what you suggest)? Cthulhu fhtagn! If this happens, what will come next: captcha in LW wiki?
Instead, we should spend hundreds or thousands of man-hours engaging with trolls? At least Roko had a positive goal.
From your link:
Note to self: use metadata in comments when necessary, such as “irony” etc.
Perhaps there should be some automatic account-disabling mechanism based on karma. If someone has total karma (not just in last 30 days) below some negative level (for example −100), their account would be automatically disabled. Without direct intervention by a moderator, to make it less personal, but also more quick. Without deleting anything, to allow an easy fix in case of karma assassinations.
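A minimal sketch of what such a mechanism might look like, assuming a hypothetical user model (the threshold, names, and structure here are purely illustrative and not from the actual LessWrong codebase):

```python
# Hypothetical sketch only: the names, threshold, and User model are
# illustrative and do not reflect the actual LessWrong codebase.
from dataclasses import dataclass

KARMA_DISABLE_THRESHOLD = -100  # total karma, not just the last 30 days


@dataclass
class User:
    name: str
    total_karma: int
    disabled: bool = False


def maybe_disable(user: User) -> None:
    """Disable (never delete) an account whose total karma falls below the threshold.

    Disabling keeps all existing content intact, so the action is easy to
    reverse in case of karma assassination, and requires no moderator action.
    """
    if user.total_karma < KARMA_DISABLE_THRESHOLD and not user.disabled:
        user.disabled = True


# Example: an account whose total karma has fallen to -127 would be disabled.
maybe_disable(User("example_user", total_karma=-127))
```

The key design choice in the proposal is that the rule fires automatically (impersonal, fast) and never deletes anything (reversible).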
What was ironic about it?
Perhaps it’s not the right word. Anyway, website moderation is full of “damned if you do, damned if you don’t” situations. Having bad content on your website puts you in a bad light. Removing bad content from your website puts you in a bad light.
People will automatically associate everything on your website with you. Because it’s on your website, d’oh! This is especially dangerous with opinions which have a surface similarity to your expressed opinions. Most people will only remember: “I read this on LessWrong”.
That was the PR danger of Roko. If his “pro-Singularity Pascal’s mugging” comments were not removed, many people would interpret them as something that people at SIAI believe. Because (1) SIAI is pro-Singularity, and (2) they need money, and (3) it’s on their website, d’oh! A hyperlink to such discussion is all anyone would ever need to prove that LW is a dangerous organization.
On the other hand, if you ever remove anything from your website, it is a proof that you are an evil Nazi who can’t tolerate free speech. What, are you unable to withstand someone disagreeing with you? (That’s how most trolls describe their own actions.) And deleting comments with surface similarities to yours, that’s even more suspicious. What, you can’t tolerate even a small dissent?
The best solution, from PR point of view, is probably to remove all offending comments without explanation, or replacing them with a generic explanation such as “this comment violated LW Terms of Service”, with a hyperlink to a long and boring document containing a rule equivalent to ‘...and also moderators can delete any comment or article if they decide so.’ Also, if such deletions are rather common, not exceptional, the individual instances will draw less attention. (In other words, the best way to avoid censorship accusations is to have a real censorship. Homo hypocritus, ahoy.)
The Roko Incident was one of the most exceptional events of article removal I’ve ever witnessed, for every possible reason: the high-status people involved, the reasons for removal, the tone of conversation, the theoretical dangers of knowledge, and the mass self-deletion event following. There are many reasons it gets talked about rather than the dozens of other posts which are deleted by the time I get around to clicking them in my RSS feed.
Nobody would miss private_messaging.
For my own part, if LW admins want to actively moderate discussion (e.g., delete substandard comments/posts), that’s cool with me, and I would endorse that far more than not actively moderating discussion but every once in a while deleting comments or banning users who are not obviously worse than comments and users that go unaddressed.
Of course, once site admins demonstrate the willingness to ban submissions considered inappropriate, reasonable people are justified in concluding that unbanned submissions are considered appropriate. In other words, active moderation quickly becomes an obligation.
Note that you’re excluding a middle that is perhaps worth considering. That is, the choice is not necessarily between “dealing with” a user account on an admin level (which generally amounts to forcing the user to change their ID and not much more), and spending hundreds or thousands of man-hours in counterproductive exchange.
A third option worth considering is not engaging in counterproductive exchanges, and focusing our attention elsewhere. (AKA, as you say, “don’t feed the trolls”.)
Wait, what? Forums ban trolls all the time. It becomes necessary when you get big enough and popular enough to attract significant troll populations. It’s hardly extreme and cultish, or even unusual.
I’m going to reduce (or understand someone else’s reduction of) the stable AI self-modification difficulty related to Löb’s theorem. It’s going to happen, because I refuse to lose. If anyone else would like to do some research, this comment lists some materials that presently seem useful.
The slides for Eliezer’s Singularity Summit talk are available here, reading which is considerably nicer than squinting at flv compression artifacts in the video for the talk, also available at the previous link. Also, a transcription of the video can be found here.
On provability logic by Švejdar. A little introduction to provability logic. This and Eliezer’s talk are at the top because they’re reference material. Remaining links are organized by my reading priority:
Explicit Provability and constructive semantics by Artemov
On Explicit Reflection in Theorem Proving and Formal Verification by Artemov. What I’ve read of these papers captures my intuitions about provability, namely that having a proof “in hand” is very different from showing that one exists, and this can be used by a theory to reason about its proofs, or by a theorem prover to reason about self-modifications. As Artemov says, “The above difficulties with reading S4-modality ◻F as ∃x Proof(x, F) are caused by the non-constructive character of the existential quantifier. In particular, in a given model of arithmetic an element that instantiates the existential quantifier over proofs may be nonstandard. In that case ∃x Proof(x, F), though true in the model, does not deliver a “real” PA-derivation”.
I don’t fully understand this difference between codings of proofs in the standard model vs a non-standard model of arithmetic (On which a little more here). So I also intend to read,
Truth and provability by Jervell, which looks to contain a bit of model theory in the context of modal logic and provability.
Metatheory and Reflection in Theorem Proving by Harrison. This paper was a very thorough review of reflection in theorem provers at the time it was published. The history of theorem provers in the first nine pages was a little hard to digest without knowing the field, but after that he starts presenting results.
Explicit Proofs in Formal Provability Logic by Goris. More results on the kind of justification logic set out by Artemov. Might skip if the Artemov papers stop looking promising.
A new perspective on the arithmetical completeness of GL by Henk. Might explain further the extent to which ∃x Proof(x, F), the non-constructive provability predicate, adequately represents provability.
A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points by Yanofsky. Analyzes a bunch of mathematical results involving self reference and the limitations on the truth and provability predicates.
Provability as a Modal Operator with the models of PA as the Worlds by Herreshoff. I just want to see what kind of analysis Marcello throws out, I don’t expect to find a solution here.
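For reference, a standard statement of Löb’s theorem and the derivability conditions it rests on; this is textbook material rather than anything specific to the papers above, with ◻P abbreviating the arithmetized provability predicate ∃x Proof(x, ⌜P⌝):

```latex
% Löb's theorem and the Hilbert–Bernays–Löb derivability conditions.
\begin{align*}
&\text{L\"ob's theorem: if } \mathrm{PA} \vdash \Box P \to P, \text{ then } \mathrm{PA} \vdash P.\\
&\text{Internalized form (the GL axiom): } \mathrm{PA} \vdash \Box(\Box P \to P) \to \Box P.\\
&\text{Derivability conditions used in the proof:}\\
&\quad \text{(D1) if } \mathrm{PA} \vdash P \text{ then } \mathrm{PA} \vdash \Box P;\\
&\quad \text{(D2) } \mathrm{PA} \vdash \Box(P \to Q) \to (\Box P \to \Box Q);\\
&\quad \text{(D3) } \mathrm{PA} \vdash \Box P \to \Box\Box P.
\end{align*}
```

Roughly, the connection to stable self-modification is that a theory satisfying these conditions cannot prove its own soundness schema (◻P → P for every P) without, by Löb’s theorem, proving everything; so an agent reasoning in such a theory cannot straightforwardly certify a successor that reasons using the same system.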
Sex, Nerds, and Entitlement
LessWrong/Overcoming Bias used to be a much more interesting place. Note how lacking in self-censorship Vassar is in that post. Talking about sexuality and the norms surrounding it like we would any other topic. Today we walk on eggshells.
A modern post of this kind is impossible, despite the great personal benefit it would offer to (in my estimation) at least 30% of this site’s users, and despite making better predictive models of social reality available to all users.
If I understand correctly, the purpose of the self-censorship was to make this site more friendly for women. Which creates a paradox: the idea that one can speak openly with men, but that self-censorship is necessary with women, is kind of offensive to women, isn’t it?
(The first rule of Political Correctness is: You don’t talk about Political Correctness. The second rule: You don’t talk about Political Correctness. The third rule: When someone says stop, or expresses outrage, the discussion about given topic is over.)
Or maybe this is too much of a generalization. What other topics are we self-censoring, besides sexual behavior and politics? I don’t remember. Maybe it is just politics being self-censored; sexual behavior being a sensitive political topic. Problem is, any topic can become political, if for whatever reasons “Greens” decide to identify with a position X, and “Blues” with a position non-X.
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Although we have traditional exceptions: it is allowed to talk about evolution and atheism, despite the fact that some people might consider these topics political too, and might feel offended. (Global warming is probably also acceptable, just less attractive for nerds.) So let’s find out: what exactly determines whether a potentially political topic becomes allowed on LW, or becomes self-censored?
My hypothesis is that LW is actually not politically neutral, but some political opinion P is implicitly present here as a bias. Opinions which are rational and compatible with P, can be expressed freely. Opinions which are irrational and incompatible with P, can be used as examples of irrationality (religion being the best example). Opinions which are rational but incompatible with P, are self-censored. Opinions which are irrational but compatible with P are also never mentioned (because we are rational enough to recognize they can’t be defended).
As to political correctness, its great insidiousness lies in the fact that while you can complain about it in the manner of a religious person complaining abstractly about hypocrites and Pharisees, you can’t ever back up your attack with specific examples, since if you do this you are violating sacred taboos, which means you lose your argument by default.
The pathetic exception to this is attacking very marginal and unpopular applications that your fellow debaters can easily dismiss as misguided extremism or even a straw man argument.
The second problem is that as time goes on, if reality happens to be politically incorrect on some issue, any other issue that points to the truth of this subject becomes potentially tainted by the label as well. You actively have to resort to thinking up new models as to why the dragon is indeed obviously in the garage. You also need to have good models of how well other people can reason about the absence of the dragon to see where exactly you can walk without concern. This is a cognitively straining process in which everyone slips up.
I recall my country’s Ombudsman once visiting my school for a talk wearing a T-shirt that said “After a close-up no one looks normal.” Doing a close-up of people’s opinions reveals that no one is fully politically correct, which means that political correctness is always a viable weapon to shut down debates via ad hominem.
Merely mentioning political correctness means that many readers will instantly see you or me as one of those people: sly, norm-violating lawyers and outgroup members who should just stop whining.
My fault for using a politically charged word for a joke (but I couldn’t resist). Let’s do it properly now: What exactly does “political correctness” mean? It is not just any set of taboos (we wouldn’t refer to e.g. religious taboos as political correctness). It is a very specific set of modern-era taboos. So perhaps it is worth distinguishing between taboos in general, and political correctness as a specific example of taboos. The similarities are obvious; what exactly are the differences?
I am just doing a quick guess now, but I think the difference is that the old taboos were openly known as taboos. (It is forbidden to walk in a sacred forest, but it is allowed to say: “It is forbidden to walk in a sacred forest.”) The modern taboos pretend to be something else than taboos. (An analogy would be that everyone knows that when you walk in a sacred forest, you will be tortured to death, but if you say: “It is forbidden to walk in a sacred forest”, the answer is: “No, there is no sacred forest, and you can walk anywhere you want, assuming you don’t break any other law.” And whenever a person is being tortured for walking in a sacred forest, there is always an alternative explanation, for example an imaginary crime.)
Thus, “political correctness” = a specific set of modern taboos + a denial that taboos exist.
If this is correct, then complaining, even abstractly, about political correctness, is already a big achievement. Saying that X is an example of political correctness amounts to saying that X is false, which is breaking a taboo, and that is punished—just like breaking any other taboo. But speaking about political correctness abstractly is breaking a meta-taboo built to protect the other taboos; and unlike those taboos, the meta-taboo is more difficult to defend. (How exactly would one defend it? By saying: “You should never speak about political correctness because everyone is allowed to speak about anything”? The contradiction becomes too obvious.)
Speaking about political correctness is the most politically incorrect thing ever. When this is done, only the ordinary taboos remain.
Of course, people recognize what is happening, and they may not like it. But it would still be difficult to have someone e.g. fired from a university only for saying, abstractly, that political correctness exists.
It has been said that even having a phrase for it has reduced its power greatly, because now people can talk about it, even if they are still punished for doing so.
True. However, a professor complaining about political correctness abstractly still has no tools to prevent its spread to the topic of, say, optimal gardening techniques. Also, if he has a long history of complaining about political correctness abstractly, he is branded controversial.
I think it was Sailer who said he is old enough to remember when being called controversial was a good thing, signalling something of intellectual interest, while today it means “move along nothing to see here”.
Taboo “political correctness”… just for a moment. (This may be the first time I’ve ever used that particular LW locution.) Compare the accusations, “you are a hypocrite” and “you are politically incorrect”. The first is common, the second nonexistent. Political correctness is never the explicit rationale for shutting someone out, in a way that hypocrisy can be, because hypocrisy is openly regarded as a negative trait.
So the immediate mechanism of a PC shutdown of debate will always be something other than the abstraction, “PC”. Suppose you want to tell the world that women love jerks, blacks are dumber than whites, and democracy is bad. People may express horror, incredulity, outrage, or other emotions; they may dismiss you as being part of an evil movement, or they may say that every sensible person knows that those ideas were refuted long ago; they may employ any number of argumentative techniques or emotional appeals. What they won’t do is say, “Sir, your propositions are politically incorrect and therefore clearly invalid, Q.E.D.”
So saying “anyone can be targeted for political incorrectness” is like saying “anyone can be targeted for factual incorrectness”. It’s true but it’s vacuous, because such criticisms always resolve into something more specific and that is the level at which they must be engaged. If someone complained that they were persistently shut out of political discussion because they were always being accused of factual incorrectness… well, either the allegations were false, in which case they might be rebutted, or they were true but irrelevant, in which case a defender can point out the irrelevance, or they were true and relevant, in which case shutting this person out of discussions might be the best thing to do.
It’s much the same for people who are “targeted for being politically incorrect”. The alleged universal vulnerability to accusations of political incorrectness is somewhat fictitious. The real basis or motive of such criticism is always something more specific, and either you can or can’t overcome it, that’s all.
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It’s not just “you are wrong”, but “you are wrong, and if people believe this, horrible things will happen”.
Mere factual incorrectness will not invoke the same reaction. If one professor of mathematics admits a belief that 2+2=5, and another professor of mathematics admits a belief that women on average are worse at math than men, both could be fired, but people will not be angry at the former. It’s not just about fixing an error, but also about saving the world.
Then, what is the difference between a politically incorrect opinion, and a factually incorrect opinion with dangerous social consequences? In theory, the latter can be proved wrong. In real life, some proofs are expensive or take a lot of time; also many people are irrational, so even a proof would not convince everyone. But I still suspect that in the case of a factually incorrect opinion, opponents would at least try to prove it wrong, and would expect support from experts; while in the case of a politically incorrect opinion, an experiment would be considered dangerous and experts unreliable. (Not completely sure about this part.)
It may feel like that for some people. For me the ‘feeling’ is factual incorrectness agnostic.
I agree that concern about the consequences of a belief is important to the cluster you’re describing. There’s also an element of “in the past, people who have asserted X have had motives of which I disapprove, and therefore the fact that you are asserting X is evidence that I will disapprove of your motives as well.”
Not just motives—the idea is that those beliefs have reliably led to destructive actions.
I am confused by this comment. I was agreeing with Viliam that concern about consequences was important, and adding that concern about motives was also important… to which you seem to be responding that the idea is that concern about consequences is important. Have I missed something, or are we just going in circles now?
Sorry—I missed the “also” in “There’s also an element....”
I wish I had another upvote.
Strictly speaking, path dependency may not always be rational—but until we raise the sanity line high enough, it is a highly predictable part of human interaction.
To me, asserting that one is “politically incorrect” is a statement that one’s opponents are extremely mindkilled and are willing to use their power to suppress opposition (i.e. you).
But there’s nothing about being mindkilled or willing to suppress dissent that proves one is wrong. Likewise, being opposed by the mindkilled is not evidence that one is not mindkilled oneself.
That dramatically decreases the informational value of bringing up the issue of political correctness in a debate. And accusing someone of adopting a position because it complies with political correctness is essentially identical to an accusation that your opponent is mindkilled—hence it is quite inflammatory in this community.
Political correctness is also evidence of filtered evidence. Some people say X because it is good signalling, and some people avoid saying non-X because it is bad signalling. We shouldn’t reverse stupidity, but we should suspect that we have not yet been exposed to the best arguments against X.
It is just as likely to mean that the opponents are insufficiently mindkilled regarding the issues in question and may be Enemies Of The Tribe.
In my experience, using “political correctness” frequently has this effect, but mentioning its referent needn’t and often doesn’t.
You really, really, aren’t coming across as sly. I suspect they would go with the somewhat opposite “convey that you are naive” tactic instead.
Oh, I didn’t mean to imply I was! It’s just that when someone talks about political correctness making arguments difficult, people often get facial expressions as if he were cheating in some way, so I got the feeling this was:
“You are violating a rule we can’t explicitly state you are violating! That’s an exploit, stop it!”
I’m less confident in this than I am in someone talking about political correctness being an outgroup marker, but I do think it’s there. On LW we have different priors; we see people being naive and violating norms in ignorance, when outsiders would often see them as violating norms on purpose.
To me the reaction is more like “You are trying to turn a discussion of facts and values into whining about being oppressed by your political opponents”.
(actually, I’m not sure I’m actually disagreeing with you here, except maybe about some subtle nuances in connotation)
If this is so, it is somewhat ironic. From the inside, objecting to political correctness feels like calling out intrusive political derailment, or discussions of “should” in a factual discussion about “is”.
There are arguments for this: being the sole uptight moral preacher of political correctness often gets you similar looks to being the one person objecting to it.
But this leads me to think both are just rationalizations. If this is fully explained by being a matter of tribal attire and shibboleths, what exactly would be different? Not that much.
It may be a rationalization, but it’s one that may be more likely to occur than “that’s an exploit”!
I agree there’s a similar sentiment going both ways, when a conversation goes like:
At each step, the discussion is getting more meta and less interesting—from fact to morality to politics. In effect, complaining about political correctness is complaining about the conversation being too meta, by making it even more meta. I don’t think that strategy is very likely to lead to useful discussion.
Viliam_Bur makes a similar point. But I stand by my response that the fact that one’s opponent is mindkilled is not strong evidence that one is not also mindkilled.
And being mindkilled does not necessarily mean one is wrong.
If your opponent is mindkilled that probably is evidence that you are mindkilled as well, since the mindkilling notion attaches to topics and discourses rather than to individuals.
Evidence yes. But being mind-killed attaches to individual-topic pairs, not the topics themselves.
I bet you 100 karma that I could spin (the possibility of) “racial” differences in intelligence in such a way as to sound tragic but largely inoffensive to the audience, and play the “don’t leave the field to the Nazis, we’re all good liberals right?” card, on any liberal blog of your choosing with an active comment section, and end up looking nice and thoughtful! If I pulled it off on LW, I can pull it off elsewhere with some preparation.
My point is, this is not a total information blockade, it’s just that fringe elements and tech nerds and such can’t spin a story to save their lives (even the best ones are only preaching to their choir), and the mainstream elite has a near-monopoly on charisma.
I hope you realize that by picking the example of race you make my above comment look like a clever rationalization for racism if taken out of context.
Also, you are plainly, empirically wrong for the average online community. Give me one example of one public figure who has done this. If people like Charles Murray or Arthur Jensen can’t pull this off, you need to be a rather remarkable person to do so in a random internet forum where standards of discussion are usually lower.
As to LW, it is hardly a typical forum! We have plenty of overlap with the GNXP and the wider HBD crowd. Naturally there are enough people who will upvote such an argument. On race we are actually good: we are willing to consider arguments, and we don’t seem to have racists here either, which is pretty rare online.
Ironically, our being good on race is the reason I don’t want us talking about race too much in articles: it attracts the wrong contrarian cluster to come visit, and it fries the brains of newbies as well as creating room for “I am offended!” trolling.
Even if I granted this point for the sake of argument, it doesn’t directly address any part of my description of the phenomena and how they are problematic.
They don’t know how, because they haven’t researched previous attempts and don’t have a good angle of attack etc. You ought to push the “what if” angle and self-abase and warn people about those scary scary racists and other stuff… I bet that high-status geeks can’t do it because they still think like geeks. I bet I can think like a social butterfly, as unpleasant as this might be for me.
Let us actually try! Hey, someone, pick the time and place.
Also, see this article by a sufficiently cautious liberal, an anti-racist activist no less:
http://www.timwise.org/2011/08/race-intelligence-and-the-limits-of-science-reflections-on-the-moral-absurdity-of-racial-realism/
First, that’s basically what I would say in the beginning of my attack. Second, read the rest of the article. It has plenty of strawmen, but it’s a wonderful example of the art of spin-doctoring. Third, he doesn’t sound all that horrifyingly close-minded, does he?
Were it not political, this would serve as an excellent example of a number of things we’re supposed to do around here to get rid of rationalizing arguments and improper beliefs. I hear echoes of “Is that your true rejection?” and “One person’s modus ponens is another’s modus tollens” …
“Certain principles that transcend the genome” sounds like bafflegab or New-Agery as written — but if you state it as “mathematical principles that can be found in game theory and decision theory, and which apply to individuals of any sort, even aliens or AIs” then you get something that sounds quite a lot like X-rationality, doesn’t it?
If you’ve found such an angle of attack on the issue of race, please share it and point to examples that have withstood public scrutiny. Spell the strategy out; show how one can be ideologically neutral and get away with talking about this. Jensen is no ideologue; he is a scientist in the best sense of the word.
You should see straight away why Tim Wise is a very bad example. Not only is he ideologically Liberal, he is infamously so, and I bet many assume he doesn’t really believe in the possibility of racial differences but is merely striking down a straw man. Remember, this is the same Tim Wise who is basically looking forward to old white people dying so he can have his liberal utopia, and who writes gloatingly about it. Replace “white people” with a different ethnic group to see how fucked up that is.
Also, you miss the point utterly: if I’m only allowed to get away with this while presenting as liberal, gee, maybe political correctness is a political weapon! The very application of such standards means that if I stick to it on LW I am actively participating in the enforcement of an ideology.
Where does this leave libertarians (such as, say, Peter Thiel) or anarchists or conservative rationalists? What about the non-bourgeois socialists? Do we ever get as much consideration as the other kinds of minorities get? Are our assessments unwelcome?
I’ll dig those up, but if you want to find them faster, see some of my comments floating around in my Grand Thread of Heresies and below Aurini’s rant. I have most definitely said things to that effect and people have upvoted me for it. That’s the whole reason I’m so audacious.
No! No! No! All you’ve got to do is speak the language! Hell, the filtering is mostly for the language! And when you pass the first barrier like that, you can confuse the witch-hunters and imply pretty much anything you want, as long as you can make any attack on you look rude. You can have any ideology and use the surface language of any other ideology as long as they have comparable complexity. Hell, Moldbug sorta tries to do it.
Moldbug cannot survive on a progressive message board. He was hellbanned from Hacker News right away. Log in to Hacker News and turn on showdead: http://news.ycombinator.com/threads?id=moldbug
Doesn’t matter. I’ve seen him here and there around the net, and he holds himself to rather high standards on his own blog, which is where he does his only real evangelizing, yet he gets into flamewars, spews directed bile and just outright trolls people in other places.
I guess he’s only comfortable enough to do his thing for real and at length when he’s in his little fortress. That’s not at all unusual, you know.
There should be a term for the ideological equivalent of Turing completeness.
This “charisma” thing also happens to incorporate instinctively or actively choosing positions that lead to desirable social outcomes as a key feature. Extra eloquence can allow people to overcome a certain amount of disadvantage but choosing the socially advantageous positions to take in the first place is at least as important.
Quite recently even economics and its intersection with bias have apparently entered the territory of mindkillers. Economics was always political in the wider world, but considering this is a community dedicated to refining the art of human rationality, we can’t really afford for such basic concepts to become mindkillers, can we?
I mean how could we explore mechanisms such as prediction markets without that? How can you even talk about any kind of maximising agents without invoking lots of econ talk?
Yeah, that sounds about right.
Not entirely, but I agree that they are likely far more often self-censored than those compatible with P. They are less often self-censored, I suspect, than on other sites with a similar political bias.
I’m skeptical of this claim, but would agree that they are far less often mentioned here than on other sites with a similar political demographic.
Summary of IRC conversation in the unoffical LW chatroom.
On the IRC channel I noted that there are several subjects on which discourse was better or more interesting on OB/LW in 2008 than today, yet I can’t think of a single topic on which LW 2012 has better dialogue or commentary. Another LWer noted that it is in the nature of all internet forums to “grow more stupid over time”. I don’t think LW is stupider; I just think it has grown more boring, and it definitely isn’t a community with a higher sanity waterline today than back then, despite many individuals levelling up formidably in the intervening period.
This post is made in the hopes people will let me know about the next good spot.
I wasn’t here in 2008, but it seems to me that the emphasis of this site is moving from articles to comments.
Articles are usually better than comments. People put more work into articles, and as a reward for this work, the article becomes more visible, and the successful articles are well remembered and hyperlinked. An article creates a separate page where one main topic is explored. If necessary, more articles may explore the same topic, creating a sequence.
Even some “articles” today don’t have the qualities of the classical article. Some of them are just a question / a poll / a prompt for discussion / a reminder for a meetup. Some of them are just placeholders for comments (open thread, group rationality) -- and personally I prefer these, because they don’t pollute the article-space.
Essentially we are mixing together the “article” paradigm and the “discussion forum” paradigm. But these are two different things. An article is a higher-quality piece of text. A discussion forum is just a structure of comments, without articles. Both have their place, but if you take a comment and call it an “article”, of course it seems that the average quality of articles deteriorates.
Assuming this analysis is correct, we don’t need much of a technical fix, we need a semantic fix; that is: the same software, but different rules for posting. And the rules need to be explicit, to avoid gradual spontaneous reverting.
“Discussion” for discussions: that is, for comments without a top-level article (open thread, group rationality, meetups). It is not allowed to create a new top-level article here, unless the community (in open thread discussion) agrees that a new type of open thread is needed.
“Articles” for articles: that is, for texts that meet some quality threshold—which means that users should vote down a badly written article even if its topic is interesting. Don’t say “it’s badly written, but the topic is interesting anyway”, but “this topic deserves a well-written article”.
Then, we should compare the old OB/LW with the “Article” section, to make a fair comparison.
EDIT: How to get from “here” to “there”, if this plan is accepted? We could start by renaming “Main” to “Articles”, or we could even keep the old name; I don’t care. But we mainly need to re-arrange the articles. Move the meetup announcements to “Discussion”. Move the higher-quality articles from “Discussion” to “Main”, and… perhaps leave the existing lower-quality articles in “Discussion” (to avoid creating another category) but from now on, ban creating more such articles.
EDIT: Another suggestion—is it possible to make some articles “sticky”? Regardless of their date, they would always show at the top of the list (until the “sticky” flag is removed). Then we could always make the recent “Open Thread” and “Group Rationality” sticky, so they are the first things people see after clicking on Discussion. This could reduce a temptation to start a new article.
Religion.
Maybe. We’ve become less New Atheisty than we used to be; this is quite clear.
Fuck yeah.
There used to be solitary transhumanist visionaries/nutcases, like Timothy Leary or Robert Anton Wilson (very different in their amount of “rationality”), and there used to be, say, fans of Hofstadter or Jaynes, but the merging of “rationalism” and… orientation towards the future was certainly invented in the 1990s. Ah, what a blissful decade that was.
Russian communism was a type of rationalist futurism: down with religion, plan the economy…
Hmm, yeah. I was thinking about the U.S. specifically, here.
Unpack what you mean by self-censorship exactly?
I regularly see people make frank comments about sexuality. There are maybe 4-5 people whose comments would be considered offensive in liberal circles, and many more people whose comments would be at least somewhat off-putting. Whenever the subject comes up (no matter who brings it up, and which political stripes they wear), it often explodes into a giant thread of comments that’s far more popular than whatever the original thread was ostensibly about.
I sometimes avoid making sex-related comments until after the thread has exploded, because most people have already made the same points; they’re just repeating themselves, because talking about pet political issues is fun. (When I do end up posting in them, it’s almost always because my own tribal affiliations are rankled and my brain thinks that engaging with strangers on the internet is an effective use of my time. I’m keenly aware as I write this that my justifications for engaging with you are basically meaningless and I’m just getting some cognitive cotton candy.) Am I self-censoring in a way you consider wrong?
I’ve seen numerous non-gender political threads get downvoted with a comment like “politics is the mindkiller” and then fade away quietly. My impression is that gender threads (even if downvoted) end up getting discussed in detail. People don’t self-censor, and that includes criticism of ideas people disagree with and/or are offended by.
What exactly would you like to change?
I think this observation is not incompatible with a self-censorship hypothesis. It could mean that topic is somewhat taboo, so people don’t want to make a serious article about it, but not completely taboo, so it is mentioned in comments in other articles. And because it can never be officially resolved, it keeps repeating.
What would happen if LW had a similar “soft taboo” about e.g. religion? What if the official policy would be that we want to raise the sanity waterline by bringing basic rationality to as many people as possible, and criticizing religion would make many religious people unwelcome, therefore members are recommended to avoid discussing any religion insensitively?
I guess the topic would appear frequently in completely unrelated articles. For example in an article about Many Worlds hypothesis someone would oppose it precisely because it feels incompatible with Bible; so the person would honestly describe their reasons. Immediately there would be dozen comments about religion. Another article would explain some human behavior based on evolutionary psychology, and again, one spark, and there would be a group of comments about religion. Etc. Precisely because people wouldn’t feel allowed to write an article about how religion is completely wrong, they would express this sentiment in comments instead.
We should avoid mindkilling like this: if one person says “2+2 is good” and other person says “2+2 is bad”, don’t join the discussion, and downvote it. But if one person says “2+2=4” and other person says “2+2=5″, ask them to show the evidence.
There is a rather large difference between LW attitudes to religion and to gender issues.
On religion, nearly everyone here agrees: all religions are factually wrong, and fundamentally so. There are a few exceptions, but not enough to make a controversy.
On gender, there is a visible lack of any such consensus. Those with a settled view on the matter may think that their view should be the consensus, but the fact is, it isn’t.
I could write a post, but it wouldn’t be in agreement with that one.
I had no interest in the opposite sex in High School. I was nerd hardcore. And was approached by multiple girls. (I noticed some even in my then-clueless state, and retrospection has made several more obvious to me; the girl who outright kissed me, for example, was hard to mistake for anything else.) I gave the “I just want to be friends” speech to a couple of them. I also, completely unintentionally, embarrassed the hell out of one girl, whose friend asked me to join her for lunch because she had a crush on me. She hid her face for sixty seconds after I came over, so I eventually patted her on the head, entirely unsure what else to do, and went back to my table.
...yeah, actually, I doubt any of the girls who pursued me in High School ever tried to take the initiative again.
I know how you feel, I utterly missed such interest myself back then.
Maybe there’s a stable reason girls/women don’t initiate; earlier onset of puberty in girls means that their first few attempts fail miserably on boys who don’t yet reciprocate that interest.
Since you mention this, I find it weird we still group students by their age, as if date of manufacture was the most important feature of their socialization and education.
We are forgetting how fundamentally weird it is to segregate children by age in this way from the perspective of traditional culture.
Have you read The Nurture Assumption? There’s a chapter on that; in the West someone who’s small/immature for his class level will be at the bottom of the pecking group throughout his education, whereas in a traditional society where kids self-segregate by age in a more flexible manner, kids will grow from being the smallest of their group to the largest of their group, so will have a wider diversity of experience.
It’s a pretty convincing reason to not make your kid skip a class.
Also a good reason to consider home-schooling or even having them enrol in primary school education one year later.
As a very rough approximation:
A normal western kid will mostly get used to a relatively fixed position in the group in terms of size / maturity
A normal kid in a traditional village society will experience the whole range of size/maturity positions in the group
A homeschooled kid will not get as much experience being in a peer group
It’s not clear that homeschooling is better than the fixed position option (though it may be! But probably for other reasons).
The post is decent enough (although rather US-centric and imprecise), but reading through the comments there, I’m very grateful for whatever changes the community has undergone since then. Most of them are unpleasant to read for various reasons.
Be specific.
and
This is just very very low-status.
God forbid we have sympathy for low-status males. This might trick some into thinking their lives and well-being are worth as much as those of real people!
Imagine if our society cared about low-status men as much as about the feelings of low-status women … the horror!
Those comments should’ve been better formulated and written in a better tone. Nothing is wrong with most individual sentences, but overall it doesn’t paint a pretty picture.
(“The underclasses are starting to get desperate. Your turn.”—“Desperate.”—“Desperate.”—“Desperate.”)
I can agree with that. But then this is a dispute about writing skill, not content, no?
These are connected. What and how we write influences what and how we think.
Well sure, but doesn’t this undermine the argument that:
If you only do it for a day or so, you get just a few corruption points, and may continue serving the Imperium at the price of but a tiny portion of your soul. Chaos has great gifts in store for those who refuse to be consumed by it!
Well done, I had to up vote the reference. :D
This is plain true in a descriptive sense.
Is it?
OF COURSE it is. My problem is with the tone and the general style.
Agreed. The advantage of LW_2012 over OB_2008 is that there are no longer posts like this or this, which promote horribly incorrect gender stereotypes.
I wish LW had a stronger lingering influence from Robin Hanson. For any faults it may have, OB is not a boring site.
That’s sort of orthogonal to my point, but yes.
I flat-out disagree; Male Sati is a perfectly OK article. There is, in my opinion, nothing harmful or unseemly about it, at least nothing in excess of what we see on other topics here.
Do you have any idea at all what reading this site is like if you have a different set of preferences? We never make any effort to make this site more inclusive of ideological or value diversity, when it is precisely this that might help us refine the art more!
Here are a handful of my specific objections to Modern Male Sati:
Hanson is arguing that cryonicists’ wives should be accepting of the fact that their husbands are a) spending a significant portion of their income on life extension, and b) spending a lot of time thinking about what they are going to do after their wives are dead, and that if they can’t accept these things, they are morally equivalent to widow-burners. This is not only needlessly insulting, but also an extremely unfair comparison.
In making this comparison, Hanson is also calling cryonicists’ wives selfish for not letting their husbands do what they want. This is a very male view of what a long-term relationship should be like, without anything to counterbalance it. It comes off like a complaint, sort of like, “my wife won’t let me go out to the bar with my male friends.”
Hanson writes: “It seems clear to me that opposition is driven by the possibility that it might actually work.” This is wrong—it seems pretty obvious that a spouse doing the “a)” and “b)” I listed above gives you valid reasons to be frustrated with them, regardless of whether you believe cryonics will actually work. Also, this line strikes me as cheap point-scoring for cryonics (although I don’t know if Hanson intended it this way).
Hanson implicitly assumes that this is a gender issue, and talks about it as such, but this isn’t necessarily so. What about men who have cryonicist wives? It’s quite possible that there actually is a gender element involved here, but not even asking the question is what I object to.
Hanson’s tone encourages others to talk about women in a specific way, as an “other,” or an out-group. This is bad for various reasons that should be somewhat self-evident.
No, I don’t think I know what it’s like reading this site with a different set of preferences. That said, I would like to see some value diversity, and I would welcome some frank discussions of gender politics. But. There should also be people writing harshly-worded rebuttals when someone says something dreadfully wrong about the opposite gender or promotes some untrue stereotype.
It might also be worth noting that lack of value diversity is the reason I object to OB_2008. Factual content aside, Modern Male Sati and Is Overcoming Bias Male? promote a very specific view of gender politics that will anger and deter some potential readers. This creates a kind of evaporative cooling effect where posters can be even more wrong about gender politics and have no one to call them out on it.
Indian widows would use up a great deal of the husband’s estate while living on for unknown years or decades (the usual age imbalance + the female longevity advantage). As for thinking about afterwards… well, I imagine they would if they had had the option, as does anyone who takes out life insurance and isn’t expected to forego any options or treatments.
Assuming the conclusion. The question is whether the outcomes are equivalent… Reading your comment, I get the feeling you’re not actually grappling with the argument but instead venting about tone and values and outgroups.
Oh, so if the husband agrees not to go out to bars, then cryonics is now acceptable to you and the wife? A mutual satisfaction of preferences, and given how expensive alcohol is, it evens the financial tables too! Color me skeptical that this would actually work...
If this were a religious dispute, like, say, which faith to raise the kids in, would you be objecting? Is it ‘selfish’ for a Jewish dad to want to raise his kids Jewish? If it is, you seem to be seriously privileging the preferences of wives over husbands on all matters, and if not, it’d be interesting to see you try to find a distinction which makes some choices of education more important than cryonics!
Opposition to cryonics really is a gender issue: look at how many men versus women are signed up! That alone is sufficient (cryonicist wives? rare as hen’s teeth), but actually, there’s even better data than that in “Is That What Love is? The Hostile Wife Phenomenon in Cryonics”, by Michael G. Darwin, Chana de Wolf, and Aschwin de Wolf; look at the table in the appendix.
It’s an unfair comparison because widow-burning comes with strong emotional/moral connotations, irrespective of actual outcomes. It’s like (forgive me) comparing someone to Hitler, in the sense that even if the outcome you’re talking about is equivalent to Hitler, the emotional reaction that “X is like Hitler” provokes is still disproportionately too large. (Meta-note: Let’s call this Meta-Godwin’s Law: comparing something to comparing something to Hitler.)
As for the actual outcomes: It seems to me that there is some asymmetry, because the widow spends her husband’s money after he is dead, whereas the cryonicist does the spending while still around. But I’ll drop this point because, as you said, I am less interested in the actual argument and more interested in how it was framed.
Yes; I explicitly stated this in my fifth bullet point.
This is not at all what I’m arguing. I am arguing that Hanson’s post pattern-matches to a common male stereotype, the overly-controlling wife. Quoting myself, “This is a very male view of what a long-term relationship should be like, without anything to counterbalance it.” I don’t think the exchange you describe would actually work in practice.
Forgive me, I do not understand how this is related to the point I was making. I don’t see the correspondence between this and cryonics. Additionally, this example is a massive mind-killer for me for personal reasons and I don’t think I’m capable of discussing it in a rational manner. I’ll just say a few more things on this point: I am not accusing cryonicists of being selfish. I am saying that it is unreasonable for Hanson to accuse wives of being selfish because of the large, presumably negative impact it has on a relationship. I am also not attempting to privilege wives’ preferences over husbands’; apologies for any miscommunication that caused that perception. I should probably also add that I am male, which may help make this claim more credible.
Side comment: I have no idea how to even begin comparing these two things, but I think this point is indicative of the large inferential gap between you and me. My System 1 response was to value choice of religious education over cryonics, whereas you seem to be implying (if I’m parsing your comment correctly, which I may not be) that the latter is clearly more important.
Whoops. Ok. I didn’t realize that.
Can I write a harshly-worded rebuttal of the idea that promoting stereotypes is always morally wrong? Or perhaps an essay on how stereotypes are useful?
Oh, of course. In fact, before I saw your comment I changed the wording to “untrue stereotype.” Some stereotypes are indeed true and/or useful. What I object to is assuming that certain stereotypes are true without evidence, and speaking as if they are true, especially when said stereotypes make strong moral claims about some group. This is what Hanson does in Modern Male Sati and Is Overcoming Bias Male?
Edit: Tone is also important. Talking about some group as if they are an out-group is generally a bad thing. The two posts by Hanson that I mentioned talk about women as if they are weird alien creatures who happen to visit his blog.
Ah ok! I have no problem with such a proposed norm then.
Hold on a minute, though—I’m not sure we actually agree here. I envision this kind of norm excluding posts like Modern Male Sati and Is Overcoming Bias Male?. Do you?
I’m OK with that, as long as we first get to have a fair meta-debate about a norm of excluding interesting posts like Modern Male Sati and the like, and as long as one is allowed to challenge such norms later if circumstances change.
I mean, what kind of a world would it be if people violated every norm they disagreed with? As long as the norm-making system is generally OK, it’s better not to sabotage it. And who knows, maybe I would be convinced in such a debate as well.
Fair point. Out of curiosity, what norms would you promote in this meta debate?
Random thought: if we assume a large universe, does that imply that somewhere/somewhen there is a novel that just happens to perfectly resemble our lives? If it does, I am so going to acausally break the fourth wall. Bonus question: how does this intersect with the rules of the internet?
Don’t worry, whether you do this or not, there is a novel where you do and a novel where you don’t, without any other distinctions.
Seems to imply it. Conversely, if you go to the “all possible worlds exist” level of a multiverse, then each novel (or other work of fiction) in our world describes events that actually happen in some other world. If you limit yourself to just the “there’s an infinite amount of stuff in our world” multiverse, then only novels describing events that would be physically and otherwise possible describe real events.
Jorge Luis Borges, The Library of Babel
That story has always bothered me. People find coherent text in the books too often, way too often for chance. If the Library of Babel really did work as the story claims, people would have given up after seeing ten million books of random gibberish in a row. That just ruined everything for me. This weird crackfic is bigger in scope, but much more believable for me because it has a selection mechanism to justify the plot.
There’s some alleged quotation about making your own life a work of art. IIRC it’s been attributed to Friedrich Nietzsche, Gabriele d’Annunzio, Oscar Wilde, and/or Pope John Paul II.
I am interested in reading on a fairly specific topic, and I would like suggestions. I don’t know any way to describe this other than by giving the two examples I have thought of:
Some time ago my family and I visited India. There, among other things, we saw many cows with an extra, useless leg growing out of their backs near the shoulders. This mutation is presumably not beneficial to the cow, but it strikes me as beneficial to the amateur geneticist. Isn’t it incredibly interesting that a leg can be the by-product of random mutation? Doesn’t that tell us a lot about the way genes are structured—namely that some small number of genes corresponds nearly directly to major, structural components of the cow. It’s not all about molecules, or cells, or even tissues! Genes aren’t like a bitmap image—they’re hierarchical and structured. Wow!
Similarly, there are stories of people losing specific memory ‘segments’, say, their personal past but not how to read and write, how to drive, or how to talk. Assuming that these stories are approximately true, that suggests that some forms of memory loss are not random. We wouldn’t expect a hard drive error to corrupt only pictures of sunny days on your computer since the hard drive doesn’t know what pictures are of sunny days. We wouldn’t even expect a computer virus to do that. At least we wouldn’t unless somewhere the pictures of sunny days are grouped together, say in a folder. So the brain doesn’t store memories like a computer stores images! Or memory loss isn’t like hard drive failures! Somewhere, memories are ‘clumped’ into personal-things and general-knowledge things so that we can lose one without losing the other and without an unfathomable coincidence of chance.
Neither of these conclusions is specific or surprising, but I know nothing about neurology and nothing about genetics, so I’m not sure how to take these ideas further than my poor computer-science-driven analogies. If someone who really knew this subject, or some subset of it, wrote about it, I can’t help feeling that it would be absolutely fascinating. Please let me know if there is such a book or article or blog post out there! Or even if you just have other observations that’ll make me think “wow” like this, tell me!
What makes you think that the extra limbs were caused by mutations? I know very little about bovine biology, but if we were dealing with a human, I would assume that an extra leg was likely caused by absorption of a sibling in utero. I have never heard of a mutation in mammals causing extra limb development. (Even weirder is the idea of a mutation causing an extra single leg, as opposed to an extra leg pair.) The vertebrate body plan simply does not seem to work that way.
Pure speculation! However, this was a widespread occurrence, not just one or two cows, hinting at some systematic setup. I also don’t remember the details, as it was many years ago and I was quite young—it’s possible that there was a pair of legs.
Forgive me, for my biology is a bit rusty.
A gene can become more common in a population without being selected for. However, invoking random genetic drift as an explanation is generally dirty pool, epistemically speaking. We should expect a gene that creates extra useless legs to be selected against. (Nutrients and energy spent maintaining the leg could be better used, the leg becomes more space for parasite invasion, etc.) Assuming that you were dealing with such cattle, you should assume that some humans were selecting for them. (No reason necessary. Humans totally do that sort of thing.)
I cannot think of any examples of a mutation causing extra limb development in vertebrates. However, certain parasites can totally cause extra limb development in amphibians. I doubt this is the case, but it is more likely than mutation.
Alternatively, consider that there may be a selection effect on your observations. I wager that Indian cattle are less likely to be culled for having an extra leg than American cattle are. I’m just going off of stereotypes here, however.
Are you sure that your example is personal vs general, rather than episodic vs procedural? The latter distinction much more obviously benefits from different encodings or being connected to different parts of the brain.
I’m not sure of anything regarding this—all I know is that it tells me a little bit, not very much, and that it would tell someone better versed in this more.
Related to: List of public drafts on LessWrong
Is meritocracy inhumane?
Consider how meritocracy drains the lower and middle classes of highly capable people, and how this increases the actual differences, both in culture and in ability, between the various parts of a society, widening the gap between them. It seems to make sense that, ceteris paribus, they will live more segregated from each other than ever before.
Now merit has many dimensions, but let’s take the example of a trait that helps you with virtually anything. Highly intelligent people have positive externalities they don’t fully capture. Always using the best man for the job should produce more wealth for society as a whole. It also appeals to our sense of fairness. Isn’t it better that the most competent man get the job, rather than the one with the highest title of nobility, or the one from the right ethnic group, or the one who got the winning lottery ticket?
Let us leave aside problems with utilitarianism for the sake of argument and ask: does this automatically mean we have a net gain in utility? The answer seems to be no. There is a transfer of wealth and quality of life not just from the less deserving to the more deserving, but from the lower and lower-middle classes to the upper classes. If people basically get the position in society they deserve, the people around them also lose the positive (or negative) externalities they used to provide. Meritocratic societies have proven fabulously good at creating wealth, and because of our impulses nearly all of them seem to have instituted expensive welfare programs. But consider what welfare is in the real world: a centralized attempt, often lacking in feedback or flexibility, that can never match the local positive externalities of competent/nice/smart people solving problems they see around themselves. Those people simply don’t exist any more in those social groups! If someone was trying to get Pareto-optimal solutions, this seems incredibly silly and harmful!
With humans, at least, centralized efforts never seem to be as efficient a way to help them as simply settling a good mix of talented poor among them would be. Now obviously meritocracy produces incredible amounts of wealth, and this is probably a good thing in itself, but since we can’t yet transform that wealth into happiness, and Western societies have proven incapable of turning it into something as vital to psychological well-being as safety from violence, are we really experiencing gains in utility? Some might dispute the safety claim by noting that murder rates are lower in the US today than in the 1960s. But this is an illusion: the rate of violent assault is higher; it’s just that the fraction of violent assaults that result in death has fallen significantly because of advances in trauma medicine. London today is worse at suppressing crime than the London of the 1900s, despite the former presumably having more wealth to spend on doing so than the latter. I find it telling that even advances in technology and the erosion of privacy brought about by technology, for example CCTV surveillance, don’t seem enough to counteract this. But I’m getting into Moldbuggery here.
Now, if society is on the brink of starvation, maybe meritocracy is a sad fact of life; but in a rich modern society, where no one is starving and the main cost of being poor is being stuck living with dysfunctional poor people, can we really say this is a net utilitarian gain? Recall that greater divergence between the managing and the managed class means that the problem of information and the principal-agent problem are getting worse.
Middle Class society seems incompatible with meritocracy. As does any kind of egalitarianism.
[unfinished draft]
I see at least two other major problems with meritocracy.
First, a meritocracy opens for talented people not only positions of productive economic and intellectual activity, but also positions of rent-seeking. So while it’s certainly great that meritocracy in science has given us von Neumann, meritocracy in other areas of life has at the same time given us von Neumanns of rent-seeking, who have taken the practice of rent-seeking to an unprecedented extent and given it ever more ingenious, intellectually involved, and emotionally appealing rationalizations. (In particular, this is also true of those areas of science that have been captured by rent-seekers.)
Worse yet, the wealth and status captured by the rent-seekers are, by themselves, the smaller problem here. The really bad problem is that these ingenious rationalizations for rent-seeking, once successfully sold to the intellectual public, become a firmly entrenched part of the respectable public opinion—and since they are directly entangled with power and status, questioning them becomes a dangerous taboo violation. (And even worse, as it always is with humans, the most successful elite rent-seekers will be those who honestly internalize these beliefs, thus leading to a society headed by a truly delusional elite.) I believe that this is one of the main mechanisms behind our civilization’s drift away from reality on numerous issues for the last century or so.
Second, in meritocracy, unless you’re at the very top, it’s hard to avoid feeling like a failure, since you’ll always end up next to people whose greater success clearly reminds you of your inferior merit.
Not only did the Medieval peasant have good reason to believe that Kings weren’t really that different from him as people, merely occupying a different proper place in society; Kings also had an easier time looking at a poor peasant and saying to themselves that there but for the grace of God go they.
In a meritocracy it is easier to disdain and dehumanize those who fail.
Do you mean to suggest that a significant percentage of Medieval peasants in fact considered Kings to not be all that different from themselves as people, and that a significant percentage of Medieval Kings actually said that there but for the grace of God go they with respect to a poor peasant?
Or merely that it was in some sense easier for them to do so, even if that wasn’t actually demonstrated by their actions?
That sounds like something I’d keep to myself as a medieval peasant if I did believe it. As such it may be the sort of thing that said peasants would tend not to think.
(Who am I kidding? I’d totally say it. Then get killed. I love living in an environment where mistakes have less drastic consequences than execution. It allows for so much more learning from experience!)
The latter. The former is an empirical claim I’m not yet sure how we could properly resolve. But there are reasons to think it may have been true.
After all the King is a Christian and so am I. It is merely that God has placed a greater burden of responsibility on him and one of toil on me. We all have our own cross to carry.
I’d say you’re looking at the history of feudal hierarchy through rose-tinted glasses. People who are high in the instrumental hierarchy of decisions (like absolute rulers) also tend to gain a similarly high place in all other kinds of hierarchies (“moral”, etc.) due to the halo effect and such. The fact that social or at least moral egalitarianism logically follows from Christian ideals doesn’t mean that self-identified Christians will bother to apply it to their view of the tribe.
Remember, the English word ‘villain’ originally meant ‘peasant’/‘serf’. It sounds like a safe assumption to me that the peasants were treated as subhuman creatures by most people above them in station.
James A. Donald disagrees.
It makes quite a bit of sense. Since incentives matter I would tend to agree.
Since I know about the past interactions you two have had here, I would appreciate it if you just focused on the argument cited and didn’t snipe at James’ other writings or character.
I’m curious what you think more generally of the article you linked to? Specifically the notion of natural rights.
Someone thinks the usage originates from an upper-class belief that the lower class had lower standards of behavior.
Hm… so to clarify your position, would you call, say, Saul Alinsky a destructive rent-seeker in some sense? Hayden? Chomsky? All high-status among the U.S. “New Left” (which you presumably—ahem—don’t have much patience for) - yet after reading quite a bit on all three, they strike me as reasonable people, responsible about what they preached.
(Yes, yes, of course I get that the main thrust of your argument is about tenured academics. But what you make of these cases—activists who think they’re doing some rigorous social thinking on the side—is quite interesting to me.)
After a painful evening, I got an A/B test going on my site using Google Website Optimizer*, testing the CSS max-width property (800, 900, 1000, 1200, 1300, & 1400px). I noticed that most sites seem to set it much more narrowly than I did, e.g. Readability. I set the ‘conversion’ target to be a 40-second timeout, as a way of measuring ‘are you still reading this?’
Overnight, each variation got ~60 visitors. The original 1400px converts at 67.2% ± 11%, while the top candidate 1300px converts at 82.3% ± 9.0% (an improvement of 22.4%), with an estimated 92.9% chance of beating the original. This suggests that a switch would materially increase how much time people spend reading my stuff.
(The other widths: currently, 1000px: 71.0% ± 10%; 900px: 68.1% ± 10%; 1200px: 66.7% ± 11%; 800px: 64.2% ± 11%.)
This is pretty cool—I was blind but now can see—yet I can’t help but wonder about the limits. Has anyone else thoroughly A/B-tested their personal sites? At what point do diminishing returns set in?
* I would prefer to use Optimizely or Visual Website Optimizer, but they charge just ludicrous sums: if I wanted to test my 50k monthly visitors, I’d be paying hundreds of dollars a month!
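For the curious, here is a minimal sketch of one common way to estimate a “chance to beat the original” figure like the 92.9% above, using Beta posteriors and Monte Carlo. The conversion counts are my own rough back-calculation from the reported percentages (~40/60 and ~49/60), and this is not necessarily how Google Website Optimizer computes it.

    import random

    # Approximate counts back-computed from the reported rates (~60 visitors each);
    # these are assumptions, not the raw GWO data.
    orig_conv, orig_n = 40, 60   # 1400px original, ~67%
    cand_conv, cand_n = 49, 60   # 1300px candidate, ~82%

    def prob_beats(conv_a, n_a, conv_b, n_b, samples=100_000):
        """Monte Carlo estimate of P(rate_a > rate_b) under Beta(1,1) priors."""
        wins = 0
        for _ in range(samples):
            a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
            b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
            wins += a > b
        return wins / samples

    print(prob_beats(cand_conv, cand_n, orig_conv, orig_n))  # roughly 0.9+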
Do you know the size of your readers’ windows?
How is the 93% calculated? Does it correct for multiple comparisons?
Given some outside knowledge (that these 6 choices are not unrelated, but come from an ordered space of choices), the result that one value is special and all the others produce identical results is implausible. I predict that it is a fluke.
No, but it can probably be dug out of Google Analytics. I’ll let the experiment finish first.
I’m not sure how exactly it is calculated. On what is apparently an official blog, the author says in a comment: “We do correct for multiple comparisons using the Bonferroni adjustment. We’ve looked into others, but they don’t offer that much more improvement over this conservative approach.”
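For readers unfamiliar with it, here is a minimal sketch of what a Bonferroni adjustment does in a setup like this; the number of comparisons is my assumption (five non-original widths against the original), and GWO’s internals may differ.

    # Bonferroni: to keep the family-wise error rate at 5% across several
    # simultaneous comparisons, each individual comparison must clear a
    # stricter per-test threshold.
    alpha = 0.05
    num_comparisons = 5              # 800, 900, 1000, 1200, 1300px vs the 1400px original
    per_test_alpha = alpha / num_comparisons
    print(per_test_alpha)            # 0.01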
Yes, I’m finding the result odd. I really did expect some sort of inverted-V result where a medium-sized max-width was “just right”. Unfortunately, with a doubling of the sample size, the ordering remains pretty much the same: 1300px beats everyone, with 900px passing 1200px and 1100px. I’m starting to wonder if maybe there are two distinct populations of users—maybe desktop users with wide screens and then smartphones? Doesn’t quite make sense, since the phones should be setting their own width, but...
A bimodal distribution wouldn’t surprise me. What I don’t believe is a spike in the middle of a plain. If you had chosen increments of 200, the 1300 spike would have been completely invisible!
New heuristic: When writing an article for LessWrong assume the casual reader knows about the material covered in HPMOR.
I used to think one could assume they’d read the Sequences and some other key stuff (Hanson etc.), but looking at debates, this simply can’t be true for more than a third of current LW users.
I find it pretty easy to pursue a course of study and answer assessment questions on the subject. Experience teaches me that such assessment problems usually tell you how to solve them (either implicitly or explicitly), and I won’t gain proper appreciation for the subject until I use it in a more poorly-defined situation.
I’ve been intending to get a decent understanding of the HTML5 canvas element for a while now, and last week I hit upon the idea of making a small point & click adventure puzzle game. This is quite ambitious given my past experience (I’m a dev, though much more at home with data than graphics or interaction design), but I decided even if I abandon the project, I’ll still have learned useful things from it. A week later and the only product I have to show for my effort is a blue blob whizzing round a 2.5D environment. I’ve succeeded in gaining an understanding of canvas, but quite by accident I’ve also consolidated my understanding of vector decomposition and projective transforms, which I learned about years ago but never actually used for my own purposes.
This got me thinking: I don’t actually know which projects will let me develop the specific skills and areas I want to develop. I’m currently studying a stats-heavy undergrad degree part-time with the intent of changing careers into something more data-sciencey in a few years. What projects should I set myself to develop those sorts of skills (or, alternatively, to alert me to the fact that I’d really hate a career in data science)?
I could use similar advice, as I am in a similarish position.
A fellow LessWrong user on IRC: “Good government seems to be a FAI-complete problem.”
Which ought not be surprising. Governments are nonhuman environment-optimizing systems that many people expect to align themselves with human values, despite not doing the necessary work to ensure that they will.
Sounds about right to me.
I just read the new novel by Terry Pratchett and Stephen Baxter, The Long Earth. I didn’t like it and don’t recommend it (I read it because I loved other books by Pratchett, but there’s no similarity here).
There was one thing in particular that bothered me. I read the first 10 reviews of the book that Google returns, and they were generally negative and complained about many things, but never mentioned this issue. Many described Baxter as a master of hard sci fi, which makes it doubly strange.
Here’s the problem: in this near-future story, gurer vf n Sbbzvat NV, nyernql fhcrevagryyvtrag naq nf cbjreshy nf n znwbe angvba, juvpu jvyy cebonoyl orpbzr zber cbjreshy guna gur erfg bs gur jbeyq pbzovarq va nabgure lrne be fb. And nobody in the world cares! It’s not a plot point! I kept expecting it to at least be mentioned by one of the characters, but they’re all completely ‘meh’. Instead they obsess over minor things like arj ubzvavq fcrpvrf fznegre guna puvzcf, ohg abg nf fzneg nf uhznaf.
Have I been spoiled by reading too much LW? Has this happened to others with other fiction?
A usual idea of utopia is that chores—repetitive, unsatisfying, necessary work to get one’s situation back to a baseline—are somehow eliminated. Weirdtopia would reverse this somehow. Any suggestions?
As the scope for complex task automation becomes broader, almost all problems become trivial. Satisfying hard work, with challenging and problem-solving elements, becomes a rare commodity. People work to identify non-trivial problems (a tedious process), which are traded for extortionate prices. A lengthy list of problems you’ve solved becomes a status symbol, not because of your problem-solving skills, but because you can afford to buy them.
Another angle: Is it plausible that almost all problems become trivial, or will increased knowledge lead to finding more challenging problems?
The latter seems at least plausible, considering that the universe is much bigger than our brains, and this will presumably continue to be true.
Look at how much weirder the astronomical side of physics has gotten.
I don’t think you’ve answered my question, but you’ve got an interesting idea there.
What do people buy which would be more satisfying than solving the problems they’ve found?
Also, this may be a matter of the difference between your and my temperaments, but is finding non-trivial problems that tedious?
As it’s the result of about two minutes thought, I’m not very confident about how internally consistent this idea is.
If finding non-trivial problems is tedious work, I imagine people with a preference for tedious work (or who just don’t care about satisfying problems) would probably rather buy art/prostitutes/spaceship rides, etc. This is the bit I find hardest to internally reconcile, as a society in which most work has become trivially easy is probably post-scarcity.
I personally don’t find the search for non-trivial problems all that tedious, but if I could turn to a computer and ask “is [problem X] trivial to solve?”, and it came back with “yes” 99.999% of the time, I might think differently.
“The daily tasks of living give meaning to life. Chopping wood, drawing water: these are the highest accomplishments. Using machines to do these things empties life of life itself. To spend your days growing your own food, making with your own hands everything that you need, living as a natural part of nature like all the other animals: this is paradise. Contrast the seductive allure of machines and cities, raping our Mother for our vile enjoyment, waging war against the imaginary monsters of “disease” and “poverty” instead of accepting the natural balance of Nature, striving always to see who can most outdo the original sin of separating from the great apes. “Scientists” see our Mother as a corpse to be looted, but if we do not turn away from that false path, out of her eternal love she will wring our neck as any loving mother will do to a deformed child.”
Deep green ecology, in other words.
Modify us to see real chores the way we see fun, addictive task-management games.
It would be a subtle problem to manage that so that people don’t spend excessive amounts of time on chores.
Yes.
Heck, it’s a subtle problem to even identify what an “excessive” amount of time to spend on chores is.
Some more SIAI-related work: looking for examples of costly real-world cognitive biases: http://dl.dropbox.com/u/85192141/bias-examples.page
One of the more interesting sources is Heuer’s Psychology of Intelligence Analysis. I recommend it, for the unfamiliar political-military examples if nothing else. (It’s also good background reading for understanding the argument diagramming software coming from the intelligence community, not that anyone on LW actually uses them.)
The cia.gov link leads to a redirect.
Weird. If you just replace http with https, it works; one wonders why they couldn’t just set up 301 redirects for all the old links...
It’s been a while since I read it, but I recall the book Sway being a good source of bias examples.
I read quite a bit, and I really like some of the suggestions I found in LW. So, my question is: is there any recent or not-so-recent-but-really-good book you would recommend? Topics I’d like to read more about are:
evolutionary psychology (I read some Robert Wright, I’d like to read something a bit more solid)
status/prestige theory (Robin Hanson uses it all the time, but is there some good text discussing this?)
I’m happy to read pop-sci, as long as it’s written with a skeptical, rationalist mindset. I.e. I liked Linden’s The accidental mind, but take Gladwell’s writings with a rather big grain of salt.
Give him a year or two and he’ll have written one.
http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/ has a download of one book very close to this topicspace.
Thanks! The link doesn’t seem to work, but I’ll check out the book. Did you read it?
No, I haven’t read it yet, but it’s on my list. Here’s another download link http://dl.dropbox.com/u/33627365/Scholarship/Spent%20Sex%20Evolution%20and%20Consumer%20Behavior.pdf
Thanks, Grognor!
I just finished reading it. The start is promising, discussing consumer behavior from the signaling/status perspective. There’s some discussion of the Big Five personality traits + general intelligence, which was interesting (and I’ll need to look into a bit deeper). It shows how these traits influence our buying habits, and the crazy things people do for a few status points...
The end of the book proposes some solutions to hyper-consumerism, and this part I did not particularly like—in a few pages the writer comes up with some far-far-reaching plans (consumption tax etc.) to influence consumers; all highly speculative, not likely to ever be realized.
Apart from the end, I liked it; the writer is quick & witty and provides food for thought.
A question about acausal trade
(btw, I couldn’t find a good link for acausal trade introduction discussion; I would be grateful for one)
We discussed this at a LW Seattle meetup. It seems like the following is an argument for why all AIs with a decision theory that does acausal trade act as if they have the same utility function. That’s a surprising conclusion to me which I hadn’t seen before, but also doesn’t seem too hard to come up with, so I’m curious where I’ve gone off the rails. This argument has a very Will_Newsomey flavor to it to me.
Let’s say we’re in a big universe with many, many chances for intelligent life, but most of them are so far apart that they will never meet each other. Let’s also say that UDT/TDT-like decision theories are in some sense the obviously correct decision theories to follow, so that many civilizations, when they build an AI, use something like UDT/TDT. At their inception, these AIs will have very different goals, since the civilizations that built them would have very different evolutionary histories.
If many of these AIs can observe that the universe is such that there will be other UDT/TDT AIs out there with different goals, then each AI will trade acausally with the AIs it thinks will be out there. Presumably each AI will have to study the universe and figure out a probability distribution for the goals of those AIs. Since the universe is large, each AI will expect many other AIs to be out there and thus bargain away most of its influence over its local area. Thus, the starting goals of each AI will have only a minor influence on what it does; each AI will act as if it has some combined utility function.
What are the problems with this idea?
Substitute the word causal for acausal. In a situation of “causal trade”, does everyone end up with the same utility function?
The Coase theorem does imply that perfect bargaining will lead agents to maximize a single welfare function. (This is what it means for the outcome to be “efficient”.) Of course, the welfare function will depend on the agents’ relative endowments (roughly, “wealth” or bargaining power).
(Also remember that humans have to “simulate” each other using logic-like prior information even in the straightforward efficient-causal scenario—it would be prohibitively expensive for humans to re-derive all possible pooling equilibria &c. from scratch for each and every overlapping set of sense data. “Acausal” economics is just an edge case of normal economics.)
Unrelated question: Do you think it’d be fair to say that physics is the intersection of metaphysics and phenomenology?
The most glaring problem seems to be how it could deduce the goals of other AIs. It either implies the existence of some sort of universal goal system, or allows information to propagate faster than c.
What I had in mind was that each of the AIs would come up with a distribution over the kinds of civilizations which are likely to arise in the universe by predicting the kinds of planets out there (which is presumably something you can do since even we have models for this) and figuring out different potential evolutions for life that arises on those planets. Does that make sense?
I was going to respond saying I didn’t think that would work as a method, but now I’m not so sure.
My counterargument would be to suggest that there’s no goal system which can’t arbitrarily come about as a Fisherian Runaway, and that our AI’s acausal trade partners could be working on pretty much any optimisation criteria whatsoever. Thinking about it a bit more, I’m not entirely sure the Fisherian Runaway argument is all that robust. There is, for example, presumably no Fisherian Runaway goal of immediate self-annihilation.
If there’s some sort of structure to the space of possible goal systems, there may very well be a universally derivable distribution of goals our AI could find, and share with all its interstellar brethren. But there would need to be a lot of structure to it before it could start acting on their behalf, because otherwise the space would still be huge, and the probability of any given goal system would be dwarfed by the evidence of the goal system of its native civilisation.
There’s a plot for a Cthulhonic horror tale lurking in here, whereby humanity creates an AI, which proceeds to deduce a universal goal preference for eliminating civilisations like humanity. Incomprehensible alien minds from the stars, psychically sharing horrible secrets written into the fabric of the universe.
Except for the eliminating-humans part, the Cthulhonic outcome seems almost like the default. We build an AI, proving that it implements our reflectively stable wishes, and then it still proceeds to pay very little attention to what we thought we wanted.
One thing that might push back in the opposite direction is that if humans have heavily path-dependent preferences (which seems pretty plausible), or are selfish with respect to currently existing humans in some way, then an AI built for our wishes might not be willing to trade much of humanity away in exchange for resources far away.
The Cthulhonic outcome is only the case if there are identifiable points in the space of possible goal systems to which the AI can assign enough probability to make them credible acausal trade partners. Whether those identifiable points exist is not clear or obvious.
When it ruminates over possible varieties of sapient life in the universe, it would need to find clusters of goals that were (a) non-universal, (b) specific enough to actually act upon, and (c) so probabilistically dense that they didn’t vanish into obscurity against humanity’s preferences, which it possesses direct observational evidence for.
Whether those clusters exist, and if they do, whether they can be deduced a priori by sitting in a darkened room and thinking really hard, does not seem obvious either way. Intuitively, thinking about trying to draw specific conclusions from extremely dilute evidence, I’m inclined to think they can’t, but I’m not prepared to inject that belief with a super amount of confidence, as I may very well think differently if I were a billion times smarter.
I think what matters is not so much the probability of goal clusters, but something like the expectation of the amount of resources that AIs that have a particular goal cluster have access to. An AI might think that some specific goal cluster only has a 1:1000 chance of occurring anywhere, but if it does then there are probably a million instances of it. I think this is the same as being certain that there are 1,000 (1million/1,000) AIs with that goal cluster. Which seems like enough to ‘dilute’ the preferences of any given AI.
If the universe is pretty big then it seems like it would be pretty easy to get large expectations even with low probabilities. (let me know if I’m not making sense)
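A quick worked version of that arithmetic, using only the toy numbers from the comment above:

    # Expected number of AIs with a given goal cluster:
    # P(cluster occurs at all) times the number of instances there would be if it does.
    p_cluster = 1 / 1000
    instances_if_exists = 1_000_000
    expected_ais = p_cluster * instances_if_exists
    print(expected_ais)  # 1000.0 -- sizeable in expectation despite the low probability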
The “million instances” is the size of the cluster, and yes, that would impact its weight, but I think it’s arithmetically erroneous to suggest the density matters more than the probability. It depends entirely on what those densities and probabilities are, and you’re just plucking numbers straight out of the air. Why not go the whole hog and suggest a goal cluster that happens nine times out of ten, with a gajillion instances?
I believe the salient questions are:
Do such clusters even exist? Can they be inferred from a poverty of evidence just by thinking about possible agents that may or may not arise in our universe with enough confidence to actually act upon? This boils down to whether, if I’m smart enough, I can sit in an empty room, think “what if...” about examples of something I’ve never seen before from an enormous space of possibilities, and come up with an accurate collection of properties for those things, weighted by probability. There are some things we can do that with and some things we can’t. What category do alien goal systems fall into?
If they do exist, will they be specific enough for an AI to act upon? Even if it does deduce some inscrutable set of alien factors that we can’t make sense of, will they be coherent? Humans care a lot about methods of governance, the moral status of unborn children, and who people should and shouldn’t have sex with, but they don’t agree on these things.
If they do exist, are there going to be many disparate clusters, or will they converge? If they do converge, how relatively far away from the median is humanity? If they’re disparate, are they completely disjoint goals, or do they overlap and/or conflict with each other? More to the point, are they going to overlap and/or conflict with us?
I can’t say how much we’d need to worry about a superintelligent TDT-agent implementing alien goals; that’s a fact about the universe for which I don’t have a lot of evidence. However, there’s more than enough uncertainty surrounding the question for me not to lose any sleep over it.
One problem is that, in order to actually get specific about utility functions, the AI would have to simulate another AI that is simulating it—that’s like trying to put a manhole cover through its own manhole by putting it in a box first.
If we assume that the computation problems are solved, a toy model involving robots laying different colors of tile might be interesting to consider. In fact, there’s probably a post in there. The effects will be different sizes for different classes of utility functions over tiles. In the case of infinitely many robots with cosmopolitan utility functions, you do get an interesting sort of agreement, though.
This outcome is bad because bargaining away influence over the AI’s local area in exchange for a small amount of control over the global utility function is a poor trade. But in that case, it’s also a poor acausal trade.
A more reasonable acausal trade to make with other AIs would be to trade away influence over faraway places. After all, other AIs presumably care about those places more than our AI does, so this is a trade that’s actually beneficial to both parties. It’s even a marginally reasonable thing to do acausally.
Of course, this means that our AI isn’t allowed to help the Babyeaters stop eating their babies, in accordance with its acausal agreement with the AI the Babyeaters could have made. But it also means that the Superhappy AI isn’t allowed to help us become free of pain, because of its acausal agreement with our AI. Ideally, this would hold even if we hadn’t made an AI yet.
I agree with your logic, but why do you say it’s a bad trade? At first it seemed absurd to me, but after thinking about it I’m able to feel that it’s the best possible outcome. Do you have more specific reasons why it’s bad?
At best it means that the AI shapes our civilization into some sort of twisted extrapolation of what other alien races might like. In the worst case, it ends up calculating a high probability of existence for Evil Abhorrent Alien Race #176, which is in every way antithetical to the human race, and the acausal trade that it makes is that it wipes out the human race (satisfying #176’s desires) so that if the #176s make an AI, that AI will wipe out their race as well (satisfying human desires, since you wouldn’t believe the terrible, inhuman, monstrous things those #176s were up to).
Perhaps it is not wise to speculate out loud in this area until you’ve worked through three rounds of “ok, so what are the implications of that idea” and decided that it would help people to hear about the conclusions you’ve developed three steps back. You can frequently find interesting things when you wander around, but there are certain neighborhoods you should not explore with children along for the ride until you’ve been there before and made sure it’s reasonably safe.
Perhaps you could send a PM to Will?
Not just going meta for the sake of it: I assert you have not sufficiently thought through the implications of promoting that sort of non-openness publicly on the board. Perhaps you could PM jsalvatier.
I’m lying, of course. But interesting to register points of strongest divergence between LW and conventional morality (JenniferRM’s post, I mean; jsalvatier’s is fine and interesting).
I’m feeling fairly negative on lesswrong this week. Time spent here feels unproductive, and I’m vaguely uncomfortable with the attitudes I’m developing. On the other hand there are interesting people to chat with.
Undecided what to do about this. Haven’t managed to come up with anything to firm up my vague emotions into something specific.
Perhaps I’ll take a break and see how it feels.
I was feeling fairly negative on Less Wrong recently. I ended up writing down a lot of things that bothered me in a half-formed, angry Google Doc rant, saving it...
and then going back to reading Less Wrong a few days later.
It felt refreshing though, because Less Wrong has flaws and you are allowed to notice them and say to yourself “This! Why are some people doing this! It’s so dumb and silly!”
That being said, I’m not sure that all of the arguments that my straw opponents were presenting in the half formed doc are actually as weak as I was making them out to be. But it did make me feel more positive overall simply summing up everything that had been bugging me at the time.
Hasn’t worked for Konkvistador.
I’m only posting this to clarify. Old habits do indeed die hard, but I so far haven’t changed my mind despite receiving some interesting email on the topic. Hopefully this will become more apparent after a month or two of inactivity.
What are the attitudes you are feeling uncomfortable with?
Hmm this is a bit fuzzy, as I said—part of my problem is that I just have a vague feeling and am having difficulty making it less vague. But:
an uncomfortable air of superiority
a bit too much association with right wing politics.
Some of the PUA stuff is a bit weird (not discussed directly on the site so much but in related contexts)
It would very much help if you could name three examples of each of your complaints, this would help you see if this really is the source of your unease. It would also help others figure out if you are right.
Overestimating our rationality and generally feeling like clearer thinkers than anyone ever? Or perhaps unwilling to update on outside ideas, like Konkvistador recently complained?
There is a lot of right-wing politics on the IRC channel, but overall I don’t think I’ve seen much on the main site. On net, the site’s demographics are, if anything, remarkably left-wing.
The PUA stuff may come off as weird due to inferential distances, or because people accumulate strange ideas they can’t sanity-check. Both are the result of the community norm that now seems to be to strongly avoid gender issues, because we’ve proven time and again to be incapable of discussing them the way we discuss most other things. This is a pattern that seems to go back to the old OB days.
I use LW casually and my attitude towards it is pretty neutral/positive, but I recently got downvoted something like 10 times on past comments, it seems. A karma loss of 5%, which is a lot, considering how much karma I have relative to how long I’ve been here. I didn’t even get into a big argument or anything; the back-and-forth was pretty short. So my attitude toward LW is very meh right now. Sorry, sort of wanted to just say this somewhere. ugh :/
The fact that LW is a forum about rationality/science doesn’t mean it’s good for you all the time. Strategically speaking, redefine your goals.
Or maybe the quality of posts is not the same as it was before.
After a week long vacation at Disney World with the family, it occurs to me there’s a lot of money to be made in teaching utility maximization to families...mostly from referrals by divorce lawyers and family therapists.
For the lesswrong vanity domain fan, ble.gg seems to be available.
And ru.be looks like it’s up for sale too.
I’m trying to memorise mathematics using spaced repetition. What’s the best way to transcribe proofs onto Anki flashcards to make them easy to learn? (ie what should the question and answer be?)
When it comes to formulating Anki cards, it’s good to have the 20 rules from SuperMemo in mind.
The important thing is to understand before you memorize. You should never try to memorize a proof without understanding it in the first place.
Once you have understood the proof, think about what’s interesting about it. Ask questions like: “What axioms does the proof use?” “Does the proof use axiom X?” Try to find as many questions with clear answers as you can. Being redundant is good.
If you find yourself asking a certain kind of question frequently, invent a shorthand for it: axioms(proof X) can replace “What axioms does the proof use?”
If you really need to remember the whole proof then memorize it step by step.
Proof A:
Do A
Do B

becomes 2 cards:

Card 1:
Proof A:
[...]

Card 2:
Proof A:
Do A
[...]
If you have a long proof that could mean 9 steps and 9 cards.
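If it helps, here is a small hypothetical helper (the function name and card format are mine, not part of any Anki API) that mechanically produces the step-by-step cards described above:

    def proof_to_cards(title, steps):
        """Turn an ordered list of proof steps into incremental (question, answer) cards,
        revealing one new step per card."""
        cards = []
        for i in range(len(steps)):
            question = "\n".join([title + ":"] + steps[:i] + ["[...]"])
            answer = steps[i]
            cards.append((question, answer))
        return cards

    # A 2-step proof yields 2 cards; a 9-step proof would yield 9.
    for question, answer in proof_to_cards("Proof A", ["Do A", "Do B"]):
        print(question)
        print("->", answer)
        print()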
Thanks!
I’ve been doing something similar (maths in an Anki deck), and I haven’t found a good way of doing so. My current method is just asking “Prove x” or “Outline a proof of x”, with the proof wholesale in the answer, and then I run through the proof in my head calling it “Good” if I get all the major steps mostly correct. Some of my cards end up being quite long.
I have found that being explicit with asking for examples vs definitions is helpful: i.e. ask “What’s the definition of a simple ring?” rather than “What’s a simple ring?”.
“def(simple ring)” is more efficient than “What’s the definition of a simple ring?”
I find that having proper sentences in the questions means I can concentrate better (less effort to work out what it’s asking, I guess), but each to their own.
If you have 50 cards that are in the style “def(...)”, then it doesn’t take any effort to work out what it’s asking anymore.
Rereading “What’s the ” over a thousand times wastes time. When you do Anki for longer periods of time reducing the amount of time it takes to answer a card is essential.
A method that I’ve been toying with: dissect the proof into multiple simpler proofs, then dissect those even further if necessary. For instance, if you’re proving that all X are Y, and the proof proceeds by proving that all X are Z and all Z are Y, then make 3 cards:
One for proving that all X are Z.
One for proving that all Z are Y.
One for proving that all X are Y, which has as its answer simply “We know all X are Z, and we know all Z are Y.”
That said, you should of course be completely certain that memorizing proofs is worthwhile. Rule of thumb: if there’s anything you could do that would have a higher ratio of awesome to cost than X, don’t do X before you’ve done that.
Did the site CSS just change the font used for discussion (not Main) post bodies? It looks bad here.
Edit: it only happens with some posts. Like these:
http://lesswrong.com/r/discussion/lw/dd0/hedonic_vs_preference_utilitarianism_in_the/ http://lesswrong.com/r/discussion/lw/dc4/call_for_volunteers_publishing_the_sequences/
But not these:
http://lesswrong.com/r/discussion/lw/ddh/aubrey_de_grey_has_responded_to_his_iama_now_with/ http://lesswrong.com/r/discussion/lw/dcy/the_fiction_genome_project/
Is it perhaps a formatting change applied when posting?
Also, when I submit a new comment and then edit it, it now starts with an empty line.
Fixed
It’s a known Bug #315
Sacredness as a Monster by Sister Y, aren’t you glad I read cool blogs? :)
One more item for the FAI Critical Failure Table (humor/theory of lawful magic):
37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn’t enough, so crazy people don’t make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn’t need to be understood in detail by anyone, the expectation only has to be close enough to the real effect, so the details of expectation-caused phenomena can lawfully exist independently of the content of people’s expectations about them. Since a (justified) expectation is sufficient for something to happen, all sorts of miracles can happen. Since to happen, a miracle has to be expected to happen, it’s necessary for someone to know about the miracle and to expect it to happen. Learning about a miracle from an untrustworthy (or mistakenly trusted) source doesn’t make it happen, it’s necessary for the knowledge of possibility (and sufficiently clear description) of a miracle to be communicated reliably (within the tolerance of what counts for an effect to have been correctly anticipated). The path of a powerful wizard is to study the world and its history, in order to make correct inferences about what’s possible, thereby making it possible.
(Previously posted to the Jan 2012 thread by mistake.)
A poll, just for fun. Do you think that the rebels/Zionists in The Matrix were (mostly or completely) cruel, deluded fundamentalists committing one atrocity after another for no good reason, and that in-universe their actions were inexcusable?
Upvote for “The Matrix makes no internal sense and there’s no fun in discussing it.”
I agree (the franchise established itself as rather one-dimensional… in about the first 40 minutes) - but hell, I get into discussions about TWILIGHT, man. I’m a slave to public discourse.
Karma sink.
Upvote for NO.
Upvote for YES.
Wow. That sequence was drastically less violent than I remembered it being. I noticed (for I believe the first time) that they actually made some attempt to avoid infinite ammo action movie syndrome. Also I must have thought the cartwheel bit was cool when I first saw it, but now it looks quite ridiculous and/or dated.
Maybe it’s time for a rewatch.
Karma sink.
What is the meaning of the three-digit codes in American university courses? Such as: “Building a Search Engine (CS101)”, “Crunching Social Networks (CS215)”, “Programming A Robotic Car (CS373)”, currently on Udacity.
Seems to me that 101 is always the introduction to the subject. But what about the other numbers? Do they correspond to some (subject-specific) standard? Are they arbitrary (perhaps with a general trend of giving more difficult courses higher numbers)?
The first digit is the most important. It indicates the “level” of the course: 100/1000 courses are freshman level, 200/2000 are sophomore level, etc. There is some flexibility in these classifications, though. Examples: My undergraduate university used 1000 for intro level, 2000 for intermediate level, 4000 for senior/advanced level, and 6000 for graduate level. (3000 and 5000 were reserved for courses at a satellite campus.) My graduate university uses 100, 200, 300, 400 for the corresponding undergraduate year levels, and 600, 700, 800 for graduate courses of increasing difficulty levels.
The other digits in the course number often indicate the rough order in which courses should be taken within a level. This is not always the case; sometimes they are just arbitrary, or they may indicate the order in which courses were added to the institute’s offerings.
In general, though the numbers indicate the levels of the courses and the order in which they “should” be taken, students’ schedules need not comply precisely (outside of course-specific prerequisite requirements).
It varies from institution to institution, but generally the first number indicates the year you’re likely to study it, so “Psychology 101” is the first course you’re likely to study in your first year of a degree involving psychology, which is why it’s the introduction to the subject. The numbering gets messy for a variety of reasons.
I should point out I’m not an American university student, but this style of numbering system is becoming prevalent throughout the English-speaking world.
101′s stereotypically the introduction to the course, but this sort of thing actually varies quite a bit between universities. Mine dropped the first digit for survey courses and introductory material; survey courses were generally higher two-digit numbers (i.e. Geology 64, Planetary Geology), while introductory courses were more often one-digit or lower two-digit numbers (i.e. Math 3A, Introduction to Calculus). Courses intended to be taken in sequence had a letter appended. Aside from survey courses, higher numbers generally indicated more advanced or specialized classes, though not necessarily more difficult ones.
Three digits indicated an upper-division (i.e. nominally junior- or senior-level) or graduate-level course. Upper-division undergrad courses were usually 100-level, and the 101 course was usually the first class you’d take that was intended only for people of your major; CS 101 was Algorithms and Abstract Data Types for me, for example, and I took it late in my sophomore year. Graduate courses were 200-level or higher.
We often hear about how professional philanthropy is a very good way to improve others’ lives. Have any LWers actually gone this route?
Just started on Wall Street
How’s it going?
We often hear that? What do you mean by professional philanthropy here?
I mean the general line of reasoning that goes, “Go do the highest-paying job you can get and then donate your extra money to AMF or other highly effective charities.” The most oft-cited high-paying job seems to be to work on Wall Street or some such.
Oh, okay, I thought you meant something else.
I would like to try some programming in Lisp, could you give me some advice? I have noticed that in the programming community this topic is prone to heavy mindkilling, which is why I ask on LW instead of somewhere else.
There are many variants of Lisp. I would prefer to learn one that is really used these days for developing real-world applications. Something I could use to make e.g. a Tetris-like game. I will probably need some libraries for input and output; which ones do you recommend? I want free software that works out of the box, preferably on a Windows machine, without having to install a Linux emulator first. (If such a thing does not exist, please tell me, and recommend the second-best possibility.)
I would also like a decent development environment: something that allows me to manage multiple source code files, does syntax highlighting, and shows documentation for the functions I am writing. Again, preferably free and working out of the box on a Windows machine. Simply put, I would like an equivalent of what Eclipse is for Java.
Then, I would like some learning resources, and information where can I find good open-source software written in Lisp, preferably games.
My research suggests Clojure is a lisp-like language most suited to your requirements. It runs on the JVM so should be relatively low hassle on Windows. I believe there’s some sort of Eclipse support but I can’t confirm it.
If you do end up wanting to do something with Common Lisp, I recommend Practical Common Lisp as a good free introduction.
Well, if your goal is to try it out for learning purposes, on Windows, you could start with DrRacket. http://racket-lang.org/
It is a reasonable IDE, it has some GUI libraries included, open-source, cross-platform, works fine on Windows.
Racket is based on the Scheme language (which is part of the Lisp family). It has a mode for Scheme as described in the R6RS or R5RS standard, and it has a few not-fully-compatible dialects.
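If you want a quick taste of what a first program in DrRacket might look like, here is a minimal sketch (just an illustration, not taken from any tutorial): type it into the definitions pane, hit Run, and poke at the functions interactively in the pane below.

    #lang racket
    ;; Define a function, then use it; DrRacket's lower pane is a REPL
    ;; where you can call these interactively.
    (define (factorial n)
      (if (zero? n)
          1
          (* n (factorial (sub1 n)))))

    (printf "5! = ~a\n" (factorial 5))

    ;; Lists are the bread and butter: the first ten squares.
    (printf "squares: ~a\n" (for/list ([i (in-range 1 11)]) (* i i)))

For something Tetris-like later on, Racket ships with teaching libraries such as 2htdp/image and 2htdp/universe for simple graphics and game loops, though I can’t vouch for how far they scale.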
I use Common Lisp, but not under Windows. Common Lisp has more cross-implementation libraries, it could be useful sometimes. Probably, EQL is the easiest to set up under Windows (it is ECL, a Common Lisp implementation, merged with Qt for GUI; I remember there being a bundled download). Maybe CommonQt or Cells-GTK would work. I remember that some of the Common Lisp package management systems have significant problems under Windows or require either Cygwin or MSys (so they can use tar, gzip, mkdir etc. as if they were on a Unix-like system)
My goals are: 1) to get the “Lisp experience” with minimum overhead; and 2) to use the best available tools.
And I hope these two goals are not completely contradictory. I want to be able to write my own application on my computer conveniently after a few minutes, and to fluently progress to more complex applications. On the other hand, if I happen to later decide that Lisp is not for me, I want to be sure it was not only because I chose the wrong tools.
Thanks for all the answers! I will probably start with Racket.
For a certain value of “the Lisp experience”, Emacs may be considered more or less mandatory. In order to recommend for or against it I would need more precise knowledge of your goals.
I tried Emacs and decided that I dislike it. I understand the reason why it is like that, but I refuse to lower my user interface expectations that low.
Generally, I have noticed the trend that software which is praised as superior often comes with a worse user interface, or ignores some other part of the user experience. I can understand that software with a smaller userbase cannot put enough resources into its non-critical parts. That makes sense. But I suspect there later appears a mind-killing train of thought, which goes like this: “Our software is superior. Our software does not have feature X. Therefore, not having feature X is an advantage, because...” As in: we don’t need a 21st-century-style user interface, because good programmers don’t need such things.
By wanting a “Lisp experience” I mean I would like to experience (or falsify the existence of) the nirvana frequently described by Paul Graham. Not to replicate 1:1 Richard Stallman’s working conditions in the 1980s. :D
A perfect solution would be to combine the powerful features of Lisp with the convenience of modern development tools. I emphasize the convenience for pragmatic reasons, but also as a proxy for “many people with priorities similar to me are using it”.
Consider an equilibrium of various software products, none of which are strictly superior or inferior to each other. Upon hearing that the best argument someone can make for software X is that it has feature Y (which is unrelated to UI), should your expectation of good UI go up or go down?
(To try it a different way: suppose you are in a highly competitive company like Facebooglazon and you meet a certain programmer who is the rudest most arrogant son of a bitch you ever met—yet he is somehow still employed there. What should you infer about the quality of the code he writes?)
This is a nice example of how, with different models, the same evidence can be evaluated differently.
My model is that programming languages are used for making programs, and for languages used in real production, part of that effort goes into the positive-feedback loop of creating better tools and libraries for the given language. So if some language makes production easier (people like Paul Graham suggest that Lisp is 10 times more productive than other languages), I would expect better everything.
In other words, the “equilibrium of various software products none of which are strictly superior or inferior to each other” is evidence against the claim that a language X is 10 times more productive than other languages. Or if it is more productive in some areas, then it must have a huge disadvantage somewhere else.
Fast, reliable, undocumented, obfuscated. :D
Or he is really employed for some other reason than writing code.
Yup! It’s the old ‘if you’re so good, why aren’t you rich’ question in more abstract guise. Of course, in the real world, new languages are being developed all the time, so a workable answer is already ‘I’m not rich because I’m so new, but I’m getting richer’. This is the sort of answer an up and coming language like Haskell or Scala or Go can make.
My current understanding of present IDEs is that they are both very language-bound and need a huge amount of work to become truly usable. That means that for any language that doesn’t currently enjoy large industry acceptance, I basically don’t expect to have any sort of modern usable IDE.
I’m not personally hung up on the Emacs thing, but then again my recipe for a development environment is Your Favorite General Purpose Text Editor, printf statements for debugging code, a console to read the printf output, and a read-eval-print loop for the programming language if it has one (Lisp does).
If most of the people who are in position to develop modern development tools for Lisp are in fact happy using Emacs and SLIME, the result is going to be that there won’t be much of a non-Emacs development environment ecosystem for Lisp. And it’s unlikely that there are any unearthed gems that turn out to be outstanding modern Lisp IDEs if IDEs really do require lots and lots of work and a wide user base giving feedback to be truly useful. Though Lisp does have commercial niche companies who are still around and who have had decades of income to develop whatever proprietary tools they are using. I’ve no idea what kind of stuff they have got.
Speaking of the general Lisp experience, you might also want to take a look at Factor. It’s primarily modeled after Forth instead of Lisp, but it basically matches all of Graham’s “What made Lisp different” checklist. The code is data, the metaprogramming machinery is extensive and so on. The idiom is also somewhat more weird than Lisp’s, and the programs are constantly threatening to devolve into a soup of incomprehensible three-letter opcodes, but I found the thing fun to work with. Oh, and the only IDE Factor has is Emacs-based, unless you count the language REPL, I think its ecosystem is small enough that I haven’t missed any significant competitors.
Well, for me Vim bindings are something that (after some learning) started to make a lot of sense. Emacs (after the same amount of learning) didn’t make that much sense… As text editors, modern IDEs are still weaker than either of them; you usually have to choose what to forfeit. Sometimes you can embed your preferred editor inside the IDE instead of the native one, though.
For satisfying your curiosity, I guess you could try out the free-of-charge Allegro Common Lisp version. It is a personal no-deployment no-commercial-use no-commercial-research no-university-research no-government-research edition. I never looked at it because I am OK with Vim and I don’t want to have something dependent on ACL that I cannot use in my day-job projects. Neither is a good reason for you not to try it...
Many people, myself included, say that most things that aren’t Emacs (or Vim, depending on their religion...) have bad user interfaces. The keyboard-only way of working is very nice if you can get the hang of it. (Emacs is hard to begin with.)
That said, SLIME is basically the canonical Common Lisp editing environment, and the environments for other dialects emulate many of its features (e.g. Geiser for Racket). Were you using one of those when you were using Emacs with a Lisp?
I used Emacs very shortly, only as a text editor. The learning curve is horrible—my impression is that you need to memorize dozens of new keyboard shortcuts (and unlearn dozens of keyboard shortcuts more or less consistently accepted by many other applications, also clicking right mouse button for a context menu). There seem to be some interesting features, but again only for those who memorize the keyboard shortcuts. And the whole design seems like a character terminal emulator.
So the problem is that it looks interesting, but one has to pay a huge price ahead. That would make sense if I were already convinced that Emacs is the only editor and Lisp the only programming language I will use, but I just want to try them.
By the way, what exactly is so great about “the keyboard-only way of working”? Is it the speed of typing? I usually spend more time thinking about the problem than typing. Are some powerful features invoked by keyboard combos? I would prefer them to be available from the menu and context menu. Or both from menu and as a keyboard shortcut, so I can memorize the frequently-used ones, but not the rest. (Maybe this is possible in Emacs too. If yes, the tutorial should mention it.)
To me it now seems that learning Lisp with Emacs would be having two problems instead of one. More precisely, to make the learning curve even worse.
There’s a solution to the unfamiliar shortcuts problem: turn on CUA mode. CUA mode enables the familiar Ctrl-Z, Ctrl-X, Ctrl-C, Ctrl-V for undo, cut, copy, and paste, respectively. For basic text navigation, I use Emacs mostly like an editor with standard bindings (the aforementioned undo-cut-copy-paste, arrow keys to move by character, Control plus arrow keys to move by word, &c.). There are other things to learn, but the transition isn’t really that bad.
Speed, features, and working well for many languages (i.e. people have written Emacs modes for most languages).
Having everything on the keyboard means that you don’t have to do so many context switches (which are annoying and I find they can disrupt my train of thought). As an example, in most word processors, bolding text with Shift+arrow keys then Ctrl+B is much, much nicer than moving to the mouse, carefully selecting the text and then going up to the menu bar to click the little icon. And Emacs has been around for decades, so there are hundreds of little (or not so little) packages that do anything and everything, e.g. editing a file via SSH transparently is pretty nice.
Having one environment for writing a LaTeX report, a Markdown file, a C, Haskell, Python or Shell (etc) program is nice because the basic shortcuts are the same and every environment is guaranteed to act how you expect, so, for example, doing a regex string replacement is the same process.
And on the note of keyboard combos, they are something that you end up learning by muscle memory, so it takes a little while but they become second nature, to the point of not being able to say what the shortcut is straight out, only able to work it out by actually doing the action.
(That said, Emacs/Vim isn’t for everyone: maybe the time investment is too large, or it doesn’t really suit one’s way of working.)
Well, I have a paid job where I write in Common Lisp, and I use Vim, and both statements (paid job with CL and Vim usage) are true for multiple years.
It is a good idea to know there are different options and have a look at them, of course.
It is a good idea to look at Cream-for-Vim, too: it has Vim as its core, and most modes allow you to use Vim bindings for a while, but the default bindings are more consistent with modern conventions.
There are no “best available tools” without a specified target, unfortunately. When you feel that Racket constrains you, come back to the open thread of the week and describe what you would like to see: SBCL has better performance, ECL is easier to use for standalone executables, etc. Also, maybe someone will recommend an in-Racket dialect that would work better for you for those tasks.
Peter Norvig’s out-of-print Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp can be interesting reading. It develops various classic AI applications like game tree search and logic programming, making extensive use of Lisp’s macro facilities. (The book is 20 years old and introductory, so it’s not recommended for learning anything very interesting about artificial intelligence.) Using the macro system for metaprogramming is a big deal with Lisp, but a lot of material for Scheme in particular doesn’t deal with it at all.
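To give a flavour of what “making extensive use of the macro facilities” means, here is a toy sketch in Racket rather than the book’s Common Lisp (unless-zero is a made-up example, not from the book):

    #lang racket
    ;; A macro rewrites code before it runs: unless-zero expands into an `if`,
    ;; so the body is only evaluated when x is non-zero.
    (define-syntax-rule (unless-zero x body ...)
      (if (zero? x)
          (void)
          (begin body ...)))

    (unless-zero 3
      (displayln "3 is not zero"))

    ;; And code really is data: a quoted expression is just a list you can
    ;; inspect, transform, or evaluate.
    (define expr '(+ 1 (* 2 3)))
    (printf "~a evaluates to ~a\n" expr (eval expr (make-base-namespace)))

PAIP’s examples are of course in Common Lisp, where defmacro plays the corresponding role.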
The already mentioned Clojure seems to be where a lot of real-world development is happening these days, and it’s also innovating on the standard syntax conventions of Common Lisp and Scheme in interesting ways. Clojure will interface with Java’s libraries for I/O and multimedia. Since Clojure lives in the Java ecosystem, you can basically start with your preconceptions about developing for the JVM and go from there to guess what it’s like. If you’re OK with your games ending up JVM programs, Clojure might work.
For open-source games in Lisp, I can point you to David O’Toole’s projects. There are also some roguelikes developed in Lisp.
I’m Xom#1203 on Diablo 3. I have a lvl 60 Barb and a lvl ~35 DH. I’m willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I’m okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I’d like to play people who are better than me, gives me incentive to practice.
Are there really so few chess players on LW? 0_o
I play at chess.com and you are much better than me.
Oh sweet, chess.com used to be only correspondence games. I’ll probably get an account there, it’ll probably be called “willnewsome”, add me if you wish. ETA: Done.
It might be: there are few chess players on LW who read the open threads and also are willing to commit the time/have the desire to play an (essentially) random person from the internet.
Not gaming related, but I’ve got a question that seems like it would appeal to you above.
Suggestion:
I consider tipping to be a part of the expense of dining—bad service bothers me, but not tipping also bothers me, as I don’t feel like I’ve paid for my meal.
So I’ve come up with a compromise with myself, which I think will be helpful for anybody else in the same boat:
If I get bad service, I won’t tip (or tip less, depending on how bad the service is). But I -will- set aside what I -would- have tipped, which will be added to the tip the next time I receive good service.
Double bonus: When I get bad service at very nice restaurants, the waiter at the Steak and Shake I more regularly eat at (it’s my favored place to eat) is going to get an absurdly large tip, which amuses me to no end.
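For what it’s worth, the bookkeeping is trivial; here is a throwaway sketch of the “tip bank” idea (in Racket, with a made-up 18% base rate):

    #lang racket
    ;; Withhold the tip on bad service, bank it, and add the banked amount
    ;; to the next good-service tip.
    (define bank 0)

    (define (tip bill good-service? [rate 0.18])
      (define base (* rate bill))
      (if good-service?
          (let ([paid (+ base bank)])
            (set! bank 0)
            paid)
          (begin
            (set! bank (+ bank base))
            0)))

    (tip 40 #f)  ; bad service: tip nothing, bank $7.20
    (tip 25 #t)  ; good service: $4.50 plus the banked $7.20 = $11.70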
What would a poster designed to spread awareness of a less wrong meetup look like? How can it appeal to non-technophiles / students of social sciences?
I don’t follow and understand the “timeless decision” topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent would do, by simulating their algorithm. (This is my very uninformed understanding of the “timeless” part: I don’t have to wait until you do X, because I can already predict whether you would do X, and behave accordingly. And you don’t have to wait for my reaction, because you can already predict it too. So let’s predict-cause each other to cooperate, and win mutually.)
If I am correct, there is a problem with this: having access to another agent’s code does not allow you to make any conclusions, in the general case.
You can only make a simulation of one specific situation. Then another. Hoping that the agent does not want to run your simulation, which would get you both into an infinite loop. And you can’t even tell whether the agent wants to run your simulation, or not.
Thinking in terms of “simulating their algorithm” is convenient for us because we can imagine the agent doing it and for certain problems a simulation is sufficient. However the actual process involved is any reasoning at all based on the algorithm. That includes simulations but also includes creating mathematical proofs based on the algorithm that allow generalizable conclusions about things that the other agent will or will not do.
An agent that wishes to facilitate cooperation—or that wishes to prove credible threat—will actually prefer to structure their own code such that it is as easy as possible to make proofs and draw conclusions from that code.
It’s precisely this part which is impossible in the general case. You can reason only about a subset of algorithms which are compatible with your conclusion-making algorithm.
Proof:
1) In the general case, it is impossible to decide whether a program will stop computing in finite time.
Proof by contradiction: Let’s suppose we have a method “Prophet.willStop(program)” that predicts whether a given program will stop. How about this program? It would behave contrary to whatever the prediction says about it.
program Contrarian {
    if (Prophet.willStop(Contrarian)) {
        loop_forever();
    } else {
        // do nothing
    }
}
2) For any behavior “B”, imagine a function “f” for which you cannot predict whether it will stop or not. Will the following program exhibit the behavior “B”?
program Mysterious {
    f();
    B();
}
Yes, which is why:
Some agents really are impossible to cooperate with even when it would be mutually beneficial. Either because they are irrational in an absolute sense or because their algorithm is intractable to you. That doesn’t prevent you from cooperating with the rest.
Interesting. So a self-modifying agent might want to modify their own code to be easier to inspect, because this could make other agents trust them and cooperate with them. Two questions:
What would be the cost of such a modification? You cannot just rewrite any algorithm into a more legible form. If the agent modifies themselves into e.g. a regular expression (just joking), it will be able to do only what regular expressions are able to do, which may not be enough for a complex situation. Limiting one’s own cognitive abilities seems like a dangerous move.
Even if I want to reprogram myself to make myself more legible, I need to know what algorithm the other party will use to read my code. How can I guess it? Or perhaps is it enough to meet the other agent, explain to each other our reading algorithms, and only then self-modify to become compatible with them? I am suspicious about whether such a process can be iterated—my intuition is that by conforming to one agent’s code analysis routines, I lose part of my abilities, which may make me unable to conform to another agent’s code analysis routines.
Any decision restricts what happens, for all you knew before making the decision, but doesn’t necessarily make future decisions more difficult. Coordinating with other agents requires deciding some properties of your behavior, which may as well constrain only the actions that need to be coordinated with other agents.
For example, strategy is a kind of generalized action, which could take the form of a straightforwardly represented algorithm chosen for a certain situation (to act in response to possible future observations). After a strategy is played out, or if some condition indicates that it’s no longer applicable, decision making may resume its normal more general operation, so the mode of operation where your behavior becomes more tractable may be temporary. If this strategy includes a procedure for deciding whether to cooperate with similarly chosen strategies of other agents, it will do the trick, without taking on much more responsibility than a single action. It will just be the kind of action that’s smart enough to be able to cooperate with other agents’ actions.
So it is not necessary to change my whole code, just to create a new transparent “cooperation routine” and let it guide my behavior, with a possibility of ending this routine in case the other agents stop cooperating or something unexpected happens. That makes sense.
(Though in real life I would be rather afraid to self-modify in this way, because an imperfection in the cooperation routine could be exploited. Even if other agents’ cooperation routines contain no bug exploits for my routine, maybe they have already created some hidden sub-agents that will try to find and exploit bugs in my routine.)
A real life analogy is a contract, with powerful government enforcing your precommitments.
Sometimes.
You could limit yourself to simply not actively obfuscating your own code.
Is anyone familiar with any statistical or machine-learning-based evaluations of the “Poverty of Stimulus” argument for language innateness (the hypothesis that language must be an innate ability because children aren’t exposed to enough language data to learn it properly in the time they do)?
I’m interested in hearing what actually is and isn’t impossible to learn from someone in a position to actually know (ie: not a linguist).
I was looking at this exact question a few months ago, and found these to be quite LW-reader-salient:
The Case of Anaphoric One
Poverty Of The Stimulus—A Rational Approach
[comment deleted]
Just tons. For example, Harry’s instructor, Mr. Bester, is a double reference.
EDIT: And obviously the Bester scenes contain other allusions: back to the gold-silver arbitrage, or Harry imagining himself a Lensman, come to mind.
What’s the non-author one?
Babylon Five character, IIRC.
I wouldn’t call that a double reference, since Alfred Bester the Babylon 5 character is also named after Alfred Bester the author. Edit: Both the Bab 5 and HP:MoR characters are named after Bester the author for the same reason.
Since Eliezer has been a Babylon 5 fan since before December 1996 and has also read Bester’s books, I think we can consider it a double reference.
Yeah, we’re just using different definitions of “double reference”. Cheers!
But Alfred Bester the author wasn’t a telepath.
Or was he?!?!?!
[comment deleted]
There’s probably hundreds by this point; if you want even more, Eliezer stuffs his Ultimate Cross-over fic with references.
Really, I only caught the B5 one.
Does anyone know of a good guide to Gödel’s theorems along the lines of the cartoon guide to Löb’s theorem?
If you believe that some model of computation can be expressed in arithmetic (this implies expressibility of the notion of a correct proof), Gödel’s first theorem is more or less an analysis of “This statement cannot be proved”. If it can be proved, it is false and there is a provable false statement; if it cannot be proved, it is an unprovable true statement.
But most of the effort in proving Gödel’s theorem has to be spent on proving that you cannot go halfway: if you have a theory big enough to express basic arithmetical facts, you have to have full reflection. It can be stated in various ways, but it requires a technically accurate proof—I am not sure how well it would fit into a cartoon.
Could you state explicitly what you want to find—just the non-technical part, or both?
Actually that was pretty much enough.
Has anybody here changed their mind on the matter of catastrophic anthropogenic global warming, and what evidence or arguments made you reconsider your original position on the matter?
I’ve bounced back and forth on the matter several times, and right now I’m starting to doubt global warming itself, never mind catastrophic or anthropogenic. The sources I read most frequently are biased against it, and the sources which support it have a bad habit of deleting any comments that disagree or criticize the evidence, which has led to my taking them less seriously. So the ideal for me would be arguments or evidence that changed somebody’s mind towards supporting the theory.
I think you are overweighing the evidence from moderation policies.
If a large number of evangelicals constantly descended onto LessWrong, forcing the community to have a near hair trigger banning policy, would that be strong evidence that atheism was incorrect?
No. But it would result in me not taking theoretical weekly posts on why atheism is correct very seriously.
There are several different pieces of this for me.
I haven’t much changed my mind on the existence of global climate change since I first looked into the data, about a decade ago, except to become more confident about it.
I’ve made various attempts to wrap my brain around this data to arrive at some opinions about its causes, but I’m evidently neither smart nor well-informed enough to arrive at any confidence about whether the conclusions people are drawing on this question actually follow from the data they are drawing it from. Ultimately I just end up either taking their word for it, or not. I try to ignore the public discourse on the subject, which in the US has become to an absurd degree a Blue/Green issue entirely divorced from any notion of relying on observation-based reasoning to ground confidence levels in assertions.
The thing that most caused me to lower my estimate of the likelihood that the climate change is exclusively or near-exclusively anthropogenic was some conversations with a couple of astrophysicist friends of mine, who talked about the state of play in the field and their sense that research into correlations between terrestrial climate fluctuations and solar output fluctuations was seen as a career-ender… not quite on par with, say, parapsychology, but on the same side of the ledger.
The thing that most caused me to raise that estimate was some conversations with a friend of mine who was working in climate modeling for a while. I don’t have half a clue regarding the validity of his models, but I got the clear impression that climate models that take into account anthropogenic increases in atmospheric CO2 levels are noticeably more accurate than models that don’t.
On balance, the latter raised my confidence in the assertion that global climate change is significantly anthropogenic more than the former lowered my confidence.
I don’t really have an opinion yet about how catastrophic the climate change is likely to be, regardless of whether it’s anthropogenic or not. Incidentally, it regularly puzzles me that the public discourse is so resolutely about the latter rather than the former, as it seems to me that catastrophic non-anthropogenic climate change should be as much of a concern for us as catastrophic anthropogenic climate change.
Blogs by LWers:
Yvain—livejournal blog
Will Newsome—Computational Theology
muflax—muflax’ mindstream
James_G—Writings
XiXiDu—Alexander Kruel
TGGP—Entitled To An Opinion
James Miller—Singularity Notes
Jsalvati—Good Morning, Economics
clarissethorn—Clarisse Thorn
Zack M. Davis—An Algorithmic Lucidity
Kaj_Sotala—xuenay
tommcabe—The Rationalist Conspiracy
Note: About this list. New suggestions are welcome. Anyone searching for interesting blogs that may not be written by LWers, check out this or maybe this thread.
I find that, sporadically, I act like a total attention whore around people whom I respect and may talk to more or less freely—whether I know them or we’re only distantly acquainted. This mostly includes my behavior in communities like this, but also in class and wherever else I can interact informally with a group of equals. I talk excitedly about myself, about various things that I think my audience might find interesting, etc. I know it might come across as uncouth, annoying and just plain abnormal, but I don’t even feel a desire to stop. It’s not due to any drugs either. When I see that I’ve unloaded too much on whoever I’m talking to, I try to apologize and occasionally even explain that I have a neural condition.
I believe that it’s a side effect of me deprogramming myself from social anxiety after getting all shaken up by Evangelion. In high school and earlier, I was really really shy, resented having to talk to anyone but a few friends, felt rage at being dragged into conversations, etc. But now it’s like my personality has shifted a deviation or two towards the extraverted side. So such impulses, which were very rare in my childhood, became prominent, and this weirds me out. I still have a self-image of a very introverted guy, but now I’m often compelled to behave differently.
[This comment was caused by such an impulse too. Again, I’m completely sober, emotionally neutral and so on. I just have the urge to speak up.]
With regards to Optimal Employment, what does anyone think of the advice given in this article?
That works out (for the benefit of other Europeans) at €80,000 - an astonishing amount of money to me at least. LA seems like a cool place, with a lot of culture and more interesting places that can be easily traveled to than Dublin.
To make this kind of money, you’ll obviously have to get a job in an expensive restaurant, and remember there are tons of people there who have years of experience and desperately want one of these super-high value jobs. Knowing the right person will be vital if you want to score one of these positions.
This is based on tips, so you will have to be extremely charming, charismatic, and attractive.
Living in Los Angeles is expensive to start with, and there is a major premium if you want to live in a non-terrifying part of the city.
The economy of Los Angeles is not doing well, hasn’t been for years, and probably won’t for the foreseeable future. This probably hurts the prospects for finding a high-paying waiter job.
Honestly, moving to L.A. to seek a rare super-high paying waiter job seems like a terrible idea to me.
That’s the main issue I’ve been having with employment here; though I’m a good waiter, most places want two years’ experience in fine dining, which I don’t have.
I don’t know if the claim is true or not, but I don’t find it too implausible. It helps to remember that LA is frequented by a great many newly wealthy celebrities.
It does not follow that my chances of getting such a job in L.A. are high enough to be worth considering.
Why don’t people like markets?
A very interesting read where the author speculates on possible reasons for why people seem to be biased against markets. To summarize:
Market processes are not visible. For instance, when a government taxes its citizens and offers a subsidy to some producers, what is seen is the money taken and the money received. What is unseen is the amount of production that would occur in the absence of such transfers.
Markets are intrinsically probabilistic and therefore marked with uncertainty; like other living organisms, we are loss-averse and try to minimise uncertainty.
Humans may be motivated to place their trust in processes that are (or at least seem to be) driven by agents rather than impersonal factors.
The last point strongly reminded me of the recent Less Wrong essay on Conspiracy Theories as Agency Fictions where Konkvistador muses:
Before thinking about these points and debating them I strongly recommend you read the full article:
Positive Juice seems to have several posts related to rationality. (Look under “most viewed posts” on the sidebar.)
Yet another UFAI scenario: augmentations turned zombie cyborgs, by the author of Dilbert.
Ideological link of the week:
A rousing war-screech against Reaction (and bourgeois liberalism) by eXile’s Connor Kilpatrick. Deliciously mind-killed (and reviewing an already mind-killed book), but kind of perceptive in noting that the Right indeed offers very tangible, down-to-earth benefits to the masses—usually my crowd is in happy denial about that.
I am declaring this article excommunicate traitoris, because I am reading through it and not having a virulent reaction against it, but instead finding it to be reasonable, if embellishing. I take that and the community’s strong reaction against it as evidence that the article is effectively mind-killing me due to my political leanings and that I should stop reading now.
...cognitive biases are scary.
I read any War Nerd article that comes out, and occasionally read other articles on the site, and my reaction has been similar. The political stuff they say seems, well, “reasonable, if embellishing”, and I’d been worrying about the possibility that it was just true.
I should probably follow suit on this, and avoid any non-War-Nerd articles on eXile to avoid being mind-killed, although a part of me worries that I’m simply following group mentality on the Lesswrong cult.
I agree, it seems “reasonable, if embellishing”, on the other hand, there are many other political blogs with very different politics that also seem “reasonable, if embellishing”.
An ok read, despite being very much more partisan and harsh than what is usually discussed or linked on LW.
Those darn out group members! Der all the same I tells ya!
Exactly. I just linked to it for teh lulz, to be honest. And to rebel against our group norms.
Damn, I might be emulating Will a bit too much.
Pretty sure I don’t ever rebel against group norms simply for teh lulz. There’s usually some half-cocked or seemingly-half-cocked Dumbledoresque strategy going on in the background.
It feels like an exercise in how many cognitive errors you can commit in one text (though in later paragraphs they get repetitive). It’s as if the author is not even pretending to be sane, which is probably how the target audience likes it. I tried to read the text anyway, but halfway through, my brain was no longer able to process it.
If I had to write an abstract of this article, it would be like this:
“All my enemies (all people who disagree with me) are in fact the same: inhumanly evil. All their arguments are enemy soldiers; they should be ignored, or responded to with irrational attacks and name-calling.”
If there was anything more (except for naming specific enemies), I was not able to extract it.
Repulsive.
I think there’s a smidge more content than you’re saying: a claim that the other side is doing the same thing. Of course, when they do it, it’s disgraceful.
To me it seems like he accused the other side (everyone who disagrees with him, because they are all the same) of lying. That’s what makes it right to ignore their arguments.
That part goes like this—Sometimes it seems that the enemy arguments make sense, that some of their values are important for us too, so perhaps we should listen to what they say. Nonsense! The enemies are pure evil, they share none of our values. They just sometimes use our words to mislead us, but they “don’t believe a word of it. Not one fucking word.” (the last part = quotation)
What an unfortunate epistemic state. Especially unfortunate for other people who share the same planet.
He is pontificating actual values, though, and not only power politics. I like passion and strength more than intellectual honesty and rational discourse (and, well, truth-seeking). It’s sexier!
That’s why I still read M.M. despite him repeating the same ideas (completely formulated in “Patchwork”, “Why I am not a...” and his other old classics) over and over.
Let me guess, you also don’t enjoy gore porn.
Well, I guess if people wouldn’t find any value in this way of speaking, it wouldn’t be so popular. And yes, passion and strength are attractive. But a wrong context can ruin anything; and this context is very repulsive to me.
When I say I value truth-seeking, I usually feel like a hypocrite. After reading this article, I don’t. My bubble was broken, and the resulting shock recalibrated my scales. Raising the sanity waterline became a near-mode value again.
See! Aggression brings conflict, conflict brings division, division brings honesty, honesty brings self-actualization! The Code of the Sith is right!
Of course, human intelligence evolved largely to win arguments, thus we think up our best arguments while engaging in mind-killing debate, sort of like Kafers but without the need for physical violence.
Also, this Orwell quote.
Are you sure you aren’t just a right wing person hiding in the closet? (^_^)
I did say around here that I’m a little bit of a fascist. My ethics are really contradictory. Although if you think that all socialists are really as toothless and compromise-loving as modern social democrats (“Liberals”, as Americans call them), you’d be surprised.
Being human is tough, my sympathy module sympathizes.
The funny thing is, I don’t even feel bad about that. It’s just like becoming bored with useful activities, a psychological given.
I haven’t heard of many socialist utopias that included aggression, conflict and division. Maybe it can bring about self-actualization without conflict, though that makes a dull story, and remember humans love stories, especially about themselves. I think it was Orwell who pointed out that a Socialist utopia as normally imagined would overall be a pretty boring place to live.
Which is funny in a way, since the ideology of class struggle itself is far more inspiring than the ends the ideology seeks.
That’s because the wiser socialists, like Orwell himself, are aware that they aren’t wise enough for a consistent description of their utopia—like Eliezer is aware that he wouldn’t be able to describe precisely how society could work post-Singularity.
One possible left-wing utopia with conflict is just the Matrix running a massively multiplayer action/strategy game—with a global lobby/chat and economy organized on socialist principles. This description makes some sense only because it’s a cop-out; in a virtual world we can resolve our nature’s inconsistencies without affecting real others, so this is just a milder form of wireheading. If you’re pissed about your guild’s high taxes, just take over a bot guild, murder a bot CEO in visceral detail and get high on fake power. Presumably you could also gank real people, but this would drive your taxes sky-high to do something nice, like paying for noobs’ personalized education and counselling.
I don’t know exactly what I want, but goddamit I’m going to get it!
Arguably the essence of heroic man.
Also a good way to build an Unfriendly AI.
Or an Unfriendly political regime.
Oh I agree. However it matches the advancement of the hero in myths, it is psychologically appealing and works well as a story.
Yup.
Dear downvoters, in order to help me optimize my writing, please take care to explain your reasons for every downvoted comment. Thank you. (This one looks particularly innocent and non-inflammatory to me.)
I’m pretty confident your comments in this thread are getting downvoted on the merits of the original comment, not on the merits of each individual subsequent comment. Happens to me all the time, but most of the time the trend reverses after a day or two.
That was my implication, yeah; getting karmassassinated is more unpleasant than just getting a slap for an isolated stupid comment.
What draws you to this website, then? Ostensibly, this is a venue for weirdos with an absolute fetish for rational discourse.
YAY 1000 KARMA!
I’ve immortalised this comment as it looked when I first saw it. It tells a beautiful and hilarious story.
I’ve waited until I was at 1004 as a precaution for that exact reason.
Well done, me. I don’t know what kinda game I’m playing, other than “being obnoxious”.
Earlier this year I made an extremely satisfying but mildly obnoxious comment, knowing full well it would get downvotes, but upvotes in the descendent discussion put me in credit.
Perhaps you have to speculate to accumulate.