Every now and then I like to review my old writings so I can cringe at all the wrong things I wrote, and say “oops” for each of them. Here we go...
There was once a time when the average human couldn’t expect to live much past age thirty. (Jul 2012)
That’s probably wrong. IIRC, previous eras’ low life expectancy was mostly due to high child mortality.
We have not yet mentioned two small but significant developments leading us to agree with Schmidhuber (2012) that “progress toward self-improving AIs is already substantially beyond what many futurists and philosophers are aware of.” These two developments are Marcus Hutter’s universal and provably optimal AIXI agent model… and Jurgen Schmidhuber’s universal self-improving Godel machine models… (May 2012)
This sentence is defensible for certain definitions of “significant,” but I think it was a mistake to include this sentence (and the following quotes from Hutter and Schmidhuber) in the paper. AIXI and Godel machines probably aren’t particularly important pieces of progress to AGI worth calling out like that. I added those paragraphs to section 2.4. not long before the submission deadline, and regretted it a couple months later.
one statistical prediction rule developed in 1995 predicts the price of mature Bordeaux red wines at auction better than expert wine tasters do. (Jan 2011)
The Wiki link in the linked LW post seems to be closer to “Stanislav Petrov saved the world” than “not really”:
Petrov judged the report to be a false alarm, and his decision is credited with having prevented an erroneous retaliatory nuclear attack
...
His colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile strike if they had been on his shift.
...
Petrov, as an individual, was not in a position where he could single-handedly have launched any of the Soviet missile arsenal. … But Petrov’s role was crucial in providing information to make that decision. According to Bruce Blair, a Cold War nuclear strategies expert and nuclear disarmament advocate, formerly with the Center for Defense Information, “The top leadership, given only a couple of minutes to decide, told that an attack had been launched, would make a decision to retaliate.”
Petrov’s responsibilities included observing the satellite early warning network and notifying his superiors of any impending nuclear missile attack against the Soviet Union. If notification was received from the early warning systems that inbound missiles had been detected, the Soviet Union’s strategy was an immediate nuclear counter-attack against the United States (launch on warning), specified in the doctrine of mutual assured destruction.
That he didn’t literally have his finger on the “Smite!” button, or that the SU might still not have retaliated if he’d raised the alarm, is not the point.
previous eras’ low life expectancy was mostly due to high child mortality.
I have long thought that the very idea of “life expectancy at birth” is a harmful one, because it encourages exactly that sort of confusion. It lumps together two things (child mortality and life expectancy once out of infancy) with sufficiently different causes and sufficiently different effects that they really ought to be kept separate.
Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live? Or even at a past time?
Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live?
Sure, there is the concept of life expectancy at specific age. For example, there is the “default” life expectancy at birth, there is the life expectancy for a 20 year-old, life expectancy for a 60-year-old, etc. Just google it up.
On the AIXI and such… you see, its just hard to appreciate just how much training it takes to properly understand something like that. Very intelligent people, with very high mental endurance, train for decades, to be able to mentally manipulate the relevant concepts at their base level. Now, let’s say someone only spent a small fraction of the time—either because they’ve pursued a wrong topic through the most critical years, or because they have low mental endurance. Unless they’re impossibly intelligent, they have no chance of forming even a merely good understanding.
I’ve been systematically downvoted for the past 16 days. Every day or two, I’d lose about 10 karma. So far, I’ve lost a total of about 160 karma.
It’s not just somebody just going through my comments and downvoting the ones they disagree with. Even a comment where I said “thanks” when somebody pointed out a formatting error in my comments is now at −1.
I’m not sure what can/should be done about this, but I thought I should post it here. And if the person who did this is here and there is a reason, I would appreciate it if you would say it here.
A quick look at the first page of your recent comments shows most of your recent activity to have been in the recent “Is Less Wrong too scary to marginalized groups?” firestorm.
One of the most recent users to complain about mass downvoting also cited participation in flame-bait topics (specifically gender).
How is this victim blaming? As I interpret it the claim is that the person was probably NOT the victim of systematic downvoting but instead made a lot of comments that are counter to what people like to hear, creating the illusion of same.
a comment where I said “thanks” when somebody pointed out a formatting error in my comments
as being about saying things “counter to what people like to hear”. Which is why I didn’t interpret CAE_Jones as suggesting that that’s what was going on.
For what it’s worth, I agree with gjm that “flame-bait” was a poor choice of words on my part, and I understand how it could have been taken as victim-blaming in spite of my intentions.
Gah… This is becoming way too common, and it seems like there’s pretty good evidence (further supported in this instance) regarding the responsible party. I wish someone with the power to do so would do something about it.
I got a seemingly one-time hit of this about a week ago. For what it’s worth I had just been posting comments on the subject of rape, but a whole bunch of my unrelated comments got it too.
(Since then it’s been having an obnoxious deterrent effect on my commenting, because I feel so precariously close to just accumulating negative karma every time I post, leaving readers with the impression that my ideas have all been identified as worthless by someone probably cleverer than themselves. I’m now consciously trying to avoid thinking like this)
I have experienced this also, though roughly a month ago, after an extended debate on trans* issues specifically.
I responded by messaging the person I had argued with, and politely asking that, if it was them who had been downvoting me, they please stop going through my comment history. I got no response, but the stream of downvotes seemed to tail off shortly thereafter.
EDIT: As a side note, the person with whom I had been debating/arguing was the same one that showed up in the thread ChrisHallquist linked. It looks like it’s a pattern of behavior for him.
a comment where I said “thanks” when somebody pointed out a formatting error in my comments is now at −1.
That sounds like a pretty low value comment. Is it beneficial to third parties to be able to read it? If not, just make the correction and PM your thanks. Otherwise you’re unnecessarily wasting everyone’s time in the guise of politeness.
Acknowledging that there indeed an error and that you weren’t intentionally doing something is helpful.
If you make the correction and not mention that the comment that points out the issue is right, it will seem to anybody who reads the discussion later as if the comment points out a nonexisting problem.
It is not necessary that the argument I gave be right. All that is necessary for the g’grandparent to be wrong is for there to be a plausible reason why someone would want to downvote such a comment, other than malice.
Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why “we” know a lot less about the future than we could.
Larry Carter, a UCSD emeritus professor of computer science, didn’t mince words. The first time he heard about Wells’s theories, he thought, “Oh my God, is this guy a crackpot?”
But persuaded by Well’s credentials, which include a PhD from Caltech in math and theoretical physics, a career that led him L-3 Photonics and the Caltech/Jet Propulsion Laboratory, and an invention under his belt, Carter gave the ideas a chance. And was intrigued.
For a taste of the book, here is Wells’ description of one specific risk:
When advanced robots arrive… the serious threat [will be] human hackers. They may deliberately breed a hostile strain of androids, which then infects normal ones with its virus. To do this, the hackers must obtain a genetic algorithm and pervert it, probably early in the robotic age before safeguards become sophisticated… Excluding hackers, it seems unlikely that androids will turn against us as they do in some movies… computer code for hostility is too complex… In the very long term, androids will become conscious for the same reasons humans did, whatever those reasons may be… In summary, the androids have powerful instincts to nurture humans, but these instincts will be unencumbered by concerns for human rights. Androids will feel free to impose a harsh discipline that saves us from ourselves while violating many of our so-called human rights.
Now, despite Larry Carter’s being “persuaded by Wells’ credentials” — which might have been exaggerated or made-up by the journalist, I don’t know — I suspect very few people have taken Wells seriously, for good reason. He’s clearly just making stuff up, with almost no study of the issue whatsoever. (On this topic, the only people he cites are Joy, Kurzweil, and Posner, despite the book being published in 2009.)
But reading that passage did drive home again what it must be like for most people to read FHI or MIRI on AI risk, or Robin Hanson on ems. They probably can’t tell the difference between someone who is making stuff up and an argument that has gone through a gauntlet of 15 years of heated debate and both theoretical and empirical research.
Yes by judging someone on their credentials in other fields, you can’t tell if they are just making stuff up on this subject vs. studied it for 15 years.
I took a quick skim through the book. Your focused criticism of Wells’s book is somewhat unfair. The majority of the book (ch. 1–4) is about a survival analysis of doomsday risks. The scenario you quoted is in the last chapter (ch. 5), which looks like an afterthought to the main intent of the book (i.e., providing the survival analysis), and is prepended by the following disclaimer:
This set serves as a foil to the balanced discussions by Rees, Leslie, Powell, and others. The choice of eight examples is purely arbitrary. Their purpose is not orderly coverage but merely examples that indicate a range of possibilities. The actual number of such complex unorthodox scenarios is virtually infinite, hence the high risk.
I think it is fair to criticize the crackpot scenario that he gave as an example, but your criticism seems to suggest that his entire book is of the same crackpot nature, which it is not. It is unfortunate that PR articles and public attention focuses on the insubstantial parts of the book, but I am sure you know what that is like as the same occurs frequently to MIRI/SIAI’s ideas.
Orthogonal notes on the book’s content: Wells seems unaware of Bostrom’s work on observation selection effects, and it appears that he implicitly uses SSA. (I have not carefully read enough of his book to form an opinion on his analysis, nor do I currently know enough about survival analysis to know whether what he does is standard.)
Yes, I’m an academic and I get a similar reaction from telling people I study the Singularity as when I say I’ve signed up for cryonics. Thankfully, I have tenure.
Do you actually say you “study the singularity” or give a more in depth explanation? I ask because the word study is usually used only in reference to things that do or have exisited, rather than to speculative future events.
It might be a worthwhile endeavor to modify our wiki such that it serves not only as a mostly local reference on current terms and jargon, but also as an independent guide to the various arguments for and against various concepts, where applicable. It could create a lot of credibility and exposure to establish a sort of neutral reference guide / an argument map / the history and iterations an idea has gone through, in a neutral voice. Ideally, neutrality regarding PoV works in favor of those with the balance of arguments in their favor.
This need not be entirely new material, but instead simply a few mandatory / recommended headers in each wiki entry, pertaining to history, counterarguments etc. Could be worth it lifting the wiki from relative obscurity, with a new landing page, and marketed potentially as a reference guide for journalists researching current topics. Kruel’s LW interview with Shane Legg got linked to in a NYTimes blog, why not a suitable LW wiki article, too?
Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why “we” know a lot less about the future than we could.
I don’t think that’s the case. Most people who are listened to on the future don’t tend to speak to an audience primarily consisting of futurists.
There are think tanks who employee people to think about the future and those think tanks tend generally to be quite good at influencing the public debate.
I also don’t think that academic has any special claim to be specialists about the future.
When I think about specialists on futurism names like Stewart Brand or Bruce Sterling.
I don’t think that’s the case. Most people who are listened to on the future don’t tend to speak to an audience primarily
consisting of futurists.
This is a very important and general point. While it is important to communicate ideas to a general audience, generally excessive communication to general audiences at the expense of communication to peers should be “bad news” when it comes to evaluating experts. Folks like Witten mostly just get work done, they don’t write popular science books.
I mean Edward Witten, one of the most prominent physicists alive. The fact that his name does not ring a bell is precisely my point. The names that do ring a bell are the names of folks who are “good at the media,” not necessarily folks who are the best in their field.
Witten is one of the greatest physicists alive, if not the greatest. He is the one who unified the various string theories into M-theory. He is also the only physicist to receive a Fields Medal.
Google is shelling out $400 million to buy a secretive artificial intelligence company called DeepMind....Based in London, DeepMind was founded by games prodigy and neuroscientist Demis Hassabis, Skype & Kazaa developer Jaan Tallin and researcher Shane Legg.
I liked Legg’s blog & papers and was sad when he basically stopped in the interests of working on his company, but one can hardly argue with the results.
I’m going to do the unthinkable: start memorizing mathematical results instead of deriving them.
Okay, unthinkable is hyperbole. But I’ve noticed a tendency within myself to regard rote memorization of things to be unbecoming of a student of mathematics and physics. An example: I was recently going through a set of practice problems for a university entrance exam, and calculators were forbidden. One of the questions required a lot of trig, and half the time I spent solving the problem was just me trying to remember or re-derive simple things like the arcsin of 0.5 and so on. I knew how to do it, but since I only have a limited amount of working memory, actually doing it was very inefficient because it led to a lot of backtracking and fumbling. In the same sense, I know how to derive all of my multiplication tables, but doing it every time I need to multiply two numbers together is obviously wrong. I don’t know how widespread this is, but at least in my school, memorization was something that was left to the lower-status, less able people who couldn’t grasp why certain results were true. I had gone along with this idea without thinking about it critically.
So these are the things I’m going to add to my anki decks, with the obligatory rule that I’m only allowed to memorize results if I could theoretically re-derive them (or if the know-how needed to derive them is far beyond my current ability). These will include common trig results, derivatives and integrals of all basic functions, most physical formulae relating heat, motion, pressure and so on. I predict that the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems, though I can’t think of a way to measure this. Also, recommendations for other things to memorize are welcome.
In my experience memorization often comes for free when you strive for fluency through repetition. You end up remembering the quadratic formula after solving a few hundred quadratic equations. Same with the trig identities. I probably still remember all the most common identities years out of school, owing to the thousands (no exaggeration) of trig problems I had to solve in high school and uni. And can derive the rest in under a minute.
Memorization through solving problems gives you much more than anki decks, however: you end up remembering the roads, not just the signposts, so to speak, which is important for solving test problems quickly.
You are right that “the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems”, I am not sure that anki is the best way to achieve this reduction, though it is certainly worth a try.
In general there the core principle of spaced repetition that you don’t put something into the system that you don’t already understand.
When trying to memorize mathematical results make sure that you only add cards when you really have a mental understanding. Using Anki to avoid forgetting basic operations is great. If you however add a bunch of information that’s complex, you will forget it and waste a lot of time.
That’s true if you’re just using spaced repetition to memorize, although I’d add that it’s still often helpful to overlearn definitions and simple results just past the boundaries of your understanding, along the lines of Prof. Ravi Vakil’s advice for potential students:
Here’s a phenomenon I was surprised to find: you’ll go to talks, and hear various words, whose definitions you’re not so sure about. At some point you’ll be able to make a sentence using those words; you won’t know what the words mean, but you’ll know the sentence is correct. You’ll also be able to ask a question using those words. You still won’t know what the words mean, but you’ll know the question is interesting, and you’ll want to know the answer. Then later on, you’ll learn what the words mean more precisely, and your sense of how they fit together will make that learning much easier. The reason for this phenomenon is that mathematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you’ll never get anywhere. Instead, you’ll have tendrils of knowledge extending far from your comfort zone. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning “forwards”. (Caution: this backfilling is necessary. There can be a temptation to learn lots of fancy words and to use them in fancy sentences without being able to say precisely what you mean. You should feel free to do that, but you should always feel a pang of guilt when you do.)
The second point I’d make is that the spacing effect (distributed practice) works for complex learning goals as well, although it will help if your practice consists of more than rote recall.
If you learn definitions it’s important to sit down and actually understand the definition. If you write a card before you understand it, that will lead to problems.
Nice, and good luck! I’m glad to see that my post resonated with someone. For rhetorical purposes, I didn’t temper my recommendations as much as I could have—I still think building mental models through deliberate practice in solving difficult problems is at the core of physics education.
I treat even “signpost” flashcards as opportunities to rehearse a web of connections rather than as the quiz “what’s on the other side of this card?” If an angle-addition formula came up, I’d want to recall the easy derivation in terms of complex exponentials and visualize some specific cases on the unit circle, at least at first. I also use cards like that in addition to cards which are themselves mini-problems.
I don’t know what you mean precisely by confusion, but I personally can’t always control what my immediate primal level response is to certain situations. If I try to strictly avoid certain feelings, I usually end up convincing myself that I’m not feeling that way when actually I am. I’d rather notice what I’m feeling and then move on from there, it’s probably easier to control your thinking that way. Just because you’re angry doesn’t mean you have to act like angry.
That’s basically what I meant. The move is to notice the anger, fear or disgust and then realize that this emotion isn’t useful and can be actively detrimental. Then consciously try to switch to curiosity.
Of course, I couldn’t condense the full messiness of reality into a pithy saying.
“Make yourself feel curiosity” is not very concretely actionable in the short term. If you want to coin near-mode actionable advice, instead of a far-mode affirmation of positive emotions, you might say something like, “The proper responses to feelings of confusion are orienting and exploring behaviors”.
Those behaviors should be unpacked to more specific things like looking at your surroundings, asking questions of nearby people, searching your memory for situation-relevant information, and planning an experiment or other (navigable) (causal) path to sources of information. Those levels should be fleshed out and made more concrete too.
Now that I’ve given some helpful advice, I think I’ve earned an opportunity to express some cynicism: cheering for curiosity and exploration over anger, disgust, and fear shows a stereotypical value alignment of affluent, socially tolerant people in safe environments. The advice you give will not serve you will in adversarial games like chess. It will not serve you well in combat or social competition. It is in many situations harmful advice.
Separate and unrelated, I would not like to see this template for inarticulately expressing advice continued. Mostly I say this for the same reasons that we don’t make use of reaction .gifs and image macros on lesswrong. There is also a small concern that variants of familiar phrases are harder to evaluate critically, much as the mnemonic device of rhyme apparently makes some specific phrasings of claims more credible than others phrasings of those same claims.
PSA: You can download from scribd without paying, you just need to upload a file first (apparently any file—it can be a garbage pdf or even a pdf that’s already on scribd). They say this at the very bottom of their pricing page, but I didn’t notice until just now.
Hello, we are organizing monthly rationality meetups in Vienna—we have previously used the account of one of our members (ratcourse) but would like to switch to this account (rationalityvienna). Please upvote this account for creating rationality vienna meetups.
They really help me get over akrasia. I beemind how many pomodoros I do per week, so I do tasks I would otherwise procrastinate if I can do 20 minutes of them (yes, I do short pomodoros) and get to enter a data point at the end. Often I find that the task is much shorter/less awful than it felt in the abstract.
Example: I just moved today, and didn’t have that much to unpack, but decided I’d do it tomorrow, because I felt tired and it would presumably be long and unpleasant. But then I realized I could get a pomodoro out of it (plus permission from myself to stop after 20 min and go to bed). Turns out it took 11 minutes and now I’m all set up!
Even if you know that signaling is stupid, it doesn’t escape the cost of not signaling.
It’s a longstanding trope that Eliezer gets a lot of flack for having no formal education. Formal education is not the only way to gain knowledge, but it is a way of signaling knowledge, and it’s not very easy to fake (Not so easy to fake that it falls apart as a credential on its own). Has anyone toyed around with the idea of sending him off to get a math degree somewhere? He might learn something, and if not it’s a breezy recap of what he already knows. He comes out the other side without the eternal “has no formal education” tagline, and a whole new slew of acquaintances.
Now, I understand that there may be good reasons not to, and I’d very much appreciate someone pointing me to any previous discussion in which this has been ruled out. Otherwise, how feasible does it sound to crowdfund a “Here’s your tuition and an extra sum of money to cover the opportunity cost of your time, I don’t care how unfair it is that people won’t take you seriously without credentials, go study something useful, make friends with your professors, and get out with the minimum number of credits possible” scholarship?
Has anyone toyed around with the idea of sending him off to get a math degree somewhere?
I think the bigger issue w/ people not taking EY seriously is he does not communicate (e.g. publish peer reviewed papers). Facebook stream of consciousness does not count. Conditional on great papers, credentials don’t mean that much (otherwise people would never move up the academic status chain).
Yes it is too bad that writing things down clearly takes a long time.
Somehow I doubt I will ever persuade Eliezer to write in a style fit for a journal, but even still, I’ll briefly mention that Eliezer is currently meeting with a “mathematical exposition aimed at math researchers” tutor. I don’t know yet what the effects will be, but it seemed (to Eliezer and I) a worthwhile experiment.
True. It seems like the great-papers avenue is being pursued full-steam these days with MIRI, but I wonder if they’re going to run out of low-hanging fruit to publish, or if mainstream academia is going to drag their heels replying to them.
Eliezer managed signaling well enough to get a billionaire to fund him on his project. A billionaire who fund people who drop out of college systematically in projects like his 20 Under 20 program.
Trying to go the traditional route wouldn’t fit into the highly effective image that he already signals.
Put another way, the purpose of signaling isn’t so nobody will give you crap. It’s so somebody will help you accomplish your goals.
People will give you crap, especially if they can get paid to do so. See gossip journalists, for instance. They are not paid to give boring and unsuccessful people crap; they are paid to give interesting and successful people crap.
Your last para would imply that not getting crap from gossip journalists means you are not interesting or successful. Eliezer/MIRI gets almost no press. Are you sure that’s what you meant?
Well, yes, there is going to be some inevitable crap, but the purpose of signalling is so that you could impress a much larger pool of people. So it might not be much help for gossip journalists, but it might help with the marginal professional ethicist, mathematician, or public figure. In that area, you might get some additional “Anybody who can do that must be damn impressive.”. Does the additional damn-impressive outweigh the cost? I don’t know, that’s why I’m asking.
Peter Thiel (the billionaire) has the proven ability to spot talent, which is why he is a billionaire. Eliezer has traits that Thiel values, and this is probably much more important than any signal Eliezer sent.
Impressing Thiel is independent of a future degree or not, because he’s already impressed. Where’s the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel? Maybe MIRI doesn’t need another billionaire, but I don’t think they’d turn one away.
Impressing Thiel is independent of a future degree or not, because he’s already impressed.
I think the deal that Eliezer has with Thiel is that Eliezer does MIRI full time. Switching focus to getting a degree might violate the deal.
Gives that Thiel has a lot of money impressing Thiel more might also be very useful if they want more money from him.
Where’s the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel?
Do you really think that someone who isn’t contrarian will put his money into MIRI?
The present set up is quite okay. Those who want people with academic credentials can give their money to FHI. Those who want more contrarian people can give their money to MIRI.
Whether or not Eliezer has a degree doesn’t change that he’s the kind of person who has a public Okcupid profile detailing his sexual habits and the fact that he’s polyamorous.
When Steve Job was alive and run around in a sweater, he didn’t cause people to disregard him because he wasn’t wearing a suit.
People respect the person who’s a contrarian who’s okay with not everyone liking him. The contrarian who tries to get every to like them on the other hand get’s no respect.
On the other hand if he decides to get a degree and pulls it off in a year or something impressive like that it could just feed into the contrarian genius image.
Yes, but that would prokbably either mean paying someone else to do your homework with means that you are vunerable to attack or making studying the sole focus for a year.
In addition “getting flak” isn’t necessarily a bad thing.
It can be counter-signaling if you can get flak and stay standing.
It can also polarize people and separate those who can evaluate the inside arguments to realize that you’re good from those who can’t and have to just write you off for having no formal education.
Eddie has some math talent. He can invest some time, money, and effort C to get a degree, which allows other people to discern that he has a higher probability of having that math talent. This higher probability confers some benefit in that other people will more readily take his advice in mathematical matters, or talk with him about his math.
The fun twist is that Eddie lives in a society with many other individuals with varying degrees of math talent, each of whom can expend C to get a degree and the associated benefits. People with almost no mathematical talent have a prohibitively high C, because even if they can pony up the time and money, they have to work very hard to fake their way through. But people with high math ability often choose to stand out by getting the degree, because their C is relatively lower, and a very high proportion of them get degrees. This creates a high association between degrees and mathematical ability, and makes it unlikely to see high mathematical ability in the absence of a degree.
That’s the basic idea, plus degrees signal other things which may be completely unrelated to math, but are still nice. Even in the case where the degree has no causal effect no math ability, there are benefits to having one, in that the other math people can judge very quickly that they’re interested in talking to you.
Hopefully that demonstrates that I understand signalling. My question is about the costs and benefits of a particular signal.
It demonstrate that you don’t. Humans make decisions via something called the availability heuristic.
If you bring into the awareness of the person that you are talking that you are a mathematician that only has a bachleor, no master, no PHD and no professorship that you aren’t bringing expertise into his mind.
If you are however a self taught person who managed to published multiple papers among them a paper titled “Complex Value Systems in Friendly AI” in Artificial General Intelligence Lecture Notes in Computer Science Volume and who has his own research institute that’s a better picture.
If you have published papers that a lot more relevant for relevant experts than whether you have a degree that verifies basic understanding.
If a person really cares whether Eliezer has a math degree he already lost that person.
I’m not certain that getting a degree now counts as the traditional route. Also, I don’t think that an additional degree is particularly damaging to his image. People aren’t going to lose interest in FAI if he sells out and gets a traditional degree. Or they are and I have no idea what kind of people are involved.
4 years (or even 1 year if you are super hard-core) of time is a pretty non-trivial investment. I was 2 classes away from a second degree and declined to take them, because the ~100 hours of work it would have taken wasn’t worth the additional letters after my name. I also just really don’t know anyone relevant who thinks that a college degree or lack thereof particularly matters (although the knowledge and skills acquired in the course of pursuing said degree may matter a lot). Good people will judge you by what you’ve done to demonstrate skill, not based on a college diploma.
I think IlyaShpitser’s comment pretty much nails it.
I came to the same conclusion, and in general a lack of degree has not impacted me as I get employment based on demonstrated skill.
The main limitation is that any formal Postgrad study is impossible without a degree and this was a regret for me, prior to getting access to the coursera type courses.
This might have been a good call 10 years ago but nowadays Eliezer is participating in regular face to face meetings with skilled mathematicians and scientists in the context of constructing and analyzing theorems and decision strategies. This means that for a large amount of the people who are most important to convince, he gets to screen out all the “evidence” of not having a degree. And to a large extent, someone having the respect of a bunch of math phds is more important a qualifier of talent than having that phd themselves.
There’s theoretically still the problem of selling Eliezer to the muggles but I don’t think that’s anywhere near as important as getting serious thinkers on board.
Different target groups may use different signals.
For example, for a scientist the citations may be more important than formal education. For an ordinary person with a university diploma who never published anything anywhere, formal education will probably remain the most important signal, because that’s what they use. A smart sponsor may instead consider the ability of getting things done. And the New Age fans will debate about how much Eliezer fits the definitions of an “indigo child”.
If the goal is to impress people for whom having an university diploma is the most important signal (they are a majority of the population), the best way would be to find an university which gives the diploma for minimum time and energy spent. Perhaps one where you just pay some money (hopefully not too much), take a few easy exams, and that’s it; you don’t have to spend time at the lessons. After this, no one can technically say that Eliezer has “no formal education”. (And if they start discussing the quality of the university, then Eliezer can point to his citations.) The idea is to do this as easily as possible… assuming it’s even worth doing.
There are also other things to consider, such as the fact that other people working with Eliezer do have formal education… so why exactly is it a problem if Eliezer doesn’t? Does MIRI seem from outside like one man show? Maybe that should be fixed.
A recent experience reminded me that basics are really important. On LW we talk a lot about advanced aspects of rationality.
If you would have to describe the basics, what would you say?
What things are so obvious for you about rationality that they usually go without saying?
You can frequently make your life better by paying attention to what you’re doing, looking for possible improvements, trying your ideas, and observing whether the improvements happen.
I run on hardware that was optimized by millions of years of evolution to do the sort of things my ancestors did tens of thousands of years ago, not the sort of things I do now.
This might be useful for staying honest to yourself and perhaps your allies, but it’s also useful to keep in mind that most people give different kinds of lies different degrees of moral weight.
Nice list, even a bit that’s basic enough that I can put it into an Anki deck about teaching rationality (a long term project of mine but at the moment I doesn’t have enough cards for release).
I’d like to hear about the experience if you’re willing to share it. How basic are we talking about?
This older discussion thread seems to ask a similar question and some answers are relevant to your question. If you think your question phrased in a more specific way would elicit different kinds of responses, it might deserve its own thread.
I’d like to hear about the experience if you’re willing to share it.
The experience wasn’t about the domain of rationality but another subject and the relationships of concepts in that framework. If don’t think it’s useful for people without the experience of the framework.
How basic are we talking about?
As basic as you can get. What is the most basic thing you can say about rationality. If your reaction is: “Duh, I don’t know nothing comes to mind”, that’s exactly why it might be worthwhile to investigate the issue.
Recently there was a discussion about vocabulary for rationality and someone made the point that things can be said either implicit or explicit.
Implicitness and explicitness are pretty basic concepts.
I’m coming at this from ten years of brain fog, unrefreshing sleep, “feeling sick all the time,” etc. Mostly better now; I did a lot of stuff highly specific to my situation. The below mostly helped with enduring it. Remember, I’m just some random idiot on the internet, hope this is helpful, and in no particular order:
John_Maxwell_IV and I were recently wondering about whether it’s a good idea to try to drink more water. At the moment my practice is “drink water ad libitum, and don’t make too much of an effort to always have water at hand”. But I could easily switch to “drink ad libitum, and always have a bottle of water at hand”. Many people I know follow the second rule, and this definitely seems like something that’s worth researching more because it literally affects every single day of your life. Here are the results of 3 minutes of googling:
Dehydration of as little as 1% decrease in body weight results in impaired physiological and performance responses (4), (5) and (6), and is discussed in more detail below. It affects a wide range of cardiovascular and thermoregulatory responses (7), (8), (9), (10), (11), (12), (13) and (14).
The Nationwide Food Consumption Surveys indicate that a portion of the population may be chronically mildly dehydrated. Several factors may increase the likelihood of chronic, mild dehydration, including a poor thirst mechanism, dissatisfaction with the taste of water, common consumption of the natural diuretics caffeine and alcohol, participation in exercise, and environmental conditions. Dehydration of as little as 2% loss of body weight results in impaired physiological and performance responses. New research indicates that fluid consumption in general and water consumption in particular can have an effect on the risk of urinary stone disease; cancers of the breast, colon, and urinary tract; childhood and adolescent obesity; mitral valve prolapse; salivary gland function; and overall health in the elderly. Dietitians should be encouraged to promote and monitor fluid and water intake among all of their clients and patients through education and to help them design a fluid intake plan.
The effect of dehydration on mental performance has not been adequately studied, but it seems likely that as physical performance is impaired with hypohydration, mental performance is impaired as well (62) and (63). Gopinathan et al (29) studied variation in mental performance under different levels of heat stress-induced dehydration in acclimatized subjects. After recovery from exercise in the heat, subjects demonstrated significant and progressive reductions in the performance of arithmetic ability, short-term memory, and visuomotor tracking at 2% or more body fluid deficit compared with the euhydrated state.
So how much is 2% dehydration? http://en.wikipedia.org/wiki/Dehydration#Differential_diagnosis : “A person’s body, during an average day in a temperate climate such as the United Kingdom, loses approximately 2.5 litres of water.[citation needed]” http://en.wikipedia.org/wiki/Body_water quotes Arthur Guyton ’s Textbook of Medical Physiology: “the total amount of water in a man of average weight (70 kilograms) is approximately 40 litres, averaging 57 percent of his total body weight.” So effects on cognition become apparent after 40l*2%=800ml of water has been lost, which takes roughly 800ml/(2.5l/24h) = 8 hours. Now, this assumes water is lost at a constant rate, which is false, but it still seems like it would take a while to lose a full 800ml. Which implies that you don’t have to make a conscious effort to drink more water because everybody gets at least mildly thirsty after, say, half an hour of walking around outside on a warm day, which seems like it would be a lot less than 800ml.
http://freebeacon.com/michelle-obamas-drink-more-water-campaign-based-on-faulty-science/ : “There really isn’t data to support this,” said Dr. Stanley Goldfarb of the University of Pennsylvania. “I think, unfortunately, frankly, they’re not basing this on really hard science. It’s not a very scientific approach they’ve taken. … To make it a major public health effort, I think I would say it’s bizarre.” Goldfarb, a kidney specialist, took particular issue with White House claims that drinking more water would boost energy. ”The idea drinking water increases energy, the word I’ve used to describe it is: quixotic,” he said. “We’re designed to drink when we’re thirsty. … There’s no need to have more than that.”
http://ask.metafilter.com/166600/Drinking-more-water-should-make-me-less-thirsty-right : When you don’t drink a lot of water your body retains liquid because it knows it’s not being hydrated. It will conserve and reabsorb liquid. When you start drinking enough water to stay more than hydrated your body will start using the water and then dispensing of it as needed. Your acuity for thirst will be activated in a different way and in a sense work better.
Some thoughts:
More frequent water-drinking makes you urinate more often, which is probably a bad thing for productivity.
There might be negative effects with chronic mild dehydration at levels less severe than in the studies above.
There might also be hormetic effects. (As in, your body functions best under frequent mild dehydration because that’s what happened in the EEA, and always giving it as much water as it wants will be bad.)
Thoughts? Please post your own opinion if you’re knowledgeable about this or if you’ve researched it.
While you’re at it, you probably should also research how much water is too much, because on the other side of the spectrum lies hyponatremia and having suboptimal electrolyte levels from overdosing water could be harmful to your cognition too, although I think it’s unlikely anyone here will develop a measurable hyponatremia just from drinking too much water. Sweating a lot for example might change the situation.
this definitely seems like something that’s worth researching more because it literally affects every single day of your life
This doesn’t look like a selective enough heuristic alone.
As far as water consumption goes I feel the difference between drinking one liter or four liter per day. I just feel much better with four liter.
There were times two years ago when unless I had drunk 4 liter by the time I entered my Salsa dancing location in the evening, my muscle coordination was worse and the dancing didn’t flow well.
Does that mean that everyone has to drink 4 liters to be at his optimum? No, it doesn’t. Get a feel how different amounts of water consumption effect you. For me the effect was clear to see without even needing to do QS. Even it’s not as clear for you do QS.
this definitely seems like something that’s worth researching more because it literally affects every single day of your life
Lots of things fall in to this category :)
“A person’s body, during an average day in a temperate climate such as the United Kingdom, loses approximately 2.5 litres of water.[citation needed]”
In case it’s not obvious: this probably means in the absence of food/fluid consumption. You can’t go on losing 2.5 litres of water a day indefinitely.
Repeating my post from the last open thread, for better visibility:
I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take in the university. The problem is, my mathematical education isn’t very good (on the level of Calculus 101). I’m not afraid of math, but so far all the books I could find are either about pure application, with barely any explanations, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.
I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample. And it’s even worse with variance...
I too spent a few years with a similar desire to understand probability and statistics at a deeper level, but we might have been stuck on different things. Here’s an explanation:
Suppose you have 37 numbers. Purchase a massless ruler and 37 identical weights. For each of your numbers, find the number on the ruler and glue a weight there. You now have a massless ruler with 37 weights glued onto it.
Now try to balance the ruler sideways on a spike sticking out of the ground. The mean of your numbers will be the point on the ruler where it balances.
Now spin the ruler on the spike. It’s easy to speed up or slow down the spinning ruler if the weights are close together, but more force is required if the weights are far apart. The variance of your numbers is proportional to the amount the ruler resists changes to its angular velocity—how hard you have to twist the ruler to make it spin, or to make it stop spinning.
“I’d like to understand this more deeply” is a thought that occurs to people at many levels of study, so this explanation could be too high or low. Where did my comment hit?
If you are frustrated with hand waving in calculus, read a Real Analysis textbook. The magic words which explain how the heck you can have a probability distributions over real numbers is measure theory).
How does that answer the question? It’s true that the center of gravity is a mean, but the moment of inertia is not a variance. It’s one thing to say something is “proportional to a variance” to mean that the constant is 2 or pi, but when the constant is the number of points, I think it’s missing the statistical point.
But the bigger problem is that these are not statistical examples! Means and sums of squares occur many places, but why are they are a good choice for the central tendency and the tendency to be central? Are you suggesting that we think of a random variable as a physical rod? Why? Does trying to spin it have any probabilistic or statistical meaning?
I wasn’t aiming to answer Locaha’s question as much as figure out what question to answer. The range of math knowledge here is high, and I don’t know where Locaha stands. I mean,
But why [is the mean calculated as] sum/n?
That could be a basic question about the meaning of averages—the sort of knowledge I internalized so deeply that I have trouble forming it into words.
But maybe Locaha’s asking a question like:
Why is an unbiased estimator of population mean a sum/n, but an unbiased estimator of population variance a sum/(n-1)?
That’s a less philosophical question. So if Locaha says “means are like the centers of mass! I never understood that intuition until now!”, I’ll have a different follow up than if Locaha says “Yes, captain obvious, of course means are like centers of mass. I’m asking about XYZ”.
Mean and variance are closely related to center of mass and moment of inertia. This is good intuition to have, and it’s statistical. The only difference is that the first two are moments of a probability distribution, and the second two are moments of a mass distribution.
If you are frustrated with explanations in calculus, read a Real Analysis textbook. And the magic words that explain how the heck you can have probability distributions over real numbers is measure theory.
When you have thousands of different pieces of data, to grasp it mentally, you need to replace them with some simplification. For example, instead of a thousand different weights you could imagine a thousand identical weights, such that the new set is somehow the same as the original set; and then you would focus on the individual weight from the new set.
What precisely does “somehow the same as the original set” mean? Well, it depends on what did the numbers from the original set do; how exactly they join together.
For example, if we speak about weights, the natural way of “joining together” is to add their weight. Thus the new set of the identical weights is equivalent to the original set if the sum of the new set is the same as sum of the old set. The sum of the new set = number of pieces × weight of one piece. Therefore the weight of the piece in the new set is the sum of the pieces in the original set divided by their number; the “sum/n”.
Specifically, if addition is the natural thing to do, the set 3, 4, 8 is equivalent to 5, 5, 5, because 3 + 4 + 8 = 5 + 5 + 5. Saying that “5 is the mean of the original set” means “the original set behaves (with regards to the natural thing to do, i.e. addition) as if it was composed of the 5′s”.
There are situations where some other operation is the natural thing to do. Sometimes it is multiplication. For example, if you multiply some original value with 2, and they you multiply it by 8, the result of these two operations is the same as if you would multiply it twice by 4. In this case it’s called geometric mean, and it’s a root of product.
It can be even more complicated, so it doesn’t necessarily have a name, but the idea is always replacing the original set with a set of identical values such that in the original context they would behave the same way. For example, the example above could be described as a 100% growth (multiplication by 2) and 700% growth (multiplication by 8), and you need to get a result 300% (multiplication by 4); in which case it would be “root of (product of (Xi + 100%)) − 100%”.
If there is no meaningful operation in the original set, if the set can be ordered, we can pick the median. If the set can’t even be ordered, if there are discrete values, we can pick the most frequent value as the best approximation of the original set.
I don’t think that’s really what means are. That intuition might fit the median better. One reason means are nice is that they have really nice properties, e.g. they’re linear under addition of random variables. That makes them particularly easy to compute with and/or prove theorems about. Another reason means are nice is related to betting and the interpretation of a mean as an expected value; the theorem justifying this interpretation is the law of large numbers.
Nevertheless in many situations the mean of a random variable is a very bad description of it (e.g. mean income is a terrible description of the income distribution and median would be much more appropriate).
Edit: On the other hand, here’s one very undesirable property of means: they’re not “covariant under increasing changes of coordinates,” which on the other hand is true of medians. What I mean is the following: suppose you decide to compute the mean population of all cities in the US, but later decide this is a bad idea because there are some really big cities. If you suspect that city populations grow multiplicatively rather than additively (e.g. the presence of good thing X causes a city to be 1.2x bigger than it otherwise would, as opposed to 200 people bigger), you might decide that instead of looking at population you should look at log population. But the mean of log population is not the log of mean population!
On the other hand, because log is an increasing function, the median of log population is still the log of median population. So taking medians is in some sense insensitive to these sorts of decisions, which is nice.
I asked a similar question a while back, and I was directed to this book, which I found to be incredibly useful. It is written at an elementary level, has minimal little maths, yet is still technical, and brings across so many central ideas in very clear, Bayesian, terms. It is also on Lukeprog’s CSA book recommendations for ‘Become Smart Quickly’.
Note: this is the only probability textbook I have read. I’ve glanced through the openings of others, and they’ve tended to be above my level. I am sixteen.
As a first step, I suggest Dennis Lindley’s Understanding Uncertainty. It’s written for the layperson, so there’s not much in the way of mathematical detail, but it is very good for clarifying the basic concepts, and covers some surprisingly sophisticated topics.
ETA: Ah, I didn’t notice that Benito had already recommended this book. Well, consider this a second opinion then.
The problem with most Probability and Statistics courses is the axiomatic approach. Purely formalism. Here are the rules—you can play by them if you want to.
Jaynes was such a revelation for me, because he starts with something you want, not arbitrary rules and conventions. He builds probability theory on basic desiredata of reason that you that make sense. He had reasons for my “whys?”.
Also, standard statistics classes always seemed a bit perverse to me—logically backward. They always just felt wrong. Jaynes approach replaced that tortured backward thinking with clear, straight lines going forward. You’re always asking the same basic question “What is the probability of A given that I know B?”
And he also had the best notation. Even if I’m not going to do any math, I’ll often formulate a problem using his notation to clarify my thinking.
IS this a good book to start with? I know it’s the standard “Bayes” intro around here, but is it good for someone with, let’s say, zero formal probability/statistics training?
I was under the impression that the “this is definitely not a book for beginners” was the standard consensus here: I seem to recall seeing some heavily-upvoted comments saying that you should be approximately at the level of a math/stats graduate student before reading it. I couldn’t find them with a quick search, but here’s one comment that explicitly recommends another book over it.
I think it’s even better if you’re not familiar with frequentist statistics because you won’t have to unlearn it first, but I know many people here disagree.
I suppose it’s better that to never have suffered through frequentist statistics first, but I think you appreciate the right way a lot more after you’ve had to suffer through the wrong way for a while.
Well, Jaynes does point out how bad frequentism is as often as he can get away with. I guess the main thing you’re missing out if you weren’t previously familiar with it is knowing whether he’s attacking a strawman.
I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample.
The mean of the sum of two random variables is the sum of the means (ditto with the variances); there’s no similarly simple formula for the median. (See ChristianKl’s comment for why you’d care about the sum.)
The mean if the value of x that minimizes SUM_i (x—x_i)^2; if you have to approximate all elements in your sample with the same value and the cost of an imperfect approximation is the square distance from the exact value (and any smooth function looks like the square when you’re sufficiently close to the minimum), then you should use the mean.
(Of course, all this means that if you’re more likely to multiply things together than add them, the badness of an approximation depends on the ratio between it and the true value rather than the difference, and things are distributed log-normally, you should use the geometric mean instead. Or just take the log of everything.)
This isn’t at introductory level, but try exploring the ideas around Fisher information—it basically ties together information theory and some important statistical concepts.
Fisher Information is hugely important in that it lets you go from just treating a family of distributions as a collection of things to treating them as a space with its own meaningful geometry. The wikipedia page doesn’t really convey it but this write-up by Roger Grosse does. This has been known for decades but the inferential distance to what folks like Amari and Barndorff-Nielsen write is vast.
Attending a CFAR workshop and session on Bayes (the ‘advanced’ session) helped me understand a lot of things in an intuitive way. Reading some online stuff to get intuitions about how Bayes’ theorem and probability mass work was helpful too. I took an advanced stats course right after doing these things, and ended up learning all the math correctly, and it solidified my intuitions in a really nice way. (Other students didn’t seem to have as good a time without those intuitions.) So that might be a good order to do things in.
Some multidimensional calc might be helpful, but other than that, I think you don’t need too much other math to support learning more probability and stats.
Not really—but I do agree that it’s absolutely vital to understand the basic concepts or terms. I think that’s a major reason why people fail to learn—they just don’t really grasp the most vital concepts. That’s especially true of fields with lots of technical terms. If you don’t understand the terms you’ll struggle to follow even basic lines of reasoning.
For this reason I sometimes provide students with a list of central terms, together with comprehensive explanations of what they mean, when I teach.
I don’t have a good resource for you—I’ve had too much math education to pin down exactly where I picked up this kind of logic. I’d recommend set theory in general for getting an understanding of how math works and how to talk and read precisely in mathematics.
For your specific question about the mean, it’s the only number such that the sum of all (samples—mean) equals zero. Go ahead and play with the algebra to show it to yourself. What it means is that if you go off of the mean, you’re going to be more positive of the numbers in the sample than you are negative, or more negative than positive.
http://intelligence.org/courses/ has information on set theory. I also enjoyed reading Bertrand Russell’s “Principia Mathematica”, but haven’t evaluated it as a source for learning set theory.
Did someone here ask about the name of a fraud where the fraudster makes a number of true predictions for free, then says “no more predictions, I’m selling my system.”? There’s no system, instead the fraudster divided the potential victims into groups, and each group got different predictions. Eventually, a few people have the impression of an unbroken accurate series.
People often ask why MIRI researchers think decision theory is relevant for AGI safety. I, too, often wonder myself whether it’s as likely to be relevant as, say, program synthesis. But the basic argument for the relevance of decision theory was explained succinctly in Levitt (1999):
If robots are to put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable physical and visual events routinely occur. It will not be practical, or even safe, to halt robotic actions whenever the robot encounters an unexpected event or ambiguous visual interpretation.
Currently, commercial robots determine their actions mostly by control-theoretic feedback. Control-theoretic algorithms require the possibilities of what can happen in the world be represented in models embodied in software programs that allow the robot to pre-determine an appropriate action response to any task-relevant occurrence of visual events. When robots are used in open, uncontrolled environments, it will not be possible to provide the robot with a priori models of all the objects and dynamical events that might occur.
In order to decide what actions to take in response to un-modeled, unexpected or ambiguously interpreted events events in the world, robots will need to augment their processing beyond controlled feedback response, and engage in decision processes.
A year ago, I was asked to follow up on my post about the January 2013 CFAR workshop in a year. The time to write that post is fast approaching. Are there any issues / questions that people would be particularly interested in seeing this post address / answer?
Somewhere I saw the claim that in choosing sperm donors the biggest factor turns out to be how cute the baby pictures are, but at this point it’s just a cached thought. Looking now I’m not able to substantiate it. Does anyone know where I might have seen this claim?
Yes, although it would help if you could be a bit more specific, the term is somewhat overloaded.
As for the strategy, depends. Find a better community (than the one you feel alienated from) in the sense of better matching values? We both seem to feel quite at home in this one (for me, if not for the suffocating supremacy of EA).
I meant alienated from society at large, not from LW, although the influence of society at large obviously affects discussion on LW.
One aspect of my feeling is that I increasingly suspect that the fundamental reason people believe things in the political realm is that they feel a powerful psychological need to justify hatred. The naive view of political psychology is that people form ideological beliefs out of their experience and perceptions of the world, and those beliefs suggest that a certain category of people is harming the world, and so therefore they are justified in feeling hatred against that category of people. But my new view is that causality flows in the opposite direction: people feel hatred as a primal psychological urge, and so their conscious forebrain is forced to concoct an ideology that justifies the hatred while still allowing the individual to maintain a positive pro-social self-image.
This theory is partially testable, because it posits that a basic prerequisite of an ideology is that it identifies an out-group and justifies hatred against that out-group.
There is a quote commonly mis-attributed to August Bebel and indeed to Marx: “Antisemitismus ist der Sozialismus des dummen Kerls.” (“Antisemitism is the socialism of the stupid guy”, or perhaps colloquially, “Antisemitism is a dumb-ass version of socialism”) That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker’s view, anyway — would be better remedied by changing the political-economic structure.
Jay Smooth recently put out a video, “Moving the Race Conversation Forward”, discussing recent research to the effect that mainstream-media discussions of racial issues tend to get bogged down in talking about whether an individual did or said something racist, as opposed to whether institutions and social structures produce racially biased outcomes.
There are probably other sources for similar ideas from around the political spectra. (I’ll cheerfully admit that the above two sources are rather lefter than I am, and I just couldn’t be arsed to find two rightish ones to fit the politesse of balance.) People do often look for individuals or out-groups to blame for problems caused by economic conditions, social structures, institutions, and so on. The individuals blamed may have precious little to do with the actual problems.
That said, if someone’s looking to place blame for a problem, that does suggest the problem is real. It’s not that they’re inventing the problem in order to have something to pin on an out-group. (It also doesn’t mean that a particular structural claim, Marxist or whatever, is correct on what that problem really is — just that the problem is not itself confabulated.)
There is a quote commonly mis-attributed to August Bebel and indeed to Marx: “Antisemitismus ist der Sozialismus des dummen Kerls.” (“Antisemitism is the socialism of the stupid guy”, or perhaps colloquially, “Antisemitism is a dumb-ass version of socialism”) That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker’s view, anyway — would be better remedied by changing the political-economic structure.
Does that make socialism the anti-semitism of the smart? Or perhaps of the ambitious—they’re attracted to it because it gives them an enemy big enough to justify taking over everything?
Sure, obviously there are real problems in the world. Your examples seem to support my thesis that people believe in ideologies not because those ideologies are capable of solving the problems, but because the ideologies justify their feelings of hatred.
I suppose I see it as more a case of biased search: people have actual problems, and look for explanations and solutions to those problems, but have a bias towards explanations that have to do with blaming someone. The closer someone studies the actual problems, though, the less credibility blame-based explanations have.
The part where the emotional needs come first, and the ideological belief comes later as a way of expressing and justifying them, that feels credible. I just don’t think that everyone starts from the position of hatred (or, in the naive view, not everyone ends with hatred). There are other emotions, too.
But maybe the people motivated by hatred make a large part of the most mindkilled crowd. Because other emotions can be expressed legitimately also outside of the politics.
Tentatively: Look for what “and therefore” you’ve got associated with the feeling. Possibilities that come to my mind—and therefore people are frightening, or and therefore I should be angry at them all the time, or and therefore I should just hide, or and therefore I shouldn’t be seeing this.
In any case, if you’ve got an “and therefore” and you make it conscious, you might be able to think better about the feeling.
Feeling usually become a problem when you resist them.
My general approach with feelings:
Find someone towards which you can express the content behind the feeling. This works best in person. Online communication isn’t good for resolving feelings. Speak openly about whatever comes to mind.
Track the feeling down in your body. Be aware where it happens to be. Then release it.
I think this feeling arises from social norms feeling unnatural to you. This feeling should be expected if your interests are relevant to this site, since people are not trying to be rational by default.
The difference between a pathetic misfit and and an admirable eccentric is their level of awesomeness. If you become good enough at anything relevant to other people, you don’t have to live through their social expectations. Conform to the norms or rise above them.
Note that I think most social norms are nice to have, but this doesn’t mean there aren’t enough of the kind that make me feel alienated. It could be that the feeling of alienation is a necessary side effect of some beneficial cognitive change, in which case I’d try to cherish the feeling. I’ve found that rising to a leadership position diminishes the feeling significantly, however.
I think that feeling is more common than you might think. Especially if you deviate enough from the societal norm (which Less Wrong generally does).
My general strategy for dealing with is social interaction with people who’ll probably understand. Just talk it over with them. It’s best if you do this with people you care about. It doesn’t have to be in person, if you’ve got someone relevant on Skype, that works as well.
Hmm, this is probably good advice. Part of my problem is that my entire family is made up of people who are both 1) Passionate advocates of an American political tribe and 2) Not very sophisticated philosophically.
A common condition with geeks in general and aspiring rationalists in particular, I’d say.
I’ve recently been expanding my network of like-minded people both by going to the local meetups and also by being invited in a Skype group for tumblr rationalists.
I know that a feeling of alienation isn’t conductive to meeting new people, so I’m not sure I can offer other advice. Contact some friends who might be open to new ideas? I’d offer to help myself, but I’m not sure if I’m the right person to talk to. (In any case, I’ve PM’d my Skype name if you do need a complete stranger to talk to.)
Is it always correct to choose that action with the highest expected utility?
Suppose I have a choice between action A, which grants −100 utilons with 99.9% chance and +1000000 utilons with 0.1% chance, or action B which grants +1 utilon with 100% chance. A has an expected utility of +900.1 utilons, while B has an expected utility of +1 utilon. This decision will be available to me only once, and all future decision will involve utility changes on the order of a few utilons.
Intuitively, it seems like action A is too risky. I’ll almost certainly end up with a huge decrease in utility, just because there’s a remote chance of a windfall. Risk aversion doesn’t apply here, since we’re dealing in utility, right? So either I’m failing to truly appreciate the chance at getting 1M utilons—I’m stuck thinking about it as I would money—or this is a case where there’s reason to not take the action that maximizes expected value. Help?
EDIT: Changed the details of action A to what was intended
I think the non-intuitive nature of the A choice is because we naturally think of utilons as “things”. For any valuable thing (money, moments of pleasure, whatever) anybody who is minimally risk adverse would choose B. But utllons are not things, they are abstractions defined by one’s preferences. So that A is the rational choice is a tautology, in the standard versions of utility theory.
It may help to think it the other way around, starting from the actual preference. You would choose a 99.9% chance of losing ten cents and 0.1% chance of winning 10000 dollars over winning one cent with certainty, right? So then perhaps, as long as we don’t think of other bets and outcomes, we can map winning 1 cent to +1 utilon, losing 10 cents to −100 utilons and winning 10000 dollars to +10000 utilons. Then we can refine and extend the “outcomes ⇔ utilons” map by considering your actual preferences under more and more bets. As long as your preferences are self-consistent in the sense of the VNM axioms, then there will a mapping that can be constructed.
ETA: of course, it is possible that your preferences are not self-consistent. The Allais paradox is an example where many people’s intuitive preferences are not self-consistent in the VNM sense. But constructing such a case is more complicated that just considering risk-aversion on a single bet.
Since utility functions are only unique up to affine transformation, I don’t know what to make of this comment. Do you have some sort of canonical representation in mind or something?
In the context of this thread, you can consider U(status quo) = 0 and U(status quo, but with one more dollar in my wallet) = 1. (OK, that makes +10000 an unreasonable estimate of the upper bound; pretend I said +1e9 instead.)
Yes, this seems almost certainly true (and I think is even necessary if you want to satisfy the VNM axioms, otherwise you violate the continuity axiom).
Yes I’m quite aware… note that if there’s a sequence of outcomes whose values increase without bound, then you could construct a lottery that has infinite value by appropriately mixing the lotteries together, e.g. put probability 2^-k on the outcome with value 2^k. Then this lottery would be problematic from the perspective of continuity (or even having an evaluable utliity function).
Are lotteries allowed to have infinitely many possible outcomes? (The Wikipedia page about the VNM axioms only says “many”; I might look it up on the original paper when I have time.)
There are versions of the VNM theorem that allow infinitely many possible outcomes, but they either
1) require additional continuity assumptions so strong that they force your utility function to be bounded
or
2) they apply only to some subset of the possible lotteries (i.e. there will be some lotteries for which your agent is not obliged to define a utility).
I might look it up on the original paper when I have time.
The original statement and proof given by VNM are messy and complicated. They have since been neatened up a lot. If you have access to it, try “Follmer H., and Schied A., Stochastic Finance: An Introduction in Discrete Time, de Gruyter, Berlin, 2004”
See also Kreps, Notes on the Theory of Choice. Note that one of these two restrictions are required in order to specifically prevent infinite expected utility. So if a lottery spits out infinite expected utility, you broke something in the VNM axioms.
For anyone who’s interested, a quick and dirty explanation is that the preference relation is primitive, and we’re trying to come up with an index (a utility function) that reproduces the preference relation. In the case of certainty, we want a function U:O->R where O is the outcome space and R is the real numbers such that U(o1) > U(o2) if and only if o1 is preferred to o2. In the case of uncertainty, U is defined on the set of probability distributions over O, i.e. U:M(O) → R. With the VNM axioms, we get U(L) = E_L[u(o)] where L is some lottery (i.e. a probability distribution over O). U is strictly prohibited from taking the value of infinity in these definition. Now you probably could extend them a little bit to allow for such infinities (at the cost of VNM utility perhaps), but you would need every lottery with infinite expected value to be tied for the best lottery according to the preference relation.
I’m not sure, although I would expect VNM to invoke the Hahn-Banach theorem, and it seems hard to do that if you only allow finite lotteries. If you find out I’d be quite interested. I’m only somewhat confident in my original assertion (say 2:1 odds).
I’d flip that around. Whatever action you end up choosing reveals what you think has highest utility, according to the information and utility function you have at the time. It’s almost a definition of what utility is—if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong.
If the utility function you think you have prefers B over A, and you prefer A over B, then there’s some fact that’s missing from the utility function you think you have (probably related to risk).
I’ve recently come to terms with how much fear/anxiety/risk avoidance is in my revealed preferences. I’m working on working with that to do effective long-term planning—the best trick I have so far is weighing “unacceptable status quo continues” as a risk. That, and making explicit comparisons between anticipated and experienced outcomes of actions (consistently over-estimating risks doesn’t help any, and I’ve been doing that).
I sometimes have the same intuition as banx. You’re right that the problem is not in the choice, but in the utility function and it most likely stems from thinking about utility as money.
Lets examine the previous example and make it into money (dollars):
−100 [dollars] with 99.9% chance and +10,000 [dollars] with 0.1% vs 100% chance at +1 [dollar]
When doing the math, you have to take into future consequences as well. For example, if you knew you would be offered 100 loaded bets with an expected payoff of $0.50 in the future, each of which only cost you $1 to participate in, then you have to count this in your original payoff calculation if losing the $100 would prohibit you from being able to take these other bets.
Basically, you have to think through all the long-term consequences when calculating expected payoff, even in dollars.
Then when you try to convert this to utility, it’s even more complicated. Is the utility per dollar gained in the +$10,000 case equivalent to the utility per dollar lost in the -$100 case? Would you feel guilty and beat yourself up afterwards if you took a bet that you had a 99.9% chance of losing? Even though a purely rational agent probably shouldn’t feel this, it’s still likely a factor in most actual humans’ utility functions.
TrustVectoring summed it up well above:
If the utility function you think you have prefers B over A, and you prefer A over B, then there’s some fact that’s missing from the utility function you think you have.
If you still prefer picking the +1 option, then most likely your assessment that the first choice only gives a negative utility of 100 is probably wrong. There are some other factors that make it a less attractive choice.
Depending on your preferred framework, this is in some sense backwards: utility is, by definition, that thing which it is always correct to choose the action with the highest expected value of (say, in the framework of the von Neumann-Morgenstern theorem).
Thanks for the link, that was an excellent exposition and defense of compatibilism. Here is one particularly strong paragraph:
If we are interested in whether somebody has free will, it is some kind of ability that we want to assess, and you can’t assess any ability by “replaying the tape.”… This is as true of the abilities of automobiles as of people. Suppose I am driving along at 60 MPH and am asked if my car can also go 80 MPH. Yes, I reply, but not in precisely the same conditions; I have to press harder on the accelerator. In fact, I add, it can also go 40 MPH, but not with conditions precisely as they are. Replay the tape till eternity, and it will never go 40MPH in just these conditions. So if you want to know whether some rapist/murderer was “free not to rape and murder,” don’t distract yourself with fantasies about determinism and rewinding the tape; rely on the sorts of observations and tests that everyday folk use to confirm and disconfirm their verdicts about who could have done otherwise and who couldn’t.
rely on the sorts of observations and tests that everyday folk use to confirm and disconfirm their verdicts about who could have done otherwise and who couldn’t.
It is common for incompatibilists to say that their conception of free will (as requiring the ability to do otherwise in exactly the same conditions) matches everybody’s intuitions and that compatibilism is a philosopher’s trick based on changing the definition. Dennett is arguing that, contrary to this, what actual people in actual circumstances do when they want to know if someone was “free to do otherwise” is never to think about global determinism; rather, as compatibilism requires, they think about whether that person (or relevantly similar people) actually does/do different when placed under very similar (but not precisely identical) conditions.
they think about whether that person (or relevantly similar people) actually does/do different when placed under very similar (but not precisely identical) conditions.
I think the key is consideration people “in similar, but not exactly identical, circumstance”. It’s how the person compares to hypothetical others. Free will is a concept used to sort people for blame based on intention.
The MIRI course list bashes on “higher and higher forms of calculus” as not being useful for their purposes and calculus is not on the list at all. However, I know that at least some kind of calculus is needed for things like probability theory.
So imagine a person wanted to work their way through the whole MIRI course list and deeply understand each topic. How much calculus is needed for that?
Not much. The kind of probability relevant to MIRI’s interests is not the kind of probability you need calculus to understand (the random variables are usually discrete, etc.). The closest thing to needing a calculus background is maybe numerical analysis (I suspect it would be helpful to at least have the intuition that derivatives measure the sensitivity of a function to changes in its input), but even then I think that’s more algorithms. Not an expert on numerical analysis by any means, though.
If you have a general interest in mathematics, I still recommend that you learn some calculus because it’s an important foundation for other parts of mathematics and because people, when explaining things to you, will often assume that you know calculus after a certain point and use that as a jumping-off point.
Thanks. I took single variable calculus, differential equations, and linear algebra in college, but its been four years since then and I haven’t really used any of it since (and I think I really only learned it in context, not deeply). I’ve just been trying to figure out how much of my math foundations i’m going to need to re-learn.
Last night we had meetup in Ljubljana. It was a good debate, but quite a heretical one for the LW standards. Especially when organizers left us. Which was unfortunate. We mostly don’t see ourselves particularly bonded to LW at all. Especially I.
We discussed personal identity, possible near super-intelligence (sudden hack, if you wish), Universe transformation following this eventuality, and some lighter topics like fracking for gas and oil, language revolutions throughout history, neo-reactionaries and their points, Einstein’s brains (whether they were lighter or heavier than average—I am quite sure they were heavier but it seems that the Cathedral says otherwise).
We discussed Three Worlds Collide, IBM brain simulations, MIRI endeavors and progress, genetics …
Heretical? Well, considering that ‘heretic’ means ‘someone who thinks on their own’, I’m not sure how we’re supposed to interpret that negatively.
I assume however that you meant ‘disagreeing with common positions displayed on LW’ - which of those common positions did you differ on, and why, and just how homogeneous do you think LW is on those?
I can speak mostly for myself. Still, we the locals go back decade and more, discussing some topics.
It is kind of clear to me, that there is a race toward superintelligence. As it was always the race toward some future technology, be it flying, be it atomic bomb, be it Moon race … you name it.
Except, that this is the final, most important race ever. What can you expect then from the competitors? You can expect them to claim, that the Singularity/Transcendence is still far, far away. You can expect, that the competition will try to persuade you to abandon your own project, if you have any. For example, by saying that an uncontrollable monster is lurking in the dark, named UFAI. They will say just about anything, to persuade you to quit.
This works both ways, between almost any two competitors, to be clear.
My view is the following. If you are clever and dare enough, you can write a 10000 lines or there about long computer program, and there will be the Singularity the very next month.
I am not sure, if there is a human (group) currently able to accomplish this. Very well might be. It’s likely NOT THAT difficult.
We discussed the Marylin vos Savant’s toying with Paul Erdos. A smartass against a top scientist is occasionally like a cat and mouse game, where the mouse mistakenly thinks he’s a cat. There are many other examples, like Ballard against all the historians and archeologists. Or Moldbug against Dawkins.
Of course, that does not automatically mean another smartass is preying upon the MIRI and AI academia combined, in the real AI case. But it’s not impossible. May be several different big cats in the wild who keep a low profile for a time being. Might be lion with his pride, inhabiting the academia also.
The most interesting outcome would be no Singularity for a few decades.
If you are clever and dare enough, you can write a 10000 lines or there about long computer program, and there will be the Singularity the very next month.
That seems an… unusual view. Have you actually tried writing code that exhibits something related to intelligence?
It depends on your language and coding style, doesn’t it? I’ve seen C style guides that require you to stretch out onto 15 lines what I’d hope to take 4, and in a good functional language shouldn’t take more than 2.
Yes, and the number of lines is a ridiculously bad metric of the code’s complexity anyway.
Was a funny moment when someone I know was doing a Java assignment, I got curious, and it turned out that a full page of Java code is three lines in Perl :-)
That really depends on coding style, again. I find that common Java coding styles are hideously decompressed, and become far more readable if you do a few things per line instead of maybe half a thing. Even they aren’t as bad as the worst C coding styles I’ve seen, though, where it takes like 7 lines to declare a function.
As for Perl vs Java… was it solved in Perl by a Regex? That’s one case where if you don’t know what you’re doing, Java can end up really bloated but it usually doesn’t need to be all that bad.
Is there a reasonably well researched list of behaviors that correlate positively with lifespan? I’m interested in seeing if there are any low hanging fruit I’m missing.
I found this previously posted, and a series of posts by gwern, but was wondering if there is anything else?
A quick google will give you a lot of lists but most of them are from news sources that I don’t trust.
I found this list of causes of death by age and gender enlightening (it doesn’t necessarily tell you that a particular action will increase your lifespan, but then again neither do correlations). For example, I was surprised by how often people around my age or a bit older die of suicide and “poisoning” (not sure exactly what this covers but I think it covers stuff like alcohol poisoning and accidentally overdosing on medicine?).
“Scientists from Dana-Farber Cancer Institute, Brigham and Women’s Hospital, and the Harvard School of Public Health came to this conclusion after analyzing data on nearly 120,000 people collected over 30 years.”
“The most obvious benefit was a reduction of 29 percent in deaths from heart disease—the major killer of people in America. But we also saw a significant reduction − 11% - in the risk of dying from cancer.”
The researchers point out that the study was not designed to examine cause and effect and so cannot conclude that eating more nuts causes people to live longer.
Indeed, the study consists only of observational data, not interventional, so what causal conclusions could be drawn from it?
Sorry, there are two separate issues: the data itself (which is a big dataset where they following a big set of nurses for many years, and recorded lots of things about them), and how the data could be used to maybe get causal conclusions.
Plenty of folks at Harvard (e.g. Miguel Hernan, Jamie Robins) used this data in a sensible way to account for confounding (naturally their results are relatively low on the ‘hierarchy of evidence’, but still!) Trying to draw causal conclusions from observational data is 95% of modern causal inference!
Has anyone had experiences with virtual assistants? I’ve been aware of the concept for many years but always been wary of what I perceive to be the risks involved in letting a fundamentally unknown party read my email.
I’d like to hear about any positive or negative experiences.
One problem with searching for information about the trustworthiness of entities like these is that one suspects any positive reports one finds via Googling to be astroturfing, and if one finds negative reports, well, negatives are always over-reported in consumer services. That’s why I’m asking here.
I don’t, but in Tim Ferris’ book Four-Hour Work Week, I think I recall him recommending them. I think this was the one he recommended: https://www.yourmaninindia.com/.
Let me know if you come across some good findings on this. If effective, virtual assistants could be very useful, and thus they’re something I’m interested in. On that note, it’d probably be worth writing a post about them.
Has anyone paired Beeminder and Project Euler? I’d like to be able to set a goal of doing x problems per week and have it automatically update, instead of me entering the data in manually. Has anyone cobbled together a way to do it, which I could piggyback off of?
The purpose of the Templeton Foundation is to spray around more money than most academics could dream of - $9 million for philosophy! - seeking to try to blur the lines between science and religion and corrupt the public discourse. The best interpretation that can reasonably be put on taking the Templeton shilling is that one is doing so cynically.
The purpose of the Templeton Foundation is [...] to try to blur the lines between science and religion and corrupt the public discourse.
What’s your basis for this interpretation? And particularly the “corrupt the public discourse” bit? I read your link, and I remember it getting briefly badmouthed in The God Delusion, but I’d prefer something a little more solid to go on, since this seems to lie on the sharp side of Hanlon’s razor.
Well, here’s Sean Carroll’s take on the matter. They don’t seem like the worst organization in the world or anything, but I too was disappointed to hear about Max accepting their money.
Thanks, that’s the kind of thing I was looking for. I’d expect (boundedly) rational people to be able to disagree on the utility of promoting secularism, but Carroll’s take on it does seem like a reasonable and un-Hanlony approach to the issue.
Any book recommendations for a good intro to evolutionary psychology? I remember Eliezer suggested The Moral Animal, but I also vaguely remember some other people recommending against it. I’ll probably just go with TMA unless some other book gets suggested multiple times.
I found TMA was too full of just so stories. I also think it disturbingly rationalized a particular brand of sexism$ and overemphazised status which was very unexpected since I don’t think I’m squeamish at all on those fronts. I don’t think it helped me to predict human behavior better.
This said I’d be interested too if someone could recommend some other book.
$ rigid view of differences between the sexes, incompatible with my experience (which does suggest the sexes are different)
“Dont”. Intro over. Evo-Psyc is .. profoundly useless unless you want to use it as a case study of how pre-existing biases and social norms can utterly take over a field operating in a nigh-total vaccum of actual data. Now, I cant guarantee that the entire field is noise and fury signifying nothing, but every sample of it that I have encountered has been.
“Dont”. Intro over. Evo-Psyc is .. profoundly useless unless you want to use it as a case study of how pre-existing biases and social norms can utterly take over a field operating in a nigh-total vaccum of actual data. Now, I cant guarantee that the entire field is noise and fury signifying nothing, but every sample of it that I have encountered has been.
Pre-existing biases and social norms have utterly taken over something here, in a vacuum of actual data. They may also have applied to some degree to Evo-psych works.
I would assume that it’s considered worse than death by some because with death it’s easier to ignore the opportunity cost. Wireheading makes that cost clearer, which also explains why it’s considered negative compared to potential alternatives.
Speaking for myself, I consider wireheading to be very negative, but better than information-theoretic death, and better than a number of scenarios I can think of.
I think the big fear is stasis. In each case you’re put in a certain state of being without any recourse to get out of it, but wireheading seems to be like a state of living death.
I concur, but I think it wise to draw a distiction between wireheading as in an extreme example of a blissed out opiate haze, where one does nothing but feel content and so has no desire to acheve anything, and wireheading as in a state of strongly positive emotions where curisity, creativity etc remain intact.
Yes, if a rat is given a choice it will keep on pressing the lever, but maybe a human would wedge the lever open and then go and continue with life as normal? To continue the drug analogy, some drugs leave people in a stupor, some make people socialable, some result in weird music.
I would say the first type is certainly better then death, and the latter ‘headonistic imperitive’ wireheading sounds utopic.
Some people value “actual things” being achieved by entities and like Slackson implied a society of wireheads takes away resources and has opportunity costs.
“Sexy” isn’t signaling—it’s a characteristic that people (usually) try to signal, more or less successfully. “I’m sexy” basically means “You want me” : note the difference in subjects :-)
Pretty much the same thing. Regardless of an, um, widespread misunderstanding :-D sexy behavior does NOT signal either promiscuity or sexual availability. It signals “I want you to desire me” and being desired is a generally advantageous position to be in.
If a man succeeds in signaling a high sexuality to a women, the woman might still treat him as a creep. Especially if there no established trust, signal really high amounts of sexuality doesn’t result in “You want me”.
In my own interactions with professional dancers there are plenty of situations where the woman succeeds in signaling a high amount of sexyness. I however know that I”m dancing with a professional dancer who going to sent that signal to a lot of guys so she doesn’t enter my mental category of potential mates.
I think people frequently go wrong when the confuse impression of characteristics with goals.
For a reasonable definition of sexy, the term refers to letting a woman feel sexual tension. If you talk about social interactions it’s useful to have a word that refers to making another person feel sexual tension.
Of course you can define beautiful, attractive and sexy all the same way. Then you get a one dimensional model where Bob wants Alice with utility rating X. I don’t think that’s model is very useful to understanding how humans behave in mating situations.
I define it as “arousing sexual interest and desire in people of appropriate gender and culture”. Note that this is quite different from “beautiful” and is a narrow subset of “attractive”.
the term refers to letting a woman feel sexual tension.
“Tension” generally implies conflict or some sort of a counterforce.
“Tension” generally implies conflict or some sort of a counterforce.
Testosterone which is commonly associated with sexiness in males is about dominance. It has something to do with power that does create tension.
Of course a woman can decide to have sex with shy a guy because he’s nice and she thinks that he’s intelligent or otherwise a good match. Given that there are shy guys who do have sex that’s certainly happening in reality.
Does that mean that the behavior of that guy deserves the label “sexy”? I don’t think he’s commonly given that label.
There also words like sensual and empathic. A guy can get layed by being very empathic and just making woman that feel really great by interacting with him in a sensual way. I think it’s useful to separate that mentally from the kind of behavior that comes from testosterone that commonly get’s called sexy.
If you read an exciting thriller you are also feeling tension even when you aren’t in conflict with the book or there some counterforce. Building up tension and then releasing it is a way for human to feel pleasure.
Sexy is a quite broad word that probably used by different people in different ways. I think for most people it about what they feel when looking at the person. Those feeling where set up by evolution over large time frames.
Evolution doesn’t really care about whether you get a fun intercourse partner.
But it’s not only evolution. It also has a lot to do with culture. Culture also doesn’t care about whether you get a fun intercourse partner. People who watch a lot of TV get taught that certain characteristics are sexy.
For myself I would guess that most of my cultural imprint regarding what I find sexy comes from dancing interactions.
If a woman moves in a way that suggests that she doesn’t dance well, that will reduce her sex appeal to me more than it probably does with the average male.
Being sexy signals health, youth, and fertility. This is quite well supported by evidence and discussed in many books and articles.
I would agree with what Lumifer says below, but I think sexy can be signalling when many people are involved: look at the sexy people I hang out with. Being with sexy people brings high status because it’s high status.
I keep looping through the same crisis lately, which comes up any time someone points out that I’m pretentious / an idiot / full of shit / lebens unwertes leben / etc.:
Is there a good way for me to know if I’m actually any good at anything? What are appropriate criteria to determine whether I deserve to have pride in myself and my abilities? And what are appropriate criteria to determine whether I have the capacity to determine whether I’ve met those criteria?
Having followed your posts here and on #lesswrong, I got an impression of your personality as a bizarre mix of insecurities and narcissism (but without any malice), and this comment is no exception. You are certainly in need of a few sessions with a good therapist, but, judging by your past posts, you are not likely to actually go for it, so that’s a catch 22. Alternatively, taking a Dale Carnegie course and actually taking its lessons to heart and putting an effort into it might be a good idea. Or a similar interpersonal relationship course you can find locally and afford.
Yeah, the narcissism is something that I’ve been trying to come up with a good plan for purging since I first became aware of it. (I sometimes think that some of the insecurities originally started as a botched attempt to undo the narcissism).
The therapy will absolutely happen as soon as I have a reasonable capacity to distinguish “good” therapists from “bad” ones.
The therapy will absolutely happen as soon as I have a reasonable capacity to distinguish “good” therapists from “bad” ones.
Bad plan (and also a transparent, falsely humble excuse to procrastinate). Picking a therapist at random will give you distinctly positive expected value. Picking a therapist recommended by a friend or acquaintance will give you somewhat better expected value.
Incidentally, one of the methods by which you can most effectively boost your ability to distinguish between good therapists from bad therapists is by having actual exposure to therapists.
Some things are easier to tell whether you’re good at than others. I guess you aren’t talking about the more assessable things (school/university studies, job, competitive sport, weightlifting, …) but about things with a strong element of judgement (quality as a friend or lover, skill in painting, …) or a lot of noise mixed with any signal there might be (stock-picking[1], running a successful startup company, …).
[1] Index funds are the canonical answer to that one, but you know that already.
So, anyway, the answer to “how do I tell if I’m any good at X?” depends strongly on X.
But maybe you really mean not “(know if I’m actually any good at) anything” but know if I’m actually (any good at anything)”—i.e., the question isn’t “am I any good at X?” but “is there anything I’m any good at?”. The answer to that is almost certainly yes; if someone is seriously suggesting otherwise then they are almost certainly dishonest or stupid or malicious or some combination of those, and should be ignored unless they have actual power to harm you; if some bit of your brain is seriously suggesting otherwise then you should learn to ignore it.
There are almost certainly specific X you have good evidence of being good at, which will imply a positive answer to “is there anything I’m good at?”. Pick a few, inspect them as closely as you feel you have to to be sure you aren’t fooling yourself, and remember the answer.
If someone else is declaring publicly that you are a pretentious idiot and full of shit, it is likely that what’s going on is not at all that they’re trying to make an objective assessment of your capabilities or character, but that they are engaged in some sort of fight over status or influence or something, and are saying whatever seems like it may do damage. I expect you have good reasons for getting into that sort of fight, so I’ll just say: bear in mind when you do that this is a thing that happens, and that such comments are usually not useful feedback for self-assessment.
If you want to mention some specific X, I expect you’ll get some advice on ways to assess whether you’re any good at it/them. But I think the most important thing here is that the thing that’s provoking your self-doubt, although it looks like an assessment of your capabilities, really isn’t any such thing.
You could take a cognitive psych approach to some of this. What are the other person’s qualifications?
I recommend exploring the concept of good enough.
There’s a bit in Nathaniel Branden about “a primitive sense of self-affirmation”—which I take to be the assurance that babies start out with that they get to care about their pain and pleasure. It isn’t even a question for them. And animals are pretty much the same.
You don’t need to have a right to be on your own side, you can just be on your own side.
Something I’ve been working on is getting past the idea that the universe is keeping score, and I have to get everything right.
What I believe about your situation is that you’ve been siding with your internal attack voice, and you need to associate your sense of self with other aspects of yourself like overall physical sensations.
Do you have people who are on your side? If so, can you explore taking their opinion seriously?
The attack voice comes on so strong it seems like the voice of reality, but it’s just a voice. I’ve found that it’s hard work to change my relationship to my attack voice, but it’s possible.
For what it’s worth, I think your prose is good. It’s clear, and the style (as distinct from the subject matter) is pleasant.
Generally, their qualifications are that the audience is rallying around them. Also, they don’t know me, which makes them less likely to be biased in my favor. (I.e., the old “my mom says I’m great at , so shut up!” problem)
...the assurance that babies start out with that they get to care about their pain and pleasure.
This flies in the face of the political climate I exist within, that talks primarily about the gallish “entitlement” of poor people who believe they have the right to food and shelter and work.
Do you have people who are on your side? If so, can you explore taking their opinion seriously?
It’s very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me.
I.e., people who clearly love me and are willing to share portions of their life with me are willing to go so far as to say “I think you do pretty well.” People whom I’ve never met are willing to go so far as to say “fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone . Fuck it, I’m looking up your address; I’ll kill you.”
That churns up all sorts of emotional and social reactions, which makes processing the whole thing rationally even harder.
Generally, their qualifications are that the audience is rallying around them. Also, they don’t know me, which makes them less likely to be biased in my favor. (I.e., the old “my mom says I’m great at , so shut up!” problem)
On the other hand, they might be more likely to be biased against you, and they certainly don’t know a lot about your situation.
...the assurance that babies start out with that they get to care about their pain and pleasure.
This flies in the face of the political climate I exist within, that talks primarily about the gallish “entitlement” of poor people who believe they have the right to food and shelter and work.
Can you find a different political environment?
I’ve noticed that conservatives tend to think that everything bad that happens to a person is the fault of that person, and progressives tend to think that people generally don’t have any responsibility for their misfortunes. Both are overdoing it, but you might need to spend some time with progressives for the sake of balance.
Also, I’ve found it helps to realize that malice is an easy way of getting attention, so there are incentives for people to show malice just to get attention—and some of them are getting paid for it. The thing is, it’s an emotional habit, not the voice of reality.
Unfortunately, people are really vulnerable to insults. I don’t have an evo psy explanation, though I could probably whomp one up.
Do you have people who are on your side? If so, can you explore taking their opinion seriously?
It’s very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me.
It is very difficult, but I think you’ve made some progress. All I can see is what you write, but it seems like you’re getting some distance from your self-attacks in something like the past year or so.
I find it helps to think about times when I’ve been on my own side and haven’t been struck by lightning.
It’s very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me.
I.e., people who clearly love me and are willing to share portions of their life with me are willing to go so far as to say “I think you do pretty well.” People whom I’ve never met are willing to go so far as to say “fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone . Fuck it, I’m looking up your address; I’ll kill you.”
I might be an outlier, but a spiel like “fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone . Fuck it, I’m looking up your address; I’ll kill you” doesn’t signal casualness to me. The only people I’d expect to say that casually are trolls trying to get a rise out of people. Idle trolling aside, someone laying down a fusillade of abuse like that is someone who cares quite a bit (and doubtless more than they’d like to admit) about my behaviour. Hardly an unbiased commentator! (I recognize that’s easier said than internalized.)
I’ve started playing with SimpleWebRTC and its component parts
I am precommitting to another update by February 10th
This is a minimally-viable update on account of recent travel and imminent job interviews, but the precommitments seem to be succeeding in at least forcing something like progress and keeping some attention on the problem.
I’m in art school and I have a big problem with precision and lack of “sloppiness” in my work. I’m sort of hesitant to try to improve in this area, however, because I suspect it reflects some sort of biological limit—maybe the size of some area in the cerebellum or something, I don’t know. Am I right in thinking this?
Seems to me that that’s likely a self-fulfilling prophecy, which I subjectively estimate is at least as likely to prevent you from doing better as an actual biological problem. Maybe try to think of more ways to get better at it—perhaps some different kind of exercises—and do your best at those, before drawing any conclusions about your fundamental limits… because those conclusions themselves will limit you even more.
I have never biked twenty miles in one go. It could be that this reflects some inherent limit. Or it could be that I just haven’t tried yet.
If I believe that it is an inherent limit, how might I test my belief? Only by trying anyway. If I try and succeed, then I will update.
If I believe that it is not an inherent limit, how might I test my belief? Only by trying anyway. If I try and fail, then I will update.
In either case, the test of my ability Is not in contemplating what mechanisms of self might limit me, But in trying anyway, when I have the opportunity to do so, And seeing what happens.
Be careful not to find yourself 7 miles away from home on your bike and too tired to keep on cycling.
If that means arranging with a friend to pick you up in their car if you have to bail out, or picking a circular route that never takes you that far from home, or any other way of handling the contingency. Going “but suppose I fail!” and not trying is an even worse piece of wormtonguing than the one fubarobfusco is addressing.
I think it’s a metaphor thing. Like, in writing, if you say “The shadow of a lamppost lay on the ground like a spear. He walked and it pierced him like a spear.” What more description of the scene do you need than that? In fact, talking about the color of the path or what kind of trousers our character was wearing would be counterproductive to the quality of the writing.
One could view sloppiness in art in the same way—use of metaphor that captures the scene without the need for detail.
Maybe your tendency towards precision is at the wrong times? If practicing, for example, it might be counterproductive since you probably want quantity instead of quality, or maybe you’re trying to get everything down precisely too early on and it’s making your work stiff.
Manfred’s point is good- “metaphor that captures the scene without the need for detail.”… If you render background details overmuch, they can distract the viewer from the focal point of the work. Maybe put some effort into looking at how the “metaphors” of different things work? For example, how more skilled artists draw/paint grass in the distance, or whatnot.
I think it’s a common thing to sort of notice something wrong in an area, and to spend a lot of time on that area in hopes of fixing it, which would make it less sloppy… Maybe sketch that thing a lot for practice.
If you’re drawing from life, it’s possible that lack of sloppiness comes from not making sense of the gestalt, so to speak. I’d think that understanding the form of the subject and how the lighting on it works means you can simplify things away. I don’t do much (read: any) figure drawings from life, but I’d imagine that understanding the figure and what’s important and what isn’t would be helpful. Maybe doing some master copies of skilled, more abstract drawings of the figure would help. Maybe look up a comic artist or cartoonist you like and look at what they do.
ETA:
To address your actual question, I’d say I don’t know any particular evidence for why that should be so.
Rationality-technique-wise: It’s good that you asked people, since that would bring you evidence of the idea being true or false. In the future it might be even more useful to suppress hypothesizing until some more investigating has gone on- “biological limit” is the sort of thing that feels true if you don’t understand how to do something or how to understand how to do something. I think there’s a post about this, or something; let me see if I can find it… ETA2: The exact anecdote I was thinking of doesn’t apply as much as I thought it did, but maybe the post “Fake Explanations” or something applies?
I would guess that you try to exert too much control. The kind of “sloppiness” that’s useful for creativity is about letting things go.
Meditation might help.
As you are female, dancing a partner dance where you have to follow and can’t control everything might be useful. Letting go of trying to control is lesson 101 for a lot of woman who pick up Salsa dancing.
I would guess that you try to exert too much control. The kind of “sloppiness” that’s useful for creativity is about letting things go.
I’m already good at this part of creativity, but precision is also pretty important. Right now I’m working on a project where I have to trace in pen (can’t erase, flaws are obvious) photographs that I took. Letting things go won’t help here.
As a lead, you learn that you aren’t really controlling much of anything in Salsa either. You’re setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don’t expect.
But I’m guessing that you’ve hit on the right direction of interpretation of sloppiness as letting go of control. I’d extend that to too much self conscious control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest* of your capabilities for the same intention.
When the self monitoring person in your head tries to do too much, he gets in the way of the rest of you doing it right.
As a lead, you learn that you aren’t really controlling much of anything in Salsa either. You’re setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don’t expect.
For advanced dancing that’s true. For beginners, not so much. At the beginning Salsa is the guy leading a move and the woman following.
If you are a guy and want to learn dancing for the sake of letting go control I wouldn’t recommend Salsa. I think it took me 1 1⁄2 years to get to that point.
A whole 1 1⁄2 years? Took me a lot longer than that. I’ve been at Salsa mainly for about a decade.
Yes, the unfortunate fact is that most leads are taught to “lead moves” when they start. If they were taught to lead movement, they’d make faster progress, IMO. Leading should be leading, to the point of manipulation, and not signaling a choreographed maneuver. I’ve seen a West Coast instructor teach a beginning class that way, and thought it was the best beginning class I had ever seen.
A whole 1 1⁄2 years? Took me a lot longer than that.
I think on of the turning events was for me my first Bachata Congress in Berlin. I didn’t know too many Bachata patterns and after hours of dancing the brain just switches off and let’s the body do it’s thing.
But you are right that it might well take longer for the average guy. That means it’s not a good training exercise to pick up the skill of letting go control for man.
For woman on the other hand it’s something to be learned at the beginning.
Yes, the unfortunate fact is that most leads are taught to “lead moves” when they start.
At the beginning I mainly thought I didn’t understand what teaching dance is all about and that a bunch of teachers have something like real expertise.
The more I dance the more I think that their teaching is very suboptimal. A local Salsa teacher teaches mainly patterns in her lessons. On the other hand she writes on her blog about how it’s all in the technique and about traits like confidence. It’s also not like she didn’t learn dance at formal dance university courses for 5 years, so she should know a bit.
Things like telling a guy who dances with a bit of distance to the girl to dance closer, just aren’t good advice when the girl isn’t comfortable with dancing closer. Yes, if they would dance closer things would be nicer, but there usually a reason why a pair has the distance it has.
Leading should be leading, to the point of manipulation, and not signaling a choreographed maneuver.
Manipulation is an interesting choice of words. What do you mean with it?
I remember a Kizomba dance a year ago where I didn’t know much Kizomba. I did have a lot of partner perception from Bachata. I picked up enough information from my dance partner that I could just follow her movements in a way where she didn’t thought she was leading but I was certainly dancing a bunch of steps with her I hadn’t learned in a lecture.
To use sort of what “manipulation” means in osteopathy I think you could call that nonmanipluative leading. In Bachata I think there are a lot of situation where a movement is there in the body but surpressed and things get good if they lead can “free” the movement and stabilize it. I think such nonmanipulative dancing is quite beautiful.
Unfortunately I’m not good enough to do that in Salsa and even in Bachata I’m not always having good enough perception.
But I’m guessing that you’ve hit on the right direction of interpretation of sloppiness as letting go of control. I’d extend that to too much self conscious* control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest of your capabilities for the same intention.
That seems related with the common observation that it’s easier to speak a foreign language when drunk than when sober: in the latter case I feel I’m so worried of saying something grammatically incorrect that I end up speaking in very simple sentences and very haltingly. (And the widespread use of drugs among rock musicians is well-known.)
If other people working the same craft have managed to achieve precision, it’s very unlikely to be a biological limit, right? The resolution of human fine motor skills is really high.
You didn’t mention what the craft was or the nature of the sloppiness, but have you considered using simple tools to augment technical skills? Perhaps a magnifying glass, rulers. pieces of string/clay or other suitably shaped objects to guide the hand, etc?
You could try doing something that gives immediate feedback for sloppiness, like simple math problems for example. You might gain some generalizable insight like that speed affects sloppiness. Since you already practice meditation, it should be easier to become aware of the specific failure modes that contribute to sloppiness, which doesn’t seem to be a well defined thing in itself.
I’m recalling a Less Wrong post about how rationality only leads to winning if you “have enough of it”. Like if you’re “90% rational”, you’ll often “lose” to someone who’s only “10% rational”. I can’t find it. Does anyone know what I’m talking about, and if so can you link to it?
I’m like 60% sure that its not that article I had in mind, but the idea is the same (incremental increases in rationality don’t necessarily lead to incremental increases in winning), so I feel pretty satisfied regardless. Thanks!
In any case, Eliezer has touched on this point multiple times in the sequences, often as a side note in posts on other topics. (See for example in Why Our Kind Can’t Cooperate.) It’s an important point, regardless.
I’m quite new to LW, and find myself wondering whether Hidden Markov models (HMM) are underappreciated as a formal reasoning tool in the rationalist community, especially compared to Bayesian networks?
Perhaps it’s because HMM seem to be more difficult to grasp?
Or it’s because formally HMM are just a special case of Bayesian networks (i.e. dynamic Bayes nets)? Still, HMM are widely used in science on their own.
For comparison, Google search “bayes OR bayesian network OR net” site:lesswrong.com gives 1,090 results.
Google search hidden markov model site:lesswrong.com gives 91 results.
Well, Kurzweil is an extremely accomplished inventor aside from being a pie-in-the-sky futurist, so when he says something about a particular algorithm working well, I assume he knows what he’s talking about. He seems to think hidden hierarchical Markov models are the best way to represent the hierarchical nature of abstract thought.
I’m not saying he’s correct, just saying, it seems to be a popular idea.
There’s a proliferation of terminology in this area; I think a lot of these are in some sense equivalent and/or special cases of each other. I guess “Bayesian network” is more consistent with the other Bayes-based vocabulary around here.
Is there a good way of finding what kind of job might fit a person? Common advice such as “do what you like to do” or “do what you’re good at” is relatively useless for finding a specific job or even a broader category of jobs.
I’ve did some reading on 80000 hours, and most of the advice there is on how to choose between a couple of possible jobs, not on finding a fitting one from scratch.
I think for most people who ask this question, the range of fitting jobs is much wider than they think. You learn to like what you become good at.
If I were to pick a career right now, I’d just take a long list of reasonably complex jobs and remove any that contain an obvious obstacle like a skill requirement I’m unlikely to improve at. Then from what is left, I’d narrow the choice by some other criteria than perceived fit, income and future employment prospects for example and then pick one of them either by some additional criteria or randomly. I’m confident I’d learn to like almost any job chosen this way.
If you make money you can do whatever you like in the future even if you chose your job poorly in the first place. So please don’t choose to become an English major.
Is there a good way of finding what kind of job might fit a person?
That’s a strange question.
Either you want to know how to pick up the skill of being a career adviser. Alternatively you want to find a job for yourself. You might also be a parent who tries to find a job that fits his child instead of letting the child decide for themselves.
I think the answers to those three possibilities are very different.
Does anyone have a simple, easily understood definition of “logical fallacy” that can be used to explain the concept to people who have never heard of it before?
I was trying to explain the idea to a friend a few days ago but since I didn’t have a definition I had to show her www.yourlogicalfallacyis.com. She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.
She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.
You think she would’ve understood the concept even more quickly if you had a definition? I think people underestimate the value of showing people examples as a way of communicating a concept (and overestimate the value of definitions).
Well I know I won’t be around a computer 24⁄7, and I’d like something to explain it if I’m out and about. Although I suppose I could use a couple examples that I can just memorize, like strawman arguments and ad hominum.
It’s a bad concept, at least the way it’s traditionally used in introductory philosophy classes. It encourages people to believe that certain patterns of argument are always wrong, even though there are many cases in which those patterns do constitute good (non-deductive) arguments. Instructors will often try to account for these cases by carving out exceptions (“argument from authority is OK if the authority is actually a recognized expert on the topic at hand”), but if you have to carve out loads of exceptions in order to get a concept to make sense, chances are you’re using a crappy concept.
Ultimately, I can’t find any unifying thread to “logical fallacy” other than “commonly seen bad argument”, but even that isn’t a very good definition because there are many commonly seen bad arguments that aren’t usually considered logical fallacies (the base rate fallacy, for instance). Also, by coming up with cute names to label entire patterns of argument, and by failing to carve out enough exceptions, most expositions of “logical fallacy” end up labeling many good arguments as fallacious.
So I guess my advice would be to stop using the concept altogether, rather than trying to explicate it. If you encounter a particular instance of a “logical fallacy” that you think is a bad argument, explain why that particular argument doesn’t work, rather than just saying “that’s an argumentum ad populum” or something like that.
A logical fallacy is an argument that doesn’t hold together. All of its assumptions might be true, but the conclusion doesn’t actually follow from them.
“Fallacy” is used to mean a few different things, though.
Formal fallacies happen when you try to prove something with a logical argument, but the structure of your argument is broken. For instance, “All weasels are furry; Spot is furry; therefore Spot is a weasel.” Any argument of this “shape” will have the same problem — regardless of whether it’s about weasels, politics, or Java programming.
Informal fallacies happen when you try to convince people of your conclusion through arguments that are irrelevant. A lot of informal fallacies try to argue that a statement is true because of something else — like its popularity, or the purported opinion of a smart person; or that its opponents are villains.
To a “regular person”, I might say something like “a logical fallacy is a form of reasoning that seems good to many humans, but actually isn’t very good”.
I don’t think this is so simple to explain, because to really understand logical fallacies you need to understand what a proof is. Not a lot of people understand what a proof is.
I just feel there is a difference between a “fallacy enthusiast” (someone who knows lists of logical fallacies, can spot them, etc.) and a “mathematician” (who realizes a ‘logical fallacy’ is just ‘not a tautology’), in terms of being able to “regenerate the understanding.”
This is similar to how you can try to explain to lawyers how they should update their beliefs in particular cases as new evidence comes to light, but to really get them to understand, you have to show them a general method:
Could you explain why it is necessary to understand what a proof is in order to understand logical fallacies? Most commonly mentioned fallacies are informal. I’m not seeing how understanding the notion of proof is necessary (or even relevant) for understanding informal fallacies.
Can anyone recommend a good replacement for flagfic.com ? This was a site that could download stories from various archives (fanfiction.net, fimfiction.net, etc) transform them to various e-reader formats, and email them to you. I used it to email fanfics I wanted to read directly to my Kindle as .mobi files.
Many thanks for the suggestion! I’ve started trying it out, and though it doesn’t seem to work perfectly for fimfiction.net (half the .mobi files I create from fanfics there get rejected for some reason when I email them to my kindle), it so far seems to work fine with fanfiction.net at least.
An excuse for me to learn Python so I can fix whatever it’s doing wrong. :-)
EDIT: On second thought, fimfiction.net allows me to get html downloads of the stories, which I can then email to kindle anyway—so as long as fanficdownloader works with fanfiction.net, I’m all set :-) Thanks again.
I tried the video at the url, and it seemed a lot more like straining (little pun about the mistaken url), but that might not be a fair test.
The basic idea of getting hip mobility seems sound, but I recommend Scott Sonnon’s Ageless Mobility and IntuFlow, and the The Five Tibetan Rites—sorry for the cheesy name on the latter, but they’re a cross between yoga and calisthenics with a lot of emphasis on getting backwards/forwards pelvis mobility.
Every now and then I like to review my old writings so I can cringe at all the wrong things I wrote, and say “oops” for each of them. Here we go...
That’s probably wrong. IIRC, previous eras’ low life expectancy was mostly due to high child mortality.
This sentence is defensible for certain definitions of “significant,” but I think it was a mistake to include this sentence (and the following quotes from Hutter and Schmidhuber) in the paper. AIXI and Godel machines probably aren’t particularly important pieces of progress to AGI worth calling out like that. I added those paragraphs to section 2.4. not long before the submission deadline, and regretted it a couple months later.
No, that’s a misreading of the study.
Eh, not really.
Silly. Donor-advised funds basically always fund as the donor wishes.
The Wiki link in the linked LW post seems to be closer to “Stanislav Petrov saved the world” than “not really”:
...
...
A closely related article says:
That he didn’t literally have his finger on the “Smite!” button, or that the SU might still not have retaliated if he’d raised the alarm, is not the point.
I have long thought that the very idea of “life expectancy at birth” is a harmful one, because it encourages exactly that sort of confusion. It lumps together two things (child mortality and life expectancy once out of infancy) with sufficiently different causes and sufficiently different effects that they really ought to be kept separate.
Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live? Or even at a past time?
Sure, there is the concept of life expectancy at specific age. For example, there is the “default” life expectancy at birth, there is the life expectancy for a 20 year-old, life expectancy for a 60-year-old, etc. Just google it up.
It’s kind of important to the life insurance business ….
Thanks. Interestingly, My numbers never matched up between any 2 sources.
The US SSA’s actuarial tables give me a number that’s 5 years different from their own “additional life expectancy” calculator.
Huh. I followed the link to the correction of the Petrov story, and found I’d already upvoted it.
But if you’d asked me yesterday for examples of heroes yesterday, I’d have cited Petrov immediately. S
hows how hard it is to unlearn false information once you’ve learned it.
Smart move not only to review but post the results. Shows humbleness and at the same time prevents being called on it later.
This is an approach I’d like to see more often. Maybe you should add it to the http://lesswrong.com/lw/h7d/grad_student_advice_repository/ or some such.
On the AIXI and such… you see, its just hard to appreciate just how much training it takes to properly understand something like that. Very intelligent people, with very high mental endurance, train for decades, to be able to mentally manipulate the relevant concepts at their base level. Now, let’s say someone only spent a small fraction of the time—either because they’ve pursued a wrong topic through the most critical years, or because they have low mental endurance. Unless they’re impossibly intelligent, they have no chance of forming even a merely good understanding.
I’ve been systematically downvoted for the past 16 days. Every day or two, I’d lose about 10 karma. So far, I’ve lost a total of about 160 karma.
It’s not just somebody just going through my comments and downvoting the ones they disagree with. Even a comment where I said “thanks” when somebody pointed out a formatting error in my comments is now at −1.
I’m not sure what can/should be done about this, but I thought I should post it here. And if the person who did this is here and there is a reason, I would appreciate it if you would say it here.
A quick look at the first page of your recent comments shows most of your recent activity to have been in the recent “Is Less Wrong too scary to marginalized groups?” firestorm.
One of the most recent users to complain about mass downvoting also cited participation in flame-bait topics (specifically gender).
I would prefer to see a little less victim-blaming here.
(I’m not sure whether you intended it as such—but that phrase “participation in flame-bait topics” sounds like it.)
That was not my intention. (If it’s any consolation, I participated in the same firestorm.)
How is this victim blaming? As I interpret it the claim is that the person was probably NOT the victim of systematic downvoting but instead made a lot of comments that are counter to what people like to hear, creating the illusion of same.
Hard to explain getting downvoted for
as being about saying things “counter to what people like to hear”. Which is why I didn’t interpret CAE_Jones as suggesting that that’s what was going on.
For what it’s worth, I agree with gjm that “flame-bait” was a poor choice of words on my part, and I understand how it could have been taken as victim-blaming in spite of my intentions.
Gah… This is becoming way too common, and it seems like there’s pretty good evidence (further supported in this instance) regarding the responsible party. I wish someone with the power to do so would do something about it.
For context, link to past discussion of mass-downvoting.
I got a seemingly one-time hit of this about a week ago. For what it’s worth I had just been posting comments on the subject of rape, but a whole bunch of my unrelated comments got it too.
(Since then it’s been having an obnoxious deterrent effect on my commenting, because I feel so precariously close to just accumulating negative karma every time I post, leaving readers with the impression that my ideas have all been identified as worthless by someone probably cleverer than themselves. I’m now consciously trying to avoid thinking like this)
I have experienced this also, though roughly a month ago, after an extended debate on trans* issues specifically.
I responded by messaging the person I had argued with, and politely asking that, if it was them who had been downvoting me, they please stop going through my comment history. I got no response, but the stream of downvotes seemed to tail off shortly thereafter.
EDIT: As a side note, the person with whom I had been debating/arguing was the same one that showed up in the thread ChrisHallquist linked. It looks like it’s a pattern of behavior for him.
I have blindly upvoted your 10 most recent comments. This is meant as consolation but likely a one-time action .
I also hereby claim without evidence that I have been subject to mass downvoting. Please give me 10 karma.
If you like I can pick someone I don’t like to accuse without evidence of being the guilty party.
Here obviously the rule applies that once a pattern is estabilished it is immediately prone to be gamed.
Contrary to the downvoters that befell you I see this as a humorus comment and correspondingly upvated it.
Just to make this clear: I will not do so reliably in the future.
Mix it up a bit, yo.
That sounds like a pretty low value comment. Is it beneficial to third parties to be able to read it? If not, just make the correction and PM your thanks. Otherwise you’re unnecessarily wasting everyone’s time in the guise of politeness.
I think saying thanks is public for corrections contributes to a courteous atmosphere. It encourages third parties to make and accept corrections.
If such thanks were getting to be a drag on discussion, I’d recommend a different policy, but I don’t think we’re at that point.
Acknowledging that there indeed an error and that you weren’t intentionally doing something is helpful.
If you make the correction and not mention that the comment that points out the issue is right, it will seem to anybody who reads the discussion later as if the comment points out a nonexisting problem.
It is not necessary that the argument I gave be right. All that is necessary for the g’grandparent to be wrong is for there to be a plausible reason why someone would want to downvote such a comment, other than malice.
Robin Hanson on Facebook:
Consider the case of Willard Wells and his Springer-published book Apocalypse When?: Calculating How Long the Human Race Will Survive (2009). From a UCSD news story about a talk Wells gave about the book:
For a taste of the book, here is Wells’ description of one specific risk:
Now, despite Larry Carter’s being “persuaded by Wells’ credentials” — which might have been exaggerated or made-up by the journalist, I don’t know — I suspect very few people have taken Wells seriously, for good reason. He’s clearly just making stuff up, with almost no study of the issue whatsoever. (On this topic, the only people he cites are Joy, Kurzweil, and Posner, despite the book being published in 2009.)
But reading that passage did drive home again what it must be like for most people to read FHI or MIRI on AI risk, or Robin Hanson on ems. They probably can’t tell the difference between someone who is making stuff up and an argument that has gone through a gauntlet of 15 years of heated debate and both theoretical and empirical research.
Yes by judging someone on their credentials in other fields, you can’t tell if they are just making stuff up on this subject vs. studied it for 15 years.
Wells’s book: Apocalypse when.
I took a quick skim through the book. Your focused criticism of Wells’s book is somewhat unfair. The majority of the book (ch. 1–4) is about a survival analysis of doomsday risks. The scenario you quoted is in the last chapter (ch. 5), which looks like an afterthought to the main intent of the book (i.e., providing the survival analysis), and is prepended by the following disclaimer:
I think it is fair to criticize the crackpot scenario that he gave as an example, but your criticism seems to suggest that his entire book is of the same crackpot nature, which it is not. It is unfortunate that PR articles and public attention focuses on the insubstantial parts of the book, but I am sure you know what that is like as the same occurs frequently to MIRI/SIAI’s ideas.
Orthogonal notes on the book’s content: Wells seems unaware of Bostrom’s work on observation selection effects, and it appears that he implicitly uses SSA. (I have not carefully read enough of his book to form an opinion on his analysis, nor do I currently know enough about survival analysis to know whether what he does is standard.)
Ah, you’re right that I should have quoted the “This set serves as a foil” paragraph as well.
I found chs. 1-4 pretty unconvincing, too, though I’m still glad that analysis exists.
Yes, I’m an academic and I get a similar reaction from telling people I study the Singularity as when I say I’ve signed up for cryonics. Thankfully, I have tenure.
What happens when you say, “I study the economic implications of advanced artificial inteligence,” to people?
I don’t phrase it this way.
Do you actually say you “study the singularity” or give a more in depth explanation? I ask because the word study is usually used only in reference to things that do or have exisited, rather than to speculative future events.
I go into more depth, especially when I (unsuccessfully) came up for promotion for full professor.
It might be a worthwhile endeavor to modify our wiki such that it serves not only as a mostly local reference on current terms and jargon, but also as an independent guide to the various arguments for and against various concepts, where applicable. It could create a lot of credibility and exposure to establish a sort of neutral reference guide / an argument map / the history and iterations an idea has gone through, in a neutral voice. Ideally, neutrality regarding PoV works in favor of those with the balance of arguments in their favor.
This need not be entirely new material, but instead simply a few mandatory / recommended headers in each wiki entry, pertaining to history, counterarguments etc. Could be worth it lifting the wiki from relative obscurity, with a new landing page, and marketed potentially as a reference guide for journalists researching current topics. Kruel’s LW interview with Shane Legg got linked to in a NYTimes blog, why not a suitable LW wiki article, too?
I don’t think that’s the case. Most people who are listened to on the future don’t tend to speak to an audience primarily consisting of futurists.
There are think tanks who employee people to think about the future and those think tanks tend generally to be quite good at influencing the public debate.
I also don’t think that academic has any special claim to be specialists about the future. When I think about specialists on futurism names like Stewart Brand or Bruce Sterling.
This is a very important and general point. While it is important to communicate ideas to a general audience, generally excessive communication to general audiences at the expense of communication to peers should be “bad news” when it comes to evaluating experts. Folks like Witten mostly just get work done, they don’t write popular science books.
Witten doesn’t ring a bell with me. Googling the name might mean Edward Witten and Tarynn Madysyn Witten. Do you mean either or them or someone else?
I mean Edward Witten, one of the most prominent physicists alive. The fact that his name does not ring a bell is precisely my point. The names that do ring a bell are the names of folks who are “good at the media,” not necessarily folks who are the best in their field.
Okay, given that the subject is theoretical physics and I’m not much into that field I understand why I have no recognition.
When looking at his Wikipedia I see he made Time 100 so it still might be worth knowing the name.
Witten is one of the greatest physicists alive, if not the greatest. He is the one who unified the various string theories into M-theory. He is also the only physicist to receive a Fields Medal.
Some names familiar to LWers seem to have just made their fortunes (again, in some cases); http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/ (via HN)
I liked Legg’s blog & papers and was sad when he basically stopped in the interests of working on his company, but one can hardly argue with the results.
EDIT: bigger discussion at http://lesswrong.com/r/discussion/lw/jks/google_may_be_trying_to_take_over_the_world/#comments—new aspects: $500m, not $400m; DeepMind proposes an ethics board
I’m going to do the unthinkable: start memorizing mathematical results instead of deriving them.
Okay, unthinkable is hyperbole. But I’ve noticed a tendency within myself to regard rote memorization of things to be unbecoming of a student of mathematics and physics. An example: I was recently going through a set of practice problems for a university entrance exam, and calculators were forbidden. One of the questions required a lot of trig, and half the time I spent solving the problem was just me trying to remember or re-derive simple things like the arcsin of 0.5 and so on. I knew how to do it, but since I only have a limited amount of working memory, actually doing it was very inefficient because it led to a lot of backtracking and fumbling. In the same sense, I know how to derive all of my multiplication tables, but doing it every time I need to multiply two numbers together is obviously wrong. I don’t know how widespread this is, but at least in my school, memorization was something that was left to the lower-status, less able people who couldn’t grasp why certain results were true. I had gone along with this idea without thinking about it critically.
So these are the things I’m going to add to my anki decks, with the obligatory rule that I’m only allowed to memorize results if I could theoretically re-derive them (or if the know-how needed to derive them is far beyond my current ability). These will include common trig results, derivatives and integrals of all basic functions, most physical formulae relating heat, motion, pressure and so on. I predict that the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems, though I can’t think of a way to measure this. Also, recommendations for other things to memorize are welcome.
Also, relevant
In my experience memorization often comes for free when you strive for fluency through repetition. You end up remembering the quadratic formula after solving a few hundred quadratic equations. Same with the trig identities. I probably still remember all the most common identities years out of school, owing to the thousands (no exaggeration) of trig problems I had to solve in high school and uni. And can derive the rest in under a minute.
Memorization through solving problems gives you much more than anki decks, however: you end up remembering the roads, not just the signposts, so to speak, which is important for solving test problems quickly.
You are right that “the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems”, I am not sure that anki is the best way to achieve this reduction, though it is certainly worth a try.
In general there the core principle of spaced repetition that you don’t put something into the system that you don’t already understand.
When trying to memorize mathematical results make sure that you only add cards when you really have a mental understanding. Using Anki to avoid forgetting basic operations is great. If you however add a bunch of information that’s complex, you will forget it and waste a lot of time.
That’s true if you’re just using spaced repetition to memorize, although I’d add that it’s still often helpful to overlearn definitions and simple results just past the boundaries of your understanding, along the lines of Prof. Ravi Vakil’s advice for potential students:
The second point I’d make is that the spacing effect (distributed practice) works for complex learning goals as well, although it will help if your practice consists of more than rote recall.
If you learn definitions it’s important to sit down and actually understand the definition. If you write a card before you understand it, that will lead to problems.
Yeah, I’m wary of that fact and I’ve learned the downsides of it through experience :)
Nice, and good luck! I’m glad to see that my post resonated with someone. For rhetorical purposes, I didn’t temper my recommendations as much as I could have—I still think building mental models through deliberate practice in solving difficult problems is at the core of physics education.
I treat even “signpost” flashcards as opportunities to rehearse a web of connections rather than as the quiz “what’s on the other side of this card?” If an angle-addition formula came up, I’d want to recall the easy derivation in terms of complex exponentials and visualize some specific cases on the unit circle, at least at first. I also use cards like that in addition to cards which are themselves mini-problems.
In this article, Eliezer says:
Recently, a similar phrase popped into my head, which I found quite useful:
That’s all.
I don’t know what you mean precisely by confusion, but I personally can’t always control what my immediate primal level response is to certain situations. If I try to strictly avoid certain feelings, I usually end up convincing myself that I’m not feeling that way when actually I am. I’d rather notice what I’m feeling and then move on from there, it’s probably easier to control your thinking that way. Just because you’re angry doesn’t mean you have to act like angry.
That’s basically what I meant. The move is to notice the anger, fear or disgust and then realize that this emotion isn’t useful and can be actively detrimental. Then consciously try to switch to curiosity.
Of course, I couldn’t condense the full messiness of reality into a pithy saying.
“Make yourself feel curiosity” is not very concretely actionable in the short term. If you want to coin near-mode actionable advice, instead of a far-mode affirmation of positive emotions, you might say something like, “The proper responses to feelings of confusion are orienting and exploring behaviors”.
Those behaviors should be unpacked to more specific things like looking at your surroundings, asking questions of nearby people, searching your memory for situation-relevant information, and planning an experiment or other (navigable) (causal) path to sources of information. Those levels should be fleshed out and made more concrete too.
Now that I’ve given some helpful advice, I think I’ve earned an opportunity to express some cynicism: cheering for curiosity and exploration over anger, disgust, and fear shows a stereotypical value alignment of affluent, socially tolerant people in safe environments. The advice you give will not serve you will in adversarial games like chess. It will not serve you well in combat or social competition. It is in many situations harmful advice.
Separate and unrelated, I would not like to see this template for inarticulately expressing advice continued. Mostly I say this for the same reasons that we don’t make use of reaction .gifs and image macros on lesswrong. There is also a small concern that variants of familiar phrases are harder to evaluate critically, much as the mnemonic device of rhyme apparently makes some specific phrasings of claims more credible than others phrasings of those same claims.
PSA: You can download from scribd without paying, you just need to upload a file first (apparently any file—it can be a garbage pdf or even a pdf that’s already on scribd). They say this at the very bottom of their pricing page, but I didn’t notice until just now.
Hello, we are organizing monthly rationality meetups in Vienna—we have previously used the account of one of our members (ratcourse) but would like to switch to this account (rationalityvienna). Please upvote this account for creating rationality vienna meetups.
Reason #k Why I <3 Pomodoros:
They really help me get over akrasia. I beemind how many pomodoros I do per week, so I do tasks I would otherwise procrastinate if I can do 20 minutes of them (yes, I do short pomodoros) and get to enter a data point at the end. Often I find that the task is much shorter/less awful than it felt in the abstract.
Example: I just moved today, and didn’t have that much to unpack, but decided I’d do it tomorrow, because I felt tired and it would presumably be long and unpleasant. But then I realized I could get a pomodoro out of it (plus permission from myself to stop after 20 min and go to bed). Turns out it took 11 minutes and now I’m all set up!
I do this all the time and it’s great!
Even if you know that signaling is stupid, it doesn’t escape the cost of not signaling.
It’s a longstanding trope that Eliezer gets a lot of flack for having no formal education. Formal education is not the only way to gain knowledge, but it is a way of signaling knowledge, and it’s not very easy to fake (Not so easy to fake that it falls apart as a credential on its own). Has anyone toyed around with the idea of sending him off to get a math degree somewhere? He might learn something, and if not it’s a breezy recap of what he already knows. He comes out the other side without the eternal “has no formal education” tagline, and a whole new slew of acquaintances.
Now, I understand that there may be good reasons not to, and I’d very much appreciate someone pointing me to any previous discussion in which this has been ruled out. Otherwise, how feasible does it sound to crowdfund a “Here’s your tuition and an extra sum of money to cover the opportunity cost of your time, I don’t care how unfair it is that people won’t take you seriously without credentials, go study something useful, make friends with your professors, and get out with the minimum number of credits possible” scholarship?
I think the bigger issue w/ people not taking EY seriously is he does not communicate (e.g. publish peer reviewed papers). Facebook stream of consciousness does not count. Conditional on great papers, credentials don’t mean that much (otherwise people would never move up the academic status chain).
Yes it is too bad that writing things down clearly takes a long time.
Somehow I doubt I will ever persuade Eliezer to write in a style fit for a journal, but even still, I’ll briefly mention that Eliezer is currently meeting with a “mathematical exposition aimed at math researchers” tutor. I don’t know yet what the effects will be, but it seemed (to Eliezer and I) a worthwhile experiment.
Presumably if MIRI were awash with funding you’d pay experts to make papers out of Eliezer’s work, freeing Eliezer up for other things?
That’s basically what another of our ongoing experiments is.
True. It seems like the great-papers avenue is being pursued full-steam these days with MIRI, but I wonder if they’re going to run out of low-hanging fruit to publish, or if mainstream academia is going to drag their heels replying to them.
I don’t think you understand signaling well.
Eliezer managed signaling well enough to get a billionaire to fund him on his project. A billionaire who fund people who drop out of college systematically in projects like his 20 Under 20 program.
Trying to go the traditional route wouldn’t fit into the highly effective image that he already signals.
Put another way, the purpose of signaling isn’t so nobody will give you crap. It’s so somebody will help you accomplish your goals.
People will give you crap, especially if they can get paid to do so. See gossip journalists, for instance. They are not paid to give boring and unsuccessful people crap; they are paid to give interesting and successful people crap.
Your last para would imply that not getting crap from gossip journalists means you are not interesting or successful. Eliezer/MIRI gets almost no press. Are you sure that’s what you meant?
Eliezer gets a lot more press than I do, which is just fine with me.
Well, yes, there is going to be some inevitable crap, but the purpose of signalling is so that you could impress a much larger pool of people. So it might not be much help for gossip journalists, but it might help with the marginal professional ethicist, mathematician, or public figure. In that area, you might get some additional “Anybody who can do that must be damn impressive.”. Does the additional damn-impressive outweigh the cost? I don’t know, that’s why I’m asking.
The discussion about mean vs variance in this post may be relevant.
Peter Thiel (the billionaire) has the proven ability to spot talent, which is why he is a billionaire. Eliezer has traits that Thiel values, and this is probably much more important than any signal Eliezer sent.
Impressing Thiel is independent of a future degree or not, because he’s already impressed. Where’s the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel? Maybe MIRI doesn’t need another billionaire, but I don’t think they’d turn one away.
I think the deal that Eliezer has with Thiel is that Eliezer does MIRI full time. Switching focus to getting a degree might violate the deal. Gives that Thiel has a lot of money impressing Thiel more might also be very useful if they want more money from him.
Do you really think that someone who isn’t contrarian will put his money into MIRI? The present set up is quite okay. Those who want people with academic credentials can give their money to FHI. Those who want more contrarian people can give their money to MIRI.
Whether or not Eliezer has a degree doesn’t change that he’s the kind of person who has a public Okcupid profile detailing his sexual habits and the fact that he’s polyamorous.
When Steve Job was alive and run around in a sweater, he didn’t cause people to disregard him because he wasn’t wearing a suit.
People respect the person who’s a contrarian who’s okay with not everyone liking him. The contrarian who tries to get every to like them on the other hand get’s no respect.
On the other hand if he decides to get a degree and pulls it off in a year or something impressive like that it could just feed into the contrarian genius image.
Yes, but that would prokbably either mean paying someone else to do your homework with means that you are vunerable to attack or making studying the sole focus for a year.
Yes, the autodidact signal can be tremendously effective, particularly in tech/libertarian company.
In addition “getting flak” isn’t necessarily a bad thing.
It can be counter-signaling if you can get flak and stay standing.
It can also polarize people and separate those who can evaluate the inside arguments to realize that you’re good from those who can’t and have to just write you off for having no formal education.
Eddie has some math talent. He can invest some time, money, and effort C to get a degree, which allows other people to discern that he has a higher probability of having that math talent. This higher probability confers some benefit in that other people will more readily take his advice in mathematical matters, or talk with him about his math.
The fun twist is that Eddie lives in a society with many other individuals with varying degrees of math talent, each of whom can expend C to get a degree and the associated benefits. People with almost no mathematical talent have a prohibitively high C, because even if they can pony up the time and money, they have to work very hard to fake their way through. But people with high math ability often choose to stand out by getting the degree, because their C is relatively lower, and a very high proportion of them get degrees. This creates a high association between degrees and mathematical ability, and makes it unlikely to see high mathematical ability in the absence of a degree.
That’s the basic idea, plus degrees signal other things which may be completely unrelated to math, but are still nice. Even in the case where the degree has no causal effect no math ability, there are benefits to having one, in that the other math people can judge very quickly that they’re interested in talking to you.
Hopefully that demonstrates that I understand signalling. My question is about the costs and benefits of a particular signal.
It demonstrate that you don’t. Humans make decisions via something called the availability heuristic.
If you bring into the awareness of the person that you are talking that you are a mathematician that only has a bachleor, no master, no PHD and no professorship that you aren’t bringing expertise into his mind.
If you are however a self taught person who managed to published multiple papers among them a paper titled “Complex Value Systems in Friendly AI” in Artificial General Intelligence Lecture Notes in Computer Science Volume and who has his own research institute that’s a better picture.
If you have published papers that a lot more relevant for relevant experts than whether you have a degree that verifies basic understanding. If a person really cares whether Eliezer has a math degree he already lost that person.
I’m not certain that getting a degree now counts as the traditional route. Also, I don’t think that an additional degree is particularly damaging to his image. People aren’t going to lose interest in FAI if he sells out and gets a traditional degree. Or they are and I have no idea what kind of people are involved.
4 years (or even 1 year if you are super hard-core) of time is a pretty non-trivial investment. I was 2 classes away from a second degree and declined to take them, because the ~100 hours of work it would have taken wasn’t worth the additional letters after my name. I also just really don’t know anyone relevant who thinks that a college degree or lack thereof particularly matters (although the knowledge and skills acquired in the course of pursuing said degree may matter a lot). Good people will judge you by what you’ve done to demonstrate skill, not based on a college diploma.
I think IlyaShpitser’s comment pretty much nails it.
I came to the same conclusion, and in general a lack of degree has not impacted me as I get employment based on demonstrated skill. The main limitation is that any formal Postgrad study is impossible without a degree and this was a regret for me, prior to getting access to the coursera type courses.
If you buy into the “crunch time” narrative, that’s a lot of opportunity cost.
This might have been a good call 10 years ago but nowadays Eliezer is participating in regular face to face meetings with skilled mathematicians and scientists in the context of constructing and analyzing theorems and decision strategies. This means that for a large amount of the people who are most important to convince, he gets to screen out all the “evidence” of not having a degree. And to a large extent, someone having the respect of a bunch of math phds is more important a qualifier of talent than having that phd themselves.
There’s theoretically still the problem of selling Eliezer to the muggles but I don’t think that’s anywhere near as important as getting serious thinkers on board.
Different target groups may use different signals.
For example, for a scientist the citations may be more important than formal education. For an ordinary person with a university diploma who never published anything anywhere, formal education will probably remain the most important signal, because that’s what they use. A smart sponsor may instead consider the ability of getting things done. And the New Age fans will debate about how much Eliezer fits the definitions of an “indigo child”.
If the goal is to impress people for whom having an university diploma is the most important signal (they are a majority of the population), the best way would be to find an university which gives the diploma for minimum time and energy spent. Perhaps one where you just pay some money (hopefully not too much), take a few easy exams, and that’s it; you don’t have to spend time at the lessons. After this, no one can technically say that Eliezer has “no formal education”. (And if they start discussing the quality of the university, then Eliezer can point to his citations.) The idea is to do this as easily as possible… assuming it’s even worth doing.
There are also other things to consider, such as the fact that other people working with Eliezer do have formal education… so why exactly is it a problem if Eliezer doesn’t? Does MIRI seem from outside like one man show? Maybe that should be fixed.
A diploma mill degree like you describe is not going to get any respect from the (large) population that went to a real university.
Would getting more citations partly nullify the lack of formal education?
A recent experience reminded me that basics are really important. On LW we talk a lot about advanced aspects of rationality.
If you would have to describe the basics, what would you say? What things are so obvious for you about rationality that they usually go without saying?
You can frequently make your life better by paying attention to what you’re doing, looking for possible improvements, trying your ideas, and observing whether the improvements happen.
There is no magic.
I am not in a story.
Words are detachable handles.
Brilliant.
I run on hardware that was optimized by millions of years of evolution to do the sort of things my ancestors did tens of thousands of years ago, not the sort of things I do now.
People can change (e.g. update on beliefs, self-improve).
How to choose your actions—think about your goals, think what steps achieve them in the best way, act on those steps.
There is such a thing as objective truth.
Amazing how the basic pillars of rationality are things other people so often don’t agree with, even though they seem so dead obvious to me.
This is a fun exercise. The list could be a lot longer than I originally expected.
belief is about evidence
0 and 1 are not probabilities
Occam’s razor
strawman and steelman
privileging the hypothesis
tabooing
instrumental-terminal distinction of values
don’t pull probabilities out of your posterior
introspection is often wrong
intuitions are often wrong
general concept of heuristics and biases
confirmation and disconfirmation bias
halo effect
knowing about biases doesn’t unbias you
denotations and connotations
many more
“not technically lying” is de facto lying
This might be useful for staying honest to yourself and perhaps your allies, but it’s also useful to keep in mind that most people give different kinds of lies different degrees of moral weight.
Nice list, even a bit that’s basic enough that I can put it into an Anki deck about teaching rationality (a long term project of mine but at the moment I doesn’t have enough cards for release).
I’d like to hear about the experience if you’re willing to share it. How basic are we talking about?
This older discussion thread seems to ask a similar question and some answers are relevant to your question. If you think your question phrased in a more specific way would elicit different kinds of responses, it might deserve its own thread.
The experience wasn’t about the domain of rationality but another subject and the relationships of concepts in that framework. If don’t think it’s useful for people without the experience of the framework.
As basic as you can get. What is the most basic thing you can say about rationality. If your reaction is: “Duh, I don’t know nothing comes to mind”, that’s exactly why it might be worthwhile to investigate the issue.
Recently there was a discussion about vocabulary for rationality and someone made the point that things can be said either implicit or explicit. Implicitness and explicitness are pretty basic concepts.
My meditation blog from a (somewhat) rationalist perspective is now past 40 posts:
http://meditationstuff.wordpress.com/
Do you have any material for dealing with chronic pain? Or material that could conceivably be leveraged to apply to chronic pain management?
I’m coming at this from ten years of brain fog, unrefreshing sleep, “feeling sick all the time,” etc. Mostly better now; I did a lot of stuff highly specific to my situation. The below mostly helped with enduring it. Remember, I’m just some random idiot on the internet, hope this is helpful, and in no particular order:
http://www.amazon.com/Awareness-Through-Movement-Easy—Do/dp/0062503227/
http://www.amazon.com/The-Lover-Within-Opening-Practice/dp/1581770170
http://www.amazon.com/Male-Multiple-Orgasm-Step—Step/dp/1882899067/
http://store.breathingcenter.com/books—in-english/buteyko-breathing-manual-download
http://www.amazon.com/Acceptance-Commitment-Therapy-Second-Edition/dp/1609189620/
http://www.amazon.com/Get-Your-Mind-Into-Life/dp/1572244259
http://www.amazon.com/Exposure-Therapy-Anxiety-Principles-Practice/dp/146250969X/
http://www.coherencetherapy.org/resources/manual.htm
http://meditationstuff.wordpress.com/2013/07/22/additive-meditation/
http://www.amazon.com/Compassion-Focused-Therapy-Distinctive-Features/dp/0415448077/
http://www.butyoudontlooksick.com/wpress/articles/written-by-christine/the-spoon-theory/
http://www.amazon.com/HIIT-Intensity-Interval-Training-Explained/dp/1477421599/
Some of John Sarno’s stuff
John_Maxwell_IV and I were recently wondering about whether it’s a good idea to try to drink more water. At the moment my practice is “drink water ad libitum, and don’t make too much of an effort to always have water at hand”. But I could easily switch to “drink ad libitum, and always have a bottle of water at hand”. Many people I know follow the second rule, and this definitely seems like something that’s worth researching more because it literally affects every single day of your life. Here are the results of 3 minutes of googling:
http://www.sciencedirect.com/science/article/pii/S0002822399000486:
So how much is 2% dehydration? http://en.wikipedia.org/wiki/Dehydration#Differential_diagnosis : “A person’s body, during an average day in a temperate climate such as the United Kingdom, loses approximately 2.5 litres of water.[citation needed]” http://en.wikipedia.org/wiki/Body_water quotes Arthur Guyton ’s Textbook of Medical Physiology: “the total amount of water in a man of average weight (70 kilograms) is approximately 40 litres, averaging 57 percent of his total body weight.” So effects on cognition become apparent after 40l*2%=800ml of water has been lost, which takes roughly 800ml/(2.5l/24h) = 8 hours. Now, this assumes water is lost at a constant rate, which is false, but it still seems like it would take a while to lose a full 800ml. Which implies that you don’t have to make a conscious effort to drink more water because everybody gets at least mildly thirsty after, say, half an hour of walking around outside on a warm day, which seems like it would be a lot less than 800ml.
http://freebeacon.com/michelle-obamas-drink-more-water-campaign-based-on-faulty-science/ : “There really isn’t data to support this,” said Dr. Stanley Goldfarb of the University of Pennsylvania. “I think, unfortunately, frankly, they’re not basing this on really hard science. It’s not a very scientific approach they’ve taken. … To make it a major public health effort, I think I would say it’s bizarre.” Goldfarb, a kidney specialist, took particular issue with White House claims that drinking more water would boost energy. ”The idea drinking water increases energy, the word I’ve used to describe it is: quixotic,” he said. “We’re designed to drink when we’re thirsty. … There’s no need to have more than that.”
http://ask.metafilter.com/166600/Drinking-more-water-should-make-me-less-thirsty-right : When you don’t drink a lot of water your body retains liquid because it knows it’s not being hydrated. It will conserve and reabsorb liquid. When you start drinking enough water to stay more than hydrated your body will start using the water and then dispensing of it as needed. Your acuity for thirst will be activated in a different way and in a sense work better.
Some thoughts:
More frequent water-drinking makes you urinate more often, which is probably a bad thing for productivity.
There might be negative effects with chronic mild dehydration at levels less severe than in the studies above.
There might also be hormetic effects. (As in, your body functions best under frequent mild dehydration because that’s what happened in the EEA, and always giving it as much water as it wants will be bad.)
Thoughts? Please post your own opinion if you’re knowledgeable about this or if you’ve researched it.
Extended sedentary periods are bad for you, so if drinking extra water also makes you get up and walk to the bathroom, that’s a win-win.
Except when you’re trying to sleep.
While you’re at it, you probably should also research how much water is too much, because on the other side of the spectrum lies hyponatremia and having suboptimal electrolyte levels from overdosing water could be harmful to your cognition too, although I think it’s unlikely anyone here will develop a measurable hyponatremia just from drinking too much water. Sweating a lot for example might change the situation.
This doesn’t look like a selective enough heuristic alone.
As far as water consumption goes I feel the difference between drinking one liter or four liter per day. I just feel much better with four liter.
There were times two years ago when unless I had drunk 4 liter by the time I entered my Salsa dancing location in the evening, my muscle coordination was worse and the dancing didn’t flow well.
Does that mean that everyone has to drink 4 liters to be at his optimum? No, it doesn’t. Get a feel how different amounts of water consumption effect you. For me the effect was clear to see without even needing to do QS. Even it’s not as clear for you do QS.
Thanks for writing this up.
Lots of things fall in to this category :)
In case it’s not obvious: this probably means in the absence of food/fluid consumption. You can’t go on losing 2.5 litres of water a day indefinitely.
I assumed it wasn’t net, but the amount of water excreted, regardless of consumption. Though those probably are not unrelated processes.
Anecdotally, I feel less lazy when I drink lots of water, but for all I know it might well be placebo.
We should do a placebo study on the effects of drinking water.
Repeating my post from the last open thread, for better visibility:
I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take in the university. The problem is, my mathematical education isn’t very good (on the level of Calculus 101). I’m not afraid of math, but so far all the books I could find are either about pure application, with barely any explanations, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.
I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample. And it’s even worse with variance...
Any ideas how to proceed?
I too spent a few years with a similar desire to understand probability and statistics at a deeper level, but we might have been stuck on different things. Here’s an explanation:
Suppose you have 37 numbers. Purchase a massless ruler and 37 identical weights. For each of your numbers, find the number on the ruler and glue a weight there. You now have a massless ruler with 37 weights glued onto it.
Now try to balance the ruler sideways on a spike sticking out of the ground. The mean of your numbers will be the point on the ruler where it balances.
Now spin the ruler on the spike. It’s easy to speed up or slow down the spinning ruler if the weights are close together, but more force is required if the weights are far apart. The variance of your numbers is proportional to the amount the ruler resists changes to its angular velocity—how hard you have to twist the ruler to make it spin, or to make it stop spinning.
“I’d like to understand this more deeply” is a thought that occurs to people at many levels of study, so this explanation could be too high or low. Where did my comment hit?
Moments of mass in physics is a good intro to moments in stats for people who like to visualize or “feel out” concepts concretely. Good post!
A different level explanation, which may or may not be helpful:
Read up on affine space, convex combinations, and maybe this article about torsors.
If you are frustrated with hand waving in calculus, read a Real Analysis textbook. The magic words which explain how the heck you can have a probability distributions over real numbers is measure theory).
How does that answer the question?
It’s true that the center of gravity is a mean, but the moment of inertia is not a variance. It’s one thing to say something is “proportional to a variance” to mean that the constant is 2 or pi, but when the constant is the number of points, I think it’s missing the statistical point.
But the bigger problem is that these are not statistical examples! Means and sums of squares occur many places, but why are they are a good choice for the central tendency and the tendency to be central? Are you suggesting that we think of a random variable as a physical rod? Why? Does trying to spin it have any probabilistic or statistical meaning?
I wasn’t aiming to answer Locaha’s question as much as figure out what question to answer. The range of math knowledge here is high, and I don’t know where Locaha stands. I mean,
That could be a basic question about the meaning of averages—the sort of knowledge I internalized so deeply that I have trouble forming it into words.
But maybe Locaha’s asking a question like:
That’s a less philosophical question. So if Locaha says “means are like the centers of mass! I never understood that intuition until now!”, I’ll have a different follow up than if Locaha says “Yes, captain obvious, of course means are like centers of mass. I’m asking about XYZ”.
Mean and variance are closely related to center of mass and moment of inertia. This is good intuition to have, and it’s statistical. The only difference is that the first two are moments of a probability distribution, and the second two are moments of a mass distribution.
Using the word “distribution” doesn’t make it statistical.
Telegraph to a younger me:
If you are frustrated with explanations in calculus, read a Real Analysis textbook. And the magic words that explain how the heck you can have probability distributions over real numbers is measure theory.
When you have thousands of different pieces of data, to grasp it mentally, you need to replace them with some simplification. For example, instead of a thousand different weights you could imagine a thousand identical weights, such that the new set is somehow the same as the original set; and then you would focus on the individual weight from the new set.
What precisely does “somehow the same as the original set” mean? Well, it depends on what did the numbers from the original set do; how exactly they join together.
For example, if we speak about weights, the natural way of “joining together” is to add their weight. Thus the new set of the identical weights is equivalent to the original set if the sum of the new set is the same as sum of the old set. The sum of the new set = number of pieces × weight of one piece. Therefore the weight of the piece in the new set is the sum of the pieces in the original set divided by their number; the “sum/n”.
Specifically, if addition is the natural thing to do, the set 3, 4, 8 is equivalent to 5, 5, 5, because 3 + 4 + 8 = 5 + 5 + 5. Saying that “5 is the mean of the original set” means “the original set behaves (with regards to the natural thing to do, i.e. addition) as if it was composed of the 5′s”.
There are situations where some other operation is the natural thing to do. Sometimes it is multiplication. For example, if you multiply some original value with 2, and they you multiply it by 8, the result of these two operations is the same as if you would multiply it twice by 4. In this case it’s called geometric mean, and it’s a root of product.
It can be even more complicated, so it doesn’t necessarily have a name, but the idea is always replacing the original set with a set of identical values such that in the original context they would behave the same way. For example, the example above could be described as a 100% growth (multiplication by 2) and 700% growth (multiplication by 8), and you need to get a result 300% (multiplication by 4); in which case it would be “root of (product of (Xi + 100%)) − 100%”.
If there is no meaningful operation in the original set, if the set can be ordered, we can pick the median. If the set can’t even be ordered, if there are discrete values, we can pick the most frequent value as the best approximation of the original set.
I don’t think that’s really what means are. That intuition might fit the median better. One reason means are nice is that they have really nice properties, e.g. they’re linear under addition of random variables. That makes them particularly easy to compute with and/or prove theorems about. Another reason means are nice is related to betting and the interpretation of a mean as an expected value; the theorem justifying this interpretation is the law of large numbers.
Nevertheless in many situations the mean of a random variable is a very bad description of it (e.g. mean income is a terrible description of the income distribution and median would be much more appropriate).
Edit: On the other hand, here’s one very undesirable property of means: they’re not “covariant under increasing changes of coordinates,” which on the other hand is true of medians. What I mean is the following: suppose you decide to compute the mean population of all cities in the US, but later decide this is a bad idea because there are some really big cities. If you suspect that city populations grow multiplicatively rather than additively (e.g. the presence of good thing X causes a city to be 1.2x bigger than it otherwise would, as opposed to 200 people bigger), you might decide that instead of looking at population you should look at log population. But the mean of log population is not the log of mean population!
On the other hand, because log is an increasing function, the median of log population is still the log of median population. So taking medians is in some sense insensitive to these sorts of decisions, which is nice.
I asked a similar question a while back, and I was directed to this book, which I found to be incredibly useful. It is written at an elementary level, has minimal little maths, yet is still technical, and brings across so many central ideas in very clear, Bayesian, terms. It is also on Lukeprog’s CSA book recommendations for ‘Become Smart Quickly’.
Note: this is the only probability textbook I have read. I’ve glanced through the openings of others, and they’ve tended to be above my level. I am sixteen.
As a first step, I suggest Dennis Lindley’s Understanding Uncertainty. It’s written for the layperson, so there’s not much in the way of mathematical detail, but it is very good for clarifying the basic concepts, and covers some surprisingly sophisticated topics.
ETA: Ah, I didn’t notice that Benito had already recommended this book. Well, consider this a second opinion then.
Read Edwin Jaynes.
The problem with most Probability and Statistics courses is the axiomatic approach. Purely formalism. Here are the rules—you can play by them if you want to.
Jaynes was such a revelation for me, because he starts with something you want, not arbitrary rules and conventions. He builds probability theory on basic desiredata of reason that you that make sense. He had reasons for my “whys?”.
Also, standard statistics classes always seemed a bit perverse to me—logically backward. They always just felt wrong. Jaynes approach replaced that tortured backward thinking with clear, straight lines going forward. You’re always asking the same basic question “What is the probability of A given that I know B?”
And he also had the best notation. Even if I’m not going to do any math, I’ll often formulate a problem using his notation to clarify my thinking.
I think this is a most awesome mistype of desiderata.
Here, have a book!
http://www-biba.inrialpes.fr/Jaynes/prob.html
Actually, I started reading that one and found it too hard.
IS this a good book to start with? I know it’s the standard “Bayes” intro around here, but is it good for someone with, let’s say, zero formal probability/statistics training?
I was under the impression that the “this is definitely not a book for beginners” was the standard consensus here: I seem to recall seeing some heavily-upvoted comments saying that you should be approximately at the level of a math/stats graduate student before reading it. I couldn’t find them with a quick search, but here’s one comment that explicitly recommends another book over it.
I think it’s even better if you’re not familiar with frequentist statistics because you won’t have to unlearn it first, but I know many people here disagree.
I suppose it’s better that to never have suffered through frequentist statistics first, but I think you appreciate the right way a lot more after you’ve had to suffer through the wrong way for a while.
Well, Jaynes does point out how bad frequentism is as often as he can get away with. I guess the main thing you’re missing out if you weren’t previously familiar with it is knowing whether he’s attacking a strawman.
I agree, that’s why I’m glad I learned Bayes first. Makes you appreciate the good stuff more.
Did you misread the comment you’re replying to, are you sarcastic, or am I missing something?
The mean of the sum of two random variables is the sum of the means (ditto with the variances); there’s no similarly simple formula for the median. (See ChristianKl’s comment for why you’d care about the sum.)
The mean if the value of x that minimizes SUM_i (x—x_i)^2; if you have to approximate all elements in your sample with the same value and the cost of an imperfect approximation is the square distance from the exact value (and any smooth function looks like the square when you’re sufficiently close to the minimum), then you should use the mean.
The mean and variance are jointly sufficient statistics for the normal distribution
Possibly something else which doesn’t come to my mind at the moment.
(Of course, all this means that if you’re more likely to multiply things together than add them, the badness of an approximation depends on the ratio between it and the true value rather than the difference, and things are distributed log-normally, you should use the geometric mean instead. Or just take the log of everything.)
This isn’t at introductory level, but try exploring the ideas around Fisher information—it basically ties together information theory and some important statistical concepts.
Fisher Information is hugely important in that it lets you go from just treating a family of distributions as a collection of things to treating them as a space with its own meaningful geometry. The wikipedia page doesn’t really convey it but this write-up by Roger Grosse does. This has been known for decades but the inferential distance to what folks like Amari and Barndorff-Nielsen write is vast.
Attending a CFAR workshop and session on Bayes (the ‘advanced’ session) helped me understand a lot of things in an intuitive way. Reading some online stuff to get intuitions about how Bayes’ theorem and probability mass work was helpful too. I took an advanced stats course right after doing these things, and ended up learning all the math correctly, and it solidified my intuitions in a really nice way. (Other students didn’t seem to have as good a time without those intuitions.) So that might be a good order to do things in.
Some multidimensional calc might be helpful, but other than that, I think you don’t need too much other math to support learning more probability and stats.
Not really—but I do agree that it’s absolutely vital to understand the basic concepts or terms. I think that’s a major reason why people fail to learn—they just don’t really grasp the most vital concepts. That’s especially true of fields with lots of technical terms. If you don’t understand the terms you’ll struggle to follow even basic lines of reasoning.
For this reason I sometimes provide students with a list of central terms, together with comprehensive explanations of what they mean, when I teach.
I don’t have a good resource for you—I’ve had too much math education to pin down exactly where I picked up this kind of logic. I’d recommend set theory in general for getting an understanding of how math works and how to talk and read precisely in mathematics.
For your specific question about the mean, it’s the only number such that the sum of all (samples—mean) equals zero. Go ahead and play with the algebra to show it to yourself. What it means is that if you go off of the mean, you’re going to be more positive of the numbers in the sample than you are negative, or more negative than positive.
Can you recommend a place to start learning about set theory?
http://intelligence.org/courses/ has information on set theory. I also enjoyed reading Bertrand Russell’s “Principia Mathematica”, but haven’t evaluated it as a source for learning set theory.
A few years back, the Amanda Knox murder case was extensively discussed on LW.
Today, Amanda Knox has been convicted again.
Did someone here ask about the name of a fraud where the fraudster makes a number of true predictions for free, then says “no more predictions, I’m selling my system.”? There’s no system, instead the fraudster divided the potential victims into groups, and each group got different predictions. Eventually, a few people have the impression of an unbroken accurate series.
Anyway, the scam is called The Inverted Pyramid, and the place I’d seen it described was in the thoroughly charming “Adam Had Three Brothers. by R.A. Lafferty.
Edited to add: It turned out that someone had asked at Making Light.
People often ask why MIRI researchers think decision theory is relevant for AGI safety. I, too, often wonder myself whether it’s as likely to be relevant as, say, program synthesis. But the basic argument for the relevance of decision theory was explained succinctly in Levitt (1999):
A year ago, I was asked to follow up on my post about the January 2013 CFAR workshop in a year. The time to write that post is fast approaching. Are there any issues / questions that people would be particularly interested in seeing this post address / answer?
I’d like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.
Somewhere I saw the claim that in choosing sperm donors the biggest factor turns out to be how cute the baby pictures are, but at this point it’s just a cached thought. Looking now I’m not able to substantiate it. Does anyone know where I might have seen this claim?
Does anyone else experience the feeling of alienation? And does anyone have a good strategy for dealing with it?
Yes, although it would help if you could be a bit more specific, the term is somewhat overloaded.
As for the strategy, depends. Find a better community (than the one you feel alienated from) in the sense of better matching values? We both seem to feel quite at home in this one (for me, if not for the suffocating supremacy of EA).
I meant alienated from society at large, not from LW, although the influence of society at large obviously affects discussion on LW.
One aspect of my feeling is that I increasingly suspect that the fundamental reason people believe things in the political realm is that they feel a powerful psychological need to justify hatred. The naive view of political psychology is that people form ideological beliefs out of their experience and perceptions of the world, and those beliefs suggest that a certain category of people is harming the world, and so therefore they are justified in feeling hatred against that category of people. But my new view is that causality flows in the opposite direction: people feel hatred as a primal psychological urge, and so their conscious forebrain is forced to concoct an ideology that justifies the hatred while still allowing the individual to maintain a positive pro-social self-image.
This theory is partially testable, because it posits that a basic prerequisite of an ideology is that it identifies an out-group and justifies hatred against that out-group.
There is a quote commonly mis-attributed to August Bebel and indeed to Marx: “Antisemitismus ist der Sozialismus des dummen Kerls.” (“Antisemitism is the socialism of the stupid guy”, or perhaps colloquially, “Antisemitism is a dumb-ass version of socialism”) That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker’s view, anyway — would be better remedied by changing the political-economic structure.
Jay Smooth recently put out a video, “Moving the Race Conversation Forward”, discussing recent research to the effect that mainstream-media discussions of racial issues tend to get bogged down in talking about whether an individual did or said something racist, as opposed to whether institutions and social structures produce racially biased outcomes.
There are probably other sources for similar ideas from around the political spectra. (I’ll cheerfully admit that the above two sources are rather lefter than I am, and I just couldn’t be arsed to find two rightish ones to fit the politesse of balance.) People do often look for individuals or out-groups to blame for problems caused by economic conditions, social structures, institutions, and so on. The individuals blamed may have precious little to do with the actual problems.
That said, if someone’s looking to place blame for a problem, that does suggest the problem is real. It’s not that they’re inventing the problem in order to have something to pin on an out-group. (It also doesn’t mean that a particular structural claim, Marxist or whatever, is correct on what that problem really is — just that the problem is not itself confabulated.)
Does that make socialism the anti-semitism of the smart? Or perhaps of the ambitious—they’re attracted to it because it gives them an enemy big enough to justify taking over everything?
I’ve seen it phrased as “Anti-semitism is the socialism of fools”.
Sure, obviously there are real problems in the world. Your examples seem to support my thesis that people believe in ideologies not because those ideologies are capable of solving the problems, but because the ideologies justify their feelings of hatred.
I suppose I see it as more a case of biased search: people have actual problems, and look for explanations and solutions to those problems, but have a bias towards explanations that have to do with blaming someone. The closer someone studies the actual problems, though, the less credibility blame-based explanations have.
The part where the emotional needs come first, and the ideological belief comes later as a way of expressing and justifying them, that feels credible. I just don’t think that everyone starts from the position of hatred (or, in the naive view, not everyone ends with hatred). There are other emotions, too.
But maybe the people motivated by hatred make a large part of the most mindkilled crowd. Because other emotions can be expressed legitimately also outside of the politics.
Do you have an in-person community that you feel close to?
What I’m trying to get at is, does it bother you specifically that you are alienated from “society at large,” or do you feel alienated in general?
Tentatively: Look for what “and therefore” you’ve got associated with the feeling. Possibilities that come to my mind—and therefore people are frightening, or and therefore I should be angry at them all the time, or and therefore I should just hide, or and therefore I shouldn’t be seeing this.
In any case, if you’ve got an “and therefore” and you make it conscious, you might be able to think better about the feeling.
But of course.
Accept that you’re not average and not even typical.
Feeling usually become a problem when you resist them.
My general approach with feelings:
Find someone towards which you can express the content behind the feeling. This works best in person. Online communication isn’t good for resolving feelings. Speak openly about whatever comes to mind.
Track the feeling down in your body. Be aware where it happens to be. Then release it.
I think this feeling arises from social norms feeling unnatural to you. This feeling should be expected if your interests are relevant to this site, since people are not trying to be rational by default.
The difference between a pathetic misfit and and an admirable eccentric is their level of awesomeness. If you become good enough at anything relevant to other people, you don’t have to live through their social expectations. Conform to the norms or rise above them.
Note that I think most social norms are nice to have, but this doesn’t mean there aren’t enough of the kind that make me feel alienated. It could be that the feeling of alienation is a necessary side effect of some beneficial cognitive change, in which case I’d try to cherish the feeling. I’ve found that rising to a leadership position diminishes the feeling significantly, however.
I think that feeling is more common than you might think. Especially if you deviate enough from the societal norm (which Less Wrong generally does).
My general strategy for dealing with is social interaction with people who’ll probably understand. Just talk it over with them. It’s best if you do this with people you care about. It doesn’t have to be in person, if you’ve got someone relevant on Skype, that works as well.
Hmm, this is probably good advice. Part of my problem is that my entire family is made up of people who are both 1) Passionate advocates of an American political tribe and 2) Not very sophisticated philosophically.
A common condition with geeks in general and aspiring rationalists in particular, I’d say.
I’ve recently been expanding my network of like-minded people both by going to the local meetups and also by being invited in a Skype group for tumblr rationalists.
I know that a feeling of alienation isn’t conductive to meeting new people, so I’m not sure I can offer other advice. Contact some friends who might be open to new ideas? I’d offer to help myself, but I’m not sure if I’m the right person to talk to. (In any case, I’ve PM’d my Skype name if you do need a complete stranger to talk to.)
Is it always correct to choose that action with the highest expected utility?
Suppose I have a choice between action A, which grants −100 utilons with 99.9% chance and +1000000 utilons with 0.1% chance, or action B which grants +1 utilon with 100% chance. A has an expected utility of +900.1 utilons, while B has an expected utility of +1 utilon. This decision will be available to me only once, and all future decision will involve utility changes on the order of a few utilons.
Intuitively, it seems like action A is too risky. I’ll almost certainly end up with a huge decrease in utility, just because there’s a remote chance of a windfall. Risk aversion doesn’t apply here, since we’re dealing in utility, right? So either I’m failing to truly appreciate the chance at getting 1M utilons—I’m stuck thinking about it as I would money—or this is a case where there’s reason to not take the action that maximizes expected value. Help?
EDIT: Changed the details of action A to what was intended
I think the non-intuitive nature of the A choice is because we naturally think of utilons as “things”. For any valuable thing (money, moments of pleasure, whatever) anybody who is minimally risk adverse would choose B. But utllons are not things, they are abstractions defined by one’s preferences. So that A is the rational choice is a tautology, in the standard versions of utility theory.
It may help to think it the other way around, starting from the actual preference. You would choose a 99.9% chance of losing ten cents and 0.1% chance of winning 10000 dollars over winning one cent with certainty, right? So then perhaps, as long as we don’t think of other bets and outcomes, we can map winning 1 cent to +1 utilon, losing 10 cents to −100 utilons and winning 10000 dollars to +10000 utilons. Then we can refine and extend the “outcomes ⇔ utilons” map by considering your actual preferences under more and more bets. As long as your preferences are self-consistent in the sense of the VNM axioms, then there will a mapping that can be constructed.
ETA: of course, it is possible that your preferences are not self-consistent. The Allais paradox is an example where many people’s intuitive preferences are not self-consistent in the VNM sense. But constructing such a case is more complicated that just considering risk-aversion on a single bet.
Also, it’s well possible that your utility function doesn’t evaluate to +10000 for any value of its argument, i.e. it’s bounded above.
Since utility functions are only unique up to affine transformation, I don’t know what to make of this comment. Do you have some sort of canonical representation in mind or something?
In the context of this thread, you can consider U(status quo) = 0 and U(status quo, but with one more dollar in my wallet) = 1. (OK, that makes +10000 an unreasonable estimate of the upper bound; pretend I said +1e9 instead.)
Yes, this seems almost certainly true (and I think is even necessary if you want to satisfy the VNM axioms, otherwise you violate the continuity axiom).
An unbounded function is one that can take arbitrarily large finite values, not necessarily one that actually evaluates to infinity somewhere.
Yes I’m quite aware… note that if there’s a sequence of outcomes whose values increase without bound, then you could construct a lottery that has infinite value by appropriately mixing the lotteries together, e.g. put probability 2^-k on the outcome with value 2^k. Then this lottery would be problematic from the perspective of continuity (or even having an evaluable utliity function).
Are lotteries allowed to have infinitely many possible outcomes? (The Wikipedia page about the VNM axioms only says “many”; I might look it up on the original paper when I have time.)
There are versions of the VNM theorem that allow infinitely many possible outcomes, but they either
1) require additional continuity assumptions so strong that they force your utility function to be bounded
or
2) they apply only to some subset of the possible lotteries (i.e. there will be some lotteries for which your agent is not obliged to define a utility).
The original statement and proof given by VNM are messy and complicated. They have since been neatened up a lot. If you have access to it, try “Follmer H., and Schied A., Stochastic Finance: An Introduction in Discrete Time, de Gruyter, Berlin, 2004”
EDIT: It’s online.
See also Kreps, Notes on the Theory of Choice. Note that one of these two restrictions are required in order to specifically prevent infinite expected utility. So if a lottery spits out infinite expected utility, you broke something in the VNM axioms.
For anyone who’s interested, a quick and dirty explanation is that the preference relation is primitive, and we’re trying to come up with an index (a utility function) that reproduces the preference relation. In the case of certainty, we want a function U:O->R where O is the outcome space and R is the real numbers such that U(o1) > U(o2) if and only if o1 is preferred to o2. In the case of uncertainty, U is defined on the set of probability distributions over O, i.e. U:M(O) → R. With the VNM axioms, we get U(L) = E_L[u(o)] where L is some lottery (i.e. a probability distribution over O). U is strictly prohibited from taking the value of infinity in these definition. Now you probably could extend them a little bit to allow for such infinities (at the cost of VNM utility perhaps), but you would need every lottery with infinite expected value to be tied for the best lottery according to the preference relation.
I’m not sure, although I would expect VNM to invoke the Hahn-Banach theorem, and it seems hard to do that if you only allow finite lotteries. If you find out I’d be quite interested. I’m only somewhat confident in my original assertion (say 2:1 odds).
Um, A actually has a utility of −89.9.
That explains why it seems less appealing!
I’d flip that around. Whatever action you end up choosing reveals what you think has highest utility, according to the information and utility function you have at the time. It’s almost a definition of what utility is—if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong.
If the utility function you think you have prefers B over A, and you prefer A over B, then there’s some fact that’s missing from the utility function you think you have (probably related to risk).
I’ve recently come to terms with how much fear/anxiety/risk avoidance is in my revealed preferences. I’m working on working with that to do effective long-term planning—the best trick I have so far is weighing “unacceptable status quo continues” as a risk. That, and making explicit comparisons between anticipated and experienced outcomes of actions (consistently over-estimating risks doesn’t help any, and I’ve been doing that).
I sometimes have the same intuition as banx. You’re right that the problem is not in the choice, but in the utility function and it most likely stems from thinking about utility as money.
Lets examine the previous example and make it into money (dollars): −100 [dollars] with 99.9% chance and +10,000 [dollars] with 0.1% vs 100% chance at +1 [dollar]
When doing the math, you have to take into future consequences as well. For example, if you knew you would be offered 100 loaded bets with an expected payoff of $0.50 in the future, each of which only cost you $1 to participate in, then you have to count this in your original payoff calculation if losing the $100 would prohibit you from being able to take these other bets.
Basically, you have to think through all the long-term consequences when calculating expected payoff, even in dollars.
Then when you try to convert this to utility, it’s even more complicated. Is the utility per dollar gained in the +$10,000 case equivalent to the utility per dollar lost in the -$100 case? Would you feel guilty and beat yourself up afterwards if you took a bet that you had a 99.9% chance of losing? Even though a purely rational agent probably shouldn’t feel this, it’s still likely a factor in most actual humans’ utility functions.
TrustVectoring summed it up well above: If the utility function you think you have prefers B over A, and you prefer A over B, then there’s some fact that’s missing from the utility function you think you have.
If you still prefer picking the +1 option, then most likely your assessment that the first choice only gives a negative utility of 100 is probably wrong. There are some other factors that make it a less attractive choice.
Depending on your preferred framework, this is in some sense backwards: utility is, by definition, that thing which it is always correct to choose the action with the highest expected value of (say, in the framework of the von Neumann-Morgenstern theorem).
People who play with money don’t like high variance, and sometimes trade off some of the mean to reduce variance.
Daniel Dennett quote to share, on an argument in Sam Harris’ book Free Will;
From: http://www.samharris.org/blog/item/reflections-on-free-will#sthash.5OqzuVcX.dpuf
Just thought that was pretty damn funny.
That’s known as Strawman Has A Point (Warning: TVTropes).
Thanks for the link, that was an excellent exposition and defense of compatibilism. Here is one particularly strong paragraph:
Isn’t that begging the question?
It is common for incompatibilists to say that their conception of free will (as requiring the ability to do otherwise in exactly the same conditions) matches everybody’s intuitions and that compatibilism is a philosopher’s trick based on changing the definition. Dennett is arguing that, contrary to this, what actual people in actual circumstances do when they want to know if someone was “free to do otherwise” is never to think about global determinism; rather, as compatibilism requires, they think about whether that person (or relevantly similar people) actually does/do different when placed under very similar (but not precisely identical) conditions.
I think the key is consideration people “in similar, but not exactly identical, circumstance”. It’s how the person compares to hypothetical others. Free will is a concept used to sort people for blame based on intention.
The MIRI course list bashes on “higher and higher forms of calculus” as not being useful for their purposes and calculus is not on the list at all. However, I know that at least some kind of calculus is needed for things like probability theory.
So imagine a person wanted to work their way through the whole MIRI course list and deeply understand each topic. How much calculus is needed for that?
Not much. The kind of probability relevant to MIRI’s interests is not the kind of probability you need calculus to understand (the random variables are usually discrete, etc.). The closest thing to needing a calculus background is maybe numerical analysis (I suspect it would be helpful to at least have the intuition that derivatives measure the sensitivity of a function to changes in its input), but even then I think that’s more algorithms. Not an expert on numerical analysis by any means, though.
If you have a general interest in mathematics, I still recommend that you learn some calculus because it’s an important foundation for other parts of mathematics and because people, when explaining things to you, will often assume that you know calculus after a certain point and use that as a jumping-off point.
Thanks. I took single variable calculus, differential equations, and linear algebra in college, but its been four years since then and I haven’t really used any of it since (and I think I really only learned it in context, not deeply). I’ve just been trying to figure out how much of my math foundations i’m going to need to re-learn.
This was helpful.
Last night we had meetup in Ljubljana. It was a good debate, but quite a heretical one for the LW standards. Especially when organizers left us. Which was unfortunate. We mostly don’t see ourselves particularly bonded to LW at all. Especially I.
We discussed personal identity, possible near super-intelligence (sudden hack, if you wish), Universe transformation following this eventuality, and some lighter topics like fracking for gas and oil, language revolutions throughout history, neo-reactionaries and their points, Einstein’s brains (whether they were lighter or heavier than average—I am quite sure they were heavier but it seems that the Cathedral says otherwise).
We discussed Three Worlds Collide, IBM brain simulations, MIRI endeavors and progress, genetics …
More than 5 hours of an interesting debate.
Heretical? Well, considering that ‘heretic’ means ‘someone who thinks on their own’, I’m not sure how we’re supposed to interpret that negatively.
I assume however that you meant ‘disagreeing with common positions displayed on LW’ - which of those common positions did you differ on, and why, and just how homogeneous do you think LW is on those?
I can speak mostly for myself. Still, we the locals go back decade and more, discussing some topics.
It is kind of clear to me, that there is a race toward superintelligence. As it was always the race toward some future technology, be it flying, be it atomic bomb, be it Moon race … you name it.
Except, that this is the final, most important race ever. What can you expect then from the competitors? You can expect them to claim, that the Singularity/Transcendence is still far, far away. You can expect, that the competition will try to persuade you to abandon your own project, if you have any. For example, by saying that an uncontrollable monster is lurking in the dark, named UFAI. They will say just about anything, to persuade you to quit.
This works both ways, between almost any two competitors, to be clear.
My view is the following. If you are clever and dare enough, you can write a 10000 lines or there about long computer program, and there will be the Singularity the very next month.
I am not sure, if there is a human (group) currently able to accomplish this. Very well might be. It’s likely NOT THAT difficult.
We discussed the Marylin vos Savant’s toying with Paul Erdos. A smartass against a top scientist is occasionally like a cat and mouse game, where the mouse mistakenly thinks he’s a cat. There are many other examples, like Ballard against all the historians and archeologists. Or Moldbug against Dawkins.
Of course, that does not automatically mean another smartass is preying upon the MIRI and AI academia combined, in the real AI case. But it’s not impossible. May be several different big cats in the wild who keep a low profile for a time being. Might be lion with his pride, inhabiting the academia also.
The most interesting outcome would be no Singularity for a few decades.
That seems an… unusual view. Have you actually tried writing code that exhibits something related to intelligence?
10K lines is not a big program.
It depends on your language and coding style, doesn’t it? I’ve seen C style guides that require you to stretch out onto 15 lines what I’d hope to take 4, and in a good functional language shouldn’t take more than 2.
Yes, and the number of lines is a ridiculously bad metric of the code’s complexity anyway.
Was a funny moment when someone I know was doing a Java assignment, I got curious, and it turned out that a full page of Java code is three lines in Perl :-)
That really depends on coding style, again. I find that common Java coding styles are hideously decompressed, and become far more readable if you do a few things per line instead of maybe half a thing. Even they aren’t as bad as the worst C coding styles I’ve seen, though, where it takes like 7 lines to declare a function.
As for Perl vs Java… was it solved in Perl by a Regex? That’s one case where if you don’t know what you’re doing, Java can end up really bloated but it usually doesn’t need to be all that bad.
I don’t remember the details by now, but I think that yes, there was a regexp and a map, and a few of Perl’s shortcuts turned out to be useful...
I have certain abilities. This is the product of the product of mine from 10 years ago.
Smartass I am. Probably not smart enough to really make a difference, though.
Smartass is good. Saying things which are clearly not true without a hidden smartassy implication behind them—not so much :-)
Is there a reasonably well researched list of behaviors that correlate positively with lifespan? I’m interested in seeing if there are any low hanging fruit I’m missing.
I found this previously posted, and a series of posts by gwern, but was wondering if there is anything else?
A quick google will give you a lot of lists but most of them are from news sources that I don’t trust.
Romeo Stevens made this comprehensive doc.
This is really great, do you know if the sources are compiled anywhere?
I found this list of causes of death by age and gender enlightening (it doesn’t necessarily tell you that a particular action will increase your lifespan, but then again neither do correlations). For example, I was surprised by how often people around my age or a bit older die of suicide and “poisoning” (not sure exactly what this covers but I think it covers stuff like alcohol poisoning and accidentally overdosing on medicine?).
Eating a handful of nuts a day.
http://www.medicalnewstoday.com/articles/269206.php
But:
Indeed, the study consists only of observational data, not interventional, so what causal conclusions could be drawn from it?
You act like people never did a valid causal analysis of the data in the Nurses’ health study.
I know I overstated things. There are such things as natural experiments, having some causal information already, etc.
I’m not familiar with the Nurses’ health study, and a quick google only turns up its conclusions. What methods did they use?
Sorry, there are two separate issues: the data itself (which is a big dataset where they following a big set of nurses for many years, and recorded lots of things about them), and how the data could be used to maybe get causal conclusions.
Plenty of folks at Harvard (e.g. Miguel Hernan, Jamie Robins) used this data in a sensible way to account for confounding (naturally their results are relatively low on the ‘hierarchy of evidence’, but still!) Trying to draw causal conclusions from observational data is 95% of modern causal inference!
Depends on what you’d call “well-researched” but, unfortunately, most of it is fuzzy platitudes. For example:
Do physical exercise. But not too much.
Be happy, avoid stress.
Get happily married.
Don’t get obese.
and most importantly
Choose your parents well, their genes matter :-P
Has anyone had experiences with virtual assistants? I’ve been aware of the concept for many years but always been wary of what I perceive to be the risks involved in letting a fundamentally unknown party read my email.
I’d like to hear about any positive or negative experiences.
One problem with searching for information about the trustworthiness of entities like these is that one suspects any positive reports one finds via Googling to be astroturfing, and if one finds negative reports, well, negatives are always over-reported in consumer services. That’s why I’m asking here.
I don’t, but in Tim Ferris’ book Four-Hour Work Week, I think I recall him recommending them. I think this was the one he recommended: https://www.yourmaninindia.com/.
Let me know if you come across some good findings on this. If effective, virtual assistants could be very useful, and thus they’re something I’m interested in. On that note, it’d probably be worth writing a post about them.
Has anyone paired Beeminder and Project Euler? I’d like to be able to set a goal of doing x problems per week and have it automatically update, instead of me entering the data in manually. Has anyone cobbled together a way to do it, which I could piggyback off of?
I hadn’t realised before that Max Tegmark’s work was actually funded by a massive grant from the Templeton Foundation. $9 million to found FQXI.
The purpose of the Templeton Foundation is to spray around more money than most academics could dream of - $9 million for philosophy! - seeking to try to blur the lines between science and religion and corrupt the public discourse. The best interpretation that can reasonably be put on taking the Templeton shilling is that one is doing so cynically.
This is not pleasing news, not at all.
What’s your basis for this interpretation? And particularly the “corrupt the public discourse” bit? I read your link, and I remember it getting briefly badmouthed in The God Delusion, but I’d prefer something a little more solid to go on, since this seems to lie on the sharp side of Hanlon’s razor.
Well, here’s Sean Carroll’s take on the matter. They don’t seem like the worst organization in the world or anything, but I too was disappointed to hear about Max accepting their money.
Thanks, that’s the kind of thing I was looking for. I’d expect (boundedly) rational people to be able to disagree on the utility of promoting secularism, but Carroll’s take on it does seem like a reasonable and un-Hanlony approach to the issue.
If I was offered $9m, I’d take it! Not that anyone’s offering. But it’s definitely a significant hit to his credibility.
Any book recommendations for a good intro to evolutionary psychology? I remember Eliezer suggested The Moral Animal, but I also vaguely remember some other people recommending against it. I’ll probably just go with TMA unless some other book gets suggested multiple times.
I found TMA was too full of just so stories. I also think it disturbingly rationalized a particular brand of sexism$ and overemphazised status which was very unexpected since I don’t think I’m squeamish at all on those fronts. I don’t think it helped me to predict human behavior better.
This said I’d be interested too if someone could recommend some other book.
$ rigid view of differences between the sexes, incompatible with my experience (which does suggest the sexes are different)
Evolutionary Psychology: The New Science of the Mind, by David Buss is a pretty good, mainstream, and accessible introduction to the field. I don’t regret reading it.
I second the recommendation. It was used as one of two textbooks for my evo-psyc class, and worked quite well.
I think “Evil” by Roy F Baumeister is a really good exploration that includes evo psych elements though is not primarily about evo psych.
This is not a book, but looks interesting.
“Dont”. Intro over. Evo-Psyc is .. profoundly useless unless you want to use it as a case study of how pre-existing biases and social norms can utterly take over a field operating in a nigh-total vaccum of actual data. Now, I cant guarantee that the entire field is noise and fury signifying nothing, but every sample of it that I have encountered has been.
Pre-existing biases and social norms have utterly taken over something here, in a vacuum of actual data. They may also have applied to some degree to Evo-psych works.
I don’t understand why wireheading is almost universally considered worse than death, or at least really really negative.
I would assume that it’s considered worse than death by some because with death it’s easier to ignore the opportunity cost. Wireheading makes that cost clearer, which also explains why it’s considered negative compared to potential alternatives.
Speaking for myself, I consider wireheading to be very negative, but better than information-theoretic death, and better than a number of scenarios I can think of.
I think the big fear is stasis. In each case you’re put in a certain state of being without any recourse to get out of it, but wireheading seems to be like a state of living death.
I concur, but I think it wise to draw a distiction between wireheading as in an extreme example of a blissed out opiate haze, where one does nothing but feel content and so has no desire to acheve anything, and wireheading as in a state of strongly positive emotions where curisity, creativity etc remain intact. Yes, if a rat is given a choice it will keep on pressing the lever, but maybe a human would wedge the lever open and then go and continue with life as normal? To continue the drug analogy, some drugs leave people in a stupor, some make people socialable, some result in weird music. I would say the first type is certainly better then death, and the latter ‘headonistic imperitive’ wireheading sounds utopic.
Some people value “actual things” being achieved by entities and like Slackson implied a society of wireheads takes away resources and has opportunity costs.
How would you define the word “sexy” in terms of signaling?
“Sexy” isn’t signaling—it’s a characteristic that people (usually) try to signal, more or less successfully. “I’m sexy” basically means “You want me” : note the difference in subjects :-)
Would it change for particular behavior, e.g. clothes, dancing/gestures, language?
Pretty much the same thing. Regardless of an, um, widespread misunderstanding :-D sexy behavior does NOT signal either promiscuity or sexual availability. It signals “I want you to desire me” and being desired is a generally advantageous position to be in.
If a man succeeds in signaling a high sexuality to a women, the woman might still treat him as a creep. Especially if there no established trust, signal really high amounts of sexuality doesn’t result in “You want me”.
In my own interactions with professional dancers there are plenty of situations where the woman succeeds in signaling a high amount of sexyness. I however know that I”m dancing with a professional dancer who going to sent that signal to a lot of guys so she doesn’t enter my mental category of potential mates.
I think people frequently go wrong when the confuse impression of characteristics with goals.
In which case he failed to signal “sexy” and (a common failure mode) signaled “creepy” instead.
It depends on how you define the term.
For a reasonable definition of sexy, the term refers to letting a woman feel sexual tension. If you talk about social interactions it’s useful to have a word that refers to making another person feel sexual tension.
Of course you can define beautiful, attractive and sexy all the same way. Then you get a one dimensional model where Bob wants Alice with utility rating X. I don’t think that’s model is very useful to understanding how humans behave in mating situations.
I define it as “arousing sexual interest and desire in people of appropriate gender and culture”. Note that this is quite different from “beautiful” and is a narrow subset of “attractive”.
“Tension” generally implies conflict or some sort of a counterforce.
Testosterone which is commonly associated with sexiness in males is about dominance. It has something to do with power that does create tension.
Of course a woman can decide to have sex with shy a guy because he’s nice and she thinks that he’s intelligent or otherwise a good match. Given that there are shy guys who do have sex that’s certainly happening in reality.
Does that mean that the behavior of that guy deserves the label “sexy”? I don’t think he’s commonly given that label.
There also words like sensual and empathic. A guy can get layed by being very empathic and just making woman that feel really great by interacting with him in a sensual way. I think it’s useful to separate that mentally from the kind of behavior that comes from testosterone that commonly get’s called sexy.
If you read an exciting thriller you are also feeling tension even when you aren’t in conflict with the book or there some counterforce. Building up tension and then releasing it is a way for human to feel pleasure.
Sexy is a quite broad word that probably used by different people in different ways. I think for most people it about what they feel when looking at the person. Those feeling where set up by evolution over large time frames.
Evolution doesn’t really care about whether you get a fun intercourse partner.
But it’s not only evolution. It also has a lot to do with culture. Culture also doesn’t care about whether you get a fun intercourse partner. People who watch a lot of TV get taught that certain characteristics are sexy.
For myself I would guess that most of my cultural imprint regarding what I find sexy comes from dancing interactions. If a woman moves in a way that suggests that she doesn’t dance well, that will reduce her sex appeal to me more than it probably does with the average male.
Being sexy signals health, youth, and fertility. This is quite well supported by evidence and discussed in many books and articles.
I would agree with what Lumifer says below, but I think sexy can be signalling when many people are involved: look at the sexy people I hang out with. Being with sexy people brings high status because it’s high status.
I think you confuse the label “sexy” with the label “attractive”. As far as my reading goes few articles use the term sexy.
I keep looping through the same crisis lately, which comes up any time someone points out that I’m pretentious / an idiot / full of shit / lebens unwertes leben / etc.:
Is there a good way for me to know if I’m actually any good at anything? What are appropriate criteria to determine whether I deserve to have pride in myself and my abilities? And what are appropriate criteria to determine whether I have the capacity to determine whether I’ve met those criteria?
Having followed your posts here and on #lesswrong, I got an impression of your personality as a bizarre mix of insecurities and narcissism (but without any malice), and this comment is no exception. You are certainly in need of a few sessions with a good therapist, but, judging by your past posts, you are not likely to actually go for it, so that’s a catch 22. Alternatively, taking a Dale Carnegie course and actually taking its lessons to heart and putting an effort into it might be a good idea. Or a similar interpersonal relationship course you can find locally and afford.
If you don’t mind, I’m gonna use this in my twitter’s bio.
Yeah, the narcissism is something that I’ve been trying to come up with a good plan for purging since I first became aware of it. (I sometimes think that some of the insecurities originally started as a botched attempt to undo the narcissism).
The therapy will absolutely happen as soon as I have a reasonable capacity to distinguish “good” therapists from “bad” ones.
Bad plan (and also a transparent, falsely humble excuse to procrastinate). Picking a therapist at random will give you distinctly positive expected value. Picking a therapist recommended by a friend or acquaintance will give you somewhat better expected value.
Incidentally, one of the methods by which you can most effectively boost your ability to distinguish between good therapists from bad therapists is by having actual exposure to therapists.
Some things are easier to tell whether you’re good at than others. I guess you aren’t talking about the more assessable things (school/university studies, job, competitive sport, weightlifting, …) but about things with a strong element of judgement (quality as a friend or lover, skill in painting, …) or a lot of noise mixed with any signal there might be (stock-picking[1], running a successful startup company, …).
[1] Index funds are the canonical answer to that one, but you know that already.
So, anyway, the answer to “how do I tell if I’m any good at X?” depends strongly on X.
But maybe you really mean not “(know if I’m actually any good at) anything” but know if I’m actually (any good at anything)”—i.e., the question isn’t “am I any good at X?” but “is there anything I’m any good at?”. The answer to that is almost certainly yes; if someone is seriously suggesting otherwise then they are almost certainly dishonest or stupid or malicious or some combination of those, and should be ignored unless they have actual power to harm you; if some bit of your brain is seriously suggesting otherwise then you should learn to ignore it.
There are almost certainly specific X you have good evidence of being good at, which will imply a positive answer to “is there anything I’m good at?”. Pick a few, inspect them as closely as you feel you have to to be sure you aren’t fooling yourself, and remember the answer.
If someone else is declaring publicly that you are a pretentious idiot and full of shit, it is likely that what’s going on is not at all that they’re trying to make an objective assessment of your capabilities or character, but that they are engaged in some sort of fight over status or influence or something, and are saying whatever seems like it may do damage. I expect you have good reasons for getting into that sort of fight, so I’ll just say: bear in mind when you do that this is a thing that happens, and that such comments are usually not useful feedback for self-assessment.
If you want to mention some specific X, I expect you’ll get some advice on ways to assess whether you’re any good at it/them. But I think the most important thing here is that the thing that’s provoking your self-doubt, although it looks like an assessment of your capabilities, really isn’t any such thing.
You could take a cognitive psych approach to some of this. What are the other person’s qualifications?
I recommend exploring the concept of good enough.
There’s a bit in Nathaniel Branden about “a primitive sense of self-affirmation”—which I take to be the assurance that babies start out with that they get to care about their pain and pleasure. It isn’t even a question for them. And animals are pretty much the same.
You don’t need to have a right to be on your own side, you can just be on your own side.
Something I’ve been working on is getting past the idea that the universe is keeping score, and I have to get everything right.
What I believe about your situation is that you’ve been siding with your internal attack voice, and you need to associate your sense of self with other aspects of yourself like overall physical sensations.
Do you have people who are on your side? If so, can you explore taking their opinion seriously?
The attack voice comes on so strong it seems like the voice of reality, but it’s just a voice. I’ve found that it’s hard work to change my relationship to my attack voice, but it’s possible.
For what it’s worth, I think your prose is good. It’s clear, and the style (as distinct from the subject matter) is pleasant.
Generally, their qualifications are that the audience is rallying around them. Also, they don’t know me, which makes them less likely to be biased in my favor. (I.e., the old “my mom says I’m great at , so shut up!” problem)
This flies in the face of the political climate I exist within, that talks primarily about the gallish “entitlement” of poor people who believe they have the right to food and shelter and work.
It’s very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me.
I.e., people who clearly love me and are willing to share portions of their life with me are willing to go so far as to say “I think you do pretty well.” People whom I’ve never met are willing to go so far as to say “fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone . Fuck it, I’m looking up your address; I’ll kill you.”
That churns up all sorts of emotional and social reactions, which makes processing the whole thing rationally even harder.
On the other hand, they might be more likely to be biased against you, and they certainly don’t know a lot about your situation.
Can you find a different political environment?
I’ve noticed that conservatives tend to think that everything bad that happens to a person is the fault of that person, and progressives tend to think that people generally don’t have any responsibility for their misfortunes. Both are overdoing it, but you might need to spend some time with progressives for the sake of balance.
Also, I’ve found it helps to realize that malice is an easy way of getting attention, so there are incentives for people to show malice just to get attention—and some of them are getting paid for it. The thing is, it’s an emotional habit, not the voice of reality.
Unfortunately, people are really vulnerable to insults. I don’t have an evo psy explanation, though I could probably whomp one up.
It is very difficult, but I think you’ve made some progress. All I can see is what you write, but it seems like you’re getting some distance from your self-attacks in something like the past year or so.
I find it helps to think about times when I’ve been on my own side and haven’t been struck by lightning.
I might be an outlier, but a spiel like “fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone . Fuck it, I’m looking up your address; I’ll kill you” doesn’t signal casualness to me. The only people I’d expect to say that casually are trolls trying to get a rise out of people. Idle trolling aside, someone laying down a fusillade of abuse like that is someone who cares quite a bit (and doubtless more than they’d like to admit) about my behaviour. Hardly an unbiased commentator! (I recognize that’s easier said than internalized.)
I recommend empirical reality. The kind that exists outside of your (and other people’s) head.
Following up on http://lesswrong.com/lw/jij/open_thread_for_january_17_23_2014/af90 :
I’ve created a minimally (possibly sub-minimally) viable wiki page: http://wiki.lesswrong.com/wiki/Study_Hall
I’ve started playing with SimpleWebRTC and its component parts
I am precommitting to another update by February 10th
This is a minimally-viable update on account of recent travel and imminent job interviews, but the precommitments seem to be succeeding in at least forcing something like progress and keeping some attention on the problem.
http://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement
Some of these ideas are very poorly thought out. Some are interesting.
This Dinosaur Comic is very LessWrongish
I’m in art school and I have a big problem with precision and lack of “sloppiness” in my work. I’m sort of hesitant to try to improve in this area, however, because I suspect it reflects some sort of biological limit—maybe the size of some area in the cerebellum or something, I don’t know. Am I right in thinking this?
Seems to me that that’s likely a self-fulfilling prophecy, which I subjectively estimate is at least as likely to prevent you from doing better as an actual biological problem. Maybe try to think of more ways to get better at it—perhaps some different kind of exercises—and do your best at those, before drawing any conclusions about your fundamental limits… because those conclusions themselves will limit you even more.
I have never biked twenty miles in one go.
It could be that this reflects some inherent limit.
Or it could be that I just haven’t tried yet.
If I believe that it is an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and succeed, then I will update.
If I believe that it is not an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and fail, then I will update.
In either case, the test of my ability
Is not in contemplating what mechanisms of self might limit me,
But in trying anyway, when I have the opportunity to do so,
And seeing what happens.
Be careful not to find yourself 7 miles away from home on your bike and too tired to keep on cycling.
If that means arranging with a friend to pick you up in their car if you have to bail out, or picking a circular route that never takes you that far from home, or any other way of handling the contingency. Going “but suppose I fail!” and not trying is an even worse piece of wormtonguing than the one fubarobfusco is addressing.
Just to be clear: you’re worried that you aren’t sloppy enough?
If so, for us non-artists, can you explain how ‘sloppiness’ can be a good thing?
Sorry, I communicated poorly. I meant [introducting] lack of sloppiness into my work. That’s not what I meant. I’m too sloppy.
You should edit the original question. People seem to be answering the wrong question below.
I think it’s a metaphor thing. Like, in writing, if you say “The shadow of a lamppost lay on the ground like a spear. He walked and it pierced him like a spear.” What more description of the scene do you need than that? In fact, talking about the color of the path or what kind of trousers our character was wearing would be counterproductive to the quality of the writing.
One could view sloppiness in art in the same way—use of metaphor that captures the scene without the need for detail.
And no, of course it’s not a biological limit.
Some guesses on my part-
Maybe your tendency towards precision is at the wrong times? If practicing, for example, it might be counterproductive since you probably want quantity instead of quality, or maybe you’re trying to get everything down precisely too early on and it’s making your work stiff.
Manfred’s point is good- “metaphor that captures the scene without the need for detail.”… If you render background details overmuch, they can distract the viewer from the focal point of the work. Maybe put some effort into looking at how the “metaphors” of different things work? For example, how more skilled artists draw/paint grass in the distance, or whatnot.
I think it’s a common thing to sort of notice something wrong in an area, and to spend a lot of time on that area in hopes of fixing it, which would make it less sloppy… Maybe sketch that thing a lot for practice.
If you’re drawing from life, it’s possible that lack of sloppiness comes from not making sense of the gestalt, so to speak. I’d think that understanding the form of the subject and how the lighting on it works means you can simplify things away. I don’t do much (read: any) figure drawings from life, but I’d imagine that understanding the figure and what’s important and what isn’t would be helpful. Maybe doing some master copies of skilled, more abstract drawings of the figure would help. Maybe look up a comic artist or cartoonist you like and look at what they do.
ETA:
To address your actual question, I’d say I don’t know any particular evidence for why that should be so.
Rationality-technique-wise: It’s good that you asked people, since that would bring you evidence of the idea being true or false. In the future it might be even more useful to suppress hypothesizing until some more investigating has gone on- “biological limit” is the sort of thing that feels true if you don’t understand how to do something or how to understand how to do something. I think there’s a post about this, or something; let me see if I can find it… ETA2: The exact anecdote I was thinking of doesn’t apply as much as I thought it did, but maybe the post “Fake Explanations” or something applies?
I would guess that you try to exert too much control. The kind of “sloppiness” that’s useful for creativity is about letting things go.
Meditation might help.
As you are female, dancing a partner dance where you have to follow and can’t control everything might be useful. Letting go of trying to control is lesson 101 for a lot of woman who pick up Salsa dancing.
He isn’t.
I’m already good at this part of creativity, but precision is also pretty important. Right now I’m working on a project where I have to trace in pen (can’t erase, flaws are obvious) photographs that I took. Letting things go won’t help here.
I already do meditate.
I’m not, sorry.
Swing classes are pretty good about letting either gender learn to follow, if you’d like.
As a lead, you learn that you aren’t really controlling much of anything in Salsa either. You’re setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don’t expect.
But I’m guessing that you’ve hit on the right direction of interpretation of sloppiness as letting go of control. I’d extend that to too much self conscious control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest* of your capabilities for the same intention.
When the self monitoring person in your head tries to do too much, he gets in the way of the rest of you doing it right.
For advanced dancing that’s true. For beginners, not so much. At the beginning Salsa is the guy leading a move and the woman following.
If you are a guy and want to learn dancing for the sake of letting go control I wouldn’t recommend Salsa. I think it took me 1 1⁄2 years to get to that point.
A whole 1 1⁄2 years? Took me a lot longer than that. I’ve been at Salsa mainly for about a decade.
Yes, the unfortunate fact is that most leads are taught to “lead moves” when they start. If they were taught to lead movement, they’d make faster progress, IMO. Leading should be leading, to the point of manipulation, and not signaling a choreographed maneuver. I’ve seen a West Coast instructor teach a beginning class that way, and thought it was the best beginning class I had ever seen.
I think on of the turning events was for me my first Bachata Congress in Berlin. I didn’t know too many Bachata patterns and after hours of dancing the brain just switches off and let’s the body do it’s thing.
But you are right that it might well take longer for the average guy. That means it’s not a good training exercise to pick up the skill of letting go control for man.
For woman on the other hand it’s something to be learned at the beginning.
At the beginning I mainly thought I didn’t understand what teaching dance is all about and that a bunch of teachers have something like real expertise.
The more I dance the more I think that their teaching is very suboptimal. A local Salsa teacher teaches mainly patterns in her lessons. On the other hand she writes on her blog about how it’s all in the technique and about traits like confidence. It’s also not like she didn’t learn dance at formal dance university courses for 5 years, so she should know a bit.
Things like telling a guy who dances with a bit of distance to the girl to dance closer, just aren’t good advice when the girl isn’t comfortable with dancing closer. Yes, if they would dance closer things would be nicer, but there usually a reason why a pair has the distance it has.
Manipulation is an interesting choice of words. What do you mean with it?
I remember a Kizomba dance a year ago where I didn’t know much Kizomba. I did have a lot of partner perception from Bachata. I picked up enough information from my dance partner that I could just follow her movements in a way where she didn’t thought she was leading but I was certainly dancing a bunch of steps with her I hadn’t learned in a lecture.
To use sort of what “manipulation” means in osteopathy I think you could call that nonmanipluative leading. In Bachata I think there are a lot of situation where a movement is there in the body but surpressed and things get good if they lead can “free” the movement and stabilize it. I think such nonmanipulative dancing is quite beautiful.
Unfortunately I’m not good enough to do that in Salsa and even in Bachata I’m not always having good enough perception.
That seems related with the common observation that it’s easier to speak a foreign language when drunk than when sober: in the latter case I feel I’m so worried of saying something grammatically incorrect that I end up speaking in very simple sentences and very haltingly. (And the widespread use of drugs among rock musicians is well-known.)
If other people working the same craft have managed to achieve precision, it’s very unlikely to be a biological limit, right? The resolution of human fine motor skills is really high.
You didn’t mention what the craft was or the nature of the sloppiness, but have you considered using simple tools to augment technical skills? Perhaps a magnifying glass, rulers. pieces of string/clay or other suitably shaped objects to guide the hand, etc?
You could try doing something that gives immediate feedback for sloppiness, like simple math problems for example. You might gain some generalizable insight like that speed affects sloppiness. Since you already practice meditation, it should be easier to become aware of the specific failure modes that contribute to sloppiness, which doesn’t seem to be a well defined thing in itself.
I’m recalling a Less Wrong post about how rationality only leads to winning if you “have enough of it”. Like if you’re “90% rational”, you’ll often “lose” to someone who’s only “10% rational”. I can’t find it. Does anyone know what I’m talking about, and if so can you link to it?
This, maybe?
http://lesswrong.com/lw/7k/incremental_progress_and_the_valley/
I’m like 60% sure that its not that article I had in mind, but the idea is the same (incremental increases in rationality don’t necessarily lead to incremental increases in winning), so I feel pretty satisfied regardless. Thanks!
Could the article you had in mind be this?
In any case, Eliezer has touched on this point multiple times in the sequences, often as a side note in posts on other topics. (See for example in Why Our Kind Can’t Cooperate.) It’s an important point, regardless.
No, that wasn’t it. I don’t think it was by Eliezer. And I think it was a featured or promoted article in Main.
I’m quite new to LW, and find myself wondering whether Hidden Markov models (HMM) are underappreciated as a formal reasoning tool in the rationalist community, especially compared to Bayesian networks?
Perhaps it’s because HMM seem to be more difficult to grasp?
Or it’s because formally HMM are just a special case of Bayesian networks (i.e. dynamic Bayes nets)? Still, HMM are widely used in science on their own.
For comparison, Google search “bayes OR bayesian network OR net” site:lesswrong.com gives 1,090 results.
Google search hidden markov model site:lesswrong.com gives 91 results.
Hidden Markov models are a reasoning model to solve a specific problem. If you don’t face that specific problem they are no use.
Most of the problems we discuss aren’t modeled well with HMMs.
Out of curiosity, did you happen to read Kurzweil’s recent book on HHMMs?
I think the safest answer is that a HMM is just a specific way of mathematically writing down an updating Bayesian network.
No, never heard of it. I’m not an Utopian, and from what I know about Kurzweil’s ideas and arguments, they don’t seem to be sound enough.
Well, Kurzweil is an extremely accomplished inventor aside from being a pie-in-the-sky futurist, so when he says something about a particular algorithm working well, I assume he knows what he’s talking about. He seems to think hidden hierarchical Markov models are the best way to represent the hierarchical nature of abstract thought.
I’m not saying he’s correct, just saying, it seems to be a popular idea.
There’s a proliferation of terminology in this area; I think a lot of these are in some sense equivalent and/or special cases of each other. I guess “Bayesian network” is more consistent with the other Bayes-based vocabulary around here.
Is there a good way of finding what kind of job might fit a person? Common advice such as “do what you like to do” or “do what you’re good at” is relatively useless for finding a specific job or even a broader category of jobs.
I’ve did some reading on 80000 hours, and most of the advice there is on how to choose between a couple of possible jobs, not on finding a fitting one from scratch.
I think for most people who ask this question, the range of fitting jobs is much wider than they think. You learn to like what you become good at.
If I were to pick a career right now, I’d just take a long list of reasonably complex jobs and remove any that contain an obvious obstacle like a skill requirement I’m unlikely to improve at. Then from what is left, I’d narrow the choice by some other criteria than perceived fit, income and future employment prospects for example and then pick one of them either by some additional criteria or randomly. I’m confident I’d learn to like almost any job chosen this way.
If you make money you can do whatever you like in the future even if you chose your job poorly in the first place. So please don’t choose to become an English major.
That’s a strange question.
Either you want to know how to pick up the skill of being a career adviser. Alternatively you want to find a job for yourself. You might also be a parent who tries to find a job that fits his child instead of letting the child decide for themselves.
I think the answers to those three possibilities are very different.
It’s this option, although the general skill of being a career advisor also sounds appealing in the abstract.
You managed to give this answer without using the word I. If you want to live a self-determined life, don’t speak of yourself in the third person.
Start associating with yourself. I think that will bring you a huge step in the right direction.
Does anyone have a simple, easily understood definition of “logical fallacy” that can be used to explain the concept to people who have never heard of it before?
I was trying to explain the idea to a friend a few days ago but since I didn’t have a definition I had to show her www.yourlogicalfallacyis.com. She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.
You think she would’ve understood the concept even more quickly if you had a definition? I think people underestimate the value of showing people examples as a way of communicating a concept (and overestimate the value of definitions).
Well I know I won’t be around a computer 24⁄7, and I’d like something to explain it if I’m out and about. Although I suppose I could use a couple examples that I can just memorize, like strawman arguments and ad hominum.
It’s a bad concept, at least the way it’s traditionally used in introductory philosophy classes. It encourages people to believe that certain patterns of argument are always wrong, even though there are many cases in which those patterns do constitute good (non-deductive) arguments. Instructors will often try to account for these cases by carving out exceptions (“argument from authority is OK if the authority is actually a recognized expert on the topic at hand”), but if you have to carve out loads of exceptions in order to get a concept to make sense, chances are you’re using a crappy concept.
Ultimately, I can’t find any unifying thread to “logical fallacy” other than “commonly seen bad argument”, but even that isn’t a very good definition because there are many commonly seen bad arguments that aren’t usually considered logical fallacies (the base rate fallacy, for instance). Also, by coming up with cute names to label entire patterns of argument, and by failing to carve out enough exceptions, most expositions of “logical fallacy” end up labeling many good arguments as fallacious.
So I guess my advice would be to stop using the concept altogether, rather than trying to explicate it. If you encounter a particular instance of a “logical fallacy” that you think is a bad argument, explain why that particular argument doesn’t work, rather than just saying “that’s an argumentum ad populum” or something like that.
A logical fallacy is an argument that doesn’t hold together. All of its assumptions might be true, but the conclusion doesn’t actually follow from them.
“Fallacy” is used to mean a few different things, though.
Formal fallacies happen when you try to prove something with a logical argument, but the structure of your argument is broken. For instance, “All weasels are furry; Spot is furry; therefore Spot is a weasel.” Any argument of this “shape” will have the same problem — regardless of whether it’s about weasels, politics, or Java programming.
Informal fallacies happen when you try to convince people of your conclusion through arguments that are irrelevant. A lot of informal fallacies try to argue that a statement is true because of something else — like its popularity, or the purported opinion of a smart person; or that its opponents are villains.
To a “regular person”, I might say something like “a logical fallacy is a form of reasoning that seems good to many humans, but actually isn’t very good”.
I don’t think this is so simple to explain, because to really understand logical fallacies you need to understand what a proof is. Not a lot of people understand what a proof is.
On the other hand, I think people can acquire a pretty good ability to recognize fallacies without a formal understanding of what a good proof is.
I just feel there is a difference between a “fallacy enthusiast” (someone who knows lists of logical fallacies, can spot them, etc.) and a “mathematician” (who realizes a ‘logical fallacy’ is just ‘not a tautology’), in terms of being able to “regenerate the understanding.”
This is similar to how you can try to explain to lawyers how they should update their beliefs in particular cases as new evidence comes to light, but to really get them to understand, you have to show them a general method:
http://en.wikipedia.org/wiki/Wigmore_chart
(Yes, belief propagation was more or less invented in 1913 by a lawyer.)
Could you explain why it is necessary to understand what a proof is in order to understand logical fallacies? Most commonly mentioned fallacies are informal. I’m not seeing how understanding the notion of proof is necessary (or even relevant) for understanding informal fallacies.
Can anyone recommend a good replacement for flagfic.com ? This was a site that could download stories from various archives (fanfiction.net, fimfiction.net, etc) transform them to various e-reader formats, and email them to you. I used it to email fanfics I wanted to read directly to my Kindle as .mobi files.
fanficdownloader. I haven’t tried the webapp version of it, but I’m happy with the CLI.
Many thanks for the suggestion! I’ve started trying it out, and though it doesn’t seem to work perfectly for fimfiction.net (half the .mobi files I create from fanfics there get rejected for some reason when I email them to my kindle), it so far seems to work fine with fanfiction.net at least.
An excuse for me to learn Python so I can fix whatever it’s doing wrong. :-)
EDIT: On second thought, fimfiction.net allows me to get html downloads of the stories, which I can then email to kindle anyway—so as long as fanficdownloader works with fanfiction.net, I’m all set :-) Thanks again.
Repost as there were no answers:
Has anyone here done Foundation Training? How is the evidence supporting them?
Corrected url: Foundation Training
I tried the video at the url, and it seemed a lot more like straining (little pun about the mistaken url), but that might not be a fair test.
The basic idea of getting hip mobility seems sound, but I recommend Scott Sonnon’s Ageless Mobility and IntuFlow, and the The Five Tibetan Rites—sorry for the cheesy name on the latter, but they’re a cross between yoga and calisthenics with a lot of emphasis on getting backwards/forwards pelvis mobility.
(I think there’s a typo in the URL.)
You are right. Fixed.