Stupid Questions January 2015
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy; everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admissions of ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the “stupid_questions” tag.
How do you deal with people who are dominating conversations? I had a New Years party and it was basically 4 hours where either this one guy was talking or someone was talking to him.
Most solutions require either the consent of the talker or that they not be present. For example, one solution is that the group splits into multiple subgroups, so that multiple people can talk at once- but there are ways for the talker to counter that, like abandoning their subgroup to join another subgroup (effectively merging the two) or asking people to keep to one conversational thread.
There's a broad blunt-subtle continuum of ways to communicate to someone that they're dominating the conversation, and it generally seems best to use the most subtle option that they'll actually notice. But again, communicating to someone that they're dominating the conversation doesn't mean they'll stop dominating it.
There’s a chance that the rest of the group really likes listening to the talker.
Good point.
I've been in groups in which the conversation seems to require someone to dominate it in order to lead it somewhere interesting. Of course, it's nice if that person finds a way to get others involved... but conversation in many groups does not flow at all without someone driving it and, therefore, sometimes seeming as if they are dominating (at least to some people). Sitting in a group of people who can't keep a conversation going can be just as frustrating as sitting with someone who can't shut up.
The DC LessWrong group has a strong norm of splitting conversations into multiple threads, which works well if people are being bored by a single person talking—one person will turn to someone else who looks bored and strike up a different thread with them. (Then if other people are also bored, they will join the separate thread or start their own.)
This fixes a few other conversational problems as well.
When someone says something to Big Talker, you can respond to what they've said before Big Talker does. (I don't think Big Talkers are generally the sort to mind interruptions, so it's not necessarily rude.)
One of many strategies is to change the subject. The most domineering conversationalists I know tend to be specialists talking about their own field, and the one-sided nature of the interaction flows partly from their local expertise. Many people talk less if they see a large chance that they will accidentally say something embarrassing, and an unfamiliar topic is a quick and polite way to get them there.
It requires some status and a consistent record of not being a jerk to do this (or to convince yourself to do this), but: “[Big Talker] has been talking for 2 hours, and [Small Talker] hasn’t really had much opportunity to talk about [thing Small Talker does]. Mind if we hear from [Small Talker] for a bit? ”
I have a developing opinion that I’m not quite sure how to word.
It seems that schools all over the world are teaching the same lessons, but are all trying to recreate the wheel. I sense that it’d be more efficient if a bunch of effort and resources went in to each lesson, and that lesson was made available for everyone.
Elon Musk gave a good analogy (paraphrasing)
I sense that there is some sort of economic logic/terminology that applies here and that better articulates what I’m trying to say.
My attempt at explaining it a bit more formally. Consider a lesson on mitosis. Say you have 100 classrooms you need to teach this lesson to. And say you have 100 employees. I think it’d be more efficient for those 100 employees to work at creating an optimal lesson, and then providing that lesson (via a website or something) to students. Given that the lesson can (largely) be delivered via software, it’s non-rivalrous (my consumption doesn’t take anything away from your consumption), and thus can be distributed to everyone at no marginal cost.
Anyway, I hope I did a good enough job explaining such that someone can recognize what I’m trying to say. I’d be really happy if anyone was able to help me further my understanding.
There are two problems here. First, we have duplication of labor: something like 1% of the population is doing essentially the same task, even though it's fairly straightforward to reproduce and distribute en masse after it's been done once. This encompasses things like lesson plans, lectures, and producing supplementary materials (e.g. a sheet of practice problems).
This leads into the second problem, which is a resulting quality issue: if you have a large population of diverse talent doing the same task, you expect it to form some sort of a bell curve. As noted above, we can take any lecture, tape it, and broadcast it en masse fairly easily. When we choose a system where each student is subjected to their instructor's particular lecture, a relatively small portion of them get an excellent lecture, a very large portion get an average lecture (rather than an excellent lecture), and a relatively small portion get an execrable lecture (rather than an excellent lecture). If you're really ambitious, you could even get the top, say, ten lecturers together and have them collaborate to make a super-lecture, and then get feedback on that particular unit, so they can improve the superlecture into a super-duperlecture.
(IMO, this is still a suboptimal way to do things. Try that process on textbooks (which are much easier to write collaboratively), and instead of getting feedback on hour-long chunks, get feedback on section-sized chunks (which, depending on the subject, can be something like one-tenth the size). A good textbook is also cheaper to write, cheaper to distribute, more updateable, and better didactic material to begin with.)
It's worth noting that there are still a few wrinkles. Most importantly, there's really no such thing as a "best" lecture, lesson plan, problem set, or textbook; the "goodness" depends not just on the lecture's content but on the intended audience. Think of this as a calibration issue. For instance:
Last I checked, MIT uses Sadava as their introductory biology textbook. If you dig around the reviews, you will find endorsements of another introductory biology book by Campbell that claim it's "SO much easier to understand. It's better organized, more clearly written". When I found myself needing to relearn introductory biology (this time with Anki so I actually retain the knowledge), I tried Campbell, since that's what my high school used, but gave up not halfway through the first chapter, frustrated by the difficulty I had understanding, the poor organization, and unclear writing; I find Sadava, however, to be much easier to understand, better organized, and more clearly written. Is the quoted reviewer lying, perhaps paid off by Big Textbooks? Perhaps, but a much better explanation is that Sadava is more technical; it's much closer to the "definition-theorem-proof" feel of a math text. This makes it a fantastic text if you're most students at MIT (or a typical LWer), but much less so if you're in the other 99% of the population. This also solves the calibration problem: write two (or more) supertextbooks.
(This also neatly explains why MIT sometimes seems like the only school that uses good textbooks and why SICP only has 3.5 stars on Amazon.)
A second wrinkle is individual attention, which I tend to be dismissive of (if the textbook is good enough, you shouldn’t need any individual attention! And it’s not like the current education system, with its one-way lectures, is very good at giving very much individual attention), but if we’re optimizing education, there probably is more individual attention given to every student. However, because of reasons, I suspect that most of it should come from students in the same class, not staff. Also, it belongs after the reading.
A third wrinkle is a narrowing of perspectives. In any particular domain, there's usually several approaches to solving problems, often coming from different ways of looking at it. In the current system, if you wind up on a team and come across a seemingly intractable problem, there's a good chance that someone else has happened across a nonstandard approach that makes the problem very easy. If we standardize everything, we lose this. This is somewhat mitigated by the solution to the calibration problem, wherein people are going to be reading different texts with the different approaches because they're different people, but we still kind of expect most mathematicians to learn their analysis from super!Rudin, meaning that they all lack some trick that Pugh mentions. The best solution I have is to have students learn in the highly standardized manner first, and once they have a firm grasp on that, expose them to nonstandard methods (according to my Memory text, this is an effective manner for increasing transfer-of-learning).
A good writeup. But you downplay the role of individual attention. No textbook is going to have all the answers to questions someone might formulate after reading the material. They also won't provide help to students who get stuck doing exercises. With books, it's all or nothing (either no help at all or the complete solution).
The current system does not do a lot of personalized teaching because the average university has a tightly limited amount of resources per student. Very rich universities (such as Oxford) can afford to personalize training to a much greater extent, via tutors.
Yeah. I’ve taught myself several courses just from textbooks, with much more success than in traditional setups that come with individual attention. I am probably unusual in this regard and should probably typical-mind-fallacy less.
However, I will nitpick a bit. While most textbooks won’t quite have every answer to every question a student could formulate whilst reading it (although the good ones come very close), answers to these questions are typically 30 seconds away, either on Wikipedia or Google. Point about the importance of having people to talk to still stands.
Also, some textbooks (e.g. the AoPS books) have hints for when a student gets stuck on a problem. Point about the importance of having people to help students when they get stuck still stands, although I believe the people best-suited to do this are their classmates; by happy coincidence, these people don’t cost educational organizations anything.
I'm tinkering with a system in which a professor, instead of lecturing, has it as their job to give each of 20 graduate students an hour a week of one-on-one attention (you know, the useful type of individual attention), which the graduate student is expected to prepare for extensively. Similarly, each graduate student is tasked with giving undergraduates 1 hour/week of individual attention. This maintains a student-to-professor ratio of 200:1 (so MIT needs a grand total of… 57 professors), doesn't overly burden the mentors, and gives the students much more quality individual attention than I sense they're currently getting. (Also, I believe that 1 hour of a grad student's time is going to be more helpful to a student than 1 hour of a professor's time. Graduate students haven't become so well-trained in their field that they're no longer able to simulate a non-understanding undergrad in their head (an inability Dr. Mazur claims is shared among lecturers), and I expect there's benefit from shrinking the age/culture gap. Also, there's no need to worry about appearing to be the class idiot in front of the person assigning your grade and potentially not giving you the benefit of the doubt on account of being the class idiot.) (Also, it has not escaped my attention that this falls apart at schools that are small or don't have graduate students. And there are other problems. Just an idea I've had floating around that may be enough in the right direction to effect a positive change.)
As for your point about quality I sense that it’d be inefficient to just take the lectures at the top of the bell curve and distribute them. I sense that it’d be more efficient to pool resources and “have them collaborate to make a super-lecture, and then get feedback on that particular unit, so they can improve the superlecture into a super-duperlecture”.
Could you elaborate a bit on this?
Note: I agree with you about the wrinkles and I think they need to be accounted for. This may be oversimplified, but I think of it as a spectrum of how much you pool resources. The wrinkles explain why it isn’t best to simply pool all resources. However, I think we both agree that right now we’re hardly pooling resources at all and that we should be way more towards the side of pooling. I sense that talking about the wrinkles may be distracting from the core point of “why do you receive gains from pooling”, but if you disagree please do what you think is best.
The argument goes: "paying 20k camera-people for one year can replace 2M full-time-equivalent jobs next year, which can go into something more useful without changing anything else (1). Of course, once you're going to do that, you'd do well to look into seeing what elements of anything else could be changed to make it even more awesome."
If we optimize properly, I believe we wind up open-sourcing textbooks, somewhat like Linux. We have a core textbook, which has received enough feedback to make sure that everything is explained well enough that students generally don't come away with misconceptions, but because they're open source, every time you need to write for a particular audience, you have something to work from. LaTeX also supports comments, which makes it easy to include nonconventional perspectives for interested students (i.e. the ones who really need them).
But, yeah, pooling resources. Definitely something we should do more of and WHY HASN’T THE FREE MARKET SOLVED THIS 10 YEARS AGO?
(1) Fermi estimate is as follows: Cursory search indicates Harvard offers a bit over 3k undergraduate classes. Round it up to 5k to include secondary school and the few undergraduate courses not offered at Harvard (for instance, I can’t find an equivalent to 8.012.) Multiply by 4 for different levels, and we arrive at 20k camera-people needed to tape all these courses. (It’s actually less than that, since most courses are one semester.)
Cursory Googling indicates there are 3700k teachers in America; adding in other English-speaking countries and eliminating primary- and graduate-level teachers should bring you to 4M teachers (I'm guessing that we add more teachers from English-speaking countries than we lose from not considering primary- and graduate-level teachers, since most classes are at these levels). Assume that half their teaching job is replaceable by the videos we've created, and we've freed up the equivalent of 2M full-time jobs.
This is very much a Fermi estimate, but I feel I was liberal enough with the camera-people portion (we’re only hiring them a few hours a week!) to say that the cost of getting high-quality video of all secondary and undergraduate courses is 1% of the savings it should theoretically yield every year in the future. This upper limit goes down once we start writing textbooks instead of taping lectures, especially since most secondary and undergraduate courses already have very good textbooks to work from.
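The footnote's arithmetic can be sketched directly. All figures below are the estimate's own rough inputs (not better data), so this just makes the 1% claim checkable:

```python
# Fermi estimate from the footnote above; every number is the comment's own rough figure.
courses = 5_000           # ~3k Harvard undergrad classes, rounded up for secondary school etc.
levels = 4                # different levels of each course
camera_people = courses * levels             # people needed to tape everything

us_teachers = 3_700_000                      # cursory Googling, per the comment
all_teachers = 4_000_000                     # + other English-speaking countries,
                                             # - primary- and graduate-level teachers
replaceable_fraction = 0.5                   # assume half the teaching job is replaceable
jobs_freed = int(all_teachers * replaceable_fraction)

cost_ratio = camera_people / jobs_freed      # one-time cost vs. recurring yearly savings
print(camera_people, jobs_freed, cost_ratio)  # 20000 2000000 0.01
```

As the comment says, this gives a one-time cost of about 1% of the full-time-equivalent jobs freed every subsequent year.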
One issue with the chain of logic: The value proposition of school is NOT the lectures. It’s other things:
Good teachers who can individualize instruction (software cannot do this yet, even state of the art like Knewton is rudimentary compared to a good teacher)
Signaling (everyone knows that if you learned at Harvard you've already been preselected; nobody knows this about a random person watching a video online)
I agree in large part with what you said, but the two issues above need to be solved.
I agree that other things like personalized attention and signaling matter. But I think the lessons and lectures do matter a lot (enough to be talked about anyway). And I think that getting into that other stuff now would be going down a deep enough rabbit hole such that it’d be unproductive for this conversation.
It might be worthwhile to distinguish between lesson content and the student experience.
For instance, if a million students watch the same video lecture on mitosis, have all of them had the same experience? Of course not. Different students have had different backgrounds. Some understand particular analogies that the lecturer makes better than others do. Some are colorblind and have more difficulty understanding a particular animation that is used.
And then there is the context in which that lecture is presented —
Ten of those million students are watching the lecture in a seminar classroom; and when one student gets confused, they pause the video and discuss it. Another ten students are watching the lecture in a different classroom; and when one student gets confused and looks out the window, he or she is punished for being inattentive.
Some students are watching at home on their laptops, and pausing the video to look things up on Wikipedia. Some are listening to the lecture as they drive to work or mow the lawn.
Five other students don’t watch the lecture at all. They agree to read the Wikipedia article on mitosis and whichever linked articles or sources they think might be interesting. Then they meet at a coffee shop and discuss it.
True, but I don’t see how that relates to the central point. Do you think that individual differences are large enough such that the gains to be made from specialization aren’t large enough to justify the investment I’m proposing?
I wasn’t refuting something or even disagreeing; I was elaborating on something else that is worth attending to in order to meet (what I suspect to be) your goal. This isn’t a refutation; it’s a “yes, and also …”
Part of what schools all over the world are doing is not just re-creating lesson plans, but providing specific student experiences. They may sometimes be doing this by deliberate design, and sometimes by following rules that are not terribly good, and sometimes pretty much winging it.
And just as some lessons might be better than others for learning (and thus, worth replicating rather than reinventing), some student experiences might be better than others for learning as well.
Oh ok.
Could that kind of thing be a loss in the long haul? You’re able to create the superb lecture (assuming it’s actually superb) because there are a large number of teachers whose knowledge about lecturing you can draw on.
Use those superior lectures enough, and you have many fewer experienced teachers as new subject matter gets added.
Interesting point. I don’t know. Some thoughts:
I still think that there's a place for teachers. I agree with richard_reitz that individual attention is overrated: if lessons were good enough, there'd be much less need for teachers to diagnose holes in students' understanding and tutor them. And there isn't really much individual attention in today's system anyway.
However, I think that even with these great lectures, there will still be holes in students’ understanding, and that using a human is the best way to diagnose and address them. Like Sal Khan has talked about, I think that if these lectures were available, it’d actually free up teachers to spend more time providing personalized attention. I suspect that there’s enough of a need for this such that teachers will still be employed.
I agree with you, I think you’ve explained it well, I think many other people in education are thinking along the same lines, and I think that’s sort of the idea with Khan Academy, Vi Hart, ASAP science, and all those other youtube things with fast-talking voices and sketches. The whole process could definitely do with a little more up-scaling, budget, and legitimacy as a lesson plan.
What you (and many other folks in education) are talking about is basically the function that textbooks fulfill- it’s just a matter of incorporating new media.
You may be interested in the term ‘inverted classroom’, if you’re not already aware of it.
The basic idea is that it’s the normal school system you grew up with, except students watch video lectures as homework, then do all work in class while they’ve got an expert there to help. Also, the time when the student is stuck in one place and forced to focus is when they’re actually doing the hard stuff.
There are so many reasons why it's better than traditional education. I just hope inverted classrooms start to catch on sooner rather than later.
(Edit: I know this isn’t your exact proposal, but it uses many of the features you mention and it can be immediately grafted into the existing public school system with a single change of curriculum and the creation of some videos. It’s the low hanging fruit for education.)
Is the following a correct restatement of your point?
We already have regional or state standardization of (some) subjects, exams, textbooks, and homework assignments (which are often given out of textbooks), and (in MOOCs and in computer-facilitated learning) actual lessons. All of these things have increased in use over time. Should we go even further in that direction and also standardize individual lesson scripts, in grade school as well as college? Is anything stopping or delaying this development apart from sheer inertia?
That would not be very efficient: 100 employees working on the same (small) task would get in each other's way, might get bogged down in politics, and the quality of the result would be dragged down to the level of the average employee (everyone-must-contribute mentality) or below it (designed-by-committee issues).
Naively, we might expect the following to be a strict improvement on current practice: let each employee build their own lesson, then discard 99 of the results, and let all 100 employees teach the best lesson any of them built. (Of course you’d need to try out all 100 lessons first to figure out which one is the best.) This is an extension of the current standardized curricula and textbooks to lesson plans and maybe to actual lessons a la MOOCs and computer teaching software. If instead of 100 employees you take all the employees across the world, and you let small self-selected groups work together, the result might be promising.
On the other hand, a teacher needs to adapt the lesson to the class. They need to understand it well enough themselves to teach well, to answer questions and help students with particular problems. They need to encourage or even force students to pay attention, study, and not interfere with one another. All of these things can’t be standardized because they require realtime reactions to student behavior.
I don’t have any answers here, I’m joining you in asking the question.
Somewhat. I’m not saying that lessons should be standardized in the same way that textbooks and exams are currently standardized. I don’t think enough resources are being applied towards textbooks and exams (considering how widely used they are, even a small improvement would have a big effect because it’d be multiplied by the amount of people it touches).
My central point is, “I sense that there is a more abstract economic principle behind what I’m trying to say. Can anyone help me to articulate/understand it?”.
You’re right. The 100 employees example was bad.
I agree. I don’t think that lessons can be so good that we don’t need teachers (yet). I think that there will still be holes in the students’ knowledge after/while going through the lesson, and the most efficient way (right now) to identify and address these holes is to use a human.
It looks like your point could be summarized, in economics jargon, as: education is now a field where the superstar effect should apply.
Thank you! I think that’s getting closer to what I’m thinking. But it isn’t quite the same thing.
The superstar effect seems to be explaining a phenomenon, whereas I'm trying to make an argument about how resources can be allocated most efficiently. The superstar effect says, "you see these high salaries among, say, singers, because technology has enabled them to reach large audiences, and technology has enabled consumers to easily listen to the best singers" (please correct me if my understanding is flawed).
I’m trying to do something similar, but from what I understand, slightly different. I’m trying to answer the question, “Why is this more efficient? Why is there an opportunity for firms to create and capture value?”.
I sense that equilibrium is a relevant concept. You invest in the resource until the marginal benefit is ≤ the marginal cost. Investing in a resource that serves a large market has a large marginal benefit because the effects are multiplied by the size of the market.
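A minimal sketch of that equilibrium logic. The numbers are entirely made up for illustration: the per-student gain, its rate of diminishing returns (`decay`), and the marginal cost are all assumptions, not data about education:

```python
# Illustrative only: hypothetical numbers, diminishing returns assumed.
def marginal_benefit(units_invested, market_size, gain_per_unit=1.0, decay=0.5):
    """Benefit of the next unit of polish: a per-student gain that shrinks with
    each unit already invested, multiplied by the number of students reached."""
    return market_size * gain_per_unit * decay ** units_invested

def optimal_investment(market_size, marginal_cost=100.0):
    """Invest unit by unit until the marginal benefit drops below the marginal cost."""
    units = 0
    while marginal_benefit(units, market_size) >= marginal_cost:
        units += 1
    return units

# A lesson reaching 100 students justifies far less polish than one reaching a million.
print(optimal_investment(100))        # 1
print(optimal_investment(1_000_000))  # 14
```

The point the toy model makes is just the one in the comment: with the cost side held fixed, a larger market multiplies the benefit of every unit of investment, so the break-even point moves much further out.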
Edit: I spent the whole day thinking about it and at some point the thoughts started flowing, so I wrote up a post. Thanks again for referring me to The Superstar Effect!
Is anyone aware of the explanation for why technetium is radioactive while molybdenum and ruthenium, the two elements astride it in the periodic table, are perfectly stable? Searching on Google for why certain elements are radioactive gives results which are descriptive, as in X is radioactive, Y is radioactive, Z is what happens when radioactive decay occurs, etc. None seem to go into the theories which have been proposed to explain why something is radioactive.
The dynamics of the strong nuclear force are not well understood when high numbers of nucleons are involved. By which I mean, we have some empirical models that kinda-sorta work for various regimes, given some tinkering with the constants, but we have no from-first-principles understanding. You by no means need to go as far as biology before you get into stuff we cannot calculate from the equations; but in this case we don’t even know the equations all that well, because the strong-force constant (ie, the equivalent of G in gravity and alpha in electromagnetism) varies drastically with the energy involved, and we don’t know exactly how it varies. (“So why”, you ask plaintively, “is it called a constant?” By analogy with G and alpha, which genuinely are constants so far as anyone knows.) So while nuclear dynamics are not my particular subfield of physics, I would be unsurprised to learn that the answer to your question is “N. N.’s PhD thesis, submitted 2025”.
One more observation: Nuclear dynamics is the field in which physicists refer unironically to Magic Numbers; that is, some numbers of protons and neutrons are particularly stable compared to their neighbours, and it’s not quite clear why. Presumably there’s some sort of symmetry involved.
The answer to the specific question about technetium is “it’s complicated, and we may not know yet”, according to physics Stack Exchange.
For the general question "why are some elements/isotopes more or less stable"—generally an isotope is more stable if it has a balanced number of protons and neutrons.
Here’s what I know about the matter:
At low atomic number, isotopes that are more stable tend to be close to a 1:1 ratio of neutrons to protons. At high atomic number, this ratio approaches 3:2. I do not know why this is the case, and I believe it is not entirely understood by anyone. Also, this is not a very good predictor anyway.
The real problem is that unlike electron energy levels in an atom, which are well known and easily approximable by various systems and techniques, the nuclear energy levels are not very well understood, and I think to an extent they are even difficult to measure. I believe it is known that unlike the electrons’ spherical potential well, the nucleons are bound in a well that is a mixture of a spherical and cubic well, and the exact form is unknown, thus we can’t predict the levels very well. I don’t know why this is the case, and I believe it is not entirely understood by anyone else either.
In short, I think that a good theoretical model that predicts these kind of things has yet to come.
We know exactly why the balance tends more towards the neutrons for heavier elements, but the system is messy enough that it’s very hard to predict just how much it does.
Sphericality and cubicality are orthogonal issues, and not that big a deal in the grand scheme of things. The main issues that make nucleons harder than electrons are:
1) There isn’t an externally imposed force that dominates the system (for the electrons, that’s the nucleus); it’s all internal, and that’s harder. Every time you add a new particle, the new ground state is little like the old ground state. For electrons, a thorough understanding of Hydrogen tells you nearly everything you need to know about, say, Oxygen; at a nuclear level, a thorough understanding of Hydrogen barely tells you anything about Oxygen.
2) The questions you need to answer are much much harder. You aren’t perturbing the system and finding the new ground state, like in chemistry. You need to find barrier heights and transition rates on upheavals to the whole system.
3) Last and least, there are two species (electrons → protons AND neutrons), with differences in how they feel the forces.
Looking at http://www.frankswebspace.org.uk/ScienceAndMaths/physics/physicsGCE/nuclearImages/islandOfStability.jpg , one of the patterns I see is that even numbers of protons and neutrons are systematically more stable than odd numbers. So that might answer the specific part of your question about its neighbors. (As to why even numbers, I don’t know but I bet it’s related to spins.)
EDIT: Apparently this is enough of a thing that it even has its own Wikipedia page. http://en.wikipedia.org/wiki/Even_and_odd_atomic_nuclei
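The even/odd observation can be applied directly to the original question's three elements. The atomic numbers are standard values; the parity check is of course just the heuristic from the chart, not an explanation of technetium's instability:

```python
# Technetium's atomic number is odd, while both of its stable neighbours' are even,
# consistent with the even/odd stability pattern (a heuristic, not an explanation).
elements = {"molybdenum": 42, "technetium": 43, "ruthenium": 44}

for name, z in elements.items():
    parity = "even" if z % 2 == 0 else "odd"
    print(f"{name}: Z={z} ({parity})")
```

(Promethium, Z=61 and the only other element before lead with no stable isotope, is also odd-Z, which fits the same pattern.)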
I lived my first 46 years in a world of intuition, people who are bigger than life, somewhat magical thinking (even though I am not religious), and fallacy. Now that I am working in cryptocurrencies I have discovered a data-driven, less wrong, in-search-of-the-truth world. I don't want my kids (19, 17, and 15) to waste their time like I did. How do I teach them about this new world of critical thinking and data?
A phrase for this that I came across recently is 'unmediated access to reality.' That is, most things that people do are mediated by the people around them; you write an essay, and the teacher decides if it's a good essay or not based on their subjective standards. You write code, and the compiler determines whether or not there's a syntax error based on its objective standards, and there is no pleading with the compiler.
I think programming is the easiest way to get experience of living in an objective-truth-world, but it’s worth pointing out that this is one of the reasons programming is so painful, and encouraging people to program because hunting for bugs is character-building* instead of because programming is useful seems like it might not go far.
*Specifically, it builds a lot of rationalist habits, and people talking about the heuristics and biases as ‘bugs’ in ‘mental programming’ seem to be making a very close analogy. I don’t think it’s an accident there are so many CS people running around here.
My 19-year-old started traditional engineering in college. I am glad he did this, and I shared with him this concern about balancing intuition with rational thinking. My second is thinking of doing music or law, so he is more "liberal arts" oriented, although law can have links to philosophy and rationality.
My 15-year-old girl is still thinking about Taylor Swift, so that is a longer-term project!
Note that programming as experienced by beginners leads one to a lot of “objective truths” about how programming works that are actually choices made by the designers of the language, the operating system, or other layers of the total system one’s program executes on. And some of those choices are so commonly adhered to that you’ll never see past them just by trying different languages, only by making an effort to understand the system.
I agree that programming provides “objective-truth-world” in the sense that there are definitive true answers; but those answers are still built out of two-place predicates — they refer to the particular system you’re working with.
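A tiny illustration of such a design choice (a sketch in Python; the claim about C is an assumption about that language's semantics, not something shown by the code itself): even "what does integer division return?" has different true answers in different systems.

```python
# Python's // operator floors toward negative infinity -- a choice
# made by the language designers, not a universal fact about division.
print(-7 // 2)  # Python answers -4

# C's integer division truncates toward zero instead, so the analogous
# C expression yields -3. We can mimic that behavior in Python:
print(int(-7 / 2))  # int() truncates, matching C-style division: -3
```

Both answers are "objectively correct" relative to the system you're working with, which is the two-place-predicate point above.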
Teach them to argue with you
I’m going to sign up for reply notifications to this question in case someone responds well, but I don’t know if this question is going to get the attention it deserves in a “Stupid Questions” thread. If it doesn’t, you might try asking in an open question thread or even in its own Discussion thread.
Thx for the suggestion and implying it’s not a stupid question! I will wait too and if there is no reaction will transport it to an independent thread.
My suggestion is to use critical thinking and data to illuminate something else they’re already interested in. Rather than looking at data as a standalone activity separate from other things, you might analyze (for example) baseball statistics if one of your kids has an interest in baseball. This may be easier said than done...
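To make the baseball suggestion concrete, a first data exercise can be very small. Here's a minimal sketch (player names and numbers are made up, purely for illustration) of computing batting averages, the kind of thing a kid who follows the sport could extend to real stats:

```python
# Toy dataset: hypothetical players with their at-bats and hits.
at_bats = {"Player A": 520, "Player B": 430}
hits = {"Player A": 156, "Player B": 140}

# Batting average = hits / at-bats, conventionally shown to 3 decimals.
for player in at_bats:
    avg = hits[player] / at_bats[player]
    print(f"{player}: {avg:.3f}")
```

The point isn't the code; it's that the kid's own interest supplies the motivation, and the data supplies the objective feedback.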
I don’t think I’ve ever experienced schadenfreude. As in, I’m not even sure what that emotion is supposed to feel like, from the inside. I get the impression that the few people I’ve said this to think that I’m lying about it for signalling purposes.
Is it common just not to feel schadenfreude, like not ever, for any reason? Lately I’ve started to wonder if I’ve been committing the typical mind fallacy on this.
I feel it, but it’s a weak emotion. I could easily imagine going without it.
Do you find any slapstick or dark comedy funny? I’m curious.
I don’t think so; I’ve never read of a case of it. I think most folks feel schadenfreude.
That’s an emotion humankind can do without, but that idea makes me wonder about the ethicality of genetically removing the potential for specific emotions.
Removing the schadenfreude response from humanity as a whole would—I think—be a beautiful thing, but lacking this emotion has certainly been damaging to my own personal fitness.
How?
If a rival in some competitive domain (think work, or romance) is falling behind me, instead of feeling happy about this (schadenfreude) I feel sad and I tend to dissipate my own relative advantage by trying to bring my rival up to my level.
I also have limited emotional motivation to take revenge or even strategic retribution (because I don’t enjoy the suffering of those who wrong me). I get angry or morally outraged, but anger can only take you so far—you need to be able to follow through with the punishment. So when I play real life zero sum prisoner’s dilemma style games, I tend to cooperate far too long before punishing defecting opponents.
Basically, lacking schadenfreude makes it so that I don’t feel any strong desire to defeat or punish anyone, even direct rivals or wrongdoers.
Why did LessWrong split off from Overcoming Bias?
I have the following questions regarding the simulation argument:
Unless I’m missing something, it is always (?) talked about “ancestor” simulations. So, my questions are:
Why does it have to be an “ancestor” simulation?
Couldn’t it be also considered that there are civilizations that run simulations of civilizations that were never “real”?
I.e. couldn’t we be a simulated civilization that was never an ancestor to the civilization that is simulating us?
And sub-questions would be:
So, maybe consciousness is unique in our simulation, i.e. those who simulate us don’t have consciousness?
So, maybe death is unique/special in our simulation, i.e. those who simulate us don’t die?
Thanks very much in advance.
I forget the details, but I think the argument intentionally focuses on ancestor simulations for epistemic reasons, to preserve a similarity between the simulating and simulated universes. If you don’t assume that the basement-level universe is quite similar to our own, it’s hard to reason about its computational resources. It’s also hard to tell in what proportion a totally different civilization would simulate human civilizations, hence the focus on ancestor simulations. I’m not sure if this is a conservative assumption (giving some sort of lower bound) or just done for tractability.
ETA: See FAQs #4 and #11 here.
What’s a relatively easy, not very time-or-resource-consuming thing I’m probably not doing that would have a noticeable popularly desired positive impact on my life within a month? :)
Maybe read a good book that teaches you how to acquire some valuable skill, e.g., Steel’s The procrastination equation (summary), Carnegie’s How to win friends and influence people, or Young’s The little book of productivity (summary).
The last link looks great, thanks.
How the hell did Slate Star Codex get to be so much more popular than LessWrong? It’s an offshoot of this site, right?
Yes, but.
One of the reasons Yvain posts there instead of here is because the standards are different; a post on LW talking about the problems with internet feminism would get a “hey, politics is the mind-killer,” but on Slate Star Codex it’s his blog and he gets to talk about whatever he wants to talk about. Besides content, there are presentation differences; Yvain has a jokey style that irritates some of the pedant crowd on LW, but engages more typical readers.
gwern also thinks the different branding has something to do with it; instead of being ‘just another’ LW poster, and LW having a scattered focus with many components that won’t appeal to any particular reader, it’s a clearly distinct site with a narrow focus. The average post quality on SSC is far higher than the average post quality of LW (note that Yvain has the second highest karma of any poster here), which makes it easier to recommend, because you’re recommending Yvain specifically.
I think the comments on SSC are terrible relative to LW: the system is bad (no inbox, upvoting, downvoting, etc.), which creates bad incentives (I’m nowhere near as careful commenting there, and have noticed that I’m much more likely to be snarky and negative), but also the commenters just seem dumber and less worth reading (though this may have to do with the increased focus on politics and decreased focus on self-improvement). But you don’t share a site on social media / email it to others because of the comments; you do it because of the posts.
Also, consider how frequently a new post shows up in Main vs. how frequently there’s a new post on Slate Star Codex.
If Yvain was posting all his SSC stuff to Discussion/Main, wouldn’t there be even more frequently a new post in Main than there is on SSC?
Sure, but LessWrong is a forum for a set of very specific topics, specifically instrumental and epistemic rationality, not just a place for whoever is on LessWrong to post about whatever they want to.
Many of us have our personal blogs and they fill a different niche than LessWrong does.
You should note a few things. According to alexa.com:
lesswrong.com: global rank 51,809; rank in the United States 18,847; 46.3% of visitors come from the US.
slatestarcodex.com: global rank 87,006; rank in the United States 16,366; 71.7% of visitors come from the US.
According to similarweb.com:
lesswrong.com: global rank 40,877; rank in the US 19,570; 57.81% of total traffic comes from the US.
slatestarcodex.com: global rank 69,386; rank in the US 31,060; 61.97% of total traffic comes from the US.
Therefore, LessWrong seems to have slightly more visitors (especially from outside the US) than SlateStarCodex. SSC seems to have a lot of comments, but note that LW requires an account for posting comments, whereas SSC allows you to post comments anonymously; it doesn’t even ask for your email address.
RSS readers:
feedly.com shows lesswrong.com (i.e. http://lesswrong.com/.rss, which is the same as http://lesswrong.com/promoted/.rss) as having 7k readers and slatestarcodex.com as having 2k readers. It is interesting to note that http://lesswrong.com/new/.rss and http://lesswrong.com/r/discussion/new/.rss have few subscribers. This is probably because if new people visit LessWrong, they can see only one RSS Feed icon on the front page so naturally it is the one they subscribe to. I think this is a very bad thing and it should be corrected as soon as possible. If someone tells a person to check out LessWrong, they are likely to end up reading promoted posts only.
inoreader.com has similar stats: lesswrong.com has 378 subscribers, slatestarcodex.com has 137 subscribers. Few people subscribe to lesswrong.com/new and Discussion.
Curiously, overcomingbias.com has 6k readers in feedly.com and 778 in inoreader.com, which is comparable to LessWrong itself, but it has fewer visitors than LW or SSC. Perhaps most people use their RSS readers to read it and don’t visit the blog itself.
I think there is a difference between LW and SSC. On LW, we have 2-4 new discussion threads every day, and a lot of threads are active at the same time. SSC usually has 2-4 threads a week, and only perhaps a couple of threads are active at any given point in time. In addition, almost all SSC posts are written by the same person, who builds upon his previous posts. Therefore everyone is on the same page, both literally and figuratively, and it is quite easy to tell what ideas are floating around on SSC now. LessWrong, by contrast, has many individual posters who tend to be interested in various things. Few of them build upon each other’s ideas; few explore the possibilities suggested by other LW posts. Therefore it is much harder for a reader to get the big picture. The question “what has X been up to recently?” is much harder to answer if X=LW than if X=SSC.
As of now, LessWrong does not have posters whose new ideas move the opinions of the majority of readers forward. People seem to form ideas on their own, whereas SSC offers its readers a “journey”. Therefore comments are relatively much more important on LessWrong than they are on SlateStarCodex. However, if one is supposed to read the comments, it becomes much harder to use the site casually. On SSC, while comments sometimes do add to the posts themselves, you can simply read Scott’s essays if you do not have time for anything more. The question of how to optimize your LessWrong reading, which is relevant for casual readers, is harder to answer. The question “how has the X consensus changed in the last 3 months?” is much harder to answer if X=LW than if X=SSC, because for the latter, reading Scott’s essays seems to be enough.
To sum up: LW probably has more readers (especially from outside the US); SSC probably has more comments, due to the possibility of posting anonymously. SSC posts, all being written by the same person, build upon each other and thus offer their readers a “journey”, and are able to move consensus opinions more quickly than individual LW posts.
Reading SSC brings back the feeling I got when I first discovered Less Wrong (right after the split with Overcoming Bias, when there were still sequences being posted). Here’s this extremely intelligent and articulate guy, posting very insightful things on topics I didn’t even know I was interested in—and he’s doing it pretty regularly!
I like what Less Wrong has evolved into in the post-Sequences era, but reading Less Wrong today produces a very different feeling from when it did early on.
I do have a feeling that LW is past its heyday.
What are some good examples of rationality as “systematized winning”? E.g. a personal example of someone who practices rationality systematically for a long time, and there are good reasons to think doing that has substantially improved their life.
It’s easy to name a lot of famous examples where irrationality has caused harm. I’m looking for the opposite. Ideally, some stories that could interest intelligent but practically minded people who have no previous exposure to the LW memeplex.
The easiest examples are typically business examples, but there’s always the risk that the thing people attribute their success to is not the actual cause of their success. (“I owe it all to believing in myself” vs. “I owe it all to sleeping with the casting director.”)
I think the cleanest example is Buffett and Munger, whose stated approach to investing is “we’re not going to be ashamed of only picking obviously good investments.” They predated LW by a long while, but they’re aware of the Heuristics and Biases literature (consider this talk Munger gave on it in 1995).
Scott Adams claims being rational in that sense in his book.
Why is the Newcomb problem… such a problem? I’ve read analysis of it and everything, and still don’t understand why someone would two-box. To me, it comes down to:
1) Thinking you could fool an omniscient super-being
2) Preserving some strictly numerical ideal of “rationality”
Time-inconsistency and all these other things seem totally irrelevant.
Well, there are people who would say the same thing but in reverse. There is a rationale behind that, even if I think it’s wrong.
I don’t think two-boxers think they can fool an omniscient super-being. They do think that whatever is in the box cannot be changed now, so it’s foolish to give away $1,000. Would you one-box even with transparent boxes? If not, then you understand this logic. There’s a reasonable argument there, especially as in the original paradox Omega is not perfect, so there’s a chance that you’ll get nothing while passing up $1,000.
I am a person who has trouble focusing on work and fidgets a lot. I work on the computer frequently. I figure treadmill desks might cure this. Also, it’s cold and icy where I live, and if I can get my older family members to walk or run around more, it will be good for them, so treadmills are a good investment for removing trivial inconveniences.
Should I buy a cheapest possible treadmill and just put a stool on top of my desk to create a standing desk?
Should I buy a nice expensive regular treadmill, and use it for desking and running?
Should I buy the (expensive) treadmills specifically designed for treadmill desking, and only use it for desking?
Other options?
Try before you buy.
What is the best way to relatively quickly gain some elementary proficiency in world history? I notice that I have little to no awareness as to how the world came to be as it is (there were cavemen… they discovered fire… thus the technological progress started… gets us to steam engines, then electricity, and computers...). Is there a good textbook that outlines the issue?
There is a YouTube series, Crash Course World History, that in ten-minute videos brings you from the start of agriculture to the modern day, covering topics at an elementary level. These videos are produced by the prolific vlogbrothers Hank and John Green, whose material fits under the rationalist and altruistic categories.
But your examples are very technology oriented. So perhaps look at Wheels, Clocks, and Rockets: A History of Technology or Guns, Germs, and Steel.
Even better: History 115: Technology and History syllabus, go to page three for additional reading.
Those videos look promising. Thank you. My examples were technology oriented because when I tried to make a non-technological history example, I went blank. Well, I know that there was a World War II, when we kicked down a supervillain, and the name implies that there was another World War, unless it is a misnomer...
I strongly recommend this one.
I’ll look into this. Thank you.
How do you avoid sending nerdy signals?
Dressing well, projecting confidence, focusing more on relatability than truth-seeking or novelty (esp. in conversation), developing interests that others also hold, seeking status, adopting more mainstream ideas of what’s fun, becoming more extroverted. What are you looking for specifically?
Should I correct my posts after unambiguous mistakes are mentioned (specifically, mistakes in argumentation and the like, rather than grammatical errors, etc., which I would always correct), or should the original post be left alone for posterity? I figured that it would be better to correct it to save the time of future readers. In the particular instance that I’m talking about (I opted to correct, by the way), one of the commenters seemed to imply (very diplomatically, and I can’t be certain) that I was too concerned with editing the post for image-management-related purposes and that it would be better for me to simply move on and write more good posts as opposed to trying to improve that one. I also suspect that the answer is more nuanced than, “Always correct,” or, “Never correct.”
I think it’s up to you, but if you correct a post you should always make it explicit that you have done so and say what you did. Like Lumifer, I think it’s better to add a note at the end (or, in extreme cases, at the beginning) than just to overwrite the old content with the new; otherwise you risk making other people’s comments nonsensical.
I prefer writing postscriptums to drastic edits. If I was shown wrong, I usually concede in a separate comment rather than try to rewrite the original one.
I’m considering creating a Linkedin profile. I probably should have made one long ago, but, because of my severe social anxiety and a visceral reaction to any activity which involves selling myself, I have avoided it. However, I think it’s probably best to bite the bullet and work through creating the profile and to at least send connection requests to people who I am currently working with. However, first I’d like to know if it looks bad to have a profile with only a few connections. Is that worse than having no profile at all?
I don’t think it is a problem to have few connections. Everyone starts with zero connections, and many (if not most?) people only use it occasionally.
I think it will become more valuable in the future for employment, as it does provide a fairly easy ‘living resume’ that potential employers can see; so make your work history well written and polished (that is, treat it exactly like a resume).
Having a profile, even a new one, will almost certainly be a net positive over not having one, and it just gets better.
It’s worth noting that while a few people do actually use LI as a social network (with the joining groups and posting statuses/comments/links and so on), and that there may be a benefit of doing so if you’re actively job hunting (gets your name out more), most people seem to just basically treat it as a “here’s my employability credentials, click that button to message me with offers” site. It works too; I get regular offers despite having almost no activity on the site and indicating that I’m currently employed. It doesn’t seem to have Facebook’s “gotta have all the friends!” mentality so much, though.
You’ll probably find that connections will grow quickly. Pretty much any recruiter—either at a career fair or similar, or a headhunter looking for people online—will offer to connect, and it’s generally fine to connect with all your current colleagues and any past ones that have/had a non-negative association with you. I’ve got connections through my current employer, my past employers including internships, friends and faculty from university, people I met through my work (admittedly, as a consultant, I work with a lot of people in my field but outside of my company), people I met at conferences, and various recruiters who’ve tried to rope me in. Almost all of them added me first, not the other way around.
Be aware that it’s really easy to leak info that you would want to keep private. A few years ago, there were a bunch of interesting leaks out of big tech companies when employees posted stuff that they were working on before it became public. Yeah, you shouldn’t put still-confidential stuff on your resume anyhow, but much worse when it’s publicly searchable on the Internet...
This is not a legitimate concern (that is, “the way opposes your fear”). No one’s judging anyone (as far as I know) on the basis of how many connections they have.
If I want to marry refugees that need visas to the US as an act of Altruism, is there a place where I can find and contact such refugees?
Is there anything I should know about that?
The consular clerk clears his throat and goes through the papers with visible boredom.
“So, Mr. Capla, here we are again. Seventeenth time, I think? Yes, seventeenth. How sad, indeed, that your marriages never seem to last happily ever after. Also, your stories have a notorious property of holding less and less water each time.”
“Miss Chang and I are truly in love, I swear by all that is holy.”
“You don’t say. Go ahead. Surprise me.”
I’d be surprised if fake marriages turned out to be the most cost-effective way to help poor people immigrate to the US, even if you want to focus on refugees specifically.
Why? What’s a marriage cost?
Time, legal risk, reputation. The opportunity cost is lower if you were going to marry a random/non-specific person anyway, but I’m assuming you’re asking about a sham marriage that you’re going to end later.
Time: sure.
Reputation: Actually, I think going above and beyond for a stranger in need is pretty strong signalling of my Altruism, especially among the communities I walk through.
Legal risk: This is what I want to hear about. What legal risks?
Maybe, but I don’t see any particular reason to get a divorce except 1) the refugee in question wants to marry someone else or 2) so I can go marry another refugee.
It looks like marrying specifically for US residency purposes is illegal. This report gives the impression that only a tiny fraction of people actually get prosecuted. You’ll have to convincingly lie to a consul and likely undergo some investigation (see e.g. here).
Thanks. Anywhere else where I can get relevant information?
I found the links by googling “green card marriage”.
Is sex significantly more pleasurable than masturbation? Why?
Yes and no. It’s a different experience—like taking a bath and going swimming.
I can really only give info from personal experience and the occasional offhand comment from really close friends / SOs here, but in short: yes. I should note here that I don’t use masturbation aids, aside from online porn; I could probably improve the masturbation experience somewhat. On the other hand, that’s probably also true for the sex experience. In any case, YMMV.
Masturbation is (for me) usually faster, but the orgasms aren’t nearly so intense and don’t last so long (I’m a guy, so a “long” orgasm is like 10 seconds unless I’m really lucky). Masturbation never produces the “post-coital glow” feeling I get after good sex. Masturbation almost always feels like simply a means to an end (orgasm), whereas sex is pleasurable pretty much the whole way through. Part of that is probably that I’m extremely tactile-oriented; touching my partner is way more pleasing than touching myself.
Sex also has the huge advantage of being able to please your partner; giving orgasms isn’t quite as good as receiving them, but it’s still good and most women can have several of them for each one you get. If nothing else, it’s a feeling of achievement and acknowledgement. Of course, this assumes that the sex goes well; if it doesn’t (for whatever reason) that is a pretty bad feeling even if your partner is kind about it. In any case, the first time I had sex after realizing I was in love with my then-girlfriend, and telling her so, made me feel like an idiot for waiting so long because it made everything about sex together better.
Reposted from here:
How is the name “Yvain” pronounced? Also, is there any meaning behind it?
From http://slatestarcodex.com/2013/12/23/we-are-all-msscribe/ :
An early use of the name is https://en.wikipedia.org/wiki/Ywain but I don’t know if Yvain got it via some other source.
I pronounce it ‘i’ as in lip, ‘vain’ as in vain, and I don’t recall ever hearing it pronounced differently. Wikipedia replaces the v sound with a w sound, but I’m going to ignore that.
Incidentally, my only prior exposure to the name was Yvaine in Stardust, and it didn’t strike me as an unusual name, so for a long time I assumed Yvain was female and using her given name. I only just today realised the names are spelt differently.
Thanks! Upvoted.
I currently use doxycycline to treat acne vulgaris. How much harm am I doing by contributing to antibiotic resistance, and should I stop using it? I’ve been using it daily for years, if that helps with the harm estimate.
I don’t understand sign language, but in recent years I’ve become a fan of sign language poetry. Some of my acquaintances seem surprised that poetry is possible in sign language, but I think everything that can communicate can be used beautifully. Then today an idea struck me: can you make poetry in a programming language? I’m sure there is some sense in which an algorithm may feel “beautiful,” but is that poetry? And if that isn’t, what would be?
Poetry is not only about rhymes. See (any) Obfuscated contest winners for programs that are beautiful, in both form and content, but unlike “classical” poetry, require some programming knowledge to appreciate.
There are also whole programming languages designed to be an artistic work, not to be used In Real Life—e.g. Perligata.
People write poetry in Perl. Notably, here is a well-known piece by Larry Wall.
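For what it’s worth, here is a throwaway sketch of my own (not from any contest, and not claiming literary merit) of what a tiny “poem” in Python might look like, i.e. valid, runnable code whose identifiers double as English:

```python
# A minimal code-poem: the program is both a valid Python script
# and a (very loose) English couplet about waiting out the night.
light = True
darkness = not light

while darkness:
    break  # of dawn
```

Whether this counts as poetry or just wordplay is exactly the question above; the constraint that it must also execute is what makes the form interesting.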
Did the Enlightenment manage to break the Christian faith experience after all, so that today’s Christians have more in common with today’s pagan revivalists than they realize? The fact that people in traditionally Christian countries seem to have lost interest in maintaining the forms of the religion suggests that Christianity in its former strongholds has begun to decline.
Of course, Christians themselves apparently come up with the idea that religions have expiration dates—their contrived BC/AD division in history. Has Christianity therefore entered its own late, BC-like era?
What is “the Christian faith experience”?
Specifically, what is “the” doing there?
The idea of a unitary Christendom in Europe prior to the Reformation was not produced by a “faith experience”, but by the violent destruction of alternative Christian faiths such as the Gnostics, Cathars, Waldensians, and other dissident Christian groups; by the perpetual hounding of the Jews; the various wars and Crusades against the Islamic caliphates; and so on. This started long before the Enlightenment or even the Renaissance.
One of the things common to many of the dissident Christian groups of Europe is that they did not see a need for a centralized hierarchy in religion. They either held to equality among all believers, or a relatively flat system of initiation or priesthood. For this they were persecuted, in some cases to utter extermination, by the Roman Church and its kingdoms.
What changed in the Reformation was that the Lutherans, Calvinists, and later Anglicans had sufficient political power to create their own zone of violent destruction of alternatives — including Roman Catholics and Jews. Unlike the Cathars, the Protestants were too strong to be exterminated. The Wars of Religion were bloody, but they could not become a one-sided massacre like the Albigensian Crusade.
Ultimately this led to Westphalia and the replacement of the ideal of a unitary Christendom in Europe with the idea of nation-states.
First paragraph: Seems like wishful thinking to me. Christians still run the world.
Second paragraph: It seems like you are asking if the Messiah is about to return. I guess I’d say No?
With tongue less in cheek, it’s incredibly hard to imagine a religion today pulling off the same kind of growth that Islam/Christianity did during their explosive first couple of centuries. Culture is all wrong for that sort of thing.
Um, Mormonism. Also Islam doesn’t appear to be doing too badly these days.
I don’t understand what you mean. 1 AD is the beginning of a religion, not the end of any other religion. BC, ‘Before Christ’, doesn’t refer to any non-Christian religion expiring, does it?
He likely had something like the Old Covenant/New Covenant distinction in mind. Out with Judaism, in with Christianity.
Historically Christians have made life hard for adherents to other religions because they believe they have the final, authoritative revelation from the right deity, hence the implication that all of the other religions lost legitimacy and expired long ago.