Open thread, Apr. 24 - Apr. 30, 2017
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “
I had a lot of fun helping to co-organize the NYC SSC meetup. I was wondering if people had advice on what questions to use as icebreakers when welcoming new people into the rationalist community. Here are two questions I have tried at the SSC meetup:
-“How long have you been reading Scott?”: Some people found this question obvious and uninspired, but it worked pretty well.
-“Has rationality had direct practical benefits in your daily life?”: I think the latter question would have worked better at a LessWrong meetup than at an SSC meetup. A common feeling was that people valued “rationality” in discussion and weren’t especially interested in using rationality for self-improvement.
Does anyone have suggestions for better questions?
At the solstice last year someone asked me “What’s your origin story?” which I thought was a pretty cool way to phrase what was in the end a sort of generic question.
It’s NSFW :-P
What made you decide to come to the meetup?
What do you expect from the meetup?
What do you think you do that nobody else who attends the meetup does?
A bit harder: Where did you change your beliefs in the last year?
While that’s an interesting question, it doesn’t seem like a good icebreaker question. A person might feel bad about having no good answer to it.
Reading an old comment under “Unequally yoked” that says:
I think this description is wonderful, and while a sin for Catholics, I invite all aspiring rationalists to try living with a question mark at the center of their lives!
I really would prefer the ability to see IR and UV, but in the meantime this is interesting. Sample:
I’m also interested in what expanded color vision would be like, but it looks like that paper was describing one of the obvious approaches I’d already thought of. I didn’t read the whole paper, but from the abstract, it looks analogous to one of the widely-used treatments for color blindness, namely a single red-tinted contact lens.
Many other vertebrates are tetrachromats. Mammals are unusual for being more color-blind. The loss of two types of cone cells is thought to be due to the nocturnal phase of our evolution when dinosaurs ruled the earth. Primates have since evolved a third kind again. Birds never went through this phase and are still tetrachromats.
A gene-therapy experiment gave color vision to color-blind monkeys. This approach could theoretically produce a fourth type of cone cell as well, but could they then distinguish more colors?
Some women may be natural tetrachromats, due to a mutation in a photopigment gene in one X chromosome, but not the other. It’s not clear if the tetrachromat ability of certain women is due to ancestral neural pathways from when we were tetrachromats, but given the random way gene therapy works in the monkeys’ cells, it seems likely that neuroplasticity is enough. Given consistent “pixels” that react preferentially to certain colors, the brain learns to perceive them as colors. Thus, I believe it’s likely that the human brain could learn to perceive a color gamut built from even five or more primary colors, given proper inputs.
The fact that sensory substitution works suggests a non-invasive approach. If you could track and target the eye well enough for a display to consistently change the color sensitivity of a scattered subset of retinal cells, it’s likely you could use it to train your brain to not only distinguish new colors, but to perceive new color qualia.
The color processing system in the human brain is not that plastic. The higher levels, probably yes, but the lower levels: no. Sure, you can perceive and get benefits from these filters, but it’s not exactly the same as having early processing of luminance and chrominance built into your hardware.
http://www.allpsych.uni-giessen.de/rauisch/readings/Gegenfurtner.NatRevNeurosc.2003.pdf
Do natural tetrachromats have an expanded gamut? They are able to distinguish between colours which normal people see as identical, but are they capable of seeing colours which normals just cannot?
From the physics point of view colours are particular mixes of light with different wavelengths (or photons with different energy). “New” cones could perceive wavelengths that were not seen before—or they could, basically, turn out to be a different filter and so allow new combinations of perceptions, but no gamut extension.
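If it helps to make the “filter” framing concrete, here is a minimal numpy sketch. The Gaussian sensitivity curves and peak wavelengths below are made-up placeholders, not real cone data; the point is just that each cone type collapses a whole spectrum to one number, so a fourth filter inside the same wavelength range adds a dimension of discrimination without extending the range itself:

```python
import numpy as np

# Toy model: a cone response is the dot product of a light spectrum
# with a sensitivity curve. Peaks and widths are invented placeholders.
wavelengths = np.linspace(400, 700, 301)  # visible range, nm

def cone(peak, width=40):
    """Invented Gaussian sensitivity curve centered on `peak` nm."""
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

trichromat = np.stack([cone(440), cone(540), cone(570)])               # 3 filters
tetrachromat = np.stack([cone(440), cone(500), cone(540), cone(570)])  # add a 4th

spectrum = cone(520, width=60)  # some arbitrary light spectrum

print(trichromat @ spectrum)    # every spectrum collapses to 3 numbers...
print(tetrachromat @ spectrum)  # ...or 4: same wavelength range, extra axis

# Two physically different spectra with the same 3-number response
# (metamers) look identical to the trichromat; a fourth filter can
# separate some of them without seeing any new wavelengths.
```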
Over the Hump, and Starting a Return to Normality
There are some downsides to being a data pack-rat, as well as the obvious up-sides.
I’m in the process of moving to a new house, and the last month has pretty much been dedicated to that project—everything from a new set of floorboards being laid down to finding the best stores near the new place to buy my favourite beverage (grapefruit Perrier). The process is still ongoing, and I’m still going to be paying rent at the old place for some months to come; for example, even after getting rid of nearly all my mass-market paperback novels, there are still a /lot/ of books in the old family library that are still going to have to be shlepped over to the new one, and not a single member of my family has great strength or endurance.
But most of the hard work and planning is done, and life is settling into a new normal: today, I hope and plan to apply for a new library card, do some banking, grab some income tax forms, and just maybe visit the nearby branch of a computer store to upgrade my laptop’s RAM. My sleep schedule is still ridiculous, even if I lose 50 pounds I’m still going to be overweight, asthma sucks… but a lot of the stresses from the old home are just plain gone. I am, as I see it, in about as good a mental state as I’m likely to be in the foreseeable future.
Which means that, barring unforeseeable crises, it’s time for me to start writing again. My current plan: when I hit my new local public library today, I’m going to sit down for a while and start going over my partial draft of ‘Extracted’, both to refamiliarize myself with it and to start nudging any details I find that seem to need editing. And, once I’ve gone over what I’ve already written, to start finishing what I didn’t get around to typing out the last time I worked on the piece.
The main bit of uncertainty around this plan is that I have insufficient data to predict whether, how soon, and how severely I will go through my next bout of more-severe-than-everyday anhedonic depression. I’m hopeful that the release of stress from the old home will make such a bout less likely; but I’m also aware of the statistics that show that the act of moving to a new home adds its own form of stress. Barring low-probability black-swan events, my range of expected mid-term futures runs from going back to my previous levels of depression, all the way up to completing a novel and beginning the brand-new venture of learning about e-publishing.
Does anybody here know about http://www.mindhabits.com ? Seems they are building
Anybody tried this or know whether this is legit?
What exactly do you mean with that question?
This looks interesting. Sample:
and
Never ever, once, sometimes...
One problem I have with that post on generalizing from one example is that it somehow presupposes that the conclusions I draw from observing an isolated occurrence are somehow ‘idle’. It’s not for nothing that I think a man kicking a soda machine ‘aggressive’. I might not even think it, unless I am asked; but I will certainly be wary of leaving my kid in his presence. I know what my kid’s capable of—soda machines have nothing on him, and I don’t want there to be any reason whatsoever to suppose he might be kicked. So yes, my labeling the angry man ‘aggressive’ is just a way to make a mental note in fine print.
...and so these are the kinds of statements that I expect to see on LW, but not in RL except as a joke.
Did you intend your headline to be a link? I don’t know what you’re responding to.
No… I’ll re-format it.
I was just thinking about how, in ordinary life, having something happen once supports its being rare more than having it never happen does.
Claim: EAs should spend a lot of energy and time trying to end the American culture war.
America, for all its terrible problems, is the world’s leading producer of new technology. Most of the benefits of the new technology actually accrue to people who are far removed from America in both time and space. Most computer technology was invented in America, and that technology has already done worlds of good for people in places like China, India, and Africa; and it’s going to continue to help people all over the world in the centuries and millennia to come. Likewise for medical technology. If an American company discovers a cure for cancer, that will benefit people all over the globe… and it will also benefit the citizens of Muskington, the capital of the Mars colony, in the year 4514.
It should be obvious to any student of history that most societies, in most historical eras, are not very innovative. Europe in the 1000s was not very innovative. China in the 1300s was not very innovative, India in the 1500s was not very innovative, etc etc. France was innovative in the 1700s and 1800s but not so much today. So the fact that the US is innovative today is pretty special: the ability to innovate is a relatively rare property of human societies.
So the US is innovative, and that innovation is enormously beneficial to humanity, but it’s naive to expect that the current phase of American innovation will last forever. And in fact there are a lot of signs that it is about to die out. Certainly if there were some large scale social turmoil in the US, like revolution, civil war, or government collapse, it would pose a serious threat to America’s ability to innovate.
That means there is an enormous ethical rationale for trying to help American society continue to prosper. There’s a first-order rationale: Americans are humans, and helping humans prosper is good. But more important is the second-order rationale: Americans are producing technology that will benefit all humanity for all time.
Currently the most serious threat to the stability of American society is the culture war: the intense partisan political hatred that characterizes our political discourse. EAs could have a big impact by trying to reduce partisanship and tribalism in America, thereby helping to lengthen and preserve the era of American innovation.
I think it’s an interesting point about innovation actually being very rare, and I agree. It takes a special combination of things for it to happen, and that combination doesn’t come around much. Britain was extremely innovative a few hundred years ago. In fact, they started the industrial revolution, literally revolutionising humanity. But today they do not strike me as particularly innovative even with that history behind them.
I don’t think America’s ability to innovate is coming to an end all that soon. But even if America continues to prosper, will that mean it continues to innovate? It takes more than prosperity for innovation to happen. It takes a combination of factors that nobody really understands. It takes a particular culture, a particular legal system, and much more.
I don’t know about that. People have been discussing how an innovation hub (like Silicon Valley) appears and how one might create one—that is a difficult problem, partially because starting a virtuous circle is hard.
But general innovation in a society? Lemme throw in some factors off the top of my mind:
Low barriers to entry (to experimentation, to starting up businesses, etc.). That includes a permissive legal environment and a light regulatory hand.
A properly Darwinian environment where you live or die (quickly) by market success and not by whether you managed to bribe the right bureaucrat.
Relatively low stigma attached to failure
Sufficient numbers of high-IQ people who are secure enough to take risks
Enough money floating around to fund high-risk ventures
For basic science, enough money coupled with the willingness to throw it at very high-IQ people and say “Make something interesting with it”
That’s a partial list. It also takes good universities, a culture that produces a willingness to take risks, a sufficient market for good products, and I suspect a litany of other things.
I think once a genuinely innovative society has gotten started, it can be hard to kill that off, but it can be and has been done. The problem is, as you mentioned, that very few societies have ever been particularly innovative.
It’s easy to use established technology to build a very prosperous first world society. For example: Australia, Canada, Sweden. But it’s much harder for a society to genuinely drive humanity forwards and in the history of humanity it has only happened a few times. We forget that for a very long time, very little invention happened in human society anywhere.
Yes, of course.
This implies that there are good reasons for it. You can look at it in the exploit/explore framework and going full explore is rarely a good choice. Notably, betting on innovation produces a large variance of outcomes and you need to be sure you can survive that variance.
yes, to those.
The Mojave Spaceport private space project is coming together, with innovation and high-tech machine tooling driving a small seed of development that could enable a lot of support and design help for the high-profile private space projects.
You write about its importance, yet I suspect EAs mostly avoid it due to doubts about tractability and neglectedness.
France is at place 18 in the Global Innovation Index with a score of 54.04, while the US is at place 4 with a score of 61.40.
Given that you live in Berkeley, US innovation is more visible to you than French innovation. You don’t see the French trains that are much better than anything the US has at present.
The US is a bit more innovative than France, but saying that France isn’t innovating at all today while the US is produces a flawed view.
True.
Not true. There’s a rationale to help America continue to be inventive, but that’s not the same thing at all as “continue to prosper”, since the US looks at the moment like an empire in decline—one that will continue to prosper for a while, but will be too ossified and sclerotic to continue innovating.
Note that it’s received wisdom in Silicon Valley (and elsewhere) that you need to innovate in the world of bits because the world of atoms is too locked-down. There are some exceptions (see e.g. Musk), but overall the difference between innovations in bits and innovations in atoms is huge and stark.
Not true at all. Even in Berkeley what you have is young males playing political-violence LARP games (that’s how you get laid, amirite?) and that’s about it.
Read less media—it optimizes for outrage.
We can agree to disagree, but my view is that the US has dozens or hundreds of problems we can’t solve—education, criminal justice, the deficit, the military-industrial complex—because the government is paralyzed because of partisan hatred.
True
Not true. The government is paralyzed (see the grandparent: “ossified and sclerotic”) because people and institutions which find the status quo convenient and profitable are powerful and able to block changes.
And if you want a dominant active government, well, be careful of what you wish for.
Eh, no. Not in the case of the USA. Republicans have a lock on Congress, and they have a (theoretically) Republican president. If partisan hatred were the only obstacle, it should be smooth sailing.
First you will have to fight against the current trend of rationalists avoiding even discussing culture-war topics. SSC is currently edging towards more limitations on what can be discussed, where culture-war topics can be banned and are at least siloed into separate discussion areas. I think we should try to keep LessWrong an area where there are no limitations on the topics that can be discussed—although we might try to hold the level and quality of discussion to a certain standard. Politics is the mind-killer, but that doesn’t mean you can’t avoid being mind-killed when you talk about it.
That’s not really true. The culture war in the US is not about HBD. Most people in the US don’t even know what HBD stands for. The fight over it is a relatively small internet debate.
SSC gets a lot of traffic from discussing culture-war topics, to the point that Scott complains that posts he writes about feminism get more traffic than other posts he considers more important.
HBD is not the whole culture war, and it’s not the most important piece of it overall, but it does make up a big portion of the debate within academia and among intellectuals (and rationalists are an important subset of intellectuals). Scott would not have singled out that topic if it wasn’t especially good at inciting internet fights. The reason it’s such an important topic in the culture war is that its truth or falsity has enormous consequences for how we might politically and economically approach the issue of inequality.
‘HBD’ is not the kind of thing you can usefully describe as “true or false” (which incidentally is why the terms ‘HBD’ and ‘human biodiversity’ are quite unhelpful except as a source of controversy). There clearly is some biodiversity among members of the human species; we are not biologically identical. The question is how important that biodiversity is for practical issues (such as inequality), and the most likely answer seems to be “not very, except inasmuch as it will probably cause some especially salient ethnic/racial groups to be somewhat less represented in very high-skill jobs, which people will inevitably blame on ‘structural racism’ when the underlying cause for this imbalance is in fact rather different.” But this is hardly a significant problem; it might even go entirely unnoticed, were it not for the underlying political contentiousness that’s the natural product of ethnic diversity in a modern mass society!
Au contraire, the most likely answer seems to be “very”. And it is a significant problem.
This seems to actually be the main source of contention, in that there are many claims of the sort such as “genetic differences are too small to really matter”, “the environment a person is raised in is the main, if not the only, causal determiner of life outcomes”, etc.
Except that some ethnic groups are already far less represented in high-skill jobs, and our economy continues to transform into one with a higher need for high-skill workers and a disappearing number of low-skill jobs. This is not just a problem due to differences in intelligence between ethnic groups but also a problem due to the high variation in intelligence in general, and the fact that it is highly heritable. Low-skill, blue-collar jobs just aren’t going to exist forever, and that’s something that has to be dealt with at some point if we don’t want people to suffer because of it.
All human jobs become redundant when AGI takes off, but before that, narrow AIs will take many. But that doesn’t mean there will be no “blue-collar” jobs left, nor does it mean that “white-collar” jobs are safe.
For example, radiology is traditionally white-collar, but image recognition AIs can be trained to be more accurate than any human. In the near future, I expect the demand for radiologists to plummet as these take over. You’ll still need a handful to help train the AIs, but that’s not nearly as many as we’re employing now.
On the other hand, something like a maid service is a traditionally blue-collar job, but I do not expect robots to replace them any time soon. Something we humans might consider a simple job, like “clean your room” is an extremely difficult AI/robotics problem. You have to safely navigate small spaces without breaking things. You have to classify a broad range of objects to decide what can be thrown away. But even this is easy compared to more human-service-oriented jobs, where an AI would basically have to pass the Turing Test before it could replace a human. If you want a blue-collar job safe from AI, look no further than “Task Rabbit handyman”.
See also Moravec’s paradox.
Robot-Proof Jobs
https://features.marketplace.org/robotproof/
“The McKinsey Global Institute analyzed the work activities of more than 800 occupations in the U.S. to determine what percentage of a job could be automated using current technology. It turns out, a small fraction of jobs are either entirely automatable or entirely robot-proof.”
I don’t agree with their conclusions, but at least they put their views down.
Moravec’s paradox doesn’t actually tell us very much. All it tells us is that our intuitions about the relative difficulty of specific problems were incorrect. This is not very surprising—we know extremely little about how intelligence works, and we have a long history of underestimating / overestimating the difficulty of various problems which goes far back into the history of mathematics. Our brains evolved to solve some pretty specific tasks, which made some tasks (like playing Chess) seem very difficult although they could easily be solved by a computer program, because we simply had no need for Chess playing abilities in order to survive. This doesn’t mean that the low-level tasks aren’t difficult in an absolute sense, it just means we shouldn’t extrapolate this observation to other tasks we currently consider very difficult. Note that computer-vision was usually thought to be one of the low-level tasks AI researchers thought was too difficult to be solved very soon, and then deep-learning changed that pretty quickly.
The way that I read your argument (and I may be reading it wrong) is that we shouldn’t expect all blue-collar jobs to be taken by robots before all white-collar jobs. This assertion I think is probably true in the specific way it is stated, but I think you imply that we won’t need to worry about human jobs eventually only being available to a cognitive elite, where people who have lower cognitive ability find themselves unemployed and their jobs being automated out. This claim does not follow from the argument you gave, except in the specific case in which AI tends to replace jobs in an evenly-distributed way across the intelligence spectrum, or in the case in which we find other economically useful jobs for humans as quickly as AI replaces them. Both of those situations I think are unlikely.
What it tells us is that the nervous system is doing a lot of processing subconsciously. The kind of cognition we’re most aware of, the linguistic, step-by-step, system 2, frontal lobe stuff, is what we can program a computer to do by thinking through the steps and constraints. I think we need to be careful about using the word “difficulty” in this context. We figured out the system 2 stuff first not because it was easier, but because we knew more about it. The algorithms structuring the human brain are encoded in the genome, which is way simpler than the connectome it eventually builds. I don’t expect building general intelligence to be particularly difficult. I expect figuring out how it works to be the hard part. The question isn’t “How hard?”, but “How obscure?”.
That’s not quite what I meant. But what a typical human considers difficult isn’t aligned with what an AI programmer considers difficult to teach robots. They are separate axes, though there is probably some correlation. I wasn’t suggesting a perfectly even distribution would magically emerge. We should expect some of both blue- and white-collar jobs to be lost to AI early on, and some of both to hold out for a long time, right up until the singularity.
Oh we should be worried. Mass automation has already been disruptive. Those factory jobs are never coming back. But the disruption might not go the way you expect.
Yes, those with high IQs will be better able to retrain to do other high-IQ jobs. But that can take years! I agree that expecting low-IQ people to retrain for high-IQ jobs is not realistic. (Unless some kind of brain-computer interface is developed soon enough to change the playing field.)
But studies indicate a significant inverse correlation between g and conscientiousness. The kind of people you want for Turing-test-complete service, or intimate, in-home maid/handyman/nanny jobs are exactly the obedient, dutiful, vigilant, lower-IQ, blue-collar, conscientious-type people. Your so-called “cognitive elite” aren’t that. The blue collar workers might actually be more flexible.
(Those who are both lower-IQ, and not conscientious don’t make good employees even now. Robots are not going to make this problem go away.)
I’m honestly not sure which group will get hit hardest. But let’s consider base rates. Are there more blue- or white-collar workers? Narrow AIs will probably have to be trained for each task. Is there more diversity of tasks in blue- or white-collar work?
Your stereotypes are both inaccurate and harmful. All the handymen I know are extremely intelligent. Electrical systems, plumbing systems, etc. are both complex and require reasoning to work with. A lot of fix-it stuff is a mix of puzzles, and figuring out how to do things on the fly.
I myself am a nanny (if you do an SAT-to-IQ conversion, my IQ is 144, which I am only saying because that seems to be of particular importance to you). Nannies tend to be of about average intelligence, and if I were to think of the most common trait, it’s that they were pioneering enough to either immigrate or leave their entire family behind to come to America to work.
Let’s decide what the truth is before we go calling it harmful. First, “dutiful”, “vigilant”, etc. are just synonyms for “conscientious”. That’s by definition, not stereotype. As for the “low-IQ” part, I only claimed that
studies found an inverse correlation between conscientiousness and the general intelligence factor, and
you want conscientious people for in-home and human-service jobs. (regardless of IQ)
It’s only an inverse correlation, and nowhere near a perfect −1. (Maybe −0.25) As I mentioned, there exist some who have both low-IQ and are not conscientious (who don’t make good employees), I thought that also implied the existence of the reverse.
If you want to claim we’re being inaccurate, we need data, not anecdotes. Stereotypes often have some statistical truth.
The chart michaelkeenan linked to is instructive. There is considerable overlap in these curves. Average-IQ (~100) people can get most jobs on that chart, but would find it difficult to get the high-IQ jobs near the bottom, and probably can’t get a medical doctor job at all. An IQ-85 person could realistically get an electrician job, but not an electrical engineering job.
If we believe the chart, then we should also expect a significant number of above-average-IQ people working blue collar jobs. I do not dispute this. You claim to be an example of that. But they can retrain and even get merit scholarships. I pointed out that this process would still be disruptive, since the training process could take years.
But they (and you) are part of the “cognitive elite” that tristanm isn’t worried about becoming perpetually unemployed. It’s the other side, precisely the low-IQ people who can’t retrain for high-IQ jobs that were cause for concern. I pointed out they may have other advantages (conscientiousness) that could mitigate that somewhat, and furthermore, what is easy for humans is not the same as what is easy for robots anyway.
Who, me? Why are you so surprised I’m talking about it when this is the direct topic of the thread? The g factor is real, and significant to life outcomes. This is settled science.
This is google-able—I found this chart. It’s probably imperfect, but from a brief glance at the source I’d trust it more than anecdote or my own experience.
Even in your chart, the top 25% of janitors (the lowest IQ occupation) are smarter than the bottom 25% of college professors (the second highest IQ occupation). IQ ranges within an occupation are MUCH bigger than IQ ranges between occupations.
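For what it’s worth, this kind of overlap claim is easy to sanity-check with two normal distributions. The means and SDs below are placeholders I made up, not the figures from the linked chart; the point is just the mechanics:

```python
from statistics import NormalDist  # stdlib, Python 3.8+

# Placeholder means/SDs, NOT the figures from the linked chart.
janitors = NormalDist(mu=100, sigma=15)
professors = NormalDist(mu=116, sigma=13)

top_quartile_janitor = janitors.inv_cdf(0.75)         # ~110 here
bottom_quartile_professor = professors.inv_cdf(0.25)  # ~107 here

print(f"janitor 75th percentile:   {top_quartile_janitor:.0f}")
print(f"professor 25th percentile: {bottom_quartile_professor:.0f}")
# If the first number exceeds the second, the top 25% of janitors
# out-score the bottom 25% of professors despite the gap in means --
# wide within-occupation spreads swamp between-occupation differences.
```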
That is not true for me. But I am curious—if you think that this type of service/blue-collar jobs are occupied by highly intelligent people, where are the stupids? Half of the population is below the median intelligence, where are they? Where do they work? What kind of jobs do they take?
Again, not according to my observations (though I admit we may have different baselines). I agree that immigrant nannies—like other immigrants—have to demonstrate a certain level of capability and independence to get to where they are. But I don’t think this level is very high.
On the other hand, for some intelligent people becoming a nanny in the US is the easiest way to improve their condition. So there is a lot of variance—some nannies are very bright and some are not. Just like most people, really :-)
Clearly you’ve never worked at a big corporation.
Your username is delicious :-)
But low-IQ people won’t do well in a large corporation. They are not that good at covering their asses and are not very valuable as minions. A large corp will probably have difficulties getting rid of them (for a variety of reasons), but it’s not their natural habitat.
Good point. I imagine many are in prison, homeless, or perpetually unemployed on welfare or supported by family, but no way that accounts for half of the working-age population at present. The rest must be working, and not as rocket surgeons.
The New York Times, which serves intellectuals who want to inform themselves about the world, has used the term “human biodiversity” 5 times according to Google. None of those 5 uses happened in this decade.
If I search for HBD there are more hits, but most are not about human biodiversity. I find “The case is listed by the highway patrol as ‘HBD,’ its shorthand for ‘Had Been Drinking but not drunk.’” Some of Scott’s readers didn’t even know what HBD was, and Googling told them that it’s an abbreviation for “happy birthday”.
Internet fights are about conflicts between subcultures. The fact that something is infuriating people in a tiny bubble doesn’t mean that it’s central for the core social debate.
That may have been true a while ago. Nowadays NYT mostly serves intellectuals who prefer to not peek out of their bubble.
Try the common synonym: “racism”.
That’s not a term Scott banned. If it’s a synonym, then nobody should have any problem with Scott’s ban, because they can simply exchange one term for another without losing the ability to communicate anything.
If I take a look at a story like http://www.thedailybeast.com/articles/2015/12/20/oberlin-students-cafeteria-food-is-racist.html?via=desktop&source=twitter , I also think it’s heavily misleading to say that it’s about the heritability of IQ.
If one denies the heritability of IQ, one is obligated to provide another explanation for the differences in educational attainment, income, crime, etc. between races. Thus, one finds oneself seeking out ever subtler forms of “racism” to explain the difference. It’s slightly more complicated than this in that signaling spirals also get involved.
You got that the wrong way around. A person who sees the social conflict as the oppressor vs. the oppressed doesn’t need to “explain the differences in educational attainment etc.” to motivate himself.
Trying to understand demographic statistics is important for us nerds but it’s not what drives the main social conflict.
Once someone is already a volk-Marxist fanatic that is true. However, to convince others to give them power, not to mention to recruit more fanatics they need a semi-plausible argument for their position.
Whether someone is interested in demographic statistics or not, demographic statistics are real and hence have real effects. The need to deal with those effects drives a lot of the social conflicts.
It doesn’t serve people who are interested in debates going on online, within academia, or within the scientific community. Not very well at least.
The term “HBD” I think has popped up mostly recently to refer to a collection of ideas, mostly surrounding the idea of genetic determinism or related issues. I am not sure if it has ever referred to anything else or if there used to be other terms to describe those issues. From the way I understand most of the conversations about it, it’s usually used in the context of the heritability of intelligence or IQ.
This is an important topic, and definitely not something that will always be constrained to “tiny internet bubbles” like Scott’s blog. It has huge repercussions for how we discuss things like education, income inequality, employment, etc. If the fights over this topic really were constrained to internet subcultures, you wouldn’t see violent riots popping up at various universities throughout America in response to speakers wanting to present their case for it, or the SPLC claiming that people like Charles Murray are white nationalists when that’s not really the case.
The fact that this issue is so core to social debate is precisely why it incites negative emotions and why people tend to immediately move to absolute certainty over it one way or the other.
Online debates are driven by filter bubbles. It’s a mistake to assume that the conflict in your own filter bubble reflects the core underlying social conflict of the US.
Especially if it isn’t newsworthy enough for a New York Times journalist to explain to his readers what the conflict is even about, so that they understand what the phrase is supposed to mean.
Scott didn’t ban any of the collection of ideas but the term itself: “I am banning the terms “human biodiversity” and “hbd” – this doesn’t necessarily mean banning all discussion of those topics, but it should force people to concentrate on particular claims rather than make sweeping culture-war-ish declarations about the philosophy as a whole. ”
You can discuss specific claims in that field on this blog.
There’s a conflict but the conflict isn’t about “human biodiversity” and some of the protestors might not even know what the phrase means.
Peter Singer got his speech squelched for ableism even though we agree that disabled people are by definition biologically different. People on the left don’t deny human biodiversity on that point.
Which specific debates within the scientific community are held under the label of human biodiversity? If I type “human biodiversity” into Google Scholar most of the papers aren’t recent. The first recent paper I find is “Human Biodiversity Conservation: A Consensual Ethical Principle”. It’s about the case for conserving human disability. That’s not the kind of writing that was found at Scott’s comment section.
There have been a lot of important events that New York Times journalists didn’t see fit to explain to their readers. The failure of Soviet collective agriculture is probably the most infamous historical example.
Imagine trying to discuss the history of life on a forum that bans the term “evolution”.
The failure of Soviet agriculture wasn’t very salient to Americans. If a topic were the center of a culture war in the US, they would notice, and in today’s traffic-driven times they would feel it’s a good idea to write an article that ranks decently for the keyword.
The US culture war isn’t secular in nature. Many people on the right care about issues like the War on Christmas even when the kind of people in online discussions like Scott’s log don’t.
In other matters, was there a single time Trump uttered the words human biodiversity? He did have some pollsters who tried to understand what the US Republican public cares about.
Getting people to say natural selection instead of evolution has its benefits, given that plenty of people think the terms are interchangeable and use the term wrongly. Additionally, Scott’s blog isn’t a forum for discussing US culture wars. It isn’t even a forum in the first place, but the blog of a person who wants to be employed in an industry that doesn’t happen to be anti-fragile.
Yes it was. The “success” of Soviet collectivization compared to the apparent failure of capitalism was being used as an argument to justify leftwing/collectivist economic policies.
The NYT isn’t going to publish an article that would offend the worldview of its liberal readers. Any description of HBD that conceptualizes it as an empirical scientific hypothesis that could be tested and potentially confirmed would certainly fit the bill.
So in the space of two comments you’ve gone from arguing:
to justifying Scott’s decision by saying:
This looks like a straightforward example of what Eliezer calls logical rudeness.
Please expand on “Currently the most serious threat to the stability of American society is the culture war”, and provide some reasoning for “stability” being a driver of producing beneficial technology.
I dispute (or perhaps just don’t understand) both premises. I also am not sure if you mean “end the culture war” or “win the culture war for my side”. Is surrendering your recommended course of action?
I live in Berkeley, where there are literally armed gangs fighting each other in the streets.
Stability isn’t intrinsically valuable. The point is that we know our current civilizational formula is a pretty good one for innovation and most others aren’t, so we should stick to the current formula more or less.
My recommendation is a political ceasefire. Even if we could just decrease the volume of partisan hate speech, without solving any actual problems, that seems like it would have a lot of benefits.
More like one armed gang, and a group of people who have finally had enough and decided to stand up for themselves.
A New Form of Social Withdrawal in Japan: A Review of Hikikomori
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4886853/
“hikikomori has become a silent epidemic with tens, perhaps hundreds, of thousands of cases now estimated in Japan. The differential diagnosis includes anxiety and personality disorders, but current nosology in the Diagnostic and Statistic Manual of Mental Disorders may not adequately capture the concept of hikikomori. Treatment strategies are varied and lack a solid evidence basis,”
“- A lifestyle centered at home
- No interest or willingness to attend school or work
- Persistence of symptoms beyond six months
- Schizophrenia, mental retardation or other mental disorders have been excluded
- Among those with no interest or willingness to attend school or work, those who maintain personal relationships (e.g., friendships) have been excluded.”
First sexbot with machine learning set to ship next year
https://www.theguardian.com/technology/2017/apr/27/race-to-build-world-first-sex-robot
“The major breakthrough of McMullen’s prototype is artificial intelligence that allows it to learn what its owner wants and likes. It will be able to fill a niche that no other product in the sex industry currently can: by talking, learning and responding to her owner’s voice, Harmony is designed to be as much a substitute partner as a sex toy.”
“A small-scale 2016 study by the University of Duisburg-Essen found that more than 40% of the 263 heterosexual men surveyed said they could imagine buying a sex robot for themselves now or in the next five years. Men in what they described as fulfilling relationships were no less likely than single or lonely men to express an interest in owning a sex robot.”
Regarding an old comment that made me think.
Do you think that the world today is more fragile? For example, would removing one in every three people cause a much greater collapse than a comparable pandemic caused in the Middle Ages?
I think this is the case, due to the level of specialized knowledge required to operate the world today and the very existence of nuclear power plants.
Another possibility is that a civilization can only contain so much complexity given a certain number of people, and so a one-billion-person civ cannot be more sophisticated than a three-billion-person civ. The next obvious question: is ours optimized for the number of people there are on the planet?
Does anyone have a strong opinion about this?
Depends on the threat. More fragile with respect to, say, disruption of trade networks. Less fragile with respect to e.g. a new pathogen.
You also have to be careful about the yardsticks you’re using. Something like a 25% drop in GDP would be treated as a collapse and the end of the world in the developed countries. But in this scenario how many people will starve to death? I expect the number to be approximately zero. In a preindustrial society, on the other hand, a collapse basically meant that most people died.
How many deaths did all the nuclear power plant accidents, etc. cause, in total?
What do you mean, “optimized”? Optimized for what?
Sure, I’ve not specified. With respect to an extinction event that removes a substantial fraction of the world population.
Ballpark-y, less than a million. On the other hand, if x% of the people who operate power plants now disappeared, there would be many more accidents. The point is: how large is that percentage?
Let me rephrase: do you think that the complexity of today’s society can be sustained by a population that is much lower than what it is today?
No. Human deaths due to nuclear power number less than a hundred. Even extrapolating eventual cancer deaths (dubious), it’s less than ten thousand. Solar panels killed more people than nuclear power plants ever have! People installing them on roofs occasionally fall to their deaths. It also makes firefighters reluctant to chop holes in the roof when the house is on fire, for fear of electrocution. Watt-per-watt, nuclear is about the safest power source we have, even after the all the accidents, because it would take so many other plants to compete with a single nuclear plant.
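To make the watt-per-watt point concrete: the comparison only makes sense per unit of energy produced. A sketch with invented round totals (these are placeholders for illustration, not real statistics):

```python
# Placeholder totals, invented for illustration -- NOT real data.
deaths = {"coal": 100_000, "rooftop solar": 440, "nuclear": 100}
energy_twh = {"coal": 200_000, "rooftop solar": 2_000, "nuclear": 90_000}

for source in deaths:
    rate = deaths[source] / energy_twh[source]
    print(f"{source}: {rate:.4f} deaths per TWh")
# A source with more total deaths can still be far safer per unit of
# energy if it produced vastly more energy over the same period.
```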
Eh, no. That’s only true if you count only direct exposure deaths. According to the UNSCEAR website: “In addition, according to the UNSCEAR 2008 Report, the majority of the 530,000 registered recovery operation workers received doses of between 0.02 Gy and 0.5 Gy between 1986 and 1990. That cohort is still at potential risk of late consequences such as cancer and other diseases and their health will be followed closely.” And that’s only for Chernobyl.
So we don’t really know how many deaths will be directly related to nuclear plants, precisely because their consequences are very long-term.
Eh, yes. “At potential risk” is very different from “human deaths due to”. The obligatory xkcd might be useful for you.
And how is this different from e.g. living in cities? That, too, puts you “at potential risk” and I’m sure there are very long term consequences.
Plus, the usual nirvana fallacy. Nuclear plants have downsides? Sure they do. But let’s do a proper comparison:
Nuclear power plant
Coal power plant
No power plant at all
Are you going to say that the nuclear power plant is the worst choice here?
That is a thing I’ve never asserted.
To restate my initial argument: the very presence of nuclear power plants makes the world more fragile, because eliminating a percentage of the population (say, a third as with the Black Plague, or 90% as with smallpox in South America) runs the risk of eliminating the people who know how to run and maintain the plants, thereby creating multiple nuclear accidents.
I would argue that if you suddenly lose something on the order of half your population, nuclear plant accidents are not going to be the thing you should worry about.
Besides, nuclear plants are over-engineered and have multiple automatic failsafe systems. If most of the humans stop coming, the reactors will shut down by themselves (or the remaining few humans will shut them down).
The only really big nuclear reactor accident (Chernobyl) happened because the operators deliberately disabled a whole lot of safety systems which got in the way of something they wanted to do.
The only? I’d agree that Three Mile Island was a minor case, but Fukushima was definitely severe. There were meltdowns and explosions (chemical, due to the hydrogen the high heat cracked off of the cooling water). It will cost billions over decades to clean it up.
Yes, I don’t expect this to be an issue in the event of a plague. Fukushima’s automated safety systems detected the earthquake and did SCRAM the reactor, but then a freaking tsunami destroyed the backup generators powering the cooling pumps before the fuel had time to cool down. Many Japanese died that day, but that was because of the water, not the uranium.
Nuclear meltdowns are disasters because they are expensive, not because they are deadly. The panic during the ensuing evacuation is probably the most dangerous part.
That was caused by the fourth strongest earthquake in the world in half a century, so it’s not something you’d expect to happen particularly often.
Once every 12 years or so..? :-)
Once every 12 years or so somewhere in the world. Near enough a nuclear reactor to cause trouble, not so often.
Once per decade per planet (i.e. 2e-10/km²/yr) is “particularly often”?
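(The 2e-10 is simple arithmetic: one such quake per decade, spread over the Earth’s surface area of roughly 5.1e8 km²:)

```python
# One event per decade over Earth's ~5.1e8 km^2 surface:
print(1 / (10 * 5.1e8))  # ~2.0e-10 events per km^2 per year
```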
I merely quantified your “not particularly often” :-)
In terms of money, yes, but in terms of lives lost, no.
I’m confused what you mean by “that’s only for Chernobyl”. In any case, from your own reference: “Among the residents of Belarus, the Russian Federation and Ukraine, there had been up to the year 2005 more than 6,000 cases of thyroid cancer reported in children and adolescents who were exposed at the time of the accident, and more cases can be expected during the next decades. Notwithstanding the influence of enhanced screening regimes, many of those cancers were most likely caused by radiation exposures shortly after the accident. Apart from this increase, there is no evidence of a major public health impact attributable to radiation exposure two decades after the accident. There is no scientific evidence of increases in overall cancer incidence or mortality rates or in rates of non-malignant disorders that could be related to radiation exposure.”
We do know how many deaths will be directly related to nuclear plants insofar as their operation up until now, and gilch stated how many that is.
“That’s only for Chernobyl” means that the UNSCEAR report was related only to the Chernobyl accident, but there have been more nuclear accidents for which we haven’t yet had the time to discover the long-term impact.
Anyway, I stand corrected on the total mortality being “less than a million”; given the approximations in the data we have, it’s likely to stay (barring further accidents) in the range of 10k–20k.
That’s not the kind of threat, that’s magnitude of the consequences.
But anyway, hard to tell. No data. Theoretically speaking, you have to trade-off greater interconnectedness (all our eggs are now in one basket because all the baskets merged) against greater technical capability (we will deal better with, say, a supervolcano erupting than people a few centuries ago).
I think you’re off by several orders of magnitude.
Yes, of course. Imagine, say, that all continents except for North America suddenly sunk beneath the waves. After the initial period of adjustment, exactly which complexity will North America be unable to produce because it doesn’t have enough people?
Survivalists sometimes discuss the issue of the minimum viable population (for a high-tech civilization), but I think the numbers are in the millions, not billions. Besides, it depends on the IQ distribution—the right tail is vastly more important for your ability to keep the tech running than the left tail.
With some caveats, it turns out I was :-O
That’s exactly the question I was asking. And “after the initial period of adjustment” does all the work of carrying your argument, so...
Well then, the answer is “none”.
No, it doesn’t. Are we talking about the minimum size of a more-or-less steady-state high-tech (hyphen-love!) civilization? or are we talking about the minimum size of a seed population from which a high-tech civilization can reconstruct itself while, presumably, growing in the process?
Swarms of Autonomous Aerial Vehicles Test New Dogfighting Skills
http://www.news.gatech.edu/2017/04/21/swarms-autonomous-aerial-vehicles-test-new-dogfighting-skills
“Right now, we’re more interested in the research questions about autonomous coordination among the vehicles and the tactical behavior of the groups of vehicles,” Pippin explained. “We are focusing our efforts on how these vehicles cooperate and want to understand what it means for them to operate as a team.”
“Both teams were trying to solve the same problem of flying a large swarm in a meaningful mission, and we came up with solutions that were similar in some ways and different in others,” said Charles Pippin, a senior research scientist at the Georgia Tech Research Institute. “By comparing how well each approach worked in the air, we were able to compare strategies and tactics on platforms capable of the same flight dynamics.”
Today’s version of what’s wrong with the medical system: https://www.salk.edu/news-release/new-method-predicts-will-respond-lithium-therapy/ was recently shared. The article is about a paper that successfully predicted which patients benefit from lithium supplementation.
It contains a very interesting quote:
The naive person on the street might think that advances in predicting who responds to lithium are important because they allow doctors to give those patients lithium. That doesn’t seem to be where the scientists see the value in this case. It’s that the process helps with developing new drugs that can then be patented and given to patients.
Sorry, but this is absurdly unfair.
First, to be clear, the article itself very plainly doesn’t take such a view; right from the headline, it leads with the obvious application of giving lithium to patients it’s likely to help and not to patients it isn’t likely to help.
Second, the article quotes from “the scientists” several times, and most of what they say is about the actual work and its value in predicting response to lithium treatment.
Third, there is obviously absolutely nothing wrong with seeing “might be useful in development of new drugs” as a good and important thing. It’s not as if mental (or any other sort of) health is a solved problem where it’s impossible to imagine how any new drugs could possibly improve on what we have already.
Fourth, the commercial perspective you imply (”… that can then be patented …”) is something you just made up and is neither stated nor implied in anything the scientists are quoted as saying.
For the avoidance of doubt, it’s eminently possible (1) that drug researchers focus too much on the possibility of developing new drugs rather than improving the effectiveness of existing ones and (2) if so, that they do so partly for commercial reasons. But this article offers no evidence of either.
(The people quoted, and the scientists responsible for the actual research under discussion, all seem to be working for non-profits, universities, and government institutes. These are not the people I would be fingering as most likely to be interested only in commercial profits from patented new drugs.)
Most universities pressure their biochemical researchers to do research that allows them to get money from the private sector.
There are political efforts to get universities to do research that benefits the private sector and that can be commercialized.
Sure. I don’t claim it’s impossible for people in such positions to be interested (or excessively interested) in commercialization. Only that these aren’t exactly prime suspects.
From what I have seen acquiring third-party funding is important for most biochemical researchers these days. A researcher who tells an evaluating committee that they aren’t interested in acquiring third-party funding from industry has a hard time getting a professorship.
New week, new problem.
https://protokol2020.wordpress.com/2017/04/24/a-moral-problem/
Seems like there should be some externalities that overrule, yeah? Like, surely the kid’s preferences don’t end up mattering. Kids are dumb.
Maybe the union won’t let you get rid of the Chinese teacher, so as long as you are paying him you might as well have him teach? Maybe the common core demands that the foreign language be Spanish for whatever reason? Etc. Etc.
What I’m trying to say is that it feels super weird that this moral problem isn’t happening in the shadow of a bigger and more boring problem. It’s the whole ‘for one dramatic thing to need intervention a whole bunch of boring stuff has to set it up’ concept.
From a ‘right’ thing to do perspective, I imagine it would depend on which elective you thought would benefit the students the most. Like, you are the grown up. You know what will be valuable, better than they do. Put your weight behind what will be best for the kids.
...choking sounds followed by a fit of coughing...
Pretty simple for a straight utilitarian—option 2 forces 2 people to get their second choice rather than their first, and option 1 forces 10 to accept their second choice.
Complexities come when you add motives other than “best for the most”. If you’re not trying to optimize preference-satisfaction, but rather to discover value, you can treat the list as instant-runoff voting, and only offer the winners, forcing everyone to take those.
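In case the instant-runoff idea is unclear, here is a minimal sketch; the elective names and ballots are hypothetical, and ties are broken arbitrarily:

```python
from collections import Counter

def instant_runoff(ballots):
    """Eliminate the elective with the fewest first-choice votes until
    one option has a majority. Ties are broken arbitrarily here."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical ranked preferences from five students:
ballots = [
    ["Spanish", "Chinese", "Art"],
    ["Spanish", "Art", "Chinese"],
    ["Chinese", "Spanish", "Art"],
    ["Chinese", "Art", "Spanish"],
    ["Art", "Chinese", "Spanish"],
]
print(instant_runoff(ballots))  # "Art" is eliminated first; "Chinese" wins
```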
So lessee, who do we tie up and place on trolley tracks?
Real life trolley, not an imagined one.
Some people choose to throw 10 instead of 2, since nobody will ever know.