Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation
Jamais Cascio writes in The Atlantic:
Pandemics. Global warming. Food shortages. No more fossil fuels. What are humans to do? The same thing the species has done before: evolve to meet the challenge. But this time we don’t have to rely on natural evolution to make us smart enough to survive. We can do it ourselves, right now, by harnessing technology and pharmacology to boost our intelligence. Is Google actually making us smarter? …
… Modafinil isn’t the only example; on college campuses, the use of ADD drugs (such as Ritalin and Adderall) as study aids has become almost ubiquitous. But these enhancements are primitive. As the science improves, we could see other kinds of cognitive-modification drugs that boost recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but end up being used to make us “better than normal.”
Read the whole article here.
This relates to cognitive enhancement as existential risk mitigation, where Anders Sandberg wrote:
Would it actually reduce existential risks? I do not know. But given correlations between long-term orientation, cooperation and intelligence, it seems likely that it might help not just to discover risks, but also in ameliorating them. It might be that other noncognitive factors like fearfulness or some innate discounting rate are more powerful.
The main criticisms of this idea generated in the Less Wrong comments were:
The problem is not that people are stupid. The problem is that people simply don’t give a damn. If you don’t fix that, I doubt raising IQ will be anywhere near as helpful as you may think. (Psychohistorian)
Yes, this is the key problem that people don’t really want to understand. (Robin Hanson)
Making people more rational and more aware of cognitive biases would help much more (many people)
These criticisms really boil down to the same thing: people love their cherished falsehoods! Of course, I cannot disagree with this statement. But it seems to me that smarter people have a lower tolerance for making utterly ridiculous claims in favour of their cherished falsehood, and will (to some extent) be protected from believing silly things that make them (individually) feel happier, but are highly unsupported by evidence. Case in point: religion. This study1 states that
Evidence is reviewed pointing to a negative relationship between intelligence and religious belief in the United States and Europe. It is shown that intelligence measured as psychometric g is negatively related to religious belief. We find that in a sample of 137 countries the correlation between national IQ and disbelief in God is 0.60.
Many people in the comments made the claim that making people more intelligent will, due to human self-deceiving tendencies, make people more deluded about the nature of the world. The data concerning religion undermines this hypothesis. There is also direct evidence that a whole list of human cognitive biases are more likely to be avoided by more intelligent people, though far from all (perhaps even far from most?) of them. This paper2 states:
In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency.
Anders Sandberg also suggested the following piece of evidence3 in favour of the hypothesis that increased intelligence leads to more rational political decisions:
Political theory has described a positive linkage between education, cognitive ability and democracy. This assumption is confirmed by positive correlations between education, cognitive ability, and positively valued political conditions (N=183−130). Longitudinal studies at the country level (N=94−16) allow the analysis of causal relationships. It is shown that in the second half of the 20th century, education and intelligence had a strong positive impact on democracy, rule of law and political liberty independent from wealth (GDP) and chosen country sample. One possible mediator of these relationships is the attainment of higher stages of moral judgment fostered by cognitive ability, which is necessary for the function of democratic rules in society. The other mediators for citizens as well as for leaders could be the increased competence and willingness to process and seek information necessary for political decisions due to greater cognitive ability. There are also weaker and less stable reverse effects of the rule of law and political freedom on cognitive ability.
Thus the hypothesis that increasing people's intelligence will make them believe fewer falsehoods and will make them vote for more effective government has at least two pieces of empirical evidence on its side.
1. Average intelligence predicts atheism rates across 137 nations, Richard Lynn, John Harvey and Helmuth Nyborg, Intelligence, Volume 37, Issue 1.
2. On the Relative Independence of Thinking Biases and Cognitive Ability, Keith E. Stanovich, Richard F. West, Journal of Personality and Social Psychology, 2008, Vol. 94, No. 4, 672–695
3. Relevance of education and intelligence for the political development of nations: Democracy, rule of law and political liberty, Heiner Rindermann, Intelligence, Volume 36, Issue 4
In many debates about cognitive enhancement, the claim is made that it would be bad because it would produce compounding effects: the rich would use it to get richer, producing a more unequal society. This claim hinges on the assumption that there would be an economic or social threshold to enhancer use, and that it would produce effects that were strongly in favour of just the individual taking the drug.
I think there is good reason to suspect that enhancement has positive externalities—lower costs due to stupidity, individual benefits that produce tax money, perhaps better governance, cooperation and more great ideas. In fact, it might be that these benefits are more powerful than the individual ones. If everybody got 1% smarter, we would not notice much improvement in everyday life, but the economy might grow a few percent and we would get slightly faster technological development and better governance. That might actually turn the problem into a free rider problem: unless you really want to be smarter taking the enhancer might be a cost to you (risk of side-effects, for example). So you might want everybody else to take the enhancers, and then reap the benefit without the cost.
There’s a historical IQ enhancer we can use to look for this effect: food.
I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.
And also, this argument is vulnerable to the reversal test. If you think that higher IQ increases existential risk, then you think that lower IQ decreases it. Presumably you don’t believe that putting lead in the water supply would decrease existential risks?
believing lead in the water supply would decrease existential risks != advocating putting lead in the water supply
See correction
If you decreased everyone's IQ to 100 or lower, I think overall quality of life would decrease, but it would also drastically decrease existential risks.
Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned and rational people.
If you decreased everyone's IQ to 100 or lower, that would probably eliminate all hope of a permanent escape from existential risk. Risk per unit of time might be lower in the near future, but total risk over all time would approach 100%.
Consider a world without nuclear weapons. What would there be to prevent World War I ad infinitum? As a male of conscriptable age, I would consider such a scenario to be so bad as to be not much better than global thermonuclear war.
Why do you think it's the nuclear weapons that keep the current peace, and not the memory of past wars, and more generally/recently cultural moral progress? This is related to your prediction in the resource depletion scenario.
List of wars by death toll is very interesting.
There's little evidence for the theory that the threat of global thermonuclear war creates global peace.
Even during the world wars, the percentage of people who died of violence seems vastly smaller than in typical hunter-gatherer societies.
There were long periods of peace before, most notably 1815-1914, when military technology was essentially equivalent to that of World War I. Before that, the 18th century was relatively bloodless too.
One of the top ten most deadly wars happened just a few years ago. So even accepting the premise that the thermonuclear threat prevents war, we face either wide proliferation, or it won't really do much to stop wars.
One of the countries with massive nuclear weapons stockpiles suffered total collapse. This might happen again in the future, in the near future most likely to Pakistan or North Korea, but in the longer term to any country.
Countries having nuclear weapons engaged in plenty of conventional wars, mostly on smaller scale, and fought each other by proxy.
I had exactly the same thought.
Also, on a more pragmatic and personal level, increasing average human intelligence increases the probability of immortality and other “surprisingly good” outcomes of humans or other intelligences optimizing our world, such as universal beauty, health, happiness and better quality of life. This needn’t be through superintelligence, it could just be through the intelligence/wealth production correlation.
That’s a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.
I don’t see why this being an epistemic probe makes risk per near future time unit more relevant than total risk integrated over time.
The whole thing is kind of academic, because for any realistic policy there’d be specific groups who’d be made smarter than others, and risk effects depend on what those groups are.
You seem to be assuming that the relation between IQ and risk must be monotonic.
I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.
This claim is false: the reversal test does not require the function risk(IQ) to be monotonic. It only requires that the function be locally monotonic around the current IQ value of 100.
Could you elaborate a bit more on why you think this? Are there any historical examples you are thinking of?
To answer your second question: No, there aren’t any historical examples I am thinking of. Do you find many historical examples of existential risks?
Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.
Could you answer my first question, too? Which are the intelligent, well-intentioned, and relatively rational humans you are thinking of? Scientists developing nanotech, biotech, and AI? Policy-makers? Who? How would an example disaster scenario unfold in your view?
Are you saying that the very development of nanotech, biotech, and AI would create an elevated level of existential risk? If so, I would agree. A common counter-argument I’ve heard is that whether we like it or not, someone is going to make progress in at least one of those areas, and that we should try to be the first movers rather than someone less scrupulous.
In terms of safety, using AI as an example:
World with no AI > World where relatively scrupulous people develop an AI > World where unscrupulous people develop an AI
Think about how the world would be if Russia or Germany had developed nukes before the US.
Intelligence did allow the development of nukes. Yet given that we already have them, global intelligence would probably decrease the risk of them being used.
Let’s assume, for the sake of argument, that the mere development of future nanotech, biotech, and AI doesn’t go horribly wrong and create an existential disaster. If so, then the existential risk will lie in how these technologies are used.
I will suggest that there is a certain threshold of intelligence greater than ours where everyone is smart enough not to do globally harmful stunts with nuclear weapons, biotech, nanotech, and AI and/or smart enough to create safeguards where small amounts of intelligent crazy people can’t do so either. The trick will be getting to that level of intelligence without mishap.
I was reading the Wikipedia Cuban Missile Crisis article, and it does seem that intelligence helped avert catastrophe. There are multiple points where things could have gone wrong but didn’t due to people being smart enough not to do something rash. I suggest that even greater intelligence might ensure that situations like this never develop or are resolved.
Here are some interesting parts:
If this guy had been smarter, maybe this mistake would never have been made.
Luckily, Khrushchev and McNamara were smart enough not to escalate. Their intelligence protected against the risk caused by the stupid Soviet commander.
Basically, a stupid dude on the sub wanted to use the missile, but a smart dude stopped him.
Yes, existential risk ultimately came from the intelligent developers of nuclear weapons. Yet once the cat was out of the bag, existential risks came from people being stupid, and those risks were counteracted by people being smart. I would expect that more intelligence would be even more helpful in potential disaster situations like this.
The real risk seems to be from weapons developed by smart people falling into the hands of stupid people. Yet if even the stupidest people were smart enough not to play around with mutually assured destruction, then the world would be a safer place.
What relationship does the kind of ‘smartness’ possessed by the individuals in question have with IQ?
I don’t think there are good reasons for thinking they’re one and the same.
I agree with Annoyance here. My guess is that a higher IQ may help the individuals in the situations Hughristik describes, but this is not the type of evidence we should consider very convincing. In this example, I would guess that differences in the individuals' desire and ability to think through the consequences of their actions are far more important than differences in their IQ. This may be explained by the incentives facing each individual.
This may be true, but “ability to think through the consequences of actions” is probably not independent of general intelligence. People with higher g are better at thinking through everything. This is what the research I linked to (and much that I didn’t link to) shows.
This graph from one of the articles shows that people with higher IQ are less likely to be unemployed, have illegitimate children, live in poverty, or be incarcerated. These life outcomes seem potentially related to considering consequences and planning for the long-term. If intelligence is related to positive individual life outcomes, then it would be unsurprising if it is also related to positive group or world outcomes.
In the case of avoiding use of nuclear weapons, there is probably only a certain threshold of intelligence necessary. Yet from the historical example of the Cuban Missile Crisis, the thinking involved wasn’t always trivial:
Both sides were constantly guessing the reasoning of the other.
In short, we do have reasons to suspect a relationship between intelligence and restraint with existentially risky technologies. People with higher intelligence don’t merely have greater “book smarts,” they have better cognitive performance in general and better life and career outcomes on an individual level, which may also extrapolate to a group/world level. Will more research be necessary to make us confident in this notion? Of course, but our current knowledge of intelligence should establish it as probable.
Furthermore, people with higher intelligence probably have a better ability to guess the moves of other people with existentially risky technologies and navigate Prisoners’ Dilemmas of mutually assured destruction, as we see in the historical example of the Cuban Missile Crisis. We don’t have rigorous scientific evidence for this point yet, though I don’t think it’s a stretch, and hopefully we will never have a large sample size of existential crises.
I'm not sure we have serious disagreements on this. Research on intelligence enhancement sounds like a good idea, for many reasons. I'm just choosing to emphasize that there are probably other, much more effective approaches to reducing existential risks, and it's by no means impossible that intelligence enhancement could increase existential risks.
What about the inherent incentive that motivates people even in the absence of strong external factors?
I’m not sure I understand you. Are you referring to the distinction between intrinsic and extrinsic motivation?
More like a distinction between different types of intrinsic factors.
I still have no idea what you’re talking about and how it relates to my comment.
When I said “smartness,” I was thinking of general intelligence, the g-factor. As it happens, g does have a high correlation with IQ (0.8 as I recall, though I can’t find the source right now). g is a highly general factor related to better performance in many areas including career and general life tasks, not just in academic settings (see p. 342 for a summary of research), so we should hypothesize that nuclear missile restraint is related to g also.
Someone who knows the details of this is welcome to correct me if I’m wrong, but as I understand it g is a hypothetical construct derived via factor analysis on the components of IQ tests, so it will necessarily have a high correlation with those tests (provided the results of the components are themselves correlated).
Correct. g is the degree to which performances on various subtypes of IQ tests are statistically correlated—the degree that performance on one predicts performance on another.
It’s a very crude concept, and one that has not been reliably identified as being detectable without use of IQ tests, although several neurophysiologic properties have been suggested as indicating g.
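The derivation described above can be sketched in a few lines. This is a toy simulation, not real psychometric data: the number of test-takers, the four subtests, and the factor loadings are all made up for illustration. It plants a latent general factor, builds correlated subtest scores from it, and then extracts that factor back out via the first principal component of the correlation matrix, which is the simplest stand-in for a factor analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1000 test-takers: one latent general factor plus subtest-specific
# noise, so the four subtest scores are mutually correlated.
n = 1000
g = rng.normal(size=n)
loadings = np.array([0.8, 0.7, 0.6, 0.5])   # hypothetical loadings, chosen for illustration
noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise       # four correlated subtest scores, unit variance

# Factor analysis in miniature: take the first principal component of the
# subtest correlation matrix as the estimate of the general factor.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigh returns eigenvalues in ascending order
first_pc = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
g_estimate = scores @ first_pc

# Because the factor is extracted from the tests themselves, it necessarily
# correlates highly with them, and here also with the latent factor we planted.
print(abs(np.corrcoef(g, g_estimate)[0, 1]))
```

This makes the point in the comment concrete: the extracted factor is guaranteed to correlate with the subtests it was derived from, so the high g/IQ correlation is partly a construction artifact rather than an independent empirical finding.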
That's a kind of giant cheesecake fallacy. Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn't help you in deciding which of them wins.
And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.
I’m talking about a certain class of humans and not suggesting that they are actually motivated to bring about bad effects. Rather all it takes is for there to be problems where it is significantly easier to mess things up than to get it right.
I agree, this doesn’t fall clearly under the original concept of giant cheesecake fallacy, but it points to a good non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.
Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like a soldier-argument.
It’s not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:
Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating factor against existential risk that intelligence also plays.
That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks—and so on.
This is true. Yet capability to attack isn’t the same thing as actually attacking.
Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we have just been lucky so far.
All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. In cases of MAD, attack isn’t exactly “easy” when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack in the case of MAD, and nobody so far has that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing Japan was not under MAD).
I propose a study:
The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.
But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won’t be bound by MAD.
And notice that it didn’t provoke a nuclear war, and the human race still exists. Nuclear weapons weren’t an existential threat until multiple parties obtained them. If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
I don't understand the logic of this sentence. If I create an Earth-destroying bomb in my basement, MAD doesn't apply, but it's still an existential threat. Similar reasoning works for nanotech, biotech and AI.
There could be cases when an older-generation technology can be used to assure destruction. Say, if the new tech doesn’t prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.
This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.
If we assume that causing risk requires a certain intelligence level and mitigating risks requires a certain (higher) level, changing the distribution of intelligence in a way that enlarges both groups will not, in general, enlarge both by the same factor.
Obviously. A coin is also going to land on exactly one of the sides (but you don’t know which one). Why do you pronounce this fact?
That statement shows a way in which the claim that increasing the number of intelligent people will increase rather than decrease risk might be supported.
How the heck is that a giant cheesecake fallacy?
Both are special cases of the following fallacy. A certain factor increases the strength of some possible positive effect, and also the strength of some possible negative effect, with the consequences of these effects taken in isolation being mutually exclusive. An argument is then given that since this factor increases the positive effect (negative effect), the consequences are going to be positive (negative), and therefore the factor itself is instrumentally desirable (undesirable). The argument doesn’t recognize the other side of the possible consequences, ignoring the possibility that the opposite effect is going to dominate instead.
Maybe it has another existing name; the analogy seems useful.
Giant cheesecake is about the jump from capability to motive, usually in the presence of anthropomorphism or other reasons to assume the preference without thinking.
This sounds more like a generic problem of technophilia (phobia) - mostly just confirmation bias or standard filtering of arguments. It probably does need a name, though, like Appeal to Selected Possibilities or something like that.
Really, really, really doubtful that correlations between national IQ and, well, anything prove anything besides that certain countries are generally better off than others. That correlation is probably just differentiating First World countries from Third World countries in general—the First World has better health and education, and also better government. Although I’m agnostic on the existence of racial IQ differences, those aren’t what’s going on here, considering the wide variation in success of countries with similar races.
Same with IQ versus religion within and between countries: it’s probably just an artifact of religion vs. wealth correlations. I scanned those articles and I didn’t see anything saying they’d adjusted for it; if there is, then I’ll start getting excited.
The national/regional IQ literature is messy, because there are so many possible (and even likely) feedback loops between wealth, schooling, nutrition, IQ and GDP. Not to mention the rather emotional views of many people on the topic, as well as the lousy quality of some popular datasets. Lots of clever statistical methods have been used, and IQ seems to retain a fair chunk of explanatory weight even after other factors have been taken into account. Some papers have even looked at staggered data to see if IQ works as a predictor of future good effects, which it apparently does.
Whether it would be best to improve IQ, health or wealth directly depends not just on which has the biggest effect, but also on how easy it is and how the feedbacks work.
If religion is negatively correlated with wealth, then presumably one would attach some likelihood to increasing wealth leading to decreased religious belief. We all take cognitive enhancers, causing us to get richer; then we all stop believing in silly things, like God. This still results in increased IQ leading to truer beliefs.
This is a good old causation/correlation debate; but it seems to me that without further evidence we should take the IQ/religiosity study as weak evidence in favour of the hypothesis that IQ causes non-religiosity, possibly mediated by wealth:
high-IQ -----> non-religiosity
high-IQ -----> high-Wealth ------> non-religiosity
Or intelligent people are just better at getting wealthy.
This is almost certainly true. Therefore, we have
high-IQ -----> high-Wealth ------> non-religiosity
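A small simulation (with made-up coefficients, purely for illustration) shows why the raw correlation cannot distinguish the two diagrams above: a fully mediated chain with no direct IQ/non-religiosity arrow still produces a sizable raw correlation, and the direct path only shows up, or vanishes, in the partial correlation controlling for wealth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Fully mediated chain (hypothetical coefficients):
# IQ -> wealth -> non-religiosity, with NO direct IQ -> non-religiosity arrow.
iq = rng.normal(size=n)
wealth = 0.7 * iq + np.sqrt(1 - 0.7**2) * rng.normal(size=n)
secular = 0.7 * wealth + np.sqrt(1 - 0.7**2) * rng.normal(size=n)

r_iq_sec = np.corrcoef(iq, secular)[0, 1]    # sizable (~0.7 * 0.7) despite no direct arrow
r_iw = np.corrcoef(iq, wealth)[0, 1]
r_ws = np.corrcoef(wealth, secular)[0, 1]

# Partial correlation of IQ and non-religiosity, controlling for wealth:
partial = (r_iq_sec - r_iw * r_ws) / np.sqrt((1 - r_iw**2) * (1 - r_ws**2))
print(round(r_iq_sec, 2), round(partial, 2))  # raw correlation large, partial near zero
```

So the cross-country correlation alone is consistent with either diagram; only something like a partial correlation (or the lagged designs mentioned below in the thread) can start to separate the direct path from the wealth-mediated one.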
The paper you are referring to (reference 3, “Estimating state IQ: Measurement challenges and preliminary correlates”) is looking at variation over US states, e.g. Alaska, Alabama, …, not countries. You should re-write your comment taking this into account.
The study showing a correlation between “IQ” and quality of government (reference 3) estimated IQ based on the performance of public school 4th and 8th graders on standardized tests in math and reading. With that measure, the opposite causal direction seems far more likely: high quality state government leads to better public schools and thus higher test scores (which the author uses as a proxy for IQ).
This is why papers like H. Rindermann, Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty, Intelligence, v36 n4 p306-322 Jul-Aug 2008 are relevant. This one looks at lagged data, trying to infer how much effect schooling, GDP and IQ at time t1 affects schooling, GDP and IQ at time t2.
The bane of this type of study is of course the raw scores: how much cognitive ability is actually measured by school scores, surveys, IQ tests or whatever means are used, and whether averages are telling us something important. One could imagine a model where extreme outliers were the real force of progress (I doubt this one, given that IQ does seem to correlate with a lot of desirable things and likely has network effects, but the data is likely not strong enough to rule out an outlier theory).
Thanks Anders. It occurs to me at this point that having a personal Anders to back you up with relevant references when in a tight spot is a significant cognitive enhancement.
This certainly indicates that the opposite causal direction is more likely, given just that evidence.
I suspect that both directions are active; but I would need further evidence to back this up.
See correction to article
Well, what I meant to say was that, we can’t take it for granted that making people smarter won’t make them more biased, in the absence of data. It might not seem likely to happen, but we can’t assign it a probability of “too small to matter” just yet.
(This post does, indeed, contain relevant data that suggests that smarter people believe fewer absurdities...)
One bias that I think is common among smart, academically minded people like us is that the value of intelligence is overestimated. I certainly think we have some pretty good objective reasons to believe intelligence is good, but we also add biases because we are a self-selected group with a high “need for cognition” trait, in a social environment that rewards cleverness of a particular kind. In the population at large the desire for more IQ is noticeably lower (and I get far more spam about Viagra than Modafinil!).
If I were on the Hypothetical Enhancement Grants Council, I think I would actually support enhancement of communication and cooperative ability slightly more than pure cognition. More cognitive bang for the buck if you can network a lot of minds.
Though I lean toward agreeing with the conclusion that increased IQ would mitigate existential risk, I’ve been somewhat skeptical of the assertions you’ve previously made to that effect. This post provides some pretty reasonable support for your position.
The statement “Can I find some empirical data showing a correlation between IQ and quality of government” does make me curious about your search strategy, though. Did you specifically look for contrary evidence? Are there any other correlations with IQ (besides the old “more scientists to kill us” argument) that might directly or indirectly contribute to risk, rather than reduce it?
Kudos and karma to anyone who can dig up evidence unambiguously contradicting Roko’s hypothesis.
My search strategy was to put “IQ”, “religion”, etc. into Google Scholar and Google. I found no papers suggesting that IQ correlates with increased religiosity. I found the reference to good governance by chance; it was a pleasant surprise.
I did not actively look for contradictory evidence.
I hate to discourage you when you’re otherwise doing quite well, but the above is a major, major error.
Due to the human tendency towards confirmation bias, it’s vastly important that you try to get a sense of the totality of the evidence, with a heavy emphasis on the evidence that contradicts your beliefs. If you have to prioritize, look for the contradicting stuff first.
I suppose if I thought anyone would do anything with this idea—like if someone said “OK, great idea, we’re going to appoint you as an advisor to the new enhancement panel”, I’d start getting very cautious and go make damn sure I wasn’t wrong.
But as the situation is … I am not particularly incentivized to do this; and others at LW will probably be better at finding evidence against this than I am.
You should be doing that anyway.
Interesting. Does it bother you that you are not strongly motivated to avoid error?
There is a legitimate question of what errors are worth the time to avoid. Roko made a perfectly sensible statement—that it’s not his top priority right now to develop immense certitude about this proposition, but it would become a higher priority if the answer became more important. It is entirely possible to spend all of one’s time attempting to avoid error (less time necessary to eat etc. to remain alive and eradicate more error in the long run); I notice that you choose to spend a fair amount of your time making smart remarks to others here instead of doing that. Does it bother you that you are at certain times motivated to do things other than avoid some possible instances of error?
Positive errors can be avoided by the simple expedient of not committing them. That usually carries very little cost.
I agree completely, but this doesn’t seem to be Roko’s situation: he’s simply not performing the positive action of seeking out certain evidence.
But that action is a necessary part of producing a conclusion.
Holding a belief, without first going through the stages of searching for relevant data, is a positive error—one that can be avoided by the simple expedient of not reaching a conclusion before an evaluation process is complete. That costs nothing.
Asserting a conclusion is costly, in more than one way.
Humans hold beliefs about all sorts of things based on little or no thought at all. It can’t really be avoided. It might be an open question whether one should do something about unjustified beliefs one notices one holds. And I don’t think there’s anything inherently wrong with asserting an unjustified belief.
Of course, I’m even using ‘unjustified’ above tentatively—it would be better to say “insufficiently justified for the context” in which case the problem goes away—certainly seeing what looks like a flower is sufficient justification for the belief that there is a flower, if nothing turns on it.
Not sure which sort of case Roko’s is, though.
At each point, you may reach a conclusion with some uncertainty. You expect the conclusion (certainty) to change as you learn more. It would be an error to immediately jump to inadequate levels of certainty, but not to pronounce an uncertain conclusion.
There’s also the possibility of causality in the other direction: that good governance can raise the IQ of a population (through any number of mechanisms, such as better nutrition, better health care, and better education).
Again, finding correlation between IQ and quality of government constitutes weak evidence for the claim that increased IQ causes better government. Note that the authors of the paper made this claim too.
I am slow and lazy today, so please forgive if I am asking for the obvious:
Do the referenced studies control for the process of acquiring education/intelligence, and test for causality?
It seems that plausible competing hypotheses for the correlation between intelligence and, for example, religious belief are:
the process of acquiring intelligence leads to removal of biases, rather than actual possession of intelligence leading to removal of biases. If we change to a different process for acquiring intelligence, we may lose side effects.
the process of disposing of religious beliefs leads to a more measurable or noticeable level of intelligence.
the process of becoming educated in current education systems (and as a result better exposing existing intelligence aptitude) works at eradicating certain sets of beliefs and biases in students
It seems to me that differentiating between data that support these hypotheses is incredibly hard, and I wonder if the referenced researchers went to the lengths required.
Doh! I think I missed the obvious.
This problem is related to the problem of producing FAI, according to the terms and assumptions that Eliezer has been using.
I’m willing to bet that making a human, with a broken value system, more intelligent (according to some measure of intelligence based on some kind of increased computational ability of the brain), suffers from much the same kinds of problems that throwing more computing power at an improperly designed AI does.
This comment seems to miss the idea:
If in fact the future is what the rest of the article envisions, a world of accurate measures and prudent predictions, then the possibilities for collapse will become less and less.
Arguing that such largess will of course lead to a linear increase in the probability of damage resulting in collapse ignores much, if not most, of the science behind cognitive development and AI: risk mitigation and error elimination.
A relevant Nature editorial: http://www.scribd.com/doc/13134612/Naturrecom456702a
For every Voltaire, there are a hundred Newtons, Increase Mathers, and Descartes. And countless Michael Behes.
And that’s just religion. There are more sacred cows than just the traditional religions, more golden idols than could be worshiped by a hundred thousand faiths. Human cognition is a sepulchre, white-washed walls concealing corruption within.
Nice Heart of Darkness reference.
Hm, where’s the Conrad ref? I see a God Emperor of Dune ref (Dune seems pretty popular here, I’ve noticed), but not that.
It’s the whited sepulchre thing; it’s one of the central themes of Heart of Darkness. (Google tells me the original quote is from Matthew 23:27).
Thanks.
The important point is that when we look at the topics on which we can know with high confidence what the rational and correct positions are, there are often lots and lots and lots of highly intelligent people who take the wrong positions.
There was a point in history where atheism and antitheism was highly correlated with intelligence—as in Voltaire’s day—but intelligence was not at all correlated with atheism or antitheism.
I suspect that’s still true. Most ‘scientists’ are at least atheists, but if you look across all people with above-average intelligence most of them are theists still.
Intelligence gives people the ability to build taller, stronger, and more effective walls. It doesn’t seem to help to induce people not to build them in the first place, or to tear down existing ones.
You keep using this word “correlated”. I do not think it means what you think it means.
Namely, if A is positively correlated with B, then B is positively correlated with A. B does not have to happen the majority of times A happens for this to be the case.
I said highly correlated. A corr B means B corr A, but the strength of one correlation doesn’t have anything to do with the strength of the other.
No, you said, as I quoted, that intelligence was not at all correlated with atheism, despite atheism being highly correlated with intelligence. This is uncontroversially and trivially impossible; if p(A|B)≠p(A) where p(A) and p(B) are positive, then p(B|A)≠p(B).
The coefficient of correlation between A and B is the same as the coefficient of correlation between B and A, so this is false. I believe you mean, rather, that having a positive test for a rare disease can still leave you less than 50% likely to have the disease, while having the disease makes you very likely to test positive for it. However, the correlation is still strong in both directions: your chance of having the disease has jumped from “ridiculously unlikely” to just “unlikely” given that positive test.
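The rare-disease point can be made concrete with a quick Bayes computation. This is an illustrative sketch only; the prior, sensitivity, and false-positive rate below are invented numbers, not figures from the discussion:

```python
# Illustrative Bayes computation for the rare-disease example above.
# The numbers are hypothetical, chosen only to show how a positive test
# can raise the probability of disease substantially while still
# leaving it below 50%.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.001      # 1 in 1000 people have the disease
sensitivity = 0.99 # P(positive test | disease)
fpr = 0.05         # P(positive test | no disease)

p = posterior(prior, sensitivity, fpr)
print(p)  # ~0.019: still unlikely, but roughly 19x the prior
```

The posterior jumps from “ridiculously unlikely” (0.1%) to merely “unlikely” (about 2%), which is exactly the asymmetry of conditional probabilities being described, even though the correlation between test result and disease status is the same in both directions.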
No, it’s not false. The vast majority of intelligent people (educated, knowledgeable people) were once theists of one sort or another. The fact that significantly more of them were atheistic/antitheistic than the general population does not change the fact that choosing one at random was still grossly unlikely to produce an AT/AnT.
If you continue to apply a mathematical model that is not being referenced in this context by my use of language, I’m going to become annoyed with you.
So, in other words, you mean precisely what Cyan and I had assumed you meant, but you refuse to acknowledge that the word “correlation” has an unambiguous and universal meaning that differs greatly from your usage of it; if you persist in this, you will misinterpret correlation to mean implication where it does not.
For example, smoking is correlated with lung cancer, but a randomly chosen smoker probably does not have lung cancer.
I don’t know what else to say on this topic, other than that this is not a case of you being contrarian: you are simply wrong, and you should do yourself the favor of admitting it.
ETA: I’m going to leave this thread now, as the delicious irony of catching Annoyance in a tangential error is not a worthy feeling for a rationalist to pursue.
It’s not universal. General language use gives the word a meaning that isn’t the same as the statistical one. That domain-specific definition does not apply outside statistics.
You are simply wrong.
If you mean statistical correlation, then corr(x,y) = corr(y,x). I think you mean something more like implication, e.g., your claim is that at one time in the past, atheist implied intelligent but intelligent did not imply atheist.
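That symmetry is easy to verify numerically. A minimal sketch in plain Python, with data invented purely for illustration:

```python
# Check that the Pearson correlation coefficient is symmetric:
# pearson(x, y) == pearson(y, x). The data below are made up.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

iq = [95, 100, 105, 110, 120, 130]  # hypothetical IQ scores
atheist = [0, 0, 0, 1, 0, 1]        # 1 = atheist, 0 = theist

print(pearson(iq, atheist) == pearson(atheist, iq))  # prints True
```

The coefficient is identical in both directions, which is the statistical sense of “correlated”; the asymmetric claim being defended in this thread is really a claim about conditional probabilities (implication of a likelihood), not about correlation.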
If the correlation is sufficiently small, it can be lower than the error rate in detecting it.
And though the two concepts are distinct, in this context they’re the same. Implication and statistical correlation can be the same when what’s implied is a likelihood instead of a certainty.
I can’t tell if I disagree with you in a substantive way or just in your word usage (i.e., semantics). Can you please translate this assertion into math?