Well, ultimately, that was sort of the collective strategy the world used, wasn’t it? (Not quite; a lot of low-level Nazis were pardoned after the war.)
And you can’t ignore the collective action, now can you?
It’s more a relative thing—“not quite as extremely biased towards academia as the average group of this level of intellectual orientation can be expected to be.”
If so, then we’re actually more rational, right? We’re neither biased against academia, as most people are, nor biased toward it, as most academics are.
It’s not quite so dire. You usually can’t run experiments from home, but thanks to Internet publication of results, you can interpret experiments from home. So a lot of theoretical work in almost every field can be done from outside academia.
otherwise we would see an occasional example of someone making a significant discovery outside academia.
Should we all place bets now that it will be Eliezer?
Negative selection may be good, actually, for the vast majority of people who are ultimately going to be mediocre.
It seems like it may hurt the occasional genius… but then again, there are a lot more people who think they are geniuses than really are geniuses.
In treating broken arms? Minimal difference.
In discovering new nanotechnology that will revolutionize the future of medicine? Literally all the difference in the world.
I think a lot of people don’t like using percentiles because they are zero-sum: Exactly 25% of the class is in the top 25%, regardless of whether everyone in the class is brilliant or everyone in the class is an idiot.
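A toy illustration of that zero-sum property (the scores are invented; only the structure matters):

```python
# Percentiles depend only on rank within the group, not on absolute
# quality, so exactly 25% land in the top quartile either way.

def top_quartile_count(scores):
    """Count how many scores sit at or above the 75th-percentile cutoff."""
    cutoff = sorted(scores)[int(len(scores) * 0.75)]
    return sum(s >= cutoff for s in scores)

brilliant_class = [140, 145, 150, 155, 160, 165, 170, 175]
mediocre_class = [80, 85, 90, 95, 100, 105, 110, 115]

print(top_quartile_count(brilliant_class))  # 2 of 8 -> exactly 25%
print(top_quartile_count(mediocre_class))   # 2 of 8 -> exactly 25%
```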
Well, you want some negative selection: Choose dating partners from among the set who are unlikely to steal your money, assault you, or otherwise ruin your life.
This is especially true for women, for whom the risk of being raped is considerably higher and clearly worth negatively selecting against.
I don’t think it’s quite true that “fail once, fail forever,” but the general point is valid that our selection process is too much about weeding out rather than choosing the best. Also, academia doesn’t seem to be very good at the negative selection that would make sense, e.g. excluding people who are likely to commit fraud or who have fundamentally anti-scientific values. (Otherwise, how can you explain how Duane Gish made it through Berkeley?)
I’m saying that the truth is not so horrifying that it will cause you to go into depression.
This is what I hope and desire to be true. But what I’m asking for here is evidence that this is the case, to counteract the evidence from depressive realism that would seem to say that no, actually the world is so terrible that depression is the only rational response.
What reason do we have to think that the world doesn’t suck?
Politico, PolitiFact, FactCheck.org
The mutilation of male genitals in question is ridiculous in itself but hardly equivalent to the kind of mutilation done to female genitals.
Granted. Female mutilation is often far more severe.
But I think it’s interesting that when the American Academy of Pediatrics proposed allowing a form of female circumcision that really was just circumcision (i.e., cutting of the clitoral hood), people were still outraged. And so we see that even when the situation is made symmetrical, there persists what we can only call female privilege in this circumstance.
I know with 99% probability that the item on top of your computer monitor is not Jupiter or the Statue of Liberty. And a major piece of information that leads me to that conclusion is… you guessed it, the circumference of Jupiter and the height of the Statue of Liberty. So there you go, this “irrelevant” information actually does narrow my probability estimates just a little bit.
Not a lot. But we didn’t say it was good evidence, just that it was, in fact, evidence.
(Pedantic: You could have a model of Jupiter or Liberty on top of your computer, but that’s not the same thing as having the actual thing.)
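To spell out the update (the priors and likelihoods below are invented; only the structure of the calculation matters), here’s a toy version:

```python
# Toy Bayesian update: what's on top of the monitor?
# The size facts do the work: Jupiter's circumference is roughly
# 440,000 km and the Statue of Liberty stands roughly 93 m tall, so the
# likelihood that either one fits on a monitor is effectively zero.

hypotheses = {             # made-up priors
    "coffee mug": 0.50,
    "webcam": 0.45,
    "Statue of Liberty": 0.025,
    "Jupiter": 0.025,
}
fits_on_monitor = {        # made-up likelihoods of the observation
    "coffee mug": 1.0,
    "webcam": 1.0,
    "Statue of Liberty": 1e-9,
    "Jupiter": 1e-12,
}

evidence = sum(p * fits_on_monitor[h] for h, p in hypotheses.items())
posterior = {h: p * fits_on_monitor[h] / evidence
             for h, p in hypotheses.items()}
print(posterior)  # nearly all mass on mug/webcam; Jupiter is ruled out
```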
The statistical evidence is that liberalism, especially social liberalism, is positively correlated with intelligence. This does not prove that liberalism is correct; but it does provide some mild evidence in that direction.
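One way to make “mild evidence” concrete is as a small Bayes factor; the likelihoods here are pure assumptions, just to show the shape of the update:

```python
# All numbers are assumptions for illustration, not empirical estimates.
p_corr_if_correct = 0.7  # chance of seeing the correlation if liberalism is correct
p_corr_if_wrong = 0.5    # chance of seeing it anyway if liberalism is wrong

bayes_factor = p_corr_if_correct / p_corr_if_wrong  # 1.4

prior_odds = 1.0  # start at even odds
posterior_odds = prior_odds * bayes_factor
print(f"{posterior_odds:.2f} : 1")  # 1.40 : 1 -- a nudge, not a proof
```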
It’s a subtle matter, but… you clearly don’t really mean determinism here, because you’ve said a hundred times before how the universe is ultimately deterministic even at the quantum level.
Maybe predictability is the word we want. Or maybe it’s something else, like fairness or “moral non-neutrality”; it doesn’t seem fair that Hitler could have that large an impact by himself, even though there’s nothing remotely non-deterministic about that assertion.
Yes, think about how none of us would ever have discovered Less Wrong if we never fucked around on the Internet.
That’s not to say we don’t fuck around on the Internet more than we should; I think I probably do, and I wouldn’t be surprised if most of you do as well.
Not critical to your point, but I can’t stand this habitual exchange:
But there’s a lot of small habits in everything we do, that we don’t really notice. Necessary habits. When someone asks you how you are, the habitual answer is ‘Fine, thank you,’ or something similar. It’s what people expect. The entire greeting ritual is habitualness, to the point that if you disrupt the greeting, it throws people off.
When people ask how I am, I want to give them information. I want to tell them, “Actually I’ve had a bad headache all day; and I’m underemployed right now and really lonely.” Or sometimes I’m feeling good, and I want to say “I feel great!” and have them actually know that I feel great and not think that I’m just carrying through the formula.
Human speech is one of the most valuable resources in the universe, and here we are wasting it on things that convey no information.
It’s about ten times easier to become vegetarian than it is to reduce your consumption of meat. Becoming vegetarian means refusing meat every time no matter what, and you can pretty much manage that from day one. Reducing your meat consumption means somehow judging how much meat you’re eating and coming up with an idea of how low you want it to go, and pretty soon you’re just fudging all the figures and eating as much as you were anyway.
Likewise, I tried for a long time to “reduce my soda drinking” and could not achieve this. Now I have switched to “sucralose-based sodas only” and I’ve been able to do it remarkably well.
For the most part I agree with this post, but I am not convinced that this is true:
Anyone can develop any “character trait.” The requirement is simply enough years of thoughts becoming words becoming actions becoming habit.
A lot of measured traits are extremely stable over the lifespan (IQ, conscientiousness, etc.) and seem very difficult, if not impossible, to train. So the idea that someone can simply get smarter through practice does not appear to be supported by the evidence.
This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.
Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, a threat that has hung over us for millions of years.
Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without disease, where we all live as long as we like and have essentially unlimited resources.
It’s also worth asking whether slowing technology would even help; cultural advancement seems somewhat dependent upon technological advancement. It’s not clear to me that had we taken another 100 years to get nuclear weapons we would have used them any more responsibly; perhaps it simply would have taken that much longer to achieve the Long Peace.
In any case, I don’t really see any simple intervention that would slow technological advancement without causing an enormous amount of collateral damage. So unless you’re quite sure that the benefit in terms of slowing down dangerous technologies like unfriendly AI outweighs the cost in slowing down beneficial technologies, I don’t think slowing down technology is the right approach.
Instead, find ways to establish safeguards, and incentives for developing beneficial technologies faster. To some extent we already do this: Nuclear research continues at CERN and Fermilab, but when we learn that Iran is working on similar technologies we are concerned, because we don’t think Iran’s government is trustworthy enough to deal with these risks. There aren’t enough safeguards against unfriendly AI or incentives to develop friendly AI, but that’s something the Singularity Institute or similar institutions could very well work on. Lobby for legislation on artificial intelligence, or raise funds for an endowment that supports friendliness research.