People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.
ChrisHallquist
Truth: It’s Not That Great
Maximizing your chances of getting accepted: Not sure what to tell you. It’s mostly about the coding questions, and the coding questions aren’t that hard—”implement bubble sort” was one of the harder ones I got. At least, I don’t think that’s hard, but some people would struggle to do that. Some people “get” coding, some don’t, and it seems to be hard to move people from one category to another.
Maximizing value given that you are accepted: Listen to Ned. I think that was the main piece of advice people from our cohort gave people in the incoming cohort. Really. Ned, the lead instructor, knows what he’s doing, and really cares about the students who go through App Academy. And he’s seen what has worked or not worked for people in the past.
(I might also add, based on personal experience, “don’t get cocky about the assessments.” Also “get enough sleep,” and should you end up in a winter cohort, “if you go home for Christmas, fly back a day earlier than necessary.”)
Presumably. The question is whether we should accept that belief of theirs.
Effective Effective Altruism Fundraising and Movement-Building
And the solution for avoiding false positives is to use some common sense. You’re never going to have an automated algorithm that can detect every instance of abuse, but even an instance that isn’t detectable by automatic means can be detected if someone with sufficient database access takes a look when it’s pointed out to them.
Right on. The solution to karma abuse isn’t some sophisticated algorithm. It’s extremely simple database queries, in plain English along the lines of “return list of downvotes by user A, and who was downvoted,” “return downvotes on posts/comments by user B, and who cast the vote,” and “return list of downvotes by user A on user B.”
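To illustrate how simple these queries are, here is a minimal sketch using an in-memory SQLite database. The schema (a `votes` table with `voter`, `author`, `post_id`, and `direction` columns) and the sample data are entirely hypothetical; LessWrong’s actual database surely looks different, but any vote table would support the same three queries.

```python
import sqlite3

# Hypothetical schema: one row per vote, direction -1 for a downvote.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE votes (voter TEXT, author TEXT, post_id INTEGER, direction INTEGER)"
)
conn.executemany(
    "INSERT INTO votes VALUES (?, ?, ?, ?)",
    [
        ("A", "B", 1, -1),  # A downvotes B's post 1
        ("A", "B", 2, -1),  # A downvotes B's post 2
        ("A", "C", 3, -1),  # A downvotes C's post 3
        ("D", "B", 1, 1),   # D upvotes B's post 1
    ],
)

# "Return list of downvotes by user A, and who was downvoted"
by_a = conn.execute(
    "SELECT author, post_id FROM votes WHERE voter = 'A' AND direction = -1"
).fetchall()

# "Return downvotes on posts/comments by user B, and who cast the vote"
on_b = conn.execute(
    "SELECT voter, post_id FROM votes WHERE author = 'B' AND direction = -1"
).fetchall()

# "Return list of downvotes by user A on user B"
a_on_b = conn.execute(
    "SELECT post_id FROM votes WHERE voter = 'A' AND author = 'B' AND direction = -1"
).fetchall()

print(by_a)    # [('B', 1), ('B', 2), ('C', 3)]
print(on_b)    # [('A', 1), ('A', 2)]
print(a_on_b)  # [(1,), (2,)]
```

If nearly all of A’s downvotes land on a single user B, the third query makes that obvious at a glance; no clever detection algorithm is needed, just someone willing to look.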
Ah, of course, because it’s more important to signal one’s pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.
This is a failure mode I worry about, but I’m not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, “oh yeah, my deconversion process was totally slowed down by stuff like that from atheists,” but I’d be surprised.
Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?
What’s your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we’re basically talking about people who all have PhDs, so education can’t be the heuristic. You also proposed IQ and rationality, but admitted we aren’t going to have good ways to measure them directly, aside from looking for “statements that follow proper logical form and make good arguments.” I pointed out that “good arguments” is circular if we’re trying to decide who to read charitably, and you had no response to that.
That leaves us with “proper logical form,” about which you said:
Proper logical form comes cheap, but a surprising number of people don’t bother even with that. Do you frequently see people appending “if everything I’ve said so far is true, then my conclusion is true” to screw with people who judge arguments based on proper logical form?
In response to this, I’ll just point out that this is not an argument in proper logical form. It’s a lone assertion followed by a rhetorical question.
Skimming the “disagreement” tag in Robin Hanson’s archives, I found a few posts that I think are particularly relevant to this discussion:
Username explicitly linked to torture vs. dust specks as a case where it makes sense to use torture as an example. Username is just objecting to using torture for general decision theory examples where there’s no particular reason to use that example.
But then we expect mainstream academia to be wrong in a lot of cases—you bring up the case of mainstream academic philosophy, and although I’m less certain than you are there, I admit I am very skeptical of them.
With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions, which are independently pretty reasonable) philosophers basically don’t agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.
The crackpot warning signs are good (although it’s interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out...)
Examples?
We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it’s hard to remember that stupid things are still much, much likelier to be believed by stupid people.
(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of “deserve further investigation and charitable treatment”.)
I don’t think “smart people saying stupid things” reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that “I know God really exists and I have no doubts about it,” which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)
So when I say I judge people by IQ, I think I mean something like what you mean when you say “a track record of making reasonable statements”, except basing “reasonable statements” upon “statements that follow proper logical form and make good arguments” rather than ones I agree with.
Proper logical form comes cheap, just add a premise which says, “if everything I’ve said so far is true, then my conclusion is true.” “Good arguments” is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say “this guy’s arguments are terrible,” and you say, “you should read those arguments more charitably,” it doesn’t do much good for you to defend that claim by saying, “well, he has a track record of making good arguments.”
I question how objective these objective criteria you’re talking about are. Usually when we judge someone’s intelligence, we aren’t actually looking at the results of an IQ test, so that’s subjective. Ditto rationality. And if you were really that concerned about education, you’d stop paying so much attention to Eliezer or people who have a bachelor’s degree at best and pay more attention to mainstream academics who actually have PhDs.
FWIW, actual heuristics I use to determine who’s worth paying attention to are
What I know of an individual’s track record of saying reasonable things.
Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
Looking for other crackpot warning signs I’ve picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.
Which may not be great heuristics, but I’ll wager that they’re better than IQ (wager, in this case, being a figure of speech, because I don’t actually know how you’d adjudicate that bet).
It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: “The thing that people forget sometimes, is that even though appearances can be misleading, they’re usually not.”
You’ve also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I’m fuzzy on what exactly they think about cryonics (more here).
Oh, I see now. But why would Eliezer do that? Makes me worry this is being handled less well than Eliezer’s public statements indicate.
Plantinga’s argument defines God as a necessary being, and assumes it’s possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, “It’s possible the Goldbach Conjecture is true, and mathematical truths are if true necessarily true, therefore the Goldbach Conjecture is true.” Or even “Possibly it’s a necessary truth that pigs fly, therefore pigs fly.”
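For readers who want to see the machinery, here is a sketch of the inference pattern in standard modal notation. Note that nothing in it depends on what $p$ says, which is exactly why the parody arguments about Goldbach and flying pigs go through:

```latex
\begin{align*}
&1.\ \Diamond\Box p
  && \text{premise: possibly, $p$ is necessarily true} \\
&2.\ \Diamond\Box p \to \Box p
  && \text{theorem of S5} \\
&3.\ \Box p
  && \text{modus ponens, 1, 2} \\
&4.\ \Box p \to p
  && \text{axiom T: whatever is necessary is true} \\
&5.\ p
  && \text{modus ponens, 3, 4}
\end{align*}
```

The controversial step is smuggled into premise 1: granting that it’s “possible” that $p$ is necessary already concedes almost everything, since in S5 possibly-necessary collapses to necessary.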
(This is as much as I can explain without trying to give a lesson in modal logic, which I’m not confident in my ability to do.)
People on LW have started calling themselves “rationalists”. This was really quite alarming the first time I saw it. People used to use the words “aspiring rationalist” to describe themselves, with the implication that we didn’t consider ourselves close to rational yet.
My initial reaction to this was warm fuzzy feelings, but I don’t think it’s correct, any more than calling yourself a theist indicates believing you are God. “Rationalist” means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That’s the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his “Why I Am A Rationalist”, for example, Russell identifies as a rationalist but also says, “We are not yet, and I suppose men and women never will be, completely rational.”
This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using “rationalist” when talking about Aumann’s agreement theorem—use “ideal rationalists” or “perfect rationalists”. I also tend to use phrases like “members of the online rationalist community,” but that’s more to indicate I’m not talking about Russell or Dawkins (much less Descartes).
His assertion that there is no way to check seems to me a better outcome than these posts shouting into the wind that don’t get any response.
Did he assert that, exactly? The comment you linked to sounds more like “it’s difficult to check.” Even that puzzles me, though. Is there a good reason for the powers that be at LessWrong not to have easy access to their own database?
The right rule is probably something like, “don’t mix signaling games and truth seeking.” If it’s the kind of thing you’d expect in a subculture that doesn’t take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it’s probably fine.
You’re right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I’d emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.
Abuse of the karma system is a well-known problem on LessWrong,
which the admins appear to have decided not to do anything about. Update: actually, it appears Eliezer has looked into this and not been able to find any evidence of mass-downvoting.
I love how understated this comment is.