I haven’t read their book, but an analysis of the pre-theoretic concept of the infinitude of a set needn’t be taken as an analysis of the pre-theoretic concept of infinitude in general. “Unmarried man” doesn’t define “bachelor” in “bachelor of the arts,” but that doesn’t mean it doesn’t define it in ordinary contexts.
But let us not forget that comparing molecular biology and philosophy is like comparing self-help and physics.
I’m comparing the review processes of molecular biology and philosophy. In both cases, experts with a deep grasp of most/all the relevant pitfalls provide extensive, specific, technical feedback regarding likely sources of error, failure to address existing objections and important points of clarification. That this is superior to a glorified Facebook “Like” button used by individuals with often highly limited familiarity with the subject matter—often consisting of having read a few blog posts by the same individual who himself has highly limited familiarity with the subject matter—should go without saying, right?
The problem with self-help writers is that, in general, they are insufficiently critical. It has never been seriously alleged that philosophers are insufficiently critical, whatever their other faults. Philosophers are virtually dying to bury each other’s arguments, and spend their entire careers successfully honing their abilities to do so. Therefore, surviving the gauntlet of their reviews is a better system of natural selection than having a few casually interested and generally like-minded individuals agree that they like your non-technical idea.
I guess I can’t really imagine how you came to that conclusion. You seem to be going preposterously overboard with your enthusiasm for LW here. Don’t mean to offend, but that’s the only way I know how to express the extent of my incredulity. Can you imagine a message board of dabblers in molecular biology congratulating each other over the advantages their board’s upvoting system has over peer review?
And because of our practices of constant focused argument and karma selection to select amongst positions, instead of the usual trend-method of philosophy.
I don’t understand this. Are you saying that a casual voting system by a group of amateurs on a website consisting of informal blog posts is superior to rigorous peer-review by experts of literature-aware arguments?
I agree there’s good reason to imagine that, had further selective pressure on increased intelligence been applied in our evolutionary history, we probably would’ve ended up more intelligent on average. What’s substantially less clear is whether we would’ve ended up much outside the present observed range of intelligence variation had this happened. If current human brain architecture happens to be very close to a local maximum of intelligence, then raising the average IQ by 50 points still may not get us to any IQ 200 individuals. So while there likely is a nearby region of decreasing f(x, x+1), it doesn’t seem so obvious that it’s wide enough to terminate in superintelligence. Given the notorious complexity of biological systems, it’s extremely difficult to extrapolate anything about the theoretical limits of evolutionary optimization.
I didn’t vote down your post (or even see it until just now), but it came across as a bit disdainful while being written rather confusingly. The former is going to poorly dispose people toward your message, and the latter is going to poorly dispose people toward taking the trouble to respond to it. If you try rephrasing in a clearer way, you might see more discussion.
I had the same reaction. The post reads like singularity apologetics.
I think he’s a sincere teenager who’s very new to this sort of thing. They sound, behave and type like that.
Another thing: we need to distinguish getting better at designing intelligences from getting better at designing intelligences which are in turn better than one’s own. The claim that “the smarter you are, the better you are at designing intelligences” can be interpreted as stating that the function f(x, y) outlined in my other comment is decreasing in x for any fixed y. But the claim that the smarter you are, the easier it is to create an intelligence even smarter than yourself is a totally different claim, equivalent to the thesis from that comment about the shape of f(x, x+1).
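To see that the two claims really can come apart, here is a toy illustration (the function is chosen purely to make the logical point, not as a model of how engineering difficulty actually behaves): take f(x, y) = e^(y^2 - x^2) as the difficulty for an IQ x intelligence to engineer an IQ y intelligence. For any fixed target y, f(x, y) is decreasing in x whenever x is positive (and IQs are positive), so smarter designers always have an easier time building an intelligence of that fixed level. Yet f(x, x+1) = e^((x+1)^2 - x^2) = e^(2x+1), which grows exponentially in x: building something one notch smarter than yourself gets harder, not easier, the smarter you are. The first claim holds for this function while the second fails.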
I see the two claims conflated shockingly often, e.g., in Bostrom’s article, where he simply states:
Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth.
and concludes that superintelligence inevitably follows, with no intervening reasoning at the software level. (Actually, he doesn’t state that outright, but the sentence is at the beginning of the section entitled “Once there is human-level AI there will soon be superintelligence.”) That an IQ 180 AI is (much) better at developing an IQ 190 AI than a human is doesn’t imply that it can develop the IQ 190 AI faster than the human can develop the IQ 180 AI.
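To put made-up numbers on that last point: suppose human researchers need 30 years to develop the IQ 180 AI, the IQ 180 AI then needs 50 years to develop an IQ 190 AI, and a human team attempting the IQ 190 AI directly would have needed 500 years. The IQ 180 AI is then vastly better at the next step than we are, exactly as the feedback-loop argument says, and yet each successive generation arrives more slowly than the last. These numbers are invented purely to exhibit the logical gap; nothing hangs on them.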
For what it’s worth, I’ve posted a fair number of things in my short time here that go against what I assume to be consensus, and I’ve mostly only been upvoted for them. (This includes posts that come close to making the cult comparison.)
Distinguish between positive and negative criticisms: those aimed at demonstrating the unlikelihood of an intelligence explosion, and those aimed merely at undermining the arguments/evidence for its likelihood (thus moving the posterior probability of the explosion closer to its prior probability).
Here is the most important negative criticism of the intelligence explosion: possible harsh diminishing returns of intelligence amplification. Let f(x, y) measure the difficulty (perhaps in expected amount of time to complete development) for an intelligence of IQ x to engineer an intelligence of IQ y. The claim that intelligence explodes is roughly equivalent to the thesis that f(x, x+1) decreases relatively quickly as x grows. What is the evidence for this claim? I haven’t seen a huge amount. Chalmers briefly discusses the issue in his article on the singularity and points to how amplifying a human being’s intelligence from average to Alan Turing’s level has the effect of amplifying his intelligence-engineering ability from more or less nil to being able to design a basic computer. But “nil” and “basic computer” are strictly stupider than “average human” and “Alan Turing,” respectively. So it’s evidence that a curve like f(x, x-1) (the difficulty of creating a being slightly stupider than yourself, given your intelligence level) decreases relatively quickly. But the shapes of f(x, x+1) and f(x, x-1) are unrelated. The one can increase exponentially while the other decays exponentially. (Proof: set f(x, y) = e^(y^2 - x^2).)
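To spell out that proof sketch: with f(x, y) = e^(y^2 - x^2), we get f(x, x-1) = e^((x-1)^2 - x^2) = e^(1-2x), which decays exponentially as x grows, while f(x, x+1) = e^((x+1)^2 - x^2) = e^(2x+1), which grows exponentially. So a world in which it keeps getting easier to build slightly stupider minds can also be a world in which it keeps getting harder to build slightly smarter ones. (Again, this function is only a logical counterexample, not a claim about how difficulty actually scales.)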
See also JoshuaZ’s insightful comment here on how some of the concrete problems involved in intelligence amplification are linked to some (very likely) computationally intractable problems from CS.
I spent a week looking for counterarguments, to check whether I was missing something
What did you find? Had you missed anything?
All right. Regarding the idea that the meaning of “grue” changes over time—how do you take this to be the case? What do you mean by “meaning” here? Intension, extension or what?
I’m not sure I’ve understood that very well, either. From what I can gather, it seems like you’re arguing that (1) the meaning of and the physical tests for grue change over time, and consequently (2) grue is a more complicated property than green is, so we’re justified in privileging the green hypothesis. If that’s so, then I no longer see what role the reft/light example plays in your argument. You could’ve just started and finished with that.
I don’t think I really understand what this means. Could you give more detail?
No, that’s a common misunderstanding. No emerald ever has to change color for the grue hypothesis to be true.
Well, O.K. “The next observed emerald is green if observed before T and blue otherwise” doesn’t entail any change of color. I suppose I should have said, “Analogous to assuming that the emeralds’ color (as opposed to anti-color) distribution doesn’t vary before and after T.”
It is analogous to assuming that there is a definite frequency of green emeralds out of emeralds ever made.
I’m really not seeing that analogy. It seems more analogous to assuming there’s a single, time-independent probability of observing a green emerald. (Holding the line fixed means there’s a single, time-independent probability of landing right of the line.) Which is again an assumption the skeptic would deny, preferring instead the existence of a single, time-invariant probability of observing a grue emerald.
Assuming that the line is constant is analogous to assuming that emeralds’ color won’t change after T, correct? The skeptic will refuse to do either of these, preferring instead to assume that the line is anti-constant and that emeralds’ anti-color won’t change after T.
I guess I’m just not currently seeing the arguments for those things (though I may just be confused somehow). It seems more like you’re trying to lob the burden-of-proof tennis ball into Pogge’s court: AI “might” turn out to be as good as the scenario he assents to (a 50% chance of permanently ending world poverty if we’re uncharitable for 30 years), so it’s Pogge’s job to show that AI is probably not like that scenario.
That the line will stay in the same place is not something I induce; it is a premise in the hypothetical.
But that’s question-begging. Let me put this another way. Define the function reft-distance(x) = x’s distance to the rightmost edge of the table before time T, or its distance to the leftmost edge of the table after time T. (Then “x is reft of y” is definable as reft-distance(x) < reft-distance(y). Similarly for the function light-distance(x).) Assuming the line doesn’t move is equivalent to assuming that the line’s right-distance (its plain distance to the rightmost edge) remains constant, but that its reft-distance changes after T. But that’s not a fair assumption, the skeptic will insist: he prefers to assume the line doesn’t “anti-move,” which means its reft-distance remains constant but its right-distance changes.
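A toy numerical illustration of that symmetry, with made-up numbers: suppose the table is 100 cm wide, positions are measured from the left edge, and before T the line sits at the 30 cm mark, so its right-distance and its reft-distance are both 70 cm. If the line “doesn’t move,” it is still at the 30 cm mark after T: its right-distance stays 70 cm, but its reft-distance jumps to 30 cm (it now tracks the left edge). If instead the line “doesn’t anti-move,” its reft-distance stays 70 cm, which requires it to sit 70 cm from the left edge after T, i.e., at the 70 cm mark: now its right-distance has jumped from 70 cm to 30 cm. Each assumption holds one quantity fixed only at the cost of letting the other one jump.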
If we’re simply stipulating that your assumption (that the line doesn’t move) is correct and the skeptic’s assumption (that the line doesn’t anti-move) is incorrect, that’s not very useful. We might as well just stipulate that emeralds remain green for all time or whatever.
What is your basis for concluding this? “Philosophers are really good at demolishing unsound arguments” is compatible with “Philosophers are really bad at coming to agreement.” The primary difference between philosophy and biology that explains the ideological diversity of the former and the consensus of the latter is not that philosophers are worse critical thinkers. It is that, unlike in biology, virtually all of the evidence in philosophy is itself subject to controversy.
I’m not sure that your experiment makes any sense. What exactly are you going to be comparing? Most analytic philosophers in most articles don’t take themselves to be offering “solutions” to any problems. They take themselves to be offering detailed, specific lines of argumentation which suggest a certain conclusion, while accommodating or defusing rival lines of argumentation that have appeared in the literature. That someone here may come up with a vaguely similar position to philosopher X’s on issue Y tells us very little and ignores the meat of X’s contribution.