I’m perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I’d modify the classes slightly, however:
Inventions that extend human life considerably: Penicillin, if nothing else. Vaccinations. Clean-room surgery.
Inventions that materially changed the fundamental condition of humanity: Agriculture. Factories/mass production. Computers.
Interactions with beings that are so relatively powerful that they appear omnipotent: Many colonists in the Americas were seen this way. Similarly with the cargo cults in the Pacific islands.
The point is, each of these reference classes, given a small tweak, has experienced infrequent but nonzero successes—and that over the course of all of human history! Once we update the “all of human history” reference class/prior to account for the last century—in which technology has developed faster than in probably the entire previous millennium—the posterior ends up looking much more promising.
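The update being described can be made concrete with a toy Beta-Binomial model. All numbers below are invented for illustration (nothing in the thread supplies actual counts): suppose a reference class logged 3 “successes” out of 200 comparable predictions over history, then 2 more out of 10 in the recent, faster period.

```python
# Toy Beta-Binomial sketch of the reference-class argument above.
# The counts are illustrative assumptions, not data from the thread.
from fractions import Fraction

def beta_posterior_mean(successes, trials, alpha=1, beta=1):
    """Posterior mean of a Beta(alpha, beta) prior after observing
    `successes` out of `trials` binomial outcomes."""
    return Fraction(alpha + successes, alpha + beta + trials)

historical = beta_posterior_mean(3, 200)        # prior informed by all of history
recent = beta_posterior_mean(3 + 2, 200 + 10)   # updated with the last century

print(float(historical))  # small but strictly nonzero
print(float(recent))      # larger: the recent evidence shifts the posterior up
```

The point of the sketch is only qualitative: a reference class with *any* successes yields a nonzero posterior, and a burst of recent successes moves it further from zero.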
I think taw asked about reference classes of predictions. It’s easy to believe in penicillin after it’s been invented.
People invented it because they were LOOKING for antibiotics explicitly. Fleming had previously discovered lysozyme, had cultivated culture plates where he could see growth irregularities very well, etc. The claim of fortuitous discovery is basically false modesty (see “Discovering” by Robert Root-Bernstein).
Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. People making that prediction in the last century have been wrong with increasing speed. As Eliezer put it, “beliefs that the future will be just like the past” have a zero success rate.
Perhaps the inventions listed above suggest that it’s unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a reference class that has that success rate.
Either way, people who predicted that human life would be lengthened considerably, that humanity would fundamentally change in structure, or that some people would interact with beings that appear nigh-omnipotent have all been right with some nonzero success rate, and there’s no particular reason to reject those data.
The negation of “a Singularity will occur” is not “everything will stay the same”, it’s “a Singularity as you describe it probably won’t occur”. I’ve no idea why you (and Eliezer elsewhere in the thread) are making this obviously wrong argument.
Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.
The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.
Often, when people appear to be making an obviously wrong argument in this forum, it’s a matter of communication rather than massive logic failure.
Whaddaya mean by “negation of reference class”? Let’s see, you negate each individual prediction in the class and then take the conjunction (AND) of all those negations: “everything will stay the same”. This is obviously false. But this doesn’t imply that each individual negation is false, only that at least one of them is! I’d be the first to agree that at least one technological change will occur, but don’t bullshit me by insinuating you know which particular one! Could you please defend your argument again?
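The logical point here can be checked mechanically. A minimal sketch (the three-prediction class and its contents are invented for illustration): the class “succeeds” if any one prediction comes true, so by De Morgan its negation is the conjunction of the individual negations—and falsifying that conjunction establishes only that *some* prediction holds, never which one.

```python
# Brute-force check of the De Morgan point in the comment above.
# Each prediction in the class is modeled as a boolean.
from itertools import product

def class_succeeds(predictions):
    """The reference class 'succeeds' if at least one prediction is true."""
    return any(predictions)

def negated_class(predictions):
    """'Everything will stay the same': every prediction fails."""
    return all(not p for p in predictions)

# Over every truth assignment, the negation of the disjunction equals
# the conjunction of the negations.
for preds in product([False, True], repeat=3):
    assert class_succeeds(preds) == (not negated_class(preds))
print("De Morgan holds for all assignments")
```

Note that `class_succeeds((False, True, False))` is true without identifying *which* element made it true—which is exactly the gap between “some major change will occur” and “this particular change will occur.”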