Suppose AI_0 designs and builds an improved AI_1, which in turn designs and builds an even more powerful AI_2, which …, and so on. Does that count as recursive self-improvement?
If not, then I think you need to revise your definition of FOOMing.
If yes, then I think that P(P5) = 1.0.
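For what it's worth, the chain described above can be written down as a trivial toy loop. This is purely illustrative: "capability" is a made-up scalar and the improvement factor is an arbitrary assumption, not a claim about how real AI systems would behave.

```python
# Toy model of the AI_0 -> AI_1 -> AI_2 chain described above.
# "capability" is a made-up scalar; improvement_factor is an
# arbitrary assumption, not a claim about real AI systems.

def successor(capability, improvement_factor=1.5):
    """Each generation designs a somewhat more capable successor."""
    return capability * improvement_factor

capability = 1.0  # AI_0's capability
generations = []
for n in range(5):
    capability = successor(capability)  # AI_n builds AI_{n+1}
    generations.append(capability)

# Note: no single system here ever improves *itself* -- each AI
# builds a distinct successor -- yet capability still compounds.
# Whether that counts as "recursive self-improvement" is exactly
# the definitional question at issue.
print(generations)
```

The point of the sketch is only that "a chain of distinct, successively more powerful systems" and "one system rewriting itself" produce the same compounding curve, so a definition of FOOMing that excludes the former needs to say why.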
This is an example of why technological skepticism arguments are rarely useful. The skeptic always makes unwarranted assumptions about how the technology must work. He then shows that it really can’t work like that. And then, like Vizzini in The Princess Bride, he announces that it is inconceivable that it might work any other way.
This is to some extent a problem when talking about very broad or very vague technologies. However, technological skepticism can be warranted in specific instances. For example, if I said that soon we’ll have teleportation machines to transport people, you’d be right to be skeptical. On the other hand, I can confidently predict that the efficiency of solar panels will continue to go up over the next ten years, even though I have no idea how that will occur.
Moreover, when people are discussing specific paths for a technology, it isn’t at all unreasonable to look at any given path and say “oh, proponents say this is a likely path. Well, it has problems X, Y, and Z”.
The skeptic always makes unwarranted assumptions about how the technology must work.
Oh, come on... I can’t take this seriously. You say someone who claims that there are design principles of intelligence that can take over the universe in a week is perfectly rational, while someone who says that such a belief is far-fetched is making unwarranted assumptions.
I’m not disagreeing with your conclusions—only with your argument. In fact, several people have found flaws in your arguments. Technological impossibility ‘proofs’ are always flawed, IMHO. I’m a FOOM skeptic too. The arguments of FOOM believers are flawed. Point out those flaws. Don’t build flawed counter-arguments of your own.
You say someone who claims that there are design principles of intelligence that can take over the universe in a week is perfectly rational, while someone who says that such a belief is far-fetched is making unwarranted assumptions.
Uh, no. I didn’t say that. What I said was more like “someone who claims to prove that such a belief is far-fetched is probably grounding their ‘proof’ on unwarranted assumptions.”
Technological impossibility ‘proofs’ are always flawed...
That wasn’t my intention. Over the past year I have asked the proponents of FOOM to be more specific by naming some concrete requirements for FOOM to be feasible (and what evidence led them to make that prediction in the first place). But all they ever do is say that I am not entitled to that particular proof, as if I were even asking for a proof. And so I went to see what requirements would have to be met for FOOM to be possible in the first place.
If someone predicts that the world is going to end, I’ll ask that person to be more specific. If that person refuses to be more specific but continues to claim that the world will end anyway, then in order to better estimate the probability of the prediction I have to think about ways the world could end; I’ll have to think about some specific requirements (circumstances) that would cause the world to end, e.g. giant alien death rays. If that person then says that all my examples of how the world could end are flawed, well, that doesn’t increase my estimate of the probability that he is right. Claiming that there are no requirements for the world to end doesn’t make it more likely. You can’t subtract details from a story by refusing to be specific.
What I said was more like “someone who claims to prove that such a belief is far-fetched is probably grounding their ‘proof’ on unwarranted assumptions.”
I never claimed to prove that such a belief is far-fetched; at most I made an antiprediction. I believe that such a belief is far-fetched.
The arguments of FOOM believers are flawed. Point out those flaws.
I did, by showing that FOOM is a lot of handwaving, a label for some extraordinary assertions. Only its vagueness makes it look like the result of disjunctive reasoning. If you had to substantiate it, it would become apparent that it assumes a lot of things to be true about which we have no idea one way or the other.
...several people have found flaws in your arguments.
I don’t think so. Mostly they just said that I am wrong, but how do they know that?
This is an example of why technological skepticism arguments rarely are useful. The skeptic always makes unwarranted assumptions about how the technology must work.
I’m a sceptic about uploads coming first, the usefulness of eyetaps, and the potential of cyborg technology. I also doubt that we will be making chairs and tables out of diamond. It seems necessary to be sceptical about some of this stuff; otherwise you swallow too many stupid dreams. Perhaps we should celebrate the jet-pack sceptics more. After all, they were mostly right—and that should count for something.