If Goertzel’s claim that “SIAI’s arguments are so unclear that he had to reconstruct them himself” can’t be disproven by the simple expedient of posting a single link to an immediately available well-structured top-down argument, then SIAI should regard producing such an argument as an obvious high-priority, high-value task. If the claim can be disproven by such a link, then that link needs to be much more widely advertised, since it seems that none of us are aware of it.
The nearest thing to such a link is Artificial Intelligence as a Positive and Negative Factor in Global Risk [PDF]. But of course the argument is a little too large to set out entirely in one paper; the next nearest thing is What I Think, If Not Why, and the title shows in what way that’s not what Goertzel was looking for.
Artificial Intelligence as a Positive and Negative Factor in Global Risk
44 pages. I don’t see anything much like the argument being asked for. The lack of an index doesn’t help. The nearest thing I could find was this:
It may be tempting to ignore Artificial Intelligence because, of all the global risks discussed in this book, AI is hardest to discuss. We cannot consult actuarial statistics to assign small annual probabilities of catastrophe, as with asteroid strikes. We cannot use calculations from a precise, precisely confirmed model to rule out events or place infinitesimal upper bounds on their probability, as with proposed physics disasters. But this makes AI catastrophes more worrisome, not less.
He also claims that sharp jumps in intelligence constitute the “dominant” probability:
I cannot perform a precise calculation using a precisely confirmed theory, but my current opinion is that sharp jumps in intelligence are possible, likely, and constitute the dominant probability.
This all seems pretty vague to me. Is this an official position in the first place? It seems to me that they want to give the impression that, without their efforts, the END IS NIGH, while not committing to any particular probability estimate, which would then become the target of critics.
Halloween update: It’s been a while now, and I think the response has been poor. I take this to mean that there is no such document (which explains Ben’s attempted reconstruction). It isn’t clear to me that producing such a document is a “high-priority task”, since it isn’t clear that the thesis is actually correct, or that the SIAI folks actually believe it.
Most of the participants here seem to be falling back on: even if it is unlikely, it could happen, and it would be devastating, so we should care a lot, which seems to be a less unreasonable and more defensible position.
It isn’t clear to me that producing such a document is a “high-priority task”, since it isn’t clear that the thesis is actually correct, or that the SIAI folks actually believe it.
Most of the participants here seem to be falling back on: even if it is unlikely, it could happen, and it would be devastating, so we should care a lot, which seems to be a less unreasonable and more defensible position.
You lost me at that sharp swerve in the middle. Without probabilities attached to the scary idea, it is an absolutely meaningless concept. What if its probability were 1 / 3^^^3? Should we still care then? I could think of a trillion scary things that could happen, but without realistic estimates of how likely they are to happen, what does it matter?
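For readers unfamiliar with the “3^^^3” above: it is Knuth’s up-arrow notation for iterated exponentiation, so 1 / 3^^^3 is an unimaginably small probability. A minimal sketch of the notation, in Python (the thread itself contains no code, and the function name up is just an illustrative choice):

```python
# Knuth up-arrow notation, as used in "1 / 3^^^3" above.
# up(a, n, b) evaluates a followed by n up-arrows and then b:
# one arrow is ordinary exponentiation; each additional arrow
# iterates the previous operation.
def up(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3  = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = 3^^(3^^3) is a power tower of 3s roughly 7.6 trillion
# levels high, far too large to compute; the point is that
# 1/3^^^3 is smaller than any probability that could ever be
# distinguished from zero in practice.
```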
Here are some links.
Heh. I’ve read virtually all those links. I still have the following three problems:
1. Those links are about as internally self-consistent as the Bible.
2. There are some fundamentally incorrect assumptions that have become gospel.
3. Most people WON’T read all those links and will therefore be declared unfit to judge anything.
What I asked for was “an immediately available well-structured top-down argument”. Multiple links are not an answer; to be what Goertzel was looking for, it has to be a single link that sets out this position.
It would be particularly useful and effective if SIAI recruited someone with the opposite point of view to co-develop a counter-argument thread, and let the two revolve around each other and resolve some of these issues (or, at least, highlight the basic differences of opinion that prevent their resolution). I’m more than willing to spend a ridiculous amount of time on such a task, and I’m sure that Ben would be more than willing to devote whatever time he can tear away from his busy schedule.
There are some fundamentally incorrect assumptions that have become gospel.
So go ahead and point them out. My guess is that in the ensuing debate it will be found that 1⁄4 of them are indeed fundamentally incorrect assumptions, 1⁄4 of them are arguably correct, and 1⁄2 of them are not really “assumptions that have become gospel”. But until you provide your list, there is no way to know.