Am I right in thinking you had very low expectations?
“Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally-ordered in the mathematical sense—but neither discusses intelligence at any length in their books. Generally, it is fair to say that despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail.”
That seems unfair to me. IIRC Superintelligence explicitly states that the argument does not depend on intelligence being a one-dimensional property, and explains why. (In fact, this follows pretty straightforwardly from the definition Bostrom gives.) Also “neither discusses intelligence at any length?” This feels like an isolated demand for rigor; it feels like the author means “neither discusses intelligence as much as I think they should.” Ditto for “the assumptions… have not been investigated in detail.”
You are right that I had very low expectations :) My expectation was that this area of study would be treated as an outcast cousin that we don’t like to mention except in snark. This seems detailed, good-faith and willing to state weird ideas clearly and concisely.
I also think it’s fair to have low expectations here. Although I generally like the SEP, I’ve experienced Gell-Mann amnesia moments with it often enough that I now think of it differently.
It’s not really like a regular encyclopedia written by anonymous authors with a “view from nowhere”. Each article has a named author, and those authors are allowed to have clear authorial bias. The whole thing also has lots of selection bias: articles only exist because someone was willing to write them, and they count as publications. (They’re also strangely influential and not at the same time: lots of people read them, but based on my anecdotal data most of those people avoid citing them and instead cite the work referenced by the SEP article.) As a result you get things like articles written by people only because they’re against some position, not for it, simply because no proponent of the position had written the article; those articles often fail to pass the ITT (Ideological Turing Test).
Combine this with the already low expectations around writing about AI safety topics in general, and it’s pleasantly surprising that this one turned out pretty good.
“Also ‘neither discusses intelligence at any length?’ This feels like an isolated demand for rigor; it feels like the author means ‘neither discusses intelligence as much as I think they should.’ Ditto for ‘the assumptions… have not been investigated in detail.’”
These seem correct to me? Bostrom’s discussion of intelligence was pretty vague and hand-wavy, in my opinion; not specific enough to show that it can work the way that Bostrom suggests (as critics tend to point out). I started doing some work to analyze it better in “How Feasible Is the Rapid Development of Artificial Superintelligence?”, but I would not call this a particularly detailed investigation either.
I’d love to discuss this sometime with you then. :) I certainly agree there was a lot of room for improvement, but I think the quotes I pulled from this SEP article were pretty unjustified.
Moreover I think Bostrom’s definitions are plenty good enough to support the arguments he makes.
“Moreover I think Bostrom’s definitions are plenty good enough to support the arguments he makes.”
I would have to reread the relevant sections before discussing this in more detail, but my impression is that Bostrom’s definitions are certainly good enough to support his argument of “this is plausible enough to be worth investigating further”. But as the SEP article correctly points out, not much of that further investigation has been done yet.
This discussion of intelligence is what my work focuses on, and I found it lacking as well. I would appreciate more references to similar discussions.