That one’s useful, but IMO, as an introductory document it needs a lot of work. For instance, the main thesis should be stated much earlier and more clearly. As it is, the paper spends several pages making various points about the nature of AI before saying anything about why AI is worth discussing as a risk in the first place. That doesn’t come until page 17, and much of it is an analogy to the nuclear bomb. It spends a lot of time arguing against the proposition that a hard takeoff is impossible, but not much time arguing for the proposition that a hard takeoff is likely, which is a major failing if it’s supposed to convince people.
Mostly the paper suffers from being too spread out: rather than giving strong support to a narrow set of core claims, it gives weak support to a wide set of core and non-core claims. I remember reading it back when I didn’t know much about the Singularity and thinking it was good, even though it took a long time to get to the point and included material that seemed unnecessary. It was only later, when I re-read the paper, that I realized how those “unnecessary” parts connected with other pieces of FAI theory, but of course by that time I wasn’t really an outsider anymore.