Eliezer, I read the link to section 11. Having done so, I strongly encourage other commenters here to read Eliezer’s piece as well, in large part because I think you, Eliezer, are overstating what you’ve accomplished with it. I won’t go into detailed criticism here; instead, I’ll restrict myself narrowly to my original question in this thread.
The piece you linked to doesn’t in any authoritative way dismiss the possibility that functional human brain modeling will occur before AGI is developed. If anything, the piece seems to cede a reasonable chance that either could happen before the other, and then explains why you think it’s important for AGI (of the friendly variety) to be developed before functional human brain modeling reaches a threshold level of likelihood of resulting in catastrophic risk.
If your piece represents the apex of critical thinking on this topic, then we’re in the very, very beginning stages, and in my opinion we need to attract a much higher caliber of thought and intellectual work to it. For example, I think it’s a step down from the level of rigor Kurzweil has put into thinking about the relative likelihood of AGI vs. brain modeling and their timelines.
So, I’m still interested in Robin’s take and the take of old-timers on this topic.