The jump from building a high-speed neuron, to building a cortex out of them, to building my cortex out of them, to instantiating my mind in them, elides over quite a few interesting problems. I’d expect the tractability of those problems to be relevant to the kind of estimations you’re trying to do here, and I don’t see them included.
I conclude that your estimates are missing enough important terms to be completely disconnected from likely results.
Leaving that aside, though… suppose all of that is done and I wake up in that dark room. How exactly do you anticipate me “mastering” cognitive science, say?
I mean, OK, I can read every book and article written in the field over the course of the next couple of hundred (subjective) years. Suppose I do that. (Though it’s not really clear to me that I wouldn’t get bored and wander off long before then.)
Now what? I haven’t mastered cognitive science; at best, I have caught up on the state of cognitive science as of the early 21st century. There’s an enormous amount of additional work to be done, and assuming that I can do that work alone in a dark room with no experimental subjects is, again, quite a jump.
Put me in an empty dark room without companionship for a thousand years, and I have no idea what I would do, but making useful breakthroughs in cognitive science doesn’t seem like a particularly likely possibility.
I don’t think you’re thinking this scenario through very carefully.
The jump from building a high-speed neuron, to building a cortex out of them, to building my cortex out of them, to instantiating my mind in them, elides over quite a few interesting problems
I’m sorry, I should have made that part clearer. I wrote it using “you” in the sense of “what would you do?”—looking at the situation from the perspective of the AI.
I didn’t mean that it actually was some neural simulation of your brain—it’s a de novo AI.
I conclude that your estimates are missing enough important terms to be completely disconnected from likely results.
Yes, for brevity I had to simplify greatly; a full analysis would be paper-length. I do still think this is the most likely path to AGI in rough outline, even if any particular version of it is not especially likely.
Now what? I haven’t mastered cognitive science; at best, I have caught up on the state of cognitive science as of the early 21st century
Right—I wasn’t assuming much novel research, just insight based on current knowledge, combined with great intelligence, motivation, and an effectively unlimited amount of subjective time.
Put me in an empty dark room without companionship for a thousand years, and I have no idea what I would do,
Perhaps writing it from the subjective viewpoint was a bad idea. However, there is almost certainly some possible mind that both would and could do this.