Maybe there’s an intermediate possibility between WBE and de novo AI—upload an animal brain (how about a crow? a mouse?) and set it to self-improvement. You’d still have to figure out Friendliness for it, but Friendliness might be a hard problem even for an uploaded human brain. How would you identify sufficient Friendliness when you’re choosing humans to upload? I’m assuming that human ems would self-improve, even if it’s a much harder problem than improving a de novo AI.
Moore’s Second Law reminds me of a good-enough-for-sf notion I’ve got. Chip fabs keep getting more expensive until there’s one per spiral arm.
More seriously, how stable is the Second Law likely to be? The First Law implies increasing competence at making things, and whether the First Law eventually dominates the Second or the other way around isn’t obvious.
Maybe there’s an intermediate possibility between WBE and de novo AI—upload an animal brain (how about a crow? a mouse?) and set it to self-improvement.
It’s possible, but I don’t know of any reason to expect that an animal brain could recursively self-improve. As matters stand, all we know is that groups of humans can self-improve, so we don’t even know whether a single human is smart enough to self-improve as an upload. (Maybe human brains fall into ruts, or inevitably degenerate after enough subjective time.) None of this is very encouraging for crows or mice or lobsters, however excellent the stories they make.
More seriously, how stable is the Second Law likely to be? The First Law implies increasing competence at making things, and whether the First Law eventually dominates the Second or the other way around isn’t obvious.
It’s been operating since at least the ’80s as far as I can tell, and looks stable out to 2015 or so. That’s a pretty good run.
A single human, or a cat, or another mammal, is smart enough to self-improve without being uploaded (that’s what learning and training are; we do start off pretty incapable). Think about it. It may well be enough to upload a brain and then just add more cortical columns over time, or apply some other simple, dumb process that doesn’t require understanding, to make use of that built-in self-improvement ability.
Or the lobsters in Accelerando.