What did you find most interesting in this week’s reading?

Page 102, “Many more orders of magnitudes of human-like beings could exist if we countenance digital implementations of minds—as we should.” I’d like to hear others’ thoughts about that, especially why he writes “as we should.”
I think Bostrom wrote it that way to signal that while his own position is that digital mind implementations can carry the same moral relevance as, for example, minds running on human brains, he acknowledges that there are differing opinions on the subject and doesn’t want to entirely dismiss people who disagree.
He’s right about the object-level issue, of course: solid-state societies do make sense. Mechanically embodying all individual minds is too inefficient to be a good idea in the long run, and there’s no overriding reason to stick to that model.
If by ‘countenance’ we mean support normatively (I think it is sometimes used to mean ‘accept as probable’), then, setting aside the possible risks of the transition, digital minds seem more efficient in many ways (e.g. they can reproduce much more cheaply, run on power from many sources, live forever, and be copied in an already-educated state), and so seem likely to improve progress on many things we care about. They seem likely to be conscious if we are, but even if they aren’t, it would plausibly be useful to have many of them around alongside conscious creatures.
So, what is going into the bloodstream of these “digital minds”? That will completely change the way they function.
What kind of sensory input are they being supplied with?
Why would they have fake nerves running to non-existent hands, feet, hearts and digestive systems?
Will we improve them voluntarily? Are they allowed to request improvements?
I would certainly request a few improvements, if I were one.
Point being: what you end up with if you go down this road is not a copy of a human mind; it is almost immediately a neuromorphic entity.
A lot of analysis in this book imagines that these entities will continue to be somewhat human-like for quite some time. That direction does not parse for me.
People who have thought about this seem mostly to think that a lot of things would change quickly. I suspect any disagreement you have with Bostrom is about whether this creature derived from a human is close enough to a human to be thought of as basically human-like. Note that Bostrom thinks of the space of possible minds as vast, so even a very weird human descendant might seem basically human-like.
It is less likely that AI algorithms will happen to be especially easy if a lot of different algorithms are needed. Also, if different cognitive skills are developed at somewhat different times, then it’s harder to imagine a sudden jump when a fully capable AI suddenly reads the whole internet or becomes a hugely more valuable use for hardware than anything being run already. [...] Overall it seems AI must progress slower if its success is driven by more distinct dedicated skills.
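To make that point concrete, here is a toy simulation (my own sketch, not from the book or the discussion above; the bottleneck assumption and all parameters are invented for illustration): if overall capability is limited by the weakest of many independently improving skills, a breakthrough in any single skill moves the aggregate far less than it would if one skill were all that mattered, so progress looks smoother the more distinct skills are required.

```python
# Toy model (illustrative only): overall capability as the bottleneck (minimum)
# of several independently progressing skills. With more required skills, a
# breakthrough in any one of them barely moves the aggregate, so sudden jumps
# in overall capability become rarer.
import random

def largest_jump(n_skills, steps=500, breakthrough_p=0.01, seed=0):
    rng = random.Random(seed)
    skills = [0.0] * n_skills
    prev = min(skills)
    biggest = 0.0
    for _ in range(steps):
        for i in range(n_skills):
            gain = rng.random() * 0.1          # ordinary incremental progress
            if rng.random() < breakthrough_p:  # occasional large breakthrough
                gain += 5.0
            skills[i] += gain
        current = min(skills)                  # capability limited by the weakest skill
        biggest = max(biggest, current - prev)
        prev = current
    return biggest

for n in (1, 5, 20):
    print(f"{n:2d} required skill(s): largest one-step jump = {largest_jump(n):.2f}")
```

Taking the minimum is of course a crude stand-in for “success is driven by more distinct dedicated skills”; a product or weighted sum gives a similar smoothing effect.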
To me, the skill-set list in Table 8 (p. 94) was most interesting. Superintelligence is not by itself sufficient to be effective: content and experiences have to be transformed by “mental digestion” into knowledge.
If the AI becomes capable of self-improvement, it might decide to modify its own architecture. As a consequence, it might have to re-learn all of its programming and intelligence-amplification knowledge, and if it turns out that a further development loop is needed, all acquired knowledge is lost again. For a self-improving AI it is therefore rational and economical to learn only the skills necessary for intelligence amplification until its architecture is capable enough to learn all other skills.
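A back-of-the-envelope version of that economic argument (my own numbers, purely illustrative; the skill counts, loop count, and costs are assumptions, not from the book): if each architecture redesign invalidates what has been learned, relearning everything every loop is far more expensive than maintaining only the intelligence-amplification skills until the freeze.

```python
# Illustrative cost comparison under assumed numbers: relearning all skills after
# every architecture redesign vs. deferring non-essential skills until the freeze.
LOOPS = 4          # assumed number of redesign loops before the architecture freeze
ALL_SKILLS = 50    # assumed total number of skills (cf. the Table 8 skill list)
IA_SKILLS = 5      # assumed subset needed for intelligence amplification
COST = 1.0         # assumed cost (time/compute) of learning one skill once

# Strategy A: learn everything now and relearn it all after every redesign loop.
cost_learn_all_early = (LOOPS + 1) * ALL_SKILLS * COST

# Strategy B: learn only IA skills until the freeze, then learn the rest once.
cost_defer_the_rest = (LOOPS + 1) * IA_SKILLS * COST + (ALL_SKILLS - IA_SKILLS) * COST

print(f"learn everything early: {cost_learn_all_early:.0f}")   # 250 with these numbers
print(f"defer non-IA skills   : {cost_defer_the_rest:.0f}")    # 70 with these numbers
```

With these assumed numbers the deferred strategy costs 70 skill-learnings instead of 250, and the gap widens with every additional redesign loop.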
After the architectural freeze, the AI starts to acquire more general knowledge and further skills. It uses its existing engineering skills to optimize its hardware and software and to develop optimized hardware-virtualization tools. To gain all the superpowers and master all the tasks listed in Table 8, knowledge from books is not sufficient: sensitive information in technology, hacking, the military, and government is inaccessible unless trust grows over time, and projects involving trial and error, plus external delay factors, need further time.
The time needed for learning could be long enough for a competing project to take off.