I remain worried about what I posted last week: the fragility of human minds. The more I learn about social endocrinology, glands, neurotransmitters, and cognitive neuroscience, the more I notice that the alleged “robustness” of cognition, usually attributed to a combination of plasticity and redundancy, holds only against the sorts of challenges and problems an animal brain may encounter: firmly memorizing the name of your significant other, hunting in different environments, getting old, internal bleeding.
But if we had emulations, the shifts, tweaks, and twists that could be performed are far more numerous than that, and they could well act on subsets of the mind that have no robustness at all, to Minsky’s dismay.
The fear here is that evolution selected for some kinds of robustness and did not worry about others at all, and we will soon be able to modify minds along those unprotected dimensions, perhaps inadvertently.
For the second time while going through these posts: the more I think about Superintelligence and delve into its literature, the more skeptical I become that we will make it through. Everything seems so brittle.
To check I understand: you are saying lack of robustness will make it easy to modify minds a lot?
I’m saying that the kind of robustness minds and brains are famous for is not sufficient once you have a digital version of the brain, where the things you will change are of a different nature.
So current squishy minds and brains are robust, but they would not be robust once virtually implemented.
Responding to Paul’s related skepticism in my other post:
But that seems to make it easier to specify a person precisely, not harder. The differences in observations allow someone to quickly rule out alternative models by observations of people. Indeed, the way in which a human brain is physically represented makes little difference to the kind of predictions someone would make.
There are many ways of creating something inside a virtual black box that does, as seen from the outside, what my brain does here on Earth. Let’s go through a few and see where their robustness fails:
1) Scan-copy my brain and put it in there.
Failures:
a) You may choose to scan the wrong level of granularity: say, synapses instead of granule cells, neural columns instead of 3D voxels, molecular gates instead of quantum data relevant to microtubule distribution.
b) You may scan the right level of granularity (hoping there is only one!) and fail to find a translation scheme for transducers and effectors, the ways in which the box interacts with the outer world.
2) Use an indirect method similar to those Paul described, one which vastly constrains the output a human can generate (like a typing board that ignores everything else): create a model of the human based on that output, and when the distinction between the doppelganger model and the actual person falls below a certain epsilon, consider the model equivalent to the human and use it.
Failures:
a) It may turn out that a brain-like neural network/Markov network is actually the most predictive way of simulating a human, so you’d end up with a model that looks and behaves like an embedded cognition, physically distributed in space, with all the perks and perils that carries. Tamper with the virtual adrenal glands, and who knows what happens.
b) It may also be that a model vastly distinct from the way we do it would produce similar behavior. Then the virtual entity we are dealing with would be a whole realm of completely unexplored confounds, polysemic and synonymic IF-THEN gates, and chain reactions we never had the chance even to glimpse. This would make me much less comfortable with turning on that emulation than turning on a brain-based one. It seems far less predictable (despite having matched the human’s behavior up to that point) once its environment, embodiment, and inner structure change.
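The acceptance criterion in method (2) can be sketched as a toy loop. Everything here is a hypothetical illustration, not anything Paul actually proposed: the `human` stand-in, the lookup-table `DoppelgangerModel`, the probe set, and the `EPSILON` threshold are all placeholders for a real constrained output channel, a real learned predictor, and a real behavioral test battery.

```python
import random

# Acceptance threshold: maximum tolerated disagreement rate between
# the doppelganger model and the human on the probe set.
EPSILON = 0.05

def human(prompt: str) -> str:
    # Stand-in for the human's vastly constrained output channel
    # (the "typing board"): here it just echoes the last character.
    return prompt[-1]

class DoppelgangerModel:
    """Trivial lookup-table 'model' of the human. A real version would
    be a learned predictor (neural network, Markov model, etc.), which
    is exactly where failures (a) and (b) above come in."""
    def __init__(self):
        self.table = {}

    def predict(self, prompt: str) -> str:
        return self.table.get(prompt, "?")

    def update(self, prompt: str, response: str) -> None:
        self.table[prompt] = response

def disagreement_rate(model: DoppelgangerModel, probes: list) -> float:
    misses = sum(model.predict(p) != human(p) for p in probes)
    return misses / len(probes)

model = DoppelgangerModel()
probes = ["abc", "xyz", "hello", "mind", "robust"]

# Observe the human, update the model, and stop once the model's
# behavior is within epsilon of the human's on the probe set; at that
# point method (2) declares the model equivalent to the human.
while disagreement_rate(model, probes) >= EPSILON:
    prompt = random.choice(probes)
    model.update(prompt, human(prompt))
```

Note that the loop only certifies behavioral agreement on the probes seen so far; it says nothing about what the model does once its environment or inner structure is tweaked, which is the worry in (b).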
It is worth keeping in mind that we are comparing the robustness of these minds to tweaks available in the virtual world against the robustness of the alternatives, one of which is motivational scaffolding and concept teaching.
We should consider whether teaching language, reference, and moral systems is not easier than simulating a mind without distorting its morals.
You’d have to run a treatise on morality through quite a few Google Translate round-trips to turn it into a treatise on immorality, but (to exapt an old Yudkowskian example) you only have to give Gandhi one or two pills, or a tumor smaller than his toe, to completely change his moral stance on pacifism.