[paper draft] Coalescing minds: brain uploading-related group mind scenarios
http://www.xuenay.net/Papers/CoalescingMinds.pdf
Abstract: We present a hypothetical process of mind coalescence, where artificial connections are created between two brains. This might simply allow for an improved form of communication. At the other extreme, it might merge two minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as parts of the biological brain integrate with each other. An exocortex may also prove to be the easiest route for mind uploading, as a person’s personality gradually moves away from the aging biological brain and onto the exocortex. Memories might also be copied and shared even without minds being permanently merged. Over time, the borders of personal identity may become loose or even unnecessary.
Like my other draft, this is for the special issue on mind uploading in the International Journal of Machine Consciousness. The deadline is Oct 1st, so any comments will have to be quick for me to take them into account.
This one is co-authored with Harri Valpola.
EDIT: Improved paper on the basis of feedback; see this comment for the changelog.
I think the neuroscience is greatly simplified in this paper. Instead, all the problems you list seem to assume that there are still “two people” in there, as if the worst that could happen is some sort of breakdown in negotiations. You don’t seem to deal convincingly with the neurological hurdles, from damage to existing neural structures caused by inevitable errors in assigning “analogous” neurons to the psychological (not like “I feel sad,” but like “schizophrenia”) aftereffects of having your brain rewired and then cut up again.
This should possibly be made more explicit, but those things are supposed to be covered by the “general integration difficulties” part in the problem section. I’ll expand on it.
The paper isn’t really attempting to be a conclusive knock-down argument that demonstrates for certain how mind coalescence could be achieved. It’s mostly just introducing the idea and establishing that this should be possible in principle. The actual gritty details of the implementation are for later work to deal with.
The abstract sounds insane because it describes these ideas and techniques as if they were established fact. The wording should at least be changed to adequately reflect the hypothetical nature of the discussion.
Good point, I’ll edit it. Thanks.
What issues would need to be negotiated in advance before consenting to such a thing?
How would you firewall off the merge, but still benefit from it?
What would be appropriate termination conditions for a merge, considering that separating a long-standing merge may leave all entities involved changed in discontinuous ways?
Interesting idea.
It reminds me a bit of how vague the contract for marriage is, and how you don’t know what you get till you’re in it, and even then it changes all the time.
The article feels overoptimistic.
What we can do at present is create a few static motor control connections. If we need to create more, we would have to deal with routing problems: how to ensure that connections don't compete for the same region of 3D space. There is probably something about this in the references, but no reference appears at the place where it is mentioned. Or maybe we don't yet know whether this can be solved. Also, is there anything to cite about sensory connections at this point in your article?
It seems that some medical operations could temporarily freeze inter-hemisphere communication. It is quite obvious that re-merging the same brain should be simpler than coalescence. Has anyone found human volunteers to describe what that felt like?
Also, the inter-hemisphere link could have particular connection patterns that would be hard to reproduce; without them, maybe many more connections would be needed?
You say that the cortex can shift functions around. But that probably involves changing some neuron connections, doesn't it? Can exocortex connections be mutable? Do we expect the neurons directly at the ends of the connections to reroute them? Do we have any grounds to expect or not expect locality problems?
And if the brain could be quickly trained to overcome all that, are there any grounds to expect anything short of a complete merge, proceeding at an uncontrolled rate, once some threshold of learning to use the connection is passed?
A technical note: the "Paths to Coalescence" chapter seems to discuss only one path, exocortices. Why use the plural in that case?
ETA: I do not know the context and the likely effect of the paper on the people in the context. I just tried to explain why this paper feels optimistic to me by listing the unanswered questions that I had during skimming it (and then rereading part 3 to find out whether I missed the answers).
Would sensory prosthetics be the kind of thing you’re looking for? I can add some cites about those.
I don’t think that this has been done.
I’m unsure about the answers to these, though they seem to me like problems that could be eventually overcome. I’ll see if my coauthor has a comment—the neuroscience part was his domain of expertise, not mine.
Are you asking about a (natural brain)-(exocortex) merge, or a (natural brain & exocortex)-(another natural brain & exocortex) merge?
If there were enough connections between the exocortex and the natural brain, then yes, given enough time the exocortex would probably merge with the natural brain so as to become pretty completely a part of it. An uncontrollable rate seems to me unlikely—it takes a long time for a child’s brain to develop to a mature state, and a large part of that development presumably involves various parts of the brain integrating with each other better—but again, the neuroscience side is not my domain of expertise, so I might be mistaken about that.
As for one exocortex-equipped brain integrating with another, the extent to which that can happen obviously depends on the amount of connections / bandwidth available, and on whether the connections are maintained constantly or only for short periods.
Good point. It was supposed to note that coalescence can be achieved by first “traditionally” uploading a mind and then directly creating connections between the emulated brains, or by an exocortex route. I’ll clarify that or change the title.
Yes, they complement this nicely.
Well, destructive uploading as a whole seems to be a problem that could be eventually overcome, so with this approach you could discuss the upload-first scenario before the neurological one.
About the dynamics of a distinct-brain merge via exocortices—maybe you could mention in the article what existing knowledge and what possible experiments could help answer these questions.
I have uploaded a new version of the paper. Changelog:
Edited abstract to emphasize the hypothetical nature of the paper a bit more. New version:
“We present a hypothetical process of mind coalescence, where artificial connections are created between two brains. This might simply allow for an improved form of communication. At the other extreme, it might merge two minds into one in a process that can be thought of as a reverse split-brain operation. We propose that one way mind coalescence might happen is via an exocortex, a prosthetic extension of the biological brain which integrates with the brain as seamlessly as parts of the biological brain integrate with each other. An exocortex may also prove to be the easiest route for mind uploading, as a person’s personality gradually moves away from the aging biological brain and onto the exocortex. Memories might also be copied and shared even without minds being permanently merged. Over time, the borders of personal identity may become loose or even unnecessary.”
Added a paragraph to the introduction section:
“The purpose of this paper is to introduce the concept of mind coalescence as a plausible future development, and to study some of its possible consequences. We also discuss the exocortex brain prosthetic as a viable uploading path. While similar concepts have previously been presented in science fiction, there seems to have been little serious discussion about whether or not they are real possibilities. We seek to establish mind coalescence and exocortexes as possible in principle, but we acknowledge that our argument glosses over many implementation details and empirical questions which need to be solved by experimental work. Attempting to address every possible problem and challenge would drown the reader in neuroscientific details, which we don’t believe to be a productive approach at this stage. Regardless, we believe that compared to some uploading proposals, such as using nanoprobes for correlational mapping of neuronal activity [Strout, 2007] or replacing neurons one by one [Moravec, 1988], our proposal, while still speculative, seems much more feasible to achieve in the near future.”
Clarified section 3. It now begins with the statement “Coalescence requires some technological means of connecting minds together. We consider three options: direct-brain-to-brain connections, an exocortex-mediated connection, and an option based on a destructive upload”.
Correspondingly, “direct-brain-to-brain connections” is now section 3.1. Made several minor clarifying changes.
Added mentions of sensory prostheses to section 3.1.:
“The technology exists today for creating hundreds of connections: e.g. Hochberg et al. [2006] used a 96-microelectrode array which allowed a human to control devices and a robotic hand by thought alone. Cochlear implants generally stimulate the auditory nerve with 16-22 electrodes, and allow many recipients to understand speech in everyday environments without needing visual cues [Peterson et al. 2010]. Various visual neuroprostheses are currently under development. Optic nerve stimulation has allowed subjects to recognize simple patterns and localize and discriminate objects. Retinal implants provide better results, but rely on existing residual cells in the retina. [Ong & da Cruz, 2011] Some cortical prostheses have also been recently implanted in subjects. [Normann et al. 2009] We are still likely to be below the threshold of coalescing minds by several orders of magnitude. Nevertheless, the question is merely one of scaling up and improving current techniques.”
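The "several orders of magnitude" claim can be made concrete with a back-of-envelope calculation. Taking the corpus callosum's roughly 2×10⁸ axons as a rough target for brain-to-brain bandwidth (a common neuroanatomy estimate, not a figure from the paper), and comparing it to the channel counts quoted above:

```python
import math

# Hedged back-of-envelope sketch: how far are current neural interfaces
# from corpus-callosum-scale connectivity? Channel counts are from the
# quoted passage; the axon count is a rough literature estimate.
CORPUS_CALLOSUM_AXONS = 2e8

interfaces = {
    "96-microelectrode array (Hochberg et al. 2006)": 96,
    "typical cochlear implant (upper bound)": 22,
}

for name, channels in interfaces.items():
    gap = math.log10(CORPUS_CALLOSUM_AXONS / channels)
    print(f"{name}: {channels} channels, "
          f"~{gap:.1f} orders of magnitude below the corpus callosum")
```

On these assumptions the gap comes out to roughly six to seven orders of magnitude, which is consistent with the paper's "several orders of magnitude" wording.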
Also edited section 3 to more clearly separate uploading-via-an-exocortex and mind-coalescence-via-an-exocortex. Among other things, added the following paragraph:
“Strictly speaking, an exocortex can act as merely an intermediate component that allows for mind coalescence, without necessarily leading to mind uploading. This might happen if the exocortex has insufficient capacity to house all the important parts of its user’s mind, or if it fails to take over subcortical functions necessary for the brain to operate. On the other hand, if a large part of a person’s brain functions have moved to the exocortex, he could be considered a partial upload even while many brain functions persist in the biological brain.”
Somewhat edited section 3 to clarify that when we talk about the biological brain not being capable of supporting two separate attentional/thought processes in the same medium, we refer to conscious attentional thought processes.
Added a brief section 3.3., mind coalescence via full uploading:
“The third possible way to achieve mind coalescence is to first fully upload a human brain to a digital substrate somehow. Once this has been accomplished, connecting two or more brains to each other becomes straightforward. If the brains of Albert and Bob are both emulated in the same computer, then adding a connection between a neuron in Albert’s brain and a neuron in Bob’s brain might not differ in any essential way from adding a connection between two neurons in Albert’s brain. Full uploading can then be used to either implement a direct brain-to-brain connection, or to create a software exocortex to mediate the connection. However, we suspect that the technology for a physical exocortex will become available before the technology for full uploading will. As surveyed above, early brain prostheses in the form of cochlear implants and visual cortical implants already exist. Hippocampal brain prostheses have also been successfully tested in rats [Berger et al., 2011].
Most of the approaches to full uploading that are currently considered viable also involve destructive uploading, i.e. cutting the original brain into thin slices and scanning them [Sandberg & Bostrom, 2008], which many people may feel uncomfortable with.”
In the “Barriers to Coalescence” section, moved “general integration difficulties” to be the first issue, and expanded on it:
“This is a catch-all category for various technical problems that might crop up. Human brains did not evolve for the purpose of being easily merged, and the process may prove harder than anticipated. Errors and mistakes may prove hazardous to the subjects, and it is currently unknown what kind of a merging process is needed to ensure that the resulting mind will remain sane and functional. As noted in the introduction, we are intentionally glossing over most of the implementation details, and much empirical work will be required before mind coalescence becomes a viable option.”
In the “lack of mutual trust and memetic hazards” section, changed “In considering whether to tell someone how to build a nuclear bomb,” → “In considering whether to reveal someone state secrets”.
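The point in section 3.3, that in a shared emulation a cross-brain connection reduces to the same operation as a within-brain connection, can be illustrated with a toy sketch. All class and variable names here are hypothetical; a real emulation would be vastly more complex:

```python
# Toy illustration (hypothetical, not a real emulator): once both brains
# live in the same substrate, creating a synapse between Albert and Bob
# is exactly the same operation as creating one inside Albert's brain.
class Neuron:
    def __init__(self, label):
        self.label = label
        self.synapses = []  # outgoing (target neuron, weight) pairs

    def connect(self, target, weight=1.0):
        self.synapses.append((target, weight))

albert = [Neuron(f"albert-{i}") for i in range(3)]
bob = [Neuron(f"bob-{i}") for i in range(3)]

albert[0].connect(albert[1])  # within-brain connection
albert[0].connect(bob[0])     # cross-brain connection: same call, same cost

print([target.label for target, _ in albert[0].synapses])
# → ['albert-1', 'bob-0']
```

The substrate makes no distinction between the two kinds of edge; "whose brain" a neuron belongs to is bookkeeping on top of a single graph.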
I’m curious as to why this is only at 3 upvotes. Do people feel that the content is too obvious? Irrelevant? Badly argued? (The mere 9 upvotes of my previous paper draft also felt a bit puzzling—after all the commentary about “LWers should publish in peer-reviewed journals”, I would have expected more.)
Pardon me. I confess I just hadn’t got around to reading your paper yet. Because there is some element of reading papers that feels like work. And lesswrong often tends to occupy the ‘procrastination’ element of my schedule. I am likely to only read a large article or paper if brief comments in reply to the article catch my attention and I am prompted to catch up on the context to see if I agree.
I’ll now remember to just upvote paper drafts on site because the act of writing papers is praiseworthy. (If it turns out the paper is terrible I can go back and adjust the vote.)
Yes, obvious. Which isn’t to say too obvious. It is a paper that needs to be written and bravo for doing it. But it isn’t something that is going to blow my mind and I can sort of take for granted that you mentioned the relevant details and think “paper written about exocortexes, check”.
It is possible that the single-attention-grabbing-thought limitation can be overcome by learning (maybe such learning could be helped by a temporary split-brain?). Some people claim to be able to partially overcome this limitation; it can be explained away as very fast switching, but since every thought has some background afterglow, very fast switching and parallel thinking can be hard to distinguish.
I wonder what would happen at the end of re-merging a temporary split: merging two minds might pass through a stage of a mostly-single mind with two attention processes.
Maybe (probably) I misunderstood the purpose of the paper, but I found everything disappointingly obvious.
Interesting. After reading a bunch of papers that more or less presumed uploads remaining separate individuals (e.g. Robin Hanson’s If Uploads Come First, Carl Shulman’s Whole Brain Emulation and the Evolution of Superorganisms), as well as a bunch of fiction also presuming it (Greg Egan’s stuff, Eclipse Phase, etc.), the notion of mind coalescence being a more likely long-term (and possibly even short-term) outcome was somewhat of a viewquake for me.
I always figured it was a deliberate break from reality for repeatability and/or because it instantly leads to superhuman intelligence making predictions meaningless, and that it wasn’t pointed out as unrealistic because it’s so obviously plot magic that it didn’t need to be.
This kind of thing makes me wish even harder I could write and tell stories, although I’m starting to think it might be meaningless, as it might be the very same alienness that causes both having something to say and being unable to say it. Like being able to think of myself as “alien” in the sense I’m intending here, which probably is not a concept that exists in other human minds for exactly that reason, and thus can’t be summoned with any word-handle.
Heh—I had a bit of the opposite thing: while I had consumed sci-fi with group minds before, I had discounted it because it was obviously plot magic sci-fi and not serious speculation.
I think the main difference is that previous to talking with Harri, I presumed that brains were so variable that no common mental language could easily be found and you’d need a superintelligence to wire human brains together. I thought that yes, there might exist some way of creating group minds, but by that point we’d have been on the other side of a singularity event horizon for a long time. It was the notion of this possibly being the easiest and most feasible route to a singularity that surprised me.
After reading the first paragraph, I concluded that either it was long before you encountered LW, the karma system is completely broken, or I’m irrecoverably wrong about everything.
Then I read the next one which provided the much more likely hypothesis that you encountered a horrible portrayal of the idea which biased you against it.
I have updated in the direction of the paper not being obvious to almost anyone except me, and me having rare and powerful intuitions about this kind of thing that could be very very useful in a lot of ways if utilized properly.
By the way, if not for the logistics of skull size, brain surgery being hard in general, and the almost comically enormous ethical problems, I’d give a fair chance that we could do something similar to a mind meld today, given a pair of identical twins and stem cells. Maybe 20% that it’s possible at all and 2% that any given attempt succeeds.
Ok, not really. That was the confidence-5-seconds-after-thinking-of-it value. Calibrating it from a “confidence” to an actual probability and updating on meta stuff puts it at something significantly less than that which I can’t be bothered to calculate.
http://gizmodo.com/5682758/the-fascinating-story-of-the-twins-who-share-brains-thoughts-and-senses
Wow, thanks! That’s AMAZING, it’d be really fun to learn some more about those.
Also, due to this I’ve updated a LOT towards trusting that class of intuitions more, including all the previous Absurd predictions of it. The world is a LOT more interesting place to be now!
Also related to trusting that intuition more, do you know how to get cheap and safe electrodes? >:D