Interesting. After reading a bunch of papers that more or less presumed uploads remaining separate individuals (e.g. Robin Hanson’s If Uploads Come First, Carl Shulman’s Whole Brain Emulation and the Evolution of Superorganisms), as well as a bunch of fiction also presuming it (Greg Egan’s stuff, Eclipse Phase, etc.), the notion of mind coalescence being a more likely long-term (and possibly even short-term) outcome was somewhat of a viewquake for me.
I always figured it was a deliberate break from reality for relatability, and/or because it instantly leads to superhuman intelligence making predictions meaningless, and that it wasn’t pointed out as unrealistic because it’s so obviously plot magic that it didn’t need to be.
This kind of thing makes me wish even harder that I could write and tell stories, although I’m starting to think it might be pointless, since it might be the very same alienness that causes both having something to say and being unable to say it. Like being able to think of myself as “alien” in the sense I’m intending there, which is probably not a concept that exists in other human minds for exactly that reason, and thus can’t be summoned with any word-handle.
Heh—I had a bit of the opposite thing: while I had consumed sci-fi with group minds before, I had discounted it because it was obviously plot magic sci-fi and not serious speculation.
I think the main difference is that, prior to talking with Harri, I presumed that brains were so variable that no common mental language could easily be found, and that you’d need a superintelligence to wire human brains together. I thought that yes, there might exist some way of creating group minds, but by that point we’d have been on the other side of a singularity event horizon for a long time. It was the notion of this possibly being the easiest and most feasible route to a singularity that surprised me.
After reading the first paragraph, I concluded that either it was long before you encountered LW, the karma system is completely broken, or I’m irrecoverably wrong about everything.
Then I read the next one, which provided the much more likely hypothesis: that you encountered a horrible portrayal of the idea, which biased you against it.
I have updated in the direction of the paper not being obvious to almost anyone except me, and me having rare and powerful intuitions about this kind of thing that could be very very useful in a lot of ways if utilized properly.
By the way, if not for the logistics of skull size, brain surgery being hard in general, and the almost comically enormous ethical problems, I’d give a fair chance that we could do something similar to a mind meld today, given a pair of identical twins and stem cells. Maybe 20% that it’s possible at all and 2% that any given attempt succeeds.
Ok, not really. That was the confidence-5-seconds-after-thinking-of-it value. Calibrating it from a “confidence” to an actual probability, and updating on meta stuff, puts it at something significantly less than that, which I can’t be bothered to calculate.
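Since those numbers are doing real probabilistic work, here is a minimal sketch (mine, not anything stated in the thread) of how the two raw guesses might compose and then get shrunk by a calibration step; the 0.25 discount factor is purely an assumed illustrative value, not something the commenter gave.

```python
# Minimal sketch of composing the quoted snap-judgment probabilities and
# applying a hypothetical calibration discount. Illustrative only.

p_possible = 0.20          # raw guess: a twin "mind meld" is possible at all
p_attempt_success = 0.02   # raw guess: any single attempt succeeds

# Conditional success rate implied by the two raw guesses (~10%).
p_success_given_possible = p_attempt_success / p_possible

# Assumed calibration discount: snap "confidences" tend to be overconfident,
# so shrink the raw value. The 0.25 factor is purely illustrative.
calibration_discount = 0.25
p_possible_calibrated = p_possible * calibration_discount
p_attempt_calibrated = p_possible_calibrated * p_success_given_possible

print(f"P(possible, calibrated)     = {p_possible_calibrated:.2f}")   # 0.05
print(f"P(one attempt succeeds)     = {p_attempt_calibrated:.3f}")    # 0.005
```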
http://gizmodo.com/5682758/the-fascinating-story-of-the-twins-who-share-brains-thoughts-and-senses
Wow, thanks! That’s AMAZING; it’d be really fun to learn some more about those.
Also, due to this I’ve updated a LOT towards trusting that class of intuitions more, including all the previously Absurd-seeming predictions it has made. The world is a LOT more interesting a place to be now!
Also related to trusting that intuition more, do you know how to get cheap and safe electrodes? >:D