Heh—I had a bit of the opposite thing: while I had consumed sci-fi with group minds before, I had discounted it because it was obviously plot magic sci-fi and not serious speculation.
I think the main difference is that, prior to talking with Harri, I presumed that brains were so variable that no common mental language could easily be found, and that you’d need a superintelligence to wire human brains together. I thought that yes, there might exist some way of creating group minds, but that by that point we’d have been on the other side of a singularity event horizon for a long time. It was the notion of this possibly being the easiest and most feasible route to a singularity that surprised me.
After reading the first paragraph, I concluded that either this was long before you encountered LW, the karma system is completely broken, or I’m irrecoverably wrong about everything.
Then I read the next one, which provided the much more likely hypothesis: that you encountered a horrible portrayal of the idea, which biased you against it.
I have updated in the direction of the paper not being obvious to almost anyone except me, and of me having rare and powerful intuitions about this kind of thing that could be very, very useful in a lot of ways if utilized properly.
By the way, if not for the logistics of skull size, brain surgery being hard in general, and the almost comically enormous ethical problems, I’d give a fair chance that we could do something similar to a mind meld today, given a pair of identical twins and stem cells. Maybe 20% that it’s possible at all and 2% that any given attempt succeeds.
Ok, not really. That was the confidence-5-seconds-after-thinking-of-it value. Calibrating it from a “confidence” to an actual probability and updating on meta stuff puts it at something significantly lower, which I can’t be bothered to calculate.
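(Just to spell out how those two numbers fit together, here’s a minimal sketch, assuming my reading that the 20% is the chance it’s possible at all and the 2% is the unconditional chance that a given attempt succeeds; the variable names and the decomposition are mine, not anything stated above.)

```python
# Minimal sketch of how the 20% and 2% figures relate under my reading.
# Assumption: 20% = P(a twin mind meld is possible at all),
#             2%  = P(a given attempt succeeds), unconditionally.
p_possible = 0.20          # "maybe 20% it's possible at all"
p_attempt_succeeds = 0.02  # "2% that any given attempt succeeds"

# Implied success rate *if* it turns out to be possible at all:
p_success_given_possible = p_attempt_succeeds / p_possible
print(f"Implied P(success | possible) = {p_success_given_possible:.0%}")  # -> 10%
```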
http://gizmodo.com/5682758/the-fascinating-story-of-the-twins-who-share-brains-thoughts-and-senses
Wow, thanks! That’s AMAZING, it’d be really fun to learn some more about those.
Also, due to this I’ve updated a LOT towards trusting that class of intuitions more, including all the previous Absurd predictions it has made. The world is a LOT more interesting a place to be now!
Also related to trusting that intuition more, do you know how to get cheap and safe electrodes? >:D