FWIW, I’m entirely with you here, but I’m unsurprised by the responses. I’ve had variations of this conversation with people for years, and the “that would be awful!!!” reaction is by far the most common one I get from people who think seriously about it at all.
I’m not really sure what the difference is, though I have some theories.
From my perspective, my mind is already a cobbled-together collective constructed from lots of distinct and frequently opposed subunits (set A) that reside in my brain, which interact in various ways with both each other and with subunits in other brains (set B).
Moving to a mode of living where the interactions between A and B are as high-bandwidth as the interactions within A and B probably means I would stop identifying so much as set A. In the limit, that implies that the construct “I” currently refers to would stop existing in any particularly important way. All of me would instead be participating in a vast number of different constructs, including but not limited to “I”.
That seems like a win to me, but I can sort of understand why people are averse to the idea.
You risk not only losing your identity, but also your values. Merging with people who hold values very different from yours (imagine: psychopaths) is not the same as merging with people who are very similar to you, just in different bodies.
Your old values could disappear like: “meh, that’s some nonsense one of my old bodies used to believe, but I don’t care about such stupid things anymore”.
I’ll add to the above that I run this risk in the no-telepathy scenario as well; I run it every day in the real world. I am not a well-designed intelligence with fixed values; I am a human being, and my brain experiences value drift in response to various inputs. This is perhaps a bad thing, but it’s nevertheless true.
Yes, the risk is intensified as the number of interactions increases, and as the bandwidth of those interactions increases… it’s easier to preserve the values I had as a child if I live in a small town with people who mostly share my values than if I move to a large heterogeneous city, and the risk would intensify still further in a scenario like a telepathic collective.
But I choose to live in a heterogeneous city rather than a homogeneous small town. I choose to read blogs written by people who don’t share my values. Why would I not choose to live in a telepathic collective if that option were available? Is there some absolute threshold of acceptable risk of value drift I should avoid crossing?
Yes, I risk coming to identify as a mind that has different values than I currently identify with.
This doesn’t really change anything, though. Those other values exist out there already, instantiated in running brains, and they are already having whatever effects they have on the world. The only difference is that currently they are tagged as “other”, and that is enforced by the insulation between skulls. In the new world, they might get tagged as “me”.
While I appreciate the fact that for many people this distinction is incredibly important, it just doesn’t seem that important to me. To the extent that the existence of bad values in N distinct nodes of a system has bad consequences, it has the same bad consequences whether I tag one of those nodes as “me” or not.
These are very good points I wouldn’t have thought about.
I guess my preference for 10% of people holding opinion X (while the other 90% hold non-X) over a hive mind of which 10% believes X and 90% disbelieves it has two sources:
1) Overconfidence about the ability of those 10% of people to somehow outsmart the remaining 90%. For example, if we speak about rationality, I hope the rationalists are able to win.
2) An intuition that if you mix food with crap, the result is not half-food-half-crap but crap. Again, specifically for rationality, having a few rational and many insane people might be better than having everyone mostly insane, or even waist-deep in the valley of bad rationality.
But both of these objections are pretty dubious.
In other words, I believe that rationality (or any other value) can somehow benefit from being separated, from having a local power as opposed to being a tiny minority in a large place.
if you mix food with crap, the result is not half-food-half-crap but crap.
To the extent that this is true, then it follows that there are no rational humans, nor even half-rational humans, but simply irrational humans. After all, every human mind is a mixture of rational and irrational elements.
FWIW, I agree that rationality (or any other value) can benefit from being densely concentrated rather than diffuse, which seems to be what you’re getting at here.
To say that a little differently: consider a cognitive system S, comprising various cognitive agents. Let us label Sv the set of agents that are aligned with value V, and Snv the set of agents that oppose V. If I draw a graph of all the agents in S, how they interact with one another, and how strong the connections between them are, and I find that Sv has strong intra-set connections and weak inter-set connections with Snv, I expect S’s judgments and behaviors to be more aligned with V than if Sv has weak intra-set connections and strong inter-set connections with Snv.
I just don’t think it matters very much whether those connections are between-mind connections or within-mind connections. It matters enormously in the real world, because within-mind connections are much, much stronger than between-mind connections. But the whole point of telepathy is to make that less true.
And I think it matters even less where the label “Dave” gets attached within S, though in practice in the real world I tend to attach that label to a “virtual node” that represents the consensus view of the set of agents instantiated in my brain, thanks to that same within/between distinction. And again, telepathy makes that distinction less important, so where “Dave” gets attached within S is less clearly defined… and continues not to matter much.
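FWIW, here is a minimal sketch of that intuition in Python (my own toy model, not anything from this thread): a small DeGroot-style opinion-averaging system in which each agent starts out aligned with V or opposed to it, and the eventual consensus depends only on the connection weights, not on which mind any agent happens to sit in.

```python
# Toy illustration (assumptions are mine, not from the thread): a small
# "cognitive system" S of four agents that repeatedly adopt the weighted
# average of their neighbours' opinions.  Nothing below encodes which
# brain each agent sits in; only the edge weights matter to the outcome.

import numpy as np

def consensus(weights, initial, steps=200):
    """Iterate weighted opinion averaging and return the final opinions."""
    w = weights / weights.sum(axis=1, keepdims=True)  # row-normalise
    x = initial.astype(float)
    for _ in range(steps):
        x = w @ x
    return x

# Agents 0 and 1 start aligned with V (+1); agents 2 and 3 oppose it (-1).
stance = np.array([+1, +1, -1, -1])

# Case A: Sv is concentrated -- strong intra-set links, weak links to Snv.
concentrated = np.array([
    [1.0, 5.0, 0.5, 0.5],
    [5.0, 1.0, 0.5, 0.5],
    [0.5, 0.5, 1.0, 1.0],
    [0.5, 0.5, 1.0, 1.0],
])

# Case B: same agents and stances, but Sv is diffuse -- weak intra-set
# links, strong links to Snv.
diffuse = np.array([
    [1.0, 0.5, 5.0, 5.0],
    [0.5, 1.0, 5.0, 5.0],
    [5.0, 5.0, 1.0, 1.0],
    [5.0, 5.0, 1.0, 1.0],
])

print("concentrated:", consensus(concentrated, stance).mean())  # ~ +0.4
print("diffuse:     ", consensus(diffuse, stance).mean())       # ~ -0.02
# The concentrated case settles noticeably closer to +1 (V-aligned),
# even though the agents and their starting stances are identical.
```

In this toy model the eventual consensus is just the connectivity-weighted average of the starting stances, so concentrating connectivity inside Sv pulls the whole system toward V; relabeling any node as “me” or “other” changes nothing.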
Yes, it’s about concentration. I imagine that some things are multiplicative, for example traits like “learns a lot about X” and “spends a lot of time doing X” give better output if they happen to be the traits of the same person (as opposed to one person who learns a lot but does nothing, and another person who does it a lot but doesn’t understand it).
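As a toy illustration of that multiplicative point (purely hypothetical numbers of my own, not anything asserted in the thread):

```python
# Toy arithmetic for the multiplicative-traits intuition above.  Assume,
# purely for illustration, that useful output is the product of
# "knowledge about X" and "time spent doing X".

def output(knowledge: float, practice: float) -> float:
    return knowledge * practice

# Both traits concentrated in one person:
combined = output(1.0, 1.0)                    # 1.0

# The same total effort split between a pure scholar and a pure doer:
split = output(1.0, 0.0) + output(0.0, 1.0)    # 0.0

# A telepathic link that lets the doer borrow, say, 70% of the scholar's
# knowledge recovers most, but not all, of the difference:
linked = output(0.7, 1.0)                      # 0.7

print(combined, split, linked)
```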
It’s not just about agents, but about resources like memory. I don’t know how well and how fast the telepaths could use each other’s memory, or habits, or mental associations, or things like this. It seems more efficient if “caring about X” and “remembering many facts about X” are in the same person; otherwise there are communication costs.
I don’t know how well and how fast the telepaths could use each other’s memory, or habits, or mental associations, or things like this.
Sure. If I assume that telepathy does not successfully bridge the distance between minds, such that between-mind operations remain less efficient than within-mind operations, then I agree with you… in that case, a telepathic society is more like the real world, where minds are separate from one another, and the between/within-mind distinction matters more.
But (for me) the important issue is the degree of internode connectivity. Whether those nodes are in one mind or two is merely an engineering detail.
traits like “learns a lot about X” and “spends a lot of time doing X” give better output if they happen to be the traits of the same person (as opposed to one person who learns a lot but does nothing, and another person who does it a lot but doesn’t understand it).
Similarly to the above, I agree completely that they give better output if they are tightly linked than if they are loosely linked or not linked at all. I would say that whether this tight linkage occurs within one person or not doesn’t matter, though. Again, in the real world we can’t separate them, because tight linkage between two people is not possible (I can’t use your knowledge to do things), and if telepathy doesn’t help us do this then we also can’t separate them in the OP’s hypothetical.