Even ignoring the technical problems, the fact that nobody knows how to do this, and that the risk is too big, there’s still a huge difference between the CEV of humanity and the CEV of a-bunch-of-guys-on-the-internet. You might get a “none of us is as cruel as all of us” type of Anonymous, for example.
My plan will have approximately the same effect as connecting many people’s temporal lobes (where audio first enters the brain) to other people’s temporal lobes, the same way brain regions normally wire to or disconnect from one another, forming a bigger mind. The massively multiplayer audio game is what makes those connections.
It’s like tunneling a network program through SSL, but much more complex, because it tunnels statistical thoughts through mouse movements, audio software, ears, and brain, then out through mouse movements again, over the internet, through another person’s audio software, ears, and brain, and so on, looping back along the same path to the first person and to many other people.
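The tunneling analogy above can be sketched as nested encode/decode layers, the way a payload is wrapped and unwrapped by each transport it passes through. This is only a toy illustration of the analogy; the layer names and functions are hypothetical, not part of any real protocol or of the proposed game.

```python
# Toy sketch of tunneling a payload through nested transport layers,
# analogous to the mouse/audio/ears/brain chain described above.
# Each layer wraps the payload on the way out and unwraps it on the way back.

def wrap(layer_name, payload):
    """Encode a payload inside one transport layer."""
    return f"{layer_name}({payload})"

def unwrap(layer_name, wrapped):
    """Decode a payload back out of one transport layer."""
    prefix, suffix = layer_name + "(", ")"
    assert wrapped.startswith(prefix) and wrapped.endswith(suffix)
    return wrapped[len(prefix):-len(suffix)]

# The chain from the comment: thought -> mouse -> audio software -> ears -> brain
layers = ["mouse", "audio_software", "ears", "brain"]

payload = "statistical_thought"
for layer in layers:                 # outbound: wrap through each layer in order
    payload = wrap(layer, payload)

print(payload)  # brain(ears(audio_software(mouse(statistical_thought))))

for layer in reversed(layers):       # inbound: unwrap in reverse order
    payload = unwrap(layer, payload)

print(payload)  # statistical_thought
```

As with SSL, the payload only survives the round trip if every layer’s decoding exactly inverts its encoding; the comment’s point is that ears and brains are far lossier layers than any network protocol.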
If we all shared neural connections, that would be closer to CEV than the volition of any one person or group. Since it’s an overall increase in the coherence of volition on Earth, it is purely a move toward CEV and away from UnFriendly AI.
It is better to increase the coherence of volition of a-bunch-of-guys-on-the-internet than not to. Or do you want everyone to keep disagreeing with each other by approximately equal amounts until most of those disagreements can be solved all at once by the normal kind of CEV?
Umm… so how would this be different from speech? Does the hand have that much higher bandwidth than the human voice?
I doubt any CEV would result, but I’m more inclined to think it’s safe now.
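A rough back-of-envelope on the bandwidth question above. The figures here are my own assumptions for illustration (a commonly cited estimate puts speech around tens of bits per second, and Fitts’s-law pointing throughput at a few bits per second), not measurements from this thread:

```python
# Back-of-envelope: does the hand have higher bandwidth than the voice?
# Both rates below are rough assumed figures, not measured values.
speech_bits_per_sec = 39   # ~39 bit/s: a commonly cited estimate for spoken language
mouse_bits_per_sec = 5     # Fitts's-law pointing throughput is a few bit/s

ratio = speech_bits_per_sec / mouse_bits_per_sec
print(f"speech carries roughly {ratio:.0f}x the information rate of pointing")
```

Under these assumptions the answer to the question is no: the hand is the narrower channel, which fits the later prediction that the game would do worse than voice for ordinary communication.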
Good point about speech. Many of the comments here lead me to something I see too little of on this board: much of what we talk about in AI has clear, interesting, and educational analogs in NI (Natural Intelligence). We certainly have a problem with unfriendly NIs (Hitler, Stalin, Pol Pot, etc.). Further, the structure of government in the Western world (at least) shows we do a medium-good job of determining a CEV for humanity. It also suggests that a CEV will likely always be a compromise, finding an optimum-like balance between components that are truly and actually different.
Since starting to read this site, I have come to think that humanity has a collective intelligence far beyond that of the individuals in it. The difference between one human in isolation and one chimp in isolation is probably noticeable but small. But with much higher bandwidth between individuals, humanity beats the pants off chimps (we wear pants; they do not).
Your insight about the role of speech in providing the link between brains is a good one. Results of the project proposed above should be analyzed for how they match what voice achieves and how they differ. We might learn something that way.
I predict they’ll do worse than voice for human communication, mostly due to lack of training, but might have niche advantages in situations where voice is impractical for various reasons.
Yes, there is a strong collective mind made of communication through words, but it’s a very self-deceptive mind. It tries to redefine common words, and thereby redefine ideas that other parts of the mind never intended to change, and those parts of the mind later find their memory has been corrupted. That’s why people come to expect to pay money when they agree to get something “free.” Intuition is much more honest: it is based on floating points at the subconscious level instead of symbols at the conscious level. By tunneling between the temporal lobes of people’s brains, Human AI Net will bypass the conscious level and reach the core of the problems that lead to conscious disagreements. Words are a corrupted interface, so any AI built on them will have errors.
To the LessWrong and Singularity community, I offered an invitation to influence the singularity by designing details of this plan. Downvoting an invitation will not cancel the event, but if you can convince me that my plan may result in UnFriendly AI, then I will cancel it. Since I have considered many possibilities, I do not expect such a reason exists. Would your time be better spent calculating the last digit of friendliness probability across all of mind space, or fixing any problems you see in a singularity plan that is already in progress and will finish before yours?