My plan will have approximately the same effect as connecting many people's temporal lobes (where audio first enters the brain) to other people's temporal lobes, in the same way brain regions normally wire to or disconnect from one another, forming a bigger mind. The massively multiplayer audio game is the means of making those connections.
It's like tunneling a network program through SSL, but much more complex, because it tunnels statistical thoughts through mouse movements, audio software, ears, brain, and mouse movements again (looping), then over the internet through another person's audio software, ears, brain, and mouse movements (looping again), and back along the same path to the first person and many other people.
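The loop above can be sketched as a chain of lossy encode/decode steps that a signal passes through repeatedly. This is a minimal toy model only; every function name and transformation here is an illustrative assumption, not part of any existing system.

```python
# Hypothetical sketch of the "tunneling" loop: a signal is repeatedly
# re-encoded through lossy stages (mouse movements -> audio -> perception
# -> mouse movements), analogous to tunneling data through nested
# protocols. All names and transformations are toy assumptions.

def mouse_to_audio(signal):
    # Encode mouse-movement values as audio amplitudes (toy model: scale down).
    return [s * 0.5 for s in signal]

def audio_to_perception(signal):
    # Lossy perception step: quantize what the listener "hears".
    return [round(s, 1) for s in signal]

def perception_to_mouse(signal):
    # The listener responds with new mouse movements (toy model: scale back up).
    return [s * 2.0 for s in signal]

def tunnel_once(signal):
    """One pass through a single player's encode/decode loop."""
    return perception_to_mouse(audio_to_perception(mouse_to_audio(signal)))

def tunnel(signal, players=3):
    """Pass the signal through several players in sequence."""
    for _ in range(players):
        signal = tunnel_once(signal)
    return signal

print(tunnel([0.42, 0.13], players=3))
```

The point of the sketch is that each hop is lossy (the quantization step), so what survives many hops is only the statistical shape of the signal, not its exact values.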
If we all shared neural connections, the result would be closer to CEV than any one person's or group's volition. Since it's an overall increase in the coherence of volition on Earth, it is purely a move toward CEV and away from UnFriendly AI.
It is better to increase the coherence of volition of a-bunch-of-guys-on-the-internet than not to. Or do you want everyone to keep disagreeing with each other in approximately equal amounts until most of those disagreements can be solved all at once by the normal kind of CEV?
Yes, there is a strong collective mind made of communication through words, but it's a very self-deceptive mind. It tries to redefine common words, and thereby ideas that other parts of the mind did not intend to redefine, and those parts of the mind later find their memory has been corrupted. That's why people come to expect to pay money when they agree to get something “free”. Intuition is much more honest: it's based on floating-point values at the subconscious level instead of symbols at the conscious level. By tunneling between the temporal lobes of people's brains, Human AI Net will bypass the conscious level and reach the core of the problems that lead to conscious disagreements. Words are a corrupted interface, so any AI built on them will inherit their errors.
To the LessWrong and Singularity communities, I offered an invitation to influence the singularity by designing details of this plan. Downvoting an invitation will not cancel the event, but if you can convince me that my plan may result in UnFriendly AI, then I will cancel it. Since I have considered many possibilities, I do not expect such a reason exists. Would your time be better spent calculating the last digit of friendliness probability for all of mind space, or fixing any problems you see in a singularity plan that is already in progress and will finish before yours?