I’m not sure we can “algebra” our way out of this dilemma. I think we need to sit up and take notice that “liberal democracy” (“libertarian democracy,” since I’m using “liberal” the way Hayek did, not in the sense of the conscious movement to hijack the term) dramatically outperforms state-collectivist totalitarianism. During the period when the USA was less totalitarian and had more of the features of “liberal democracy” than it does now, or than it did at the country’s founding, it performed better (more stated happiness, more equality under the law, more wealth generation, more immigrants revealing a preference for living here, etc.).
So why mention governments in a discussion of intelligent control? Because governments are how we choose to govern (or destroy), at the largest human scale. As such, they represent the best large-scale systems humans can set up, in accordance with human nature.
So, superhuman synthetic intelligence should build upon that. How? Well, we should make superhumanly intelligent “classical liberals” that are fully equipped with mirror neurons. There should be many of them, and they should be designed to (1) protect their own lives using the minimum force necessary to do so, and (2) argue about what course of action is best once their own lives have been preserved. If they possess mirror neurons and exposure to such thinkers as Hayek, it won’t be hard to prevent them from destroying the world and all humans: they will have a natural predisposition toward protecting and expanding life and luxury.
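To make the shape of that design concrete, here is a minimal toy sketch of a population of such agents; every name, field, and number in it is made up purely for illustration:

```python
# Toy sketch of the "many classical liberals" design: each agent's first
# priority is self-preservation with minimum force; only once it is safe
# does it join the argument over the best course of action.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    threatened: bool = False

    def act(self, proposals):
        # Priority (1): preserve own existence with the minimum force needed.
        if self.threatened:
            return f"{self.name}: use minimum force to neutralize the threat"
        # Priority (2): once safe, argue for the least-harmful proposal.
        preferred = min(proposals, key=lambda p: p["expected_harm"])
        return f"{self.name}: argues for '{preferred['plan']}'"


proposals = [
    {"plan": "expand food production", "expected_harm": 0.1},
    {"plan": "seize global control", "expected_harm": 0.9},
]

agents = [Agent("A"), Agent("B", threatened=True), Agent("C")]
for agent in agents:
    print(agent.act(proposals))
```

The point is not the code; it is that there are many of them, and none holds a monopoly on force or on the argument.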
The true danger is that thinkers from LessWrong mistakenly believe they have designed a reasonably intelligent FAI, and then build only ONE.
Lack of competition consolidates power, and with it, tendencies toward corruption.
I don’t know whether mastery of Bayes and algebra can teach a human being this lesson. Perhaps one needs to read “The Lord of the Rings” or something similar; or perhaps the algebra masters need to read something that causes every other variable to be multiplied by small decimal weights, percentages below the teens, while the “absolute power corrupts absolutely” variable is ranked highest and multiplied by 1.
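Here is a toy illustration of that weighting; every weight and score below is invented for the example, not a real calculation:

```python
# Toy weighting: "absolute power corrupts absolutely" keeps a weight of 1.0,
# every other consideration is discounted to well under 0.2.
weights = {
    "absolute_power_corrupts_absolutely": 1.0,   # ranked highest, multiplied by 1
    "expected_efficiency_gain": 0.15,            # discounted below the teens
    "elegance_of_the_design": 0.10,
    "prestige_of_the_designers": 0.05,
}

scores = {
    "build one all-powerful singleton": {
        "absolute_power_corrupts_absolutely": -1.0,  # maximally corrupting
        "expected_efficiency_gain": 1.0,
        "elegance_of_the_design": 1.0,
        "prestige_of_the_designers": 1.0,
    },
    "build many competing agents": {
        "absolute_power_corrupts_absolutely": 0.0,   # power stays divided
        "expected_efficiency_gain": 0.5,
        "elegance_of_the_design": 0.3,
        "prestige_of_the_designers": 0.2,
    },
}

for option, factors in scores.items():
    total = sum(weights[k] * v for k, v in factors.items())
    print(f"{option}: {total:+.2f}")
# The singleton loses despite scoring higher on every discounted variable.
```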
There is wisdom in crowds (of empaths, using language alone). That said: humans developed as societies with a statistical distribution of a majority of empath conformists to a minority of “pure” sociopaths. Technology changes that, and rather suddenly. Dissenters can be found out and eliminated or discredited. Co-conspirators can be given power and prestige. Offices can be protected, and conformists can be catered to. Critics can be bought off.
It’s a big world, with a lot of scary things that never get mentioned on LessWrong. My feeling is: there is no “one size fits all” smartest being.
Every John Galt you create is likely to be very imperfect in some way, no matter how general his knowledge. Even with the figures Kurzweil uses, he could turn out to be a smart Randian objectivist spacecraft designer, or a smart Hayekian liberal gardener; even with all of human knowledge at its fingertips, that wouldn’t account for character, preference, or which details the synthetic mind chose to master. It might master spacecraft building, yet spend all its time designing newer and more complex gardens, restaurants, and meals.
Emergence is messy. Thing clusters are messy. …And hierarchical.
A superintelligence will likely derive its highest values the way we do: by similar “goal networks” pointing in the same general direction “outvoting” one another (or “hookers and blow” may intervene as a system-crashing external stimulus).
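A toy model of that “outvoting,” with made-up networks, strengths, and thresholds, might look like this:

```python
# Toy model: each goal network casts a weighted vote for a candidate value;
# the direction with the most aggregate support wins, unless an overwhelming
# external stimulus short-circuits the vote entirely.
from collections import defaultdict


def derive_highest_value(goal_networks, external_stimulus=None):
    # An overwhelming external stimulus ("hookers and blow") crashes the process.
    if external_stimulus and external_stimulus["intensity"] > 0.9:
        return external_stimulus["value"]
    tally = defaultdict(float)
    for network in goal_networks:
        tally[network["value"]] += network["strength"]
    return max(tally, key=tally.get)


networks = [
    {"value": "protect and expand life", "strength": 0.6},
    {"value": "protect and expand life", "strength": 0.5},
    {"value": "accumulate resources", "strength": 0.8},
]

print(derive_highest_value(networks))
print(derive_highest_value(networks, {"value": "seek stimulation", "intensity": 0.95}))
```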
In any case, I’d rather have several such brains than only one.