I’m not sure either; it was a general rant against hyper-rational utilitarian thinking. My utility function can’t be described by statistics or logic; it involves purely irrational concepts such as “spirituality”, “aesthetics”, “humor”, “creativity”, “mysticism”, etc. These are the values I care about, and I see nothing in your calculations that takes them into account. So I am rejecting the entire project of LessWrong on these grounds. Have a nice day.
AlphaOmega
If your goal is to maximize human life, maybe you should start by outlawing abortion and birth control worldwide. Personally I think reducing human values to these utilitarian calculations is absurd, nihilistic and grotesque. What I want is a life worth living, people worth living with and a culture worth living in—quality, not quantity. The reason irrational things like religion, magical thinking and art will never go away, and why I find the ideology of this rationality cult rather repulsive, is that human beings are not rational robots and never will be. Trying to maximize happiness via rationality is a fool’s quest! The happiest people I know are totally irrational! If maximal rationality is your goal, you need to exterminate humanity and replace it with machines!
(Of course it may be that I am off my meds today, but I don’t think that invalidates my points.)
That’s how it strikes me also. To me Yudkowsky has most of the traits of a megalomaniacal supervillain, but I don’t hold that against him. I will give LessWrong this much credit: they still allow me to post here, unlike Anissimov who simply banned me outright from his blog.
What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons—this is ultimately a quest for power.
I’ve been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it’s OK because they’ve written papers about “CEV” and are therefore the good guys? He who can save the world can control it. I don’t trust anyone with this kind of power, and I am deeply suspicious of any small group of intelligent people that is seeking power in this way.
Am I paranoid? Absolutely. I know too much about recent human history and the horrific failures of other grandiose intellectual projects to be anything else. Call me crazy, but I firmly believe that building intelligent machines is all about power, and that everything else (i.e. most of this site) is conversation.
You raise a good point here, which relates to my question: Is Good’s “intelligence explosion” a mathematically well-defined idea, or is it just a vague hypothesis that sounds plausible? When we are talking about something as poorly defined as intelligence, it seems a bit ridiculous to jump to these “lather, rinse, repeat, FOOM, the universe will soon end” conclusions as many people seem to like to do. Is there a mathematical description of this recursive process which takes into account its own complexity, or are these just very vague and overly reductionist claims by people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?
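For what it’s worth, the only formalization I can even sketch is a toy growth model, which I am making up here purely as an illustration rather than attributing to Good or to anyone on this site: suppose a system’s intelligence I improves itself at a rate proportional to some power of its current level,

\[ \frac{dI}{dt} = c\, I^{k}, \qquad c > 0. \]

For k = 1 this gives ordinary exponential growth, for k < 1 merely polynomial growth, and only for k > 1 does the solution blow up in finite time, i.e. a genuine mathematical “singularity”. All the interesting content is hidden in the assumption about k, which is precisely the part I have never seen justified.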
Consciousness is how the algorithms of the universal simulation feel from the inside. We are a self-aware simulation program.
This is a good discussion. I see this whole issue as a power struggle, and I don’t consider the Singularity Institute to be more benevolent than anyone else just because Eliezer Yudkowsky has written a paper about “CEV” (whatever that is—I kept falling asleep when I tried to read it, and couldn’t make heads or tails of it in any case).
The megalomania of the SIAI crowd in claiming that they are the world-savers would worry me if I thought they might actually pull something off. For the sake of my peace of mind, I have formed an organization that is pursuing an AI world-domination agenda of its own. At some point we might even write a paper explaining why our approach is the only ethically defensible means to save humanity from extermination. My working hypothesis is that AGI will be similar to nuclear weapons, in that it will be the culmination of a global power struggle (which has already started). Crazy old world, isn’t it?
Well I just want to rule the world. To want to abstractly “save the world” seems rather absurd, particularly when it’s not clear that the world needs saving. I suspect that the “I want to save the world” impulse is really the “I want to rule the world” impulse in disguise, and I prefer to be up front about my motives...
Entropy may always be increasing in the universe, but I would argue that so is something else, which is not well-understood scientifically but may be called complexity, life or intelligence. Intelligence seems to be the one “force” capable of overcoming entropy, and since it’s growing exponentially I conclude that it will overwhelm entropy and produce something quite different in our region of spacetime in short order—i.e. a “singularity”. If, as I believe, we are a transitional species and a means to a universal singularity, why would I want a system which restricts changes to those which are comprehensible or related to us?
My thinking of late is that if you embrace rationality as your raison d’être, you almost inevitably conclude that human beings must be exterminated. This extermination is sometimes given a progressive spin by calling it “transhumanism” or “the Singularity”, but that doesn’t fundamentally change its nature.
To dismiss so many aspects of our humanity as “biases” is to dismiss humanity itself. The genius of irrationality is that it doesn’t get lost in these genocidal cul-de-sacs, nor in the strange loops of Gödelian undecidability that come from trying to derive a value system from first principles. Civilizations based on the irrational revelations of prophets have proven themselves to be more successful and appealing over a longer period of time than any rationalist society to date. As we speak, the vast majority of humans now being born will never adopt a rational belief system in place of religion. Rationalists are quite literally a dying breed. This leads me to conclude that the rationalist optimism of post-Enlightenment civilization was a historical accident and a brief bubble, and that we’ll be returning to our primordial state of irrationality going forward.
It’s fun to fantasize about transcending the human condition via science and technology, but I’m skeptical in the extreme that such a thing will happen—at least in a way that is not repugnant to most current value systems.