@Recovering irrationalist, good points, thank you—I just wanted to save time and space by linking to relevant stuff on my blog without repeating myself over and over. My apologies for overdoing it. I guess I feel like I am talking to a wall or being deliberately ignored, given the lack of feedback. I shall curb my enthusiasm and let things take their course. You know where to find me.
@J Thomas, the trick lies in ensuring continued co-existence.
@TGGP: This forum really is not the right place to get into details. It would not be fair to Eliezer, and that I posted anything at all is an embarrassing revelation of my intellectual vanity. Mea culpa.
@Tiiba, trust me—I am quite certain that I do, but this is not the right forum—PM me if you want to continue off this blog.
I base friendliness (universally) on the mechanism of natural selection and claim in short “that is good what increases fitness”. You can find more on my blog at http://jame5.com
@Tiiba, my paper on friendly AI theory should provide an answer to your question.
@Eliezer, you are slowly changing your point of view and are on a path to rethinking old thoughts. Save yourself some time and go read the Principia Cybernetica Web. Only then will you be able to tread on new ground.
@Nick Tarleton, yes—avoiding a dystopia of non-eudaemonic agents is a challenge.
logicnazi, he is making progress ;-)
Humans certainly aren’t perfect at imagining. In fact, if you ask most people to imagine a heavy object and a much heavier object falling, they will predict that the much heavier object hits the ground first, and I can give a host of other examples of the same thing.
When you ask someone to imagine something, he is controlling his imagination, which is equivalent to conscious thought. What one can think of, however, is controlled by one’s beliefs—what is skewed in humans is their beliefs, not their imagination. Once beliefs are controlled by science, one will imagine the falling objects consistently with the scientific theory forming one’s beliefs. I would reword your sentence to: humans do not usually form their beliefs on the basis of science.
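To make the falling-object example concrete, here is a minimal sketch in Python (my own illustration, not anything from the original discussion): neglecting air resistance, the fall time depends only on the height and on g, so the mass argument is deliberately unused.

```python
import math

def fall_time(height_m, mass_kg, g=9.81):
    """Time to fall from rest over height_m, neglecting air resistance.

    Note: mass_kg is intentionally unused; in a vacuum the fall time
    depends only on height and gravitational acceleration.
    """
    return math.sqrt(2 * height_m / g)

# A heavy and a much heavier object dropped from 10 m land at the same time.
print(fall_time(10.0, mass_kg=1.0))    # ~1.43 s
print(fall_time(10.0, mass_kg=100.0))  # ~1.43 s
```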
Caledonian, sorry—do you mean that humanity needs to be superseded?
Gray Area, did you read my paper on friendly AI yet? I must be sounding like a broken record by now ;-)
I justify my statement ‘that is good what increases fitness’ with the axiomatic belief that ‘to exist is preferable over not to exist’.
The phenomena created by evolution that seem like horrors to us (parasitic wasps) must be that particular wasp’s pinnacle of joy. It is a matter of perspective. I am not saying: eat crap—millions of flies can’t be wrong! I am taking the human perspective—not that of a wasp or a fly or of some random horror-inducing entity—but I can understand other entities’ points of view and see the general principle: what increases the fitness of an entity is good for that entity. Generally put: that is good what increases fitness.
Caledonian, yes—I agree 100%. The tricky part is getting to post-humanity, i.e. avoiding a non-friendly AI. That would be a future in which we have a place, in the sense that we will have evolved further.
gutzperson, today you are gutzperson—tomorrow you will be post-gutzperson; ensuring your continued existence in that sense will lead to your eventual transcendence. Same for everyone else—just don’t extinguish that strand.
Aaron Luchko, I argue that morality can be universally defined. You can find my thoughts in my paper on friendly AI theory. Would love to hear your comments.
Somehow the links in my earlier comment got messed up.
For the link behind ‘cognitive evolution’ see: http://www.jame5.com/?p=23 For the link behind ‘make sure we will have a place’ see: http://www.jame5.com/?p=17
gutzperson: good points—it is all about increasing fitness and social control. You will find reading the following paper quite interesting: Selection of Organization at the Social level: obstacles and facilitators of metasystem transitions. Particularly chapter four: Social Control Mechanisms.
Evolution does not stop at the genetic level but continues on the cognitive level (http://www.jame5.com/?p=23), allowing for far higher complexity and speed. As a result, group selection becomes intuitively obvious, although on the cognitive level members of weaker groups of course have, in principle, the chance to change their minds aka evolve their beliefs before physical annihilation.
“If we can’t see clearly the result of a single monotone optimization criterion”
We can project where ever-increasing fitness leads, and it is up to us to make sure we will have a place in such a future.
The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not. If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.
What a religious person realizes with such a fear is that truth matters – just not in the sense one would intuitively assume.
Philosopher 1 is promoting altruism on the basis of selfishness.
Philosopher 2 is promoting selfishness on the basis of altruism.
It is a contradiction. But only in thought – not in reality. Our language is to blame for this, as it is poorly adapted to where the solution to said contradiction lies. The solution lies in the fact that both are in fact promoting an increase in group fitness:
The first rests on the fallacy that a higher-paid job contributes only to his personal fitness (aka increases only his own fitness), while in reality society as a whole benefits.
The second rests on the fallacy that his recommendations are truly altruistic, while they are actually increasing the fitness of society as a whole, himself included.
Both beliefs are thus false while still increasing fitness. That is what I call the ‘irrationalist’s edge’.
Great to see more thoughts on evolution from you Eliezer—good stuff.
Nick, truly fascinating read. Thank you. Although I had not read Bostrom’s paper before today, I am glad to find that we come to largely identical conclusions. My core claim, ‘What is good is what increases fitness’, does not mean that I argue for the replacement of humanity with non-eudaemonic fitness-maximizing agents, as Bostrom calls them.
There are two paths to maximizing an individual’s fitness:
A) Change an individual’s genetic/memetic makeup to increase its fitness in a given environment
B) Change an individual’s environment to increase its genetic/memetic fitness
In my AI friendliness theory I argue for option B), using a friendly AGI which in essence represents Bostrom’s singleton.
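To make the difference between the two paths concrete, here is a toy sketch in Python (entirely my own illustration; the bit-string ‘makeup’, the ‘environment’, the greedy hill climbing and all numbers are made-up assumptions, not part of the theory or of Bostrom’s paper): the same degree of fit can be reached either by adapting the individual to a fixed environment (path A) or by adapting the environment to a fixed individual (path B).

```python
import random

def fitness(makeup, environment):
    """Toy fitness: number of positions where makeup matches environment."""
    return sum(m == e for m, e in zip(makeup, environment))

def adapt(changeable, fixed, steps=200):
    """Greedy hill climbing: flip one bit of `changeable` at a time and keep
    the flip whenever the match with `fixed` does not get worse."""
    current = list(changeable)
    for _ in range(steps):
        i = random.randrange(len(current))
        candidate = list(current)
        candidate[i] ^= 1
        if fitness(candidate, fixed) >= fitness(current, fixed):
            current = candidate
    return current

random.seed(0)
makeup = [random.randint(0, 1) for _ in range(32)]
environment = [random.randint(0, 1) for _ in range(32)]

adapted_makeup = adapt(makeup, environment)       # path A: change the individual
adapted_environment = adapt(environment, makeup)  # path B: change the environment

print(fitness(makeup, environment))           # baseline match
print(fitness(adapted_makeup, environment))   # after path A
print(fitness(makeup, adapted_environment))   # after path B
```

Both paths end up at a comparable degree of fit in this toy model; option B) is the one I argue for above.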
Eliezer: It is pure Judeo-Christian-Islamic exceptionalism, I regret to inform you, to think that failing to believe in the Bible God signifies anything more than failing to believe in the Flying Spaghetti Monster.
This is plainly wrong—the former belief increases fitness while the latter does not. Look at religion in the light of rational choice aka game theory instead of as plainly true or false. Big difference.
Benoit: Stefan Pernar, you are right, Christianity is fitter than atheism in an evolutionary kind of way. Its members reproduce, spread, divide and conquer like cancer. That’s why they exist. But is that such a good thing? Utility-wise, cancer’s strategy is wildly suboptimal imo.
I argue that it is a good thing in the context of my AI friendliness theory. However, I do believe there is something better that could outcompete and eventually marginalize it.
Benoit: Oh and Stefan, why not have instead something like the Church of Reality, an open-source, reason-based religion, or even an atheistic, compassion-based religion like Buddhism? Instead of the often violent, divide-and-conquer-based religions such as the Abrahamic ones you mentioned. These religions are very immoral if you ask me.
I totally agree. In fact, in my writings I repeatedly mention Buddhist teachings, as their key concepts seem to arise naturally from my thoughts on AI friendliness. Before you dismiss this as new-age mumbo-jumbo, I suggest reading pages 105 and following of my book on AI friendliness.
Copied from my blog at http://www.jame5.com/
Does truth matter?
Sure it does. Science is all about The Truth (tm). But what about fitness? Yes, I mean Darwinian fitness as in ‘survival of the fittest’. According to my AI friendliness theory, that is good which increases fitness. In that sense, knowledge of truth is good if and only if said knowledge increases fitness. Is there a set of false beliefs—as in scientifically unjustifiable beliefs—that, if held by an individual, would increase said individual’s fitness? Yes there is: religion.
Sorry folks—there is no way to prove or disprove either the existence or the absence of a God. But that is why it is called a religious belief and not a religious truth—right? Richard Dawkins, Dan Dennett and Sam Harris have written book after book in defense of The Truth and science versus the misguided belief in religion.
The most popular counter-beliefs aimed at ridiculing religion are the Pastafarian belief in a Flying Spaghetti Monster and the more sophisticated Celestial Teapot of Bertrand Russell.
It stands to reason, however, that belief in an undetectable monster or a celestial teapot does not add to an individual’s fitness, while belief in Christianity, Islam or the Jewish faith does. Religions increase an individual’s fitness by allowing for the development of groups larger than what can be kept evolutionarily stable by sheer face-to-face monitoring: they create internalized restraints in their followers and thereby increase the likelihood of sticking to a shared moral code.
For an in depth explanation I suggest reading Selection of Organization at the Social level: obstacles and facilitators of metasystem transitions. Particularly chapter four: Social Control Mechanisms.
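As a toy illustration of that mechanism, here is a sketch of my own in Python (not code from the paper; the detection model and every parameter value are made-up assumptions): it asks when a purely self-interested agent would still choose to cooperate.

```python
def cooperates(group_size, internalized_restraint,
               coop_cost=1.0, punishment=5.0, guilt=2.0):
    """Crude model: cooperate only if defecting is expected to cost more."""
    # Face-to-face monitoring weakens as the group grows: each member can
    # realistically keep an eye on only a handful of others.
    detection_prob = min(1.0, 10.0 / group_size)
    expected_defection_cost = detection_prob * punishment
    if internalized_restraint:
        # An internalized restraint (guilt, fear of divine punishment) is
        # paid whether or not anyone is watching.
        expected_defection_cost += guilt
    return expected_defection_cost >= coop_cost

for n in (10, 50, 200, 1000):
    print(n,
          "monitoring only:", cooperates(n, internalized_restraint=False),
          "with restraint:", cooperates(n, internalized_restraint=True))
```

In this crude model cooperation collapses somewhere above a group size of about fifty when it rests on monitoring alone, but survives at any group size once the restraint is internalized, which is exactly the point of the chapter on social control mechanisms.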
The sentence ‘No Good without God’ becoming true in the sense of religion increasing fitness must burn like chili sauce on the eyelids of intelligent designers and religion-bashing Darwinists alike. Oh the sweet irony!
Other examples of false beliefs that increase an individual’s fitness include the optimism bias.
Time out! Reality check: Is the truth bad? Or are some truths good while others are evil? Far from it… The solution lies in the big picture. The truth is not that there is no God but that internalized restraints improve how well humans function as part of large groups. The truth is not that Joe Average is less likely to succeed than he thinks but that those who try may lose, while those who don’t have lost already.
These truths just happen to have manifested themselves in the course of genetic and memetic evolution in phenotypes that do not make them immediately deducible from said phenotypes. So before you argue for the abolishment of religion, please design a set of implementable internalized restraints that are at least as efficient and effective.
Or more generally put: before you argue for the truth, make sure you are not merely debunking a false belief without replacing it with something that is at least as effective and efficient at increasing an individual’s and/or a group’s fitness respectively.
Life is full of contradictions. Your boss wants you to work more, you want to spend more time with your family. On the one hand you need the salary to support your family and on the other hand you need a private life to enjoy yourself, recharge and be ready again to work some more. Do you work to live, or do you live to work? Can the question even be answered with a simple ‘yes’ or ‘no’? Assuming you do not live to work—then why do you work? And the other way around: if you do not work to live, then why do you live? That is a contradiction.
But life is not a matter of yes or no questions. Or is life a matter of yes and no questions? This is a clear yes-or-no question, and clearly a matter concerning life. Assuming life is a matter of yes and no questions, then it would not be a matter of yes or no questions and the statement ‘Life is not a matter of yes or no questions’ would be false; assuming on the other hand that life is a matter of yes or no questions, then the statement would be false as well. No matter how you approach it, the statement is always false, but you nevertheless agree with it. Another contradiction—how can this be?
The answer is of course the middle ground. You do not only work just to live, and you do not only live just to work. Being the smart person that you are, you look at your options, understand the consequences and strike a compromise. Work some so you can live some so you can work some more… A part of your salary flows back into your next salary by allowing you to recharge, and a part of your life, supported by your salary, is what lets you recharge in order to earn more salary. It is a recursive, self-referencing feedback loop—like a Moebius snail.
What I want to help you realize is how to understand this recursive, self-referencing feedback loop—let us call it the Moebius effect—so that you know what you have to do.