Toby Ord, you should probably sign out and sign back in. Re: utilitarianism, if we are not, consciously or unconsciously, striving for happiness alone, and if I currently take a stand against rewiring my brain to enjoy an illusion of scientific discovery, and if I regard this as a deeply important and moral decision, then why on Earth should I listen to someone who comes along and says, “Ah, but happiness is all that is good for you, whether you believe it or not”? Why would I not simply reply “No”?
And, all:
WE ARE NOT READY TO DISCUSS GENERAL ISSUES OF FRIENDLY AI YET. It took an extra month, beyond what I had anticipated, just to get to the point where I could say in defensible detail why a simple utility function wouldn’t do the trick. We are nowhere near the point where I can answer, in defensible detail, most of these other questions. DO NOT PROPOSE SOLUTIONS BEFORE INVESTIGATING THE PROBLEM AS DEEPLY AS POSSIBLE WITHOUT PROPOSING ANY. If you do propose a solution, then attack your own answer, don’t wait for me to do it! Because any resolution you come up with to Friendly AI is nearly certain to be wrong—whether it’s a positive policy or an impossibility proof—and so you can get a lot further by attacking your own resolution than defending it. If you have to rationalize, it helps to be rationalizing the correct answer rather than the wrong answer. DON’T STOP AT THE FIRST ANSWER YOU SEE. Question your first reaction, then question the questions.
But above all, wait on having this discussion, okay?