1.
What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?
By that I mean things like: Do you have a reading schedule (x hours daily, etc.)? Do you follow the news, or do you try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (e.g., reading certain magazines or books, watching films, etc.) to focus on what is more important?
2.
Your “Bookshelf” page is 10 years old (and carries a warning that it is obsolete):
http://yudkowsky.net/obsolete/bookshelf.html
Could you tell us about some of the books and papers that you’ve been reading lately? I’m particularly interested in books that you’ve read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).
3.
What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?
4.
Could you please tell us a little about your brain? For example: what is your IQ, at what age did you learn calculus, do you use cognition-enhancing drugs or brain-fitness programs, are you neurotypical, and why didn’t you attend school?
5.
During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:
I have to remind myself that it’s not what’s the most fun to do, it’s not even what you have talent to do, it’s what you need to do that you ought to be doing.
Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.
I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.
How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren’t we allowed to have similar fun today? For a living, even?
6.
I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I’d love to know as much personal detail as you are comfortable sharing.)
7.
What’s your advice for Less Wrong readers who want to help save the human race?
8.
Autodidacticism
Eliezer, first, congratulations on having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you; did you have any tutor or mentor? Or did you just read and learn what was interesting and keep going for more, one field of knowledge opening pathways to the next one, etc.?
EDIT: Of course I would be interested in the details, like which books you read when and what further interests they sparked, etc… Tell us a little story. ;)
9.
Is your pursuit of a theory of FAI similar to, say, Hutter’s AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend to arrive at actual blueprints for constructing such systems? I’m still not 100% certain of your goals at SIAI.
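(For readers who haven’t seen it, the following is a rough sketch of the AIXI definition, included only to make the “intractable in practice” point concrete. The notation is my recollection of Hutter’s formulation and should be checked against his original papers.)

```latex
% AIXI (sketch): at cycle k, given interaction history a_1 o_1 r_1 ... a_{k-1} o_{k-1} r_{k-1},
% the agent picks the action that maximizes expected total reward up to horizon m,
% where the expectation ranges over every program q for a universal monotone
% Turing machine U that reproduces the observed history, weighted by 2^{-length(q)}.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over all programs is what makes AIXI uncomputable; real systems can at best approximate it, which is why it works as an intuition pump rather than a blueprint.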
10.
What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?
11.
If you were to disappear (freak meteorite accident), what would the impact on FAI research be?
Do you know other people who could continue your research, or who are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?
12.
Your approach to AI seems to involve solving every issue perfectly (or very close to perfection). Do you see any future for more approximate, rough-and-ready approaches, or are these dangerous?
13.
How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?
14.
Could you (well, “you” being Eliezer in this case, rather than the OP) elaborate a bit on your “infinite set atheism”? How do you feel about the set of natural numbers? What about its power set? What about that thing’s power set, etc.?
From the other direction, why aren’t you an ultrafinitist?
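(A brief aside on the tower of power sets the question alludes to, for readers who want the reference point; this is standard Cantor material, not part of the original question.)

```latex
% Cantor's theorem: for any set S there is no surjection f : S -> P(S).
% Proof sketch: the "diagonal" set D = { x in S : x \notin f(x) } cannot equal f(d)
% for any d in S, since d \in D \iff d \notin f(d).
% Hence |S| < |P(S)|, and iterating from the naturals gives a strictly
% increasing tower of infinite cardinalities:
|\mathbb{N}| \;<\; |\mathcal{P}(\mathbb{N})| \;<\; |\mathcal{P}(\mathcal{P}(\mathbb{N}))| \;<\; \cdots
```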
15.
Why do you have a strong interest in anime, and how has it affected your thinking?
16.
What are your current techniques for balancing thinking and meta-thinking?
For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.
17.
Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and broken down by type of originator (research, military, industry, unplanned evolution from non-general AI...)?
18.
What progress have you made on FAI in the last five years and in the last year?
19.
How do you characterize the success of your attempt to create rationalists?
20.
What is the probability that this is the ultimate base layer of reality?
21.
Who was the most interesting would-be FAI solver you encountered?
22.
If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?
23.
In one of the discussions surrounding the AI-box experiments, you said that you would be unwilling to use a hypothetical fully general argument/”mind hack” to cause people to support SIAI. You’ve also repeatedly said that the friendly AI problem is a “save the world” level issue. Can you explain the first statement in more depth? It seems to me that if anything really falls into “win by any means necessary” mode, saving the world is it.
24.
What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider “conscious” in the sense of “having experiences” that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle’s back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or one’s complement vs. two’s complement? What physical details of the computations matter? Does it regard carbon differently from silicon?
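(To make the signed-vs.-unsigned point concrete: the snippet below, which is my own illustration rather than the questioner’s, shows one and the same 8-bit pattern denoting three different numbers depending on a convention that lives entirely outside the bits.)

```python
# One byte, all ones. Which number it "is" depends on an interpretive
# convention chosen by the reader of the bits, not on the bits themselves.
bits = 0b11111111

unsigned = bits                                            # 255
twos_complement = bits - 256 if bits >= 128 else bits      # -1
ones_complement = -(bits ^ 0xFF) if bits >= 128 else bits  # -0, i.e. 0

print(unsigned, twos_complement, ones_complement)          # 255 -1 0
```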
25.
I admit to being curious about various biographical matters. So for example I might ask:
What are your relations like with your parents and the rest of your family? Are you the only one to have given up religion?
26.
Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you’ve written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)
ETA: By AI I meant AGI.
27.
Do you feel lonely often? How bad (or important) is it?
(The above questions are a corollary of this one:) Do you feel that, as you improve your understanding of the world more and more, there are fewer and fewer people who understand you and with whom you can genuinely relate on a personal level?
28.
Previously, you endorsed this position:
Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it’s more likely that you’ve made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.
One counterexample has been proposed a few times: holding false beliefs about oneself in order to increase the appearance of confidence, given that it’s difficult to directly manipulate all the subtle signals that indicate confidence to others.
What do you think about this kind of self-deception?
29.
In the spirit of considering semi-abyssal plans: what happens if, say, next week you discover a genuine reduction of consciousness, and it turns out that there’s simply no way to construct the type of optimization process you want without it being conscious, even if very different from us?
That is, what if The Law turned out to have the consequence that “to create a general mind is to create a conscious mind; no way around that”? Obviously that shifts the ethics a bit, but my question is basically: if so, well… now what? What would have to be done differently, in what ways, etc.?
30.
What single technique do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?
You repeat #10 as #11; the question as cited by Eliezer is as follows:
In response to Eliezer’s response in Video #5, indicating that smart people should be working on AI and not String Theory:
I tend to agree, as those are fields that are not likely to give us any new technologies that will make the world a safer place… and
Any work that speeds the arrival of AI will also speed the solution to any problems in sciences such as String Theory, as a recursively improving intelligence will be able to aid in the discovery of solutions much more rapidly than the addition of five or ten really smart people will.
Shouldn’t we hedge our bets a little? I don’t know what the probability is that the Singularity Institute succeeds in building an FAI in time to prevent any existential disasters that would otherwise occur, but it isn’t 1. Any work done to reduce existential risk in the meantime (and in possible futures where no Friendly AI exists) seems to me worthwhile.
Am I wrong?