Interesting. I always assumed that raising a question was the first step toward answering it.
Only if you want an answer. There is no curiosity that does not want an answer. There are four very widespread failure modes around “raising questions”—the failure mode of paper-writers who regard unanswerable questions as a biscuit bag that never runs out of biscuits, the failure mode of the politically savvy who’d rather not offend people by disagreeing too strongly with any of them, the failure mode of the religious who don’t want their questions to arrive at the obvious answer, the failure mode of technophobes who mean to spread fear by “raising questions” that are meant more to create anxiety by their raising than by being answered, and all of these easily sum up to an accustomed bad habit of thinking where nothing ever gets answered and true curiosity is dead.
So yes, if there’s an interim solution on the table and someone says “Ah, but surely we must ask more questions” instead of “No, you idiot, can’t you see that there’s a better way” or “But it looks to me like the preponderance of evidence is actually pointing in this here other direction”, alarms do go off inside my head. There’s a failure mode of answering too prematurely, but when someone talks explicitly about the importance of raising questions—this being language that is mainly explicitly used within the failure-mode groups—alarms go off and I want to see it demonstrated that they can think in terms of definite answers and preponderances of evidence at all besides just raising questions; I want a demonstration that true curiosity, wanting an actual answer, isn’t dead inside them, and that they have the mental capacity to do what’s needed to that effect—namely, weigh evidence in the scales and arrive at a non-balanced answer, or propose alternative solutions that are supposed to be better.
I’m impressed with your blog, by the way, and generally consider you to be a more adept rationalist than the above paragraphs might imply—but when it comes to this particular matter of metaethics, I’m not quite sure that you strike me as aggressive enough that if you had twenty years to sort out the mess, I would come back twenty years later and find you with a sheet of paper with the correct answer written on it, as opposed to a paper full of questions that clearly need to be very carefully considered.
Awesome. Now your reaction here makes complete sense to me. The way I worded my original article above looks very much like I’m in either the 1st category or the 4th category.
Let me, then, be very clear:
I do not want to raise questions so that I can make a living endlessly re-examining philosophical questions without arriving at answers.
I want me, and rationalists in general, to work aggressively enough on these problems so that we have answers by the time AI+ arrives. As for the fact that I don’t have answers yet, please remember that I was a fundamentalist Christian 3 years ago, with no rationality training at all, and a horrendous science education. And I didn’t discover the urgency of these problems until about 6 months ago. I’ve had to make extremely rapid progress from that point to where I am today. If I can arrange to work on these problems full time, I think I can make valuable contributions to the project of dealing safely with Friendly AI. But if that doesn’t happen, well, I hope to at least enable others who can work on this problem full time, like yourself.
I want to solve these problems in 15 years, not 20. This will make most academic philosophers, and most people in general, snort the water they’re drinking through their nose. On the other hand, the time it takes to solve a problem expands to meet the time you’re given. For many philosophers, the time we have to answer the questions is… billions of years. For me, and people like me, it’s a few decades.
Any response to this, Eliezer?
Well, the part about you being a fundamentalist Christian three years ago is damned impressive and does a lot to convince me that you’re moving at a reasonable clip.
On the other hand, a good metaethical answer to the question “What sort of stuff is morality made out of?” is essentially a matter of resolving confusion; and people can get stuck on confusions for decades, or they can breeze past confusions in seconds. Comprehending the most confusing secrets of the universe is more like realigning your car’s wheels than like finding the Lost Ark. I’m not entirely sure what to do about the partial failure of the metaethics sequence, or what to do about the fact that it failed for you in particular. But it does sound like you’re setting out to heroically resolve confusions that, um, I kinda already resolved, and then wrote up, and then only some people got the writeup… but it doesn’t seem like the sort of thing where you spending years working on it is a good idea. 15 years to a piece of paper with the correct answer written on it is for solving really confusing problems from scratch; it doesn’t seem like a good amount of time for absorbing someone else’s solution. If you plan to do something interesting with your life requiring correct metaethics then maybe we should have a Skype videocall or even an in-person meeting at some point.
The main open moral question SIAI actually does need a concrete answer to is “How exactly does one go about construing an extrapolated volition from the giant mess that is a human mind?”, which takes good metaethics as a background assumption but is fundamentally a moral question rather than a metaethical one. On the other hand, I think we’ve basically got covered “What sort of stuff is this mysterious rightness?”
What did you think of the free will sequence as a template for doing naturalistic cognitive philosophy where the first question is always “What algorithm feels from the inside like my philosophical intuitions?”
I should add that I don’t think I will have metaethical solutions in 15 years, significantly because I’m not optimistic that I can get someone to pay my living expenses while I do 15 years of research. (Why should they? I haven’t proven my abilities.) But I think these problems are answerable, and that we are in a fantastic position to answer them if we want to do so. We know an awful lot about physics, psychology, logic, neuroscience, AI, and so on. Even experts who were active 15 years ago did not have all these advantages. More importantly, most thinkers today do not even take advantage of them.
Have you considered applying to the SIAI Visiting Fellows program? It could be worth a month or 3 of having your living expenses taken care of while you research, and could lead to something longer term.
Seconding JGWeissman — you’d probably be accepted as a Visiting Fellow in an instant, and if you turn out to be sufficiently good at the kind of research and thinking that they need to have done, maybe you could join them as a paid researcher.
15 years is much too much; if you haven’t solved metaethics after 15 years of serious effort, you probably never will. The only things that’re actually time-consuming on that scale are getting stopped with no idea how to proceed, and wrong turns into muck. I see no reason why a sufficiently clear thinker couldn’t finish a correct and detailed metaethics in a month.
I suppose if you let “sufficiently clear thinker” do enough work this is just trivial.
But it’s a sui generis problem… I’m not sure what information a timetable could be based on, other than the fact that it has been way longer than a month and no one has succeeded yet.
It is also worth keeping in mind that scientific discoveries routinely impact the concepts we use to understand the world. The computational model of the human brain was not generated as a hypothesis until after we had built computers and could see what they do, even though, in principle, that hypothesis could have been invented at nearly any point in history. So it seems plausible that the crucial insight needed for a successful metaethics will come from a scientific discovery that someone concentrating on philosophy for a month wouldn’t make.
Supposing anyone had already succeeded, how strong an expectation do you think we should have of knowing about it?
Not all that strong. It may well be out there in some obscure journal but just wasn’t interesting enough for anyone to bother replying to. Hell, multiple people may have succeeded.
But I think “success” might actually be underdetermined here. Some philosophers may have had the right insights, but I suspect that if they had communicated those insights in the formal manner necessary for Friendly AI, the insights would have felt insightful to readers and the papers would have gotten attention. Of course, I’m not even familiar with cutting-edge metaethics. There may well be something like that out there. It doesn’t help that no one here seems willing to actually read philosophy in non-blog format.
Yep:
Related question: suppose someone handed us a successful solution, would we recognize it?
Yep.
So Yudkowsky came up with a correct and detailed metaethics but failed to communicate it?
I think it’s correct, but it’s definitely not detailed; some major questions, like “how to weight and reconcile conflicting preferences”, are skipped entirely.
What do you believe to be the reasons? Did he not try, or did he fail? I’m trying to fathom what kind of person is a sufficiently clear thinker. If not even EY is a sufficiently clear thinker, then your statement that such a person could come up with a detailed metaethics in a month seems self-evident. If someone is a sufficiently clear thinker to accomplish a certain task, then they will complete it if they try. What’s the point? It sounds like you are saying that there are many smart people who could accomplish the task if they only tried. But if in fact EY is not one of them, that’s bad.
Yesterday I read In Praise of Boredom. It seems that EY also views intelligence as something proactive:

...if I ever do fully understand the algorithms of intelligence, it will destroy all remaining novelty—no matter what new situation I encounter, I’ll know I can solve it just by being intelligent...
No doubt I am a complete layman when it comes to what intelligence is. But as far as I am aware, it is a kind of goal-oriented evolutionary process equipped with a memory. It is evolutionary insofar as it still needs to stumble upon novelty. Intelligence is not a meta-solution but an efficient searchlight that helps to discover unknown unknowns. Intelligence is also a tool that can efficiently exploit previous discoveries, and combine and permute them. But claiming that you just have to be sufficiently intelligent to solve a given problem sounds like it is more than that. I don’t see that. I think that if something crucial is missing, something you don’t know is missing, you’ll have to discover it first rather than invent it by the sheer power of intelligence.
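A toy sketch, not anyone's actual account of intelligence: the picture in the comment above, a goal-directed search that remembers its discoveries and recombines them rather than serving as a meta-solution, might look roughly like this in Python. All names and parameters below are invented for the illustration.

```python
import random

def search(evaluate, mutate, combine, seed, steps=1000):
    """Toy goal-directed search: stumble on novelty, remember it, recombine it."""
    memory = [seed]          # previous discoveries
    best = seed
    for _ in range(steps):
        if len(memory) > 1 and random.random() < 0.5:
            # exploit earlier discoveries by combining and permuting them
            candidate = combine(*random.sample(memory, 2))
        else:
            # stumble toward novelty by perturbing a known solution
            candidate = mutate(random.choice(memory))
        if evaluate(candidate) > evaluate(best):
            best = candidate
            memory.append(candidate)   # the searchlight found something; remember it
    return best

# Hypothetical usage: climb toward 10, with averaging standing in for "combination".
best = search(evaluate=lambda x: -abs(x - 10),
              mutate=lambda x: x + random.gauss(0, 1),
              combine=lambda a, b: (a + b) / 2,
              seed=0.0)
```

The only point of the toy is that nothing in the loop conjures a missing piece out of thin air; whatever is crucial and absent has to be encountered by the search before it can be exploited.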
By “a sufficiently clear thinker” you mean an AI++, right? :)
Nah, an AI++ would take maybe five minutes.
A month sounds considerably overoptimistic to me. Wrong steps and backtracking are probably to be expected, and it would probably be irresponsible to commit to a solution before allowing other intelligent people (who really want to find the right answer, not carry on endless debate) to review it in detail. For a sufficiently intelligent and committed worker, I would not be surprised if they could produce a reliably correct metaethical theory within two years, perhaps one, but a month strikes me as too restrictive.
the failure mode of technophobes who mean to spread fear by “raising questions” that are meant more to create anxiety by their raising than by being answered

Of course, this one applies to scaremongers in general, not just technophobes.