Despite Yudkowsky’s obvious leanings, the Sequences are … first and foremost about how to not end up an idiot
My basic thesis is that even if that was not the intent, the result has been the production of idiots. Specifically, a type of idiotic madness that causes otherwise good people, self-proclaimed humanitarians, to disparage the only sort of progress which has the potential to alleviate all human suffering, forever, on accelerated timescales. And they do so for reasons that are not grounded in empirical evidence, because they were taught, through demonstration, modes of non-empirical thinking from the Sequences, and conditioned to think this was okay through social engagement on LW.
When you find yourself digging a hole, the sensible and correct thing to do is stop digging. I think we can do better, but I’m burned out on trying to reform from the inside. Or perhaps I’m no longer convinced that reform can work given the nature of the medium (social pressures of blog posts and forums work counter to the type of rationality that should be advocated for).
I don’t care about Many Worlds, FAI, Fun theory and Jeffreyssai stuff, but LW was the thing that stopped me from being a complete and utter idiot.
I don’t want to take that away. But for me, LW was not just a baptismal font for discovering rationality; it was also an effort to get people to work on humanitarian relief and existential risk reduction. I hope you don’t think me crazy for saying that LW has had a subject-matter bias in these directions. But on at least some of these accounts, the effect of LW and/or MIRI and/or Yudkowsky’s specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
We are finally coming out of a prolonged AI winter. And although funding is now available to move the state of the art in automation forward, and to accelerate the progress in life sciences and molecular manufacturing that will bring great humanitarian change, we have created a band of Luddites who fear the solution more than the problem. And in a strange twist of doublethink, they consider themselves humanitarians for fighting progress.
If you had your own forum with lots of people who share similar criticisms of LW, hey, I’d go there and leave LW. But you don’t have such a forum, so by leaving LW you just leave people like me alone. What’s the point of that? Do you really believe that leaving LW like that has more utility than trying to create an island within it?
I am myself working on various projects which I expect to have positive effects on the world. Outside of work, LW has at times occupied a significant fraction of my leisure time. For that to be justified, it must be an activity of higher utility than working more hours on my startup, making progress on my molecular nanotech and AI side projects, or enriching myself personally in other ways (family time, reading, etc.). I saw the Rationality reading group as a chance to do something which would conceivably grow that community by a measurable amount, thereby justifying the time expenditure. However, if all I am doing is bringing more people into a community that is actively working against developments in artificial intelligence that have a chance of relieving human suffering within a single generation… the Hippocratic Corpus comes to mind: “first, do no harm.”
I am not sure yet what I will fill the time with. Maybe I’ll get off my butt and start making more concrete progress on some of the nanotech and AI stuff that I have been letting slide in recent years.
I also recognize that I am making broad generalizations which do not apply to everyone. You seem to be an exception, and I wish I had engaged with you more. I will also miss TheAncientGeek’s contrarian posts, as well as those of many others who deserve credit for not following a herd mentality.
But on at least some of these accounts, the effect of LW and/or MIRI and/or Yudkowsky’s specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
If I understand correctly, you think that LW, MIRI, and other closely related people might have a net negative impact, because they distract some people from contributing to the more productive subareas and approaches of AI research and existential risk prevention, directing them to subareas which you estimate to be much less productive. For the sake of argument, let’s assume that is correct: if all the people who follow MIRI’s approach to AGI turned to the more productive subareas of AI, it would be a net benefit to the world. But you should consider the other side of the coin: don’t blogs like LessWrong, or books like Bostrom’s, actually attract some students to consider working on AI, including in the areas you consider beneficial, who would otherwise be working in areas that are unrelated to AI? Wouldn’t the number of people who have even heard of the concept of existential risk be smaller without people like Yudkowsky and Bostrom? I don’t have numbers, but since you are concerned about brain drain from other subareas of AGI and existential risk research, do you think it unlikely that the popularization work done by these people attracts enough young people to AGI and existential risk in general to compensate for the loss of a few individuals, even in the subareas of these fields that are unrelated to FAI?
We are finally coming out of a prolonged AI winter. And although funding is now available to move the state of the art in automation forward, and to accelerate the progress in life sciences and molecular manufacturing that will bring great humanitarian change, we have created a band of Luddites who fear the solution more than the problem. And in a strange twist of doublethink, they consider themselves humanitarians for fighting progress.
But do people here actually fight progress? Has anyone actually retired from (or been dissuaded from pursuing) AI research after reading Bostrom or Yudkowsky?
If I understand you correctly, you fear that concerns about AI safety, being the kind of thing that invokes strong emotions in a listener’s mind, are sooner or later bound to be picked up by populist politicians and activists, who would sow and exploit these fears in the minds of the general population in order to win elections, popularity, or prestige among their peers, thus leading to various regulations and restrictions on funding, because that is what these activists (having become popular and influential by catering to the fears of the masses) would demand?
I’m not sure how someone standing on a soapbox and yelling “AI is going to kill us all!” (Bostrom, admittedly not a quote) can be interpreted as actually helping get more people into practical AI research and development.
You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or in the same world without? But it doesn’t have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
Data point: a feeling that I ought to do something about AI risk is the only reason why I submitted an FLI grant proposal that involves some practical AI work, rather than just figuring that the field isn’t for me and doing something completely different.
I’m not sure how someone standing on a soapbox and yelling “AI is going to kill us all!” (Bostrom, admittedly not a quote) can be interpreted as actually helping get more people into practical AI research and development.
I don’t know how many copies of Bostrom’s book were sold, but it was on the New York Times best-selling science books list. Some of those books were read by high school students. Since very few people leave practical AI research for FAI research, even if only a tiny fraction of those young readers think, “This AI thing is really exciting and interesting. Instead of majoring in X (which is unrelated to AI), I should major in computer science and focus on AI”, it would probably result in a net gain for practical AI research.
You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or in the same world without? But it doesn’t have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
I argued against this statement:
specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
When people say that an action leads to a negative outcome, they usually mean that taking that action is worse than not taking it, i.e., they compare the result to zero. If you add another option, then the word “suboptimal” should be used instead. Since I argued against “negativity” and not “suboptimality”, I don’t think the existence of other options is relevant here.
Interesting, I seem to buck the herd in nearly exactly the opposite manner as you.

Meaning?

You buck the herd by saying their obsession with AI safety is preventing them from participating in the complete transformation of civilization.
I buck the herd by saying that the whole singularitarian complex is a chimera that has almost nothing to do with how reality will actually play out, and that its existence as a memeplex is explained primarily by sociological factors rather than by actual science, technology, and history.
Oh, well I mostly agree with you there. Really ending aging will have a transformative effect on society, but the invention of AI is not going to radically alter power structures in the way that singularitarians imagine.

See, I include the whole ‘imminent radical life extension’ and ‘Drexlerian molecular manufacturing’ idea sets in the singularitarian complex...

The craziest person in the world can still believe the sky is blue.

Ah, but in this case, as near as I can tell, it is actually orange.
Really ending aging will have a transformative effect on society
“The medical revolution that began with the beginning of the twentieth century had warped all human society for five hundred years. America had adjusted to Eli Whitney’s cotton gin in less than half that time. As with the gin, the effects would never quite die out. But already society was swinging back to what had once been normal. Slowly; but there was motion. In Brazil a small but growing alliance agitated for the removal of the death penalty for habitual traffic offenders. They would be opposed, but they would win.”

Larry Niven: The Gift From Earth
Well, there are some serious ramifications that are without historical precedent. For example, without menopause it may become the norm for women to wait until retirement to have kids. It may in fact be the case that couples will work for 40 years, have a 25-30 year retirement in which they raise a cohort of children, and then re-enter the workforce for a new career. Certainly, families are going to represent smaller and smaller percentages of the population as birth rates decline while people get older and older without dying. The social ramifications alone will be huge, which was more along the lines of what I was talking about.
This just seems stupid to me. Ending aging is fundamentally SLOW change. 100 or 200 or 300 years from now, as more and more people gain access to anti-aging (since it will start off very expensive), we can worry about that. But conscious AI will be a force in the world in under 50 years. And it doesn’t even have to be SUPER intelligent to cause insane amounts of social upheaval. Duplicability means that even one human-level AI can be worldwide or mass-produced in a very short time!

“Will”? You guarantee that?
Can you link to a longer analysis of yours regarding this?
I simply feel overwhelmed when people discuss AI. To me, intelligence is a deeply anthropomorphic category that includes subcategories like having a good sense of humor. Reducing it to optimization, without even sentience or conversational ability with self-consciousness… my brain throws up a stop sign at that point, and it is not even AI itself; it is already the preliminary studies of human intelligence that dehumanize and de-anthropomorphize the idea of intelligence and make it sound more like a simple, brute-force algorithm. Like Solomonoff induction, another thing my brain completely freezes over: how can you have truth and clever solutions without even really thinking, just by throwing a huge number of random ideas in and seeing what survives testing? Would it all be so quantitative? Can you reduce the wonderful qualities of the human mind to quantities?

Intelligence to what purpose?
Nobody’s saying AI will be human without humor, joy, etc. The point is that AI will be dangerous because it’ll have those aspects of intelligence that make us powerful, without those that make us nice. Like, that’s basically the point of worrying about UFAI.

But is it possible to have power without all the rest?

Certainly. Why not?
Computers already can outperform you in a wide variety of tasks. Moreover, today, with the rise of machine learning, we can train computers to do pretty high-level things, like object recognition or sentiment analysis (and they sometimes outperform humans at these tasks). Isn’t that power?
As for Solomonoff induction… What do you think your brain is doing when you are thinking? Some kind of optimized search in hypothesis space: you consider only a very, very small set of hypotheses (compared to the entire space), hopefully good enough ones. Solomonoff induction, by contrast, checks all of them, every single hypothesis, and finds the best.
Solomonoff induction is so much thinking that it is incomputable.
Since we don’t have that much raw computing power (and never will), the hypothesis search must be heavily optimized: pruning unpromising directions of search, searching in regions with a high probability of success, using prior knowledge to narrow the search. That’s what your brain is doing, and that’s what machines will do. That’s not like “simple and brute-force”, because simple and brute-force algorithms are either impractically slow or not computable at all.
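A minimal sketch of the contrast being drawn here, written in Python purely for illustration (the toy rule-learning problem, the data, and every name in it are invented, not anything from the thread or from any real system): brute-force enumeration of an entire hypothesis space versus a search that uses a simplicity prior and prunes branches the data has already ruled out.

```python
from itertools import combinations

# Toy problem (invented): the hidden rule over three binary features is
# "output 1 iff features 0 and 2 are both on". Observations:
FEATURES = range(3)
DATA = [((1, 0, 1), 1), ((1, 1, 1), 1), ((0, 1, 1), 0), ((1, 0, 0), 0)]

def predicts(hypothesis, x):
    # A hypothesis is a frozenset of feature indices; it predicts 1 iff all of them are on.
    return int(all(x[i] for i in hypothesis))

def fits(hypothesis):
    # A hypothesis "fits" if it reproduces every observation.
    return all(predicts(hypothesis, x) == y for x, y in DATA)

# Brute force: enumerate the entire hypothesis space and keep the simplest fit.
# Fine for 2^3 hypotheses; hopeless when the space is astronomically large.
all_hypotheses = [frozenset(s) for r in range(len(FEATURES) + 1)
                  for s in combinations(FEATURES, r)]
brute = min((h for h in all_hypotheses if fits(h)), key=len)

def guided_search():
    # Optimized search: grow hypotheses one feature at a time (breadth-first,
    # i.e. a simplicity prior) and prune branches the data has already ruled out.
    frontier = [frozenset()]
    while frontier:
        h = frontier.pop(0)  # simplest remaining candidate
        if fits(h):
            return h
        # Adding features can only switch predictions from 1 to 0, so a hypothesis
        # that already misses a positive example can be pruned with all its supersets.
        if not all(predicts(h, x) for x, y in DATA if y == 1):
            continue
        frontier.extend(h | {i} for i in FEATURES if i not in h)
    return None

print(brute, guided_search())  # both recover frozenset({0, 2})
```

On eight hypotheses the two approaches find the same rule; the only point of the pruned version is that it never has to look at most of the space, which is what matters when the space is astronomically large.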
Computers already can outperform you in a wide variety of tasks.
Eagles, too: they can fly and I cannot. The question is whether the currently foreseeable computerizable tasks are closer to flying or to intelligence. Which in turn depends on how lofty and how “magic” we take intelligence to be.
As for Solomonoff induction… What do you think your brain is doing when you are thinking?
Ugh, using Aristotelian logic? So it is not random hypotheses, but based on causality and logic.
Solomonoff induction is so much thinking that it is incomputable.
I think that, using your terminology, thinking is not the searching; it is the finding of logical relationships, so that not a lot of space must be searched.
That’s not like “simple and brute-force”, because simple and brute-force algorithms are either impractically slow or not computable at all.
OK, that makes sense. Perhaps we can agree that logic and causality and actual reasoning are all about narrowing the hypothesis space to be searched. That narrowing is intelligence, not the search.
I’m starting to suspect that we’re arguing about definitions. By search I mean the entire algorithm of finding the best hypothesis; both random hypothesis checking and Aristotelian logic (and any combination of these methods) fit. What do you mean?
Narrowing the hypothesis space is search. Once you have narrowed the hypothesis space to a single point, you have found the answer.
As for eagles: if we build a drone that can fly as well as an eagle can, I’d say the drone has eagle-level flying ability; if a computer can solve all the intellectual tasks that a human can solve, I’d say the computer has human-level intelligence.
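To make the “narrowing the hypothesis space is search” point above concrete, here is a toy sketch in Python (the hidden-number game, the oracle function, and the numbers are invented solely for illustration): each answer eliminates part of the hypothesis space, and once the space has been narrowed to a single point, that point is the answer.

```python
# Toy "hidden number" game: the hypothesis space is the integers 0..99,
# and the only thing the searcher can do is ask yes/no questions.
candidates = set(range(100))
SECRET = 42  # unknown to the searcher; only the oracle below sees it

def oracle(threshold):
    # Answers the question "is the secret below this threshold?"
    return SECRET < threshold

# Search = narrowing: each answer rules out part of the remaining hypothesis space.
while len(candidates) > 1:
    threshold = sorted(candidates)[len(candidates) // 2]
    if oracle(threshold):
        candidates = {c for c in candidates if c < threshold}
    else:
        candidates = {c for c in candidates if c >= threshold}

# Once the space has been narrowed to a single point, the search has found the answer.
print(candidates)  # {42}
```

Nothing here resembles real AI; it is only meant to show that “narrowing” and “searching” are the same operation viewed from two sides.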
Yes. Absolutely. When that happens inside a human being’s head, we generally call them ‘mass murderers’. Even I only cooperate with society because there is a net long term gain in doing so; if that were no longer the case, I honestly don’t know what I would do. Awesome, that’s something new to think about. Thanks.
That’s probably irrelevant, because mass murderers don’t have power without all the rest. They are likely to have sentience and conversational ability with self-consciousness, at least.
Not sure. Suspect nobody knows, but seems possible?
I think the most instructive post on this is actually Three Worlds Collide, for making a strong case for the arbitrary nature of our own “universal” values.