I would like to ask: have you turned this idea against your own most cherished beliefs?
I would be really interested to hear what you see when you “close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts” rationality and singularity the most.
If you would like to know what someone who partially disagrees with you would say:
In my opinion, the objective of being a rationalist contains the same lopsided view of technology’s capacity to transform reality that you attribute to God in the Jewish tradition.
According to Jewish theology, God continually sustains the universe and chooses every event in it; but ordinarily, drawing logical implications from this belief is reserved for happier occasions. By saying “God did it!” only when you’ve been blessed with a baby girl, and just-not-thinking “God did it!” for miscarriages and stillbirths and crib deaths, you can build up quite a lopsided picture of your God’s benevolent personality.
Technology cures diseases, provides a more materially comfortable lifestyle for many people, and feeds over 7 billion. By saying “rapid innovation did it” when blessed with a baby girl who would have died at birth without modern medical equipment, and just-not-thinking “rapid implementation of innovation did it” for ecocide, the proliferation of nuclear waste, the destruction of the ocean, the increase in cancer, and the ability to wipe out an entire city thousands of miles away, you can build up quite a lopsided picture of technological development’s beneficial personality.
The unquestioned rightness of rapid, continual technological innovation, which disregards any negative results as potential signs of a need for moderation, is what I see as the weakest point of your beliefs, or at least of my understanding of them.
Eliezer hasn’t argued for the unquestioned rightness of rapid, continual technological innovation. On the contrary, he’s argued that scientists should bear some responsibility for the potentially dangerous fruits of their work, rather than handwaving them away with the presumption that the developments can’t do any harm, or that if they can, it’s not their responsibility.
In fact, the primary purpose of the SIAI is to try and get a particular technological development right, because they are convinced that getting it wrong could fuck up everything worse than anything has ever been fucked up.
Well put. SIAI needs to adopt this as a mission statement! :P
Could you show me where he argues this?

I’m afraid I don’t remember in which post he discusses the idea that scientists should worry about the ethics of their work, and I’m having a difficult time finding it. If you want to find that specific post, it might be better to create an open request in a more prominent place and see if anyone else remembers which one it was.
Although it would take a much longer time, I think it might be a good idea for you to read all the sequences. Eliezer wrote them to bring people up to speed with his position on the development of AI and rationality, after all, so that if we are going to continue to have disagreements, at least they can be more meaningful and substantive disagreements, with all of us on the same page. It sounds very much to me like you’re pattern-matching Eliezer’s writing and responding to what you expect him to think, but if his position were such a short hop of inferential distance for most readers, he wouldn’t have needed to go to all the work of creating the sequences in the first place.
Yup, implementation of technological innovation has costs as well as benefits.

What kind of moderation do you have in mind?

Honestly, I would moderate society with more positive religious elements. In my opinion, modern society has preserved many dysfunctional elements of religion while abandoning the functional benefits. I can see that a community of rationalists would have a problem with this perspective, seeing that religion almost always results in an undereducated majority being enchanted by their psychological reflexes; but personally, I don’t see the existence of an irrational mass as unconditionally detrimental.
It is interesting to speculate about the potential of a majorly rational society, but I see no practical method of accomplishing this, nor any real reason to believe that such a configuration would necessarily be superior to the current model.
Swimmer or Dave, are either of you aware of a practical methodology for rationalizing the masses, or of a reason to think a more efficient society would be any less oppressive or war-driven? In fact, in a worst case scenario, I see a world of majorly rational people as transforming into an even more efficient war machine, and killing us all faster. As for the project of pursuing Friendly AI, I do not know that much about it. What is the perceived end goal of Friendly AI? Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society, or is it that humanity itself is digitized? I would be very interested to know…without being told to read an entire tome of LW essays.
Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society, or is it that humanity itself is digitized?
Pretty much the first, but with a perspective worth mentioning. Expressing human values in terms that humans can understand is pretty easy, but still difficult enough to keep philosophy departments writing paper after paper and preachers writing sermon after sermon. Expressing human values in terms that computers can understand- well, that’s tough. Really tough. And if you get it wrong, and the computers become the primary organizers and arbiters of power- well, now we’ve lost the future.
Swimmer or Dave, are either of you aware of a practical methodology for rationalizing the masses
For a sufficiently broad understanding of “practical” and “the masses” (and understanding “rationalizing” the way I think you mean it, which I would describe as educating), no. Way too many people on the planet for any of the educational techniques I know about to affect more than the smallest fraction of them without investing a huge amount of effort.
It’s worth asking what the benefits are of better educating even a small fraction of “the masses”, though.
or of a reason to think a more efficient society would be any less oppressive or war-driven
That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently.
My best guess is that collectively we value things that war turns out to be an inefficient way of achieving. I’m not confident the same is true about oppression.
In fact, in a worst case scenario, I see a world of majorly rational people as transforming into an even more efficient war machine, and killing us all faster.
Sure. But that scenario implies that wanting to kill ourselves is the goal we’re striving for, and I consider that unlikely enough to not be worth worrying about much.
What is the perceived end goal of Friendly AI? Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society
Similar, yes. A system designed to optimize the environment for the stuff humans value will, if it’s a better optimizer than humans are, get better results than humans do.
That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently.
So does rationality determine what a person or group values, or is it merely a tool to be used towards subjective values?
Sure. But that scenario implies that wanting to kill ourselves is the goal we’re striving for, and I consider that unlikely enough to not be worth worrying about much.
My scenario does not assume that all of humanity views itself as one in-group, whereas what you are saying assumes that it does. Killing ourselves and killing them are two very different things. I don’t think many groups have the goal of killing themselves, but do you not think that the eradication of competing out groups could be seen as increasing in-group survival?
Almost entirely orthogonal.
You are going to have to explain what you mean here.
So does rationality determine what a person or group values, or is it merely a tool to be used towards subjective values?
Dunno about “merely”, but yeah, the thing LW refers to by “rationality” is a tool that can be used to promote any values.
My scenario does not assume that all of humanity views itself as one in-group, whereas what you are saying assumes that it does.
I don’t think it assumes that, actually. You mentioned “a world of majorly rational people [..] killing us all faster.” I don’t see how a world of people who are better at achieving what they value results in all of us being killed faster, unless people value killing all of us.
If what I value is killing you and surviving myself, and you value the same, but we end up taking steps that result in both of us dying, it would appear we have failed to take steps that optimize for our goals. Perhaps if we were better at optimizing for our goals, we would have taken different steps.
do you not think that the eradication of competing out groups could be seen as increasing in-group survival?
Sure.
Almost entirely orthogonal.
You are going to have to explain what you mean here.
I mean that whether humanity is digitized has almost nothing to do with the perceived end goal.
Based on our earlier discussion of exactly this topic, I would say he wants to use some way of slowing down technological progress… My main argument against this is that I don’t think we have a way of slowing technological progress that a) affects all actors (it wouldn’t be a better world if only those nations not obeying international law were making technological progress), and b) has no negative ideological effects. (Has there ever been a regime that was pro-moderation-of-progress without being outright anti-progress? I don’t know, I haven’t thoroughly researched this, so maybe I’m just pattern-matching.) Also, I’m not sure how you’d set up the economic system of that society so there weren’t big incentives for people or companies to innovate and profit from it.
Of course, “no one has ever succeeded at X in the past” isn’t an unstoppable argument against X at all… But I am worried that any attempt to transform our current, no-brakes-on society into a ‘moderated’ society would be messy in the short term, and probably fail in the long term. (At our current level of technology, it’s basically possible for individuals to make progress on given problems, and that would be very hard to stop.)
I disagree with your claim that our current society has no brakes on technological innovation. It does have such brakes, and it could have more if we wanted.
But slowing down technological innovation in and of itself seems absurd. Either technological innovation has been a net harm, or a net gain, or neither. If neither, I see no reason to want to slow it down. Slowing down a net gain seems like an actively bad idea. And slowing down a net harm seems inadequate; if technological innovation is a net harm it should be stopped and reversed, not merely slowed down.
It seems more valuable to identify the differentially harmful elements of technological innovation and moderate the process to suppress those while encouraging the rest of it. I agree that that is difficult to do well and frequently has side-effects. (As it does in our currently moderated system.)
Which doesn’t mean an unmoderated system would be better. (Indeed, I’m inclined to doubt it would.)
It seems more valuable to identify the differentially harmful elements of technological innovation and moderate the process to suppress those while encouraging the rest of it. I agree that that is difficult to do well and frequently has side-effects.
I think there might be a part of my brain that, when given the problem “moderate technological progress in general”, automatically converts it to “slow down harmful technology while leaving beneficial technology alone” and then gets stuck trying to solve that. But you’re right, I can think of various elements in our society that slow down progress (regulations concerning drug testing before market release, anti-stem-cell-research lobbying groups, etc).
Sure… this is why I asked the question in the first place, of what kind of moderation.
Framing the problem as the OP does here, as an opposition between a belief in the “unquestioned rightness of [..] innovation that disregards any negative results” and some unclear alternative, seems a strategy better optimized towards the goal of creating conflict than the goal of developing new ideas.
Since I don’t particularly value conflict for its own sake, I figured I’d put my oar in the water in the direction of inviting new ideas.
I don’t think I know anyone who seriously endorses doing everything that anyone labels “technological innovation”, but I know people who consider most of our existing regulations intended to prevent some of those things to do more harm than good. Similarly, I don’t think I know anyone who seriously endorses doing none of those things (or at least, no one who retroactively endorses not having done any of those things we’ve already done), but I know people who consider our current level of regulation problematically low.
Similarly, I don’t think I know anyone who seriously endorses doing none of those things (or at least, no one who retroactively endorses not having done any of those things we’ve already done)
FWIW, I know plenty of libertarians who think regulation is unquestionably bad, and will happily insist the world would be better without regulations on technological advancement, even that one (for whatever one you’d like).

Yeah, I believe you that they exist. I’ve never met one in real life.
I don’t think we have a way of slowing technological progress that a) affects all actors (it wouldn’t be a better world if only those nations not obeying international law were making technological progress), and b) has no negative ideological effects.
By “negative ideological effects” do you mean the legitimization of some body of religious knowledge? As stated in my post to Dave, if your objective is to re-condition society to have a rational majority, I can see how religious knowledge (which is often narratively rather than logically sequenced) would be seen as having “negative ideological effects.” However, I would argue that there are functional benefits of religion, one of which is the limitation of power. Historically, technological progress was for millennia slowed down by religious and moral barriers. One of the main effects of the scientific revolution was to dissolve these barriers that impeded the production of power (see Mannheim, Ideology and Utopia). However, the current constitution of American society still contains tools of limitation, even non-religious ones. People don’t often look at it this way, but taxation is used in an incredibly moral way: governments tax highly what they want to dissuade, and provide exemptions, even subsidies, for what they want to promote. The higher tax on cigarettes is a type of morally based restriction on the expansion of the tobacco industry in our society.
Stronger than taxation is the ability to flat-out illegalize something or to stigmatize it. Compared to the status of marijuana as an illegal substance, and the stigma it carries in many communities, the limitation of the cigarette industry through taxation seems relatively minor.
Whether through social stigma, taxation, or illegalization, there are several tools at our nation’s disposal to alter the development of industries on the basis of subjective moral values, and next to none of them are aimed at limiting the information-technology industries. There is no tax on certain types of research based on a judgment of what is right or wrong. To the contrary, the vast majority of scientific research is for the development of weapons technologies. And who are the primary funders of this research? The Department of Homeland Security and the U.S. military fund somewhere around 65-80% of academic research (this statistic might be a little off).
In regard to non-academic research, one of the primary impetuses may not be militarization, but it is without doubt entrepreneurialism. Where the primary focus of a person or group is the accumulation of capital, the purpose of innovation becomes not fulfilling some need but creating needs, in service of the endless goal of cultivating more wealth. Jean Baudrillard is a very interesting sociologist whose work is built around the idea that in Western society the desires (demands) of people no longer lead to the production of a supply, but rather desires (demands) are artificially produced by capitalists to fulfill their supplies. A large part of this production is symbolic, and it ultimately distorts the motivations and actions of people to contradict the territories they live in.
Definitely barking up the wrong tree there. Chaos-worshippers/Dynamists like me are under-represented here for such a technology-loving community—note that the whole basis of FAI is that rapidly self-improving technology by default results in a Bad End. Contrast EY’s notion of AGI with Ben Goertzel’s.
I am asking for Eliezer to apply the technique described in this essay to his own belief system. I don’t see how that could be barking up the wrong tree, unless you are implying that he is somehow impervious to “spontaneously self-attack[ing] strong points with comforting replies to rehearse, then to spontaneously self-attack the weakest, most vulnerable points.”