Continuing the argument, though: I just don’t think including the actual people on the receiving end in the debate would help determine true beliefs about the best way to solve whatever problem it is. It’d fall prey to the usual suspects like scope insensitivity, emotional pleading, and the like. Someone joins the debate and says “Your plan to wipe out malaria diverted funding away from charities researching the cure for my cute puppy’s rare illness, how could you do that?”—how do you respond to that truthfully while maintaining basic social standards of politeness?
Someone affected by the issue might bring up something that nobody else had thought of, something that the science and statistics and studies missed—but other than that, what marginal value are they adding to the discussion?
Aye!
Is that not enough for you? Especially in some discussions that keep getting repeated on LW?
I’m thinking of the very low prior odds of them coming up with anything unique.
In my experience, reading blogs by minority representatives (sensible ones) introduces you to different thought patterns.
Not very specific, huh?
Gypsies are the minority that gets the most attention in my country. A gypsy blogger who managed to leave her community once told a story: her mother visited her at home, found frozen meat in her freezer, and nearly started crying: “My daughter, how can you store meat at home when there are people who are hungry today?” (Gypsies are stereotypically bad at planning and managing their finances, to the point of self-destruction. But before reading this blog, I did not understand that this makes them virtuous in their own eyes.)
This blog was also enlightening for me.
Wouldn’t it be nice to have such people participating in LW conversations, instead of just linking to them?
Especially for people intending to program friendly AI, who need to understand other people’s needs (although I very much doubt that AI will be developed, or that MIRI will ever really start coding it. Plus I do not want it to exist. But that is just me.)
Yes. It would be nice. I am genuinely uncertain whether there’s a good way to make LW appealing to people who currently dislike it, without alienating the existing contributors who do like it.
Maybe I am naive, but how about some high-status member explicitly stating that we would be very happy if they contributed here?
Eliezer wrote the same thing about women: http://lesswrong.com/lw/ap/of_gender_and_rationality/ It was not exactly “Women, come, please”, but it was clear they would be welcome to participate. It might have helped. Or maybe the increased percentage in the census results was due to something else? How would I know...
And note that Eliezer did not forbid pick-up artistry discussions or whatever you guys hold dear.
I could try to write a post similar to the one about women, but I am a small fish in this pond.
If you want to increase your fish-size, articles and comment threads that generate lots of upvotes are a good way to do it. And since your fish-size is small already, there’s not much to lose if people don’t like it.
Please do! It would be worth a try (though I’m not totally sure what kind of post you want to write...)
The plan to write an AI that will implement the Coherent Extrapolated Volition of all of humanity doesn’t involve talking to any of the affected humans. The plan is, literally, to first build an earlier AI that will do the interacting with all those other people for them.
That link only explains the concept of CEV as one possible idea related to building FAI, and a problematic one at that. But you’re making it sound as if CEV were the only possible approach, and as if that had already been set in stone.
As far as I understood, it was still the plan as of quite recently (last coupla years). Has this changed?
AFAIK, it’s one idea that’s being considered, but I don’t think there’s currently enough confidence in any particular approach to call it The Plan. “The Plan” is more along the lines of “let’s experiment with a lot of approaches and see which ones seem the most promising”; the most recent direction that that plan has produced is a focus on general FAI math research, which may or may not eventually lead to something CEV-like.
Could you elaborate on why you think that way? It’s always interesting to hear why people think a strong AI or Friendly AI is not possible/probable, especially if they have good reasons to think that way.
I respond to your question for fairness’ sake, but my reasons are not impressive.
Most of it is probably wishful thinking, driven by my desire not to have a powerful AI around. I am scared of the idea.
The fact that people have felt AI was near for some time now, and we still do not have it.
Maybe the things which are essential for learning are the same ones that make human intelligence limited. For instance, forgetting things.
A vague feeling that biologically based intelligence is so complex that computers are no match for it.
I think that AI is inevitable, but I think that unfriendly AI is more likely than friendly AI. This is just from my experience developing software, even in my small team environment where there are fewer human egos and less tribalism/signaling to deal with. Something you hadn’t thought of always happens, and a bug gets perpetuated throughout the lifecycle of your software. With AI, who knows what implications those bugs will have.
Rationality itself has to become much more mainstream before tackling AI responsibly.
I’m a programmer, and I doubt that AI is possible. Or, rather, I doubt that artificial intelligence will ever look that way to its creators. More broadly, I’m skeptical of ‘intelligence’ in general. It doesn’t seem like a useful term.
I mean, there’s a device down at the freeway that moves an arm up if you pay the toll. So, as a system, it’s got the ability to sense the environment (limited to knowing whether the coin-verification system is satisfied with the payment) and to affect that environment (raise and lower the arm). Most folks would agree that that is not AI.
So, then, how can we get beyond that? It is a nonhuman reaction to the environment. Whatever I wrote that we called “AI” would presumably do what I program it to do (and naught else) in response to its sensory input. A futuristic war drone’s coin basket is its radar and its lever is its missiles, but there’s nothing new going on here. A chat bot’s basket is the incoming feed and its lever is its outgoing text, but it doesn’t ‘choose’ what it sends out in any sense more meaningful than the toll bot’s decision matrix.
So maybe it could rewrite its own code. But if it does so, it’ll only do so in the way that I’ve programmed it to. The paper-clip maximizer will never decide to rewrite itself as a gold-coin maximizer. The final result is just a derived product of my original code and the sensory experiences it’s received. Is that any more ‘intelligent’ than the toll taker?
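To make that concrete, here is a minimal sketch (Python, purely illustrative; the function and signal names are invented for the example) of what the toll device amounts to: one fixed rule from sensed input to action, written down in advance by its programmer.

    # Purely illustrative: the toll device as a fixed sense -> act rule.
    # The names (toll_controller, coin_accepted, "raise_arm") are invented for this sketch.

    def toll_controller(coin_accepted: bool) -> str:
        """Sense: did the coin verifier accept payment? Act: raise the arm or keep it down."""
        return "raise_arm" if coin_accepted else "keep_arm_down"

    # Every 'decision' it will ever make is already contained in that one rule:
    assert toll_controller(True) == "raise_arm"
    assert toll_controller(False) == "keep_arm_down"

On my view a drone or a chat bot is the same picture with fancier sensing and fancier actions, which is why I doubt the word ‘intelligence’ is doing any work.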
I like to bet folks that AI won’t happen within timeframe X. The problem then becomes defining AI happening. I wouldn’t want them to point to the toll robot, and presumably they’d be equally miffed if we were slaves of the MechaPope and I was pointing out that its Twenty Commandments could be predicted given a knowledge of its source code.
Thinking on it, my knee-jerk criterion is that I will admit that AI exists if the United States knowingly gives it the right to vote (obviously there’s a window where an AI is sentient but can’t vote, but given the speed of the FOOM that window will probably pass quickly), or if the Earth declares war (or the equivalent) on it. It’s a pretty hard criterion to come up with.
What would yours be? Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had done so (keeping in mind that we are imagining you as motivated not by a desire to win the bet but a desire that the bet represent the truth)?
People here have tried to define intelligence in stricter terms. See Playing Taboo with “Intelligence”. They define ‘intelligence’ as an agent’s ability to achieve goals in a wide range of environments.
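If I recall correctly, that informal definition traces back to Legg and Hutter, who also give a heavily idealized (and uncomputable) formalization: a policy pi gets scored by a complexity-weighted sum of the expected reward it achieves across all computable environments mu,

\[ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu} \]

where K(mu) is the Kolmogorov complexity of the environment and V^pi_mu is the expected total reward the policy earns in it. You can’t actually compute it, but it makes “a wide range of environments” precise.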
Your post seems to be more about free will than about intelligence as defined by Muehlhauser in the above article. Free will has been covered quite comprehensively on LessWrong, so I’m not particularly interested in debating it.
Anyway, if you define intelligence as the ability to achieve goals in a wide range of environments then it doesn’t really matter if the AI’s actions are just an extension of what it was programmed to do. Even people are just extensions of what they were “programmed to do by evolution”. Unless you believe in magical free will, one’s actions have to come from some source and in this regard people don’t differ from paper clip maximizers.
I just think there are good optimizers and then there are really good optimizers. Between these there aren’t any sudden jumps, except when the FOOM happens and possibly the jump from unFriendly to Friendly. There isn’t any sudden point when the AI becomes sentient; how well the AI resembles humans is just a question of how well it can optimize towards resembling them.
There are already some really good optimizers, like Deep Blue and other chess computers that are far better at playing chess than their makers. But you probably meant when AIs become sentient? I don’t know exactly how sentience works, but I think something akin to the Turing test that shows how well the AI can behave like humans is sufficient to show that AI is sentient, at least in one subset of sentient AIs. To reach a FOOM scenario the AI doesn’t have to be sentient, just really good at cross-domain optimization.
I’m confused. You are looking for good reasons to believe that AI is not possible, per your post two above, but from your beliefs it would seem that you either consider AI to already exist (optimizers) or be impossible (sentient).
I don’t believe sentient AIs are impossible and I’m sorry if I gave that impression. But apart from that, yes, that is a roundabout version of my belief—though I would prefer the word “AI” be taboo’d in this case. This doesn’t mean my way of thinking is set in stone, I still want to update my beliefs and seek ways to think about this differently.
If it was unclear, by “strong AI” I meant an AI that is capable of self-improving to the point of FOOM.
I would pick either some kind of programming ability, or the ability to learn a language like English (which I would bet implies the former if we’re talking about what the design can do with some tweaks).
Thinkers—including such naive, starry-eyed liberal idealists as Friedrich Hayek or Niccolo Machiavelli—have long touched on the utter indispensability of subjective, individual knowledge and its advantages over the authoritarian dictates of an ostensibly all-seeing “pure reason”. Then along comes a brave young LW user and suggests that enlightened technocrats like him should tell people what’s really important in their lives.
I’m grateful to David for pointing out this comment, it’s really a good summary of what’s wrong with the typical LW approach to policy.
(I’m a repentant ex/authoritarian myself, BTW.)
I’m having trouble wrapping my head around that. Could you give an example?