I’m finding these dialogues worthwhile for (so far) lowering my respect for “mainstream” AI researchers.
Pei Wang’s definition of intelligence is just “optimization process” in fancy clothes.
His emphasis on raising an AI with prim/proper experience makes me realize that humans can’t rely on our native architecture when thinking about AI problems. For so many people, “building a safe AI” just pattern-matches to “raising a child so he becomes a good citizen”, even though these tasks have nothing to do with each other. But the analogy is so alluring that there are those who simply can’t escape it.
This is a basic mistake. It boggles the mind to see someone who claims to be a mainstream AGI person making it.
I’ve heard expressions such as “sufficiently powerful optimization process” around LW pretty often, too, especially in the context of sidelining metaphysical questions such as “will AI be ‘conscious’?”
(nods) I try to use “superhuman optimizer” to refer to superhuman optimizers, both to sidestep irrelevant questions about consciousness and sentience, and to sidestep irrelevant questions about intelligence. It’s not always socially feasible, though. (Or at least, I can’t always fease it socially.)
Think about how ridiculous your comment must sound to them. Some of those people have been researching AI for decades and have written hundreds of papers that have been cited many thousands of times.
That you just assume that they must be stupid because they disagree with you seems incredibly arrogant. They have probably thought about everything you know long before you and dismissed it.
I have no reason to suspect that other people’s use of the absurdity heuristic should cause me to reevaluate every argument I’ve ever seen.
That a de novo AGI will be nothing like a human child in terms of how to make it safe is an antiprediction in that it would take a tremendous amount of evidence to suggest otherwise, and yet Wang just assumes this without having any evidence at all. I can only conclude that the surface analogy is the entire content of the claim.
If he were just stupid, I’d have no right to be indignant at his basic mistake. He is clearly an intelligent person.
You are not making any sense. Think about how ridiculous your comment must sound to me.
(I’m starting to hate that you’ve become a fixture here.)
I think this single statement summarizes the huge rift between the narrow, specific LW/EY view of AGI and other, more mainstream views.
For researchers who are trying to emulate or simulate brain algorithms directly, it’s self-evidently obvious that the resulting AGI will start out like a human child. If they succeed first, your ‘antiprediction’ is trivially false. And then we have researchers like Wang or Goertzel who are pursuing AGI approaches that are not brain-like at all and yet still believe the AGI will learn like a human child, and specifically use that analogy.
You can label anything an “antiprediction” and thus convince yourself that you need arbitrary positive evidence to disprove your counterfactual, but in doing so you are really just rationalizing your priors/existing beliefs.
Hadn’t seen the antiprediction angle—obvious now you point it out.
I actually applauded this comment. Thank you.
As a general heuristic, if you agree that someone is both intelligent and highly educated, and they have reached a conclusion that you consider to be a basic mistake, there are a variety of reasonable responses, one of the most obvious of which is to question whether the issue really falls under the category of a basic mistake, or is even a mistake at all. Maybe you should update your models?
This is off-topic, but this sentence means nothing to me as a person with a consequentialist morality.
The consequentialist argument is as follows:
Lowering the status of people who make basic mistakes causes them to be less likely to make those mistakes. However, you can’t demand that non-intelligent people not make basic mistakes, as they are going to make them anyway. So demand that smart people do better, and maybe they will.
The reasoning is the same as Sark Julian’s here/here.
I guess the word “right” threw you off. I am a consequentialist.
I’d guess that the main benefit of such status lowering is in communicating desired social norms to bystanders. I’m not sure we can expect those whose status is lowered to accept the social norm, or at least not right away.
In general, I’m very uncertain about the best way to persuade people that they could stand to shape up.
People like you are the biggest problem. I had 4 AI researchers email me, after I asked them about AI risks, to say that they regret having engaged with this community and that they will from now on ignore it because of the belittling attitude of its members.
So congratulations for increasing AI risks by giving the whole community a bad name, idiot.
This is an absolutely unacceptable response on at least three levels.
And you’re complaining about people’s belittling attitude, while you call them “idiots” and say stuff like “They have probably thought about everything you know long before you and dismissed it”?
Are you sure those AI researchers weren’t referring to you when they were talking about members’ belittling attitude?
The template XiXiDu used to contact the AI researchers seemed respectful and not at all belittling. I haven’t read the interviews themselves yet, but just looking at the one with Brandon Rohrer, his comment

“This is an entertaining survey. I appreciate the specificity with which you’ve worded some of the questions. I don’t have a defensible or scientific answer to any of the questions, but I’ve included some answers below that are wild-ass guesses. You got some good and thoughtful responses. I’ve been enjoying reading them. Thanks for compiling them.”

doesn’t sound like he feels belittled by XiXiDu.
Also, I perceived your question as a tension-starter, because whatever XiXiDu’s faults or virtues, he does seem to respect the opinion of non-LW researchers more than the average LW member does. I’m not here that often, but I assume that if I noticed that, somebody with a substantially higher Karma would have also noticed it. That makes me think there is a chance that your question wasn’t meant as a serious inquiry, but as an attack, which XiXiDu answered in kind.
Aris is accusing XiXiDu of being belittling not to the AI researchers, but to the people who disagree with the AI researchers.
Ah, I see. Thanks for the clarification.
Sure. The first comment was a mirror image of his style, to show him what it is like when others act the way he does. The second comment was a direct counterstrike against him attacking me, along the lines of what your scriptures teach.
And what justification do you have for now insulting me by calling the Sequences (I presume) my “scriptures”?
Or are you going to claim that this bit was not meant as an insult and an accusation? I think it very clearly was.
People expressing disagreement with someone who is confident that they are right and secure in their own status are going to be perceived by said high-status person as foolish (or as enemies to be crushed). This doesn’t mean you should never do so, merely that you will lose the goodwill of the person being criticized if you choose to do so.
Note that Grognor gave here the direction of his update based on this conversation. Even if he takes Wang’s status as overwhelmingly strong evidence of the correctness of Wang’s position, it doesn’t mean that the direction of the update based on this particular piece of additional information should not be ‘down’. In fact, the more respect Grognor had for the speaker’s position prior to hearing him speak, the easier it is for the speaker’s words to require a downward update. If there wasn’t already respect in place, the new information wouldn’t be surprising.
He didn’t do that. Or, at least, Grognor’s comment doesn’t indicate that he did. He saw a problem of basic logic in the arguments presented and took that as evidence against the conclusion. If Grognor could not do that, it would be essentially pointless for him to evaluate the arguments at all.
If people are updating correctly, they should already have factored into their beliefs the fact that the SIAI position is nowhere near the consensus in university artificial intelligence research. Given the assumption that these people are highly intelligent and well versed in their fields, one should probably disbelieve the SIAI position at the outset, because one expects that upon hearing from a “mainstream” AI researcher one would learn the reasons why SIAI is wrong. But if you then read a conversation between a “mainstream” AI researcher and an SIAI researcher and the former can’t explain why the latter is wrong, then you had better start updating.
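(To make the direction of that update concrete, here is a minimal sketch with made-up numbers; the prior and likelihoods below are illustrative assumptions, not anyone’s actual credences.)

# Toy Bayesian update; the numbers are illustrative assumptions only.
# H = "mainstream AI researchers have strong rebuttals to the SIAI position"
# E = "in a dialogue like this one, no strong rebuttal actually appears"
p_h = 0.8              # assumed prior confidence in H
p_e_given_h = 0.1      # assumed chance of E if H is true
p_e_given_not_h = 0.7  # assumed chance of E if H is false
posterior = (p_e_given_h * p_h) / (p_e_given_h * p_h + p_e_given_not_h * (1 - p_h))
print(f"P(H | E) = {posterior:.2f}")  # ~0.36: the missing rebuttal forces a downward update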
I’m sure this is true when it comes to, say, programming a particular sort of artificial intelligence. But you are vastly overestimating how much thought scientists and engineers put into broad, philosophical concerns involving their fields. With few exceptions mathematicians don’t spend time thinking about the reality of infinite sets, physicists don’t spend time thinking about interpretations of quantum mechanics, computer programmers don’t spend time thinking about the Church-Turing-Deutsch principle etc.
His arguments were not worse than Luke’s arguments if you ignore all the links, which he has no reason to read. He said that he does not believe that it is possible to restrict an AI in the way that SI imagines and still produce a general intelligence. He believes that the most promising route is an AI that can learn by being taught.
In combination with his doubts about uncontrollable superintelligence, that position is not incoherent. Nor can you claim, given this short dialogue, that he did not explain why SI is wrong.
That’s not what I was referring to. I doubt they have thought a lot about AI risks. What I meant is that they have likely thought about the possibility of recursive self-improvement and uncontrollable superhuman intelligence.
If an AI researcher tells you that he does not consider AI risks a serious issue because he does not believe, for technical reasons, that AI can get out of control, and you reply that he has not thought about AI drives and the philosophical reasons why superhuman AI will pose a risk, then you have created a straw man. Which is the usual tactic employed here.
And yet they demonstrate that they have not, by not engaging the core arguments made by SI/FHI when they talk about it.
Suppose you build an AI that exactly replicates the developmental algorithms of an infant brain, and you embody it in a perfect virtual body. For this particular type of AI design, the analogy is perfectly exact, and the AI is in fact a child exactly equivalent to a human child.
A specific human brain is a single point in mindspace, but the set of similar architectures extends out into a wider region which probably overlaps highly with much of the useful, viable, accessible space of AGI designs. So the analogy has fairly wide reach.
As an analogy, it’s hard to see how comparing a young AI to a child is intrinsically worse than comparing the invention of AI to the invention of flight, for example.
Up until the point that Pinocchio realises he isn’t a real boy.