Think about how ridiculous your comment must sound to them.
I have no reason to suspect that other people’s use of the absurdity heuristic should cause me to reevaluate every argument I’ve ever seen.
That a de novo AGI will be nothing like a human child in terms of how to make it safe is an antiprediction in that it would take a tremendous amount of evidence to suggest otherwise, and yet Wang just assumes this without having any evidence at all. I can only conclude that the surface analogy is the entire content of the claim.
That you just assume that they must be stupid
If he were just stupid, I’d have no right to be indignant at his basic mistake. He is clearly an intelligent person.
They have probably thought about everything you know long before you and dismissed it.
You are not making any sense. Think about how ridiculous your comment must sound to me.
(I’m starting to hate that you’ve become a fixture here.)
That a de novo AGI will be nothing like a human child in terms of how to make it safe is an antiprediction in that it would take a tremendous amount of evidence to suggest otherwise, and yet Wang just assumes this without having any evidence at all.
I think this single statement summarizes the huge rift between the narrow, specific LW/EY view of AGI and other, more mainstream views.
For researchers who are trying to emulate or simulate brain algorithms directly, it’s self-evidently obvious that the resulting AGI will start like a human child. If they succeed first, your ‘antiprediction’ is trivially false. And then we have researchers like Wang or Goertzel who are pursuing AGI approaches that are not brain-like at all and yet still believe that the AGI will learn like a human child, and who specifically use that analogy.
You can label anything an “antiprediction” and thus convince yourself that you need arbitrary positive evidence to disprove your counterfactual, but in doing so you are really just rationalizing your priors/existing beliefs.
Hadn’t seen the antiprediction angle—obvious now you point it out.
I actually applauded this comment. Thank you.
If he were just stupid, I’d have no right to be indignant at his basic mistake.
As a general heuristic, if you agree that someone is both intelligent and highly educated, and they have reached a conclusion that you consider to be a basic mistake, there are a variety of reasonable responses, one of the most obvious of which is to question whether the issue really falls under the category of a basic mistake, or is even a mistake at all. Maybe you should update your models?
If he were just stupid, I’d have no right to be indignant at his basic mistake.
This is off-topic, but this sentence means nothing to me as a person with a consequentialist morality.
The consequentialist argument is as follows:
Lowering the status of people who make basic mistakes causes them to be less likely to make those mistakes. However, you can’t demand that non-intelligent people not make basic mistakes; they are going to make them anyway. So demand that smart people do better, and maybe they will.
The reasoning is the same as Sark Julian’s here/here.
I guess the word “right” threw you off. I am a consequentialist.
I’d guess that the main benefit of such status lowering is in communicating desired social norms to bystanders. I’m not sure we can expect those whose status is lowered to accept the norm, or at least not right away.
In general, I’m very uncertain about the best way to persuade people that they could stand to shape up.
(I’m starting to hate that you’ve become a fixture here.)
People like you are the biggest problem. I had 4 AI researchers email me, after I asked them about AI risks, saying that they regret having engaged with this community and that they will from now on ignore it because of the belittling attitude of its members.
So congratulations for increasing AI risks by giving the whole community a bad name, idiot.
This is an absolutely unacceptable response on at least three levels.
And you’re complaining about people’s belittling attitude, while you call them “idiots” and say stuff like “They have probably thought about everything you know long before you and dismissed it”?
Are you sure those AI researchers weren’t referring to you when they were talking about members’ belittling attitude?
The template XiXiDu used to contact the AI researchers seemed respectful and not at all belittling. I haven’t read the interviews themselves yet, but just looking at the one with Brandon Rohrer, his comment
This is an entertaining survey. I appreciate the specificity with which you’ve worded some of the questions. I don’t have a defensible or scientific answer to any of the questions, but I’ve included some answers below that are wild-ass guesses. You got some good and thoughtful responses. I’ve been enjoying reading them. Thanks for compiling them.
doesn’t sound like he feels belittled by XiXiDu.
Also, I perceived your question as a tension-starter, because whatever XiXiDu’s faults or virtues, he does seem to respect the opinions of non-LW researchers more than the average LW member does. I’m not here that often, but I assume that if I noticed that, somebody with a substantially higher Karma would also have noticed it. That makes me think there is a chance that your question wasn’t meant as a serious inquiry, but as an attack, which XiXiDu answered in kind.
Aris is accusing XiXiDu of being belittling not to the AI researchers, but to the people who disagree with the AI researchers.
Ah, I see. Thanks for the clarification.
And you’re complaining about people’s belittling attitude, while you call them “idiots” and say stuff like “They have probably thought about everything you know long before you and dismissed it”?
Sure. The first comment was a mirror image of his style, to show him what it is like when others act the way he does. The second comment was a direct counterstrike against him attacking me, along the lines of what your scriptures teach.
And what justification do you have for now insulting me by calling the Sequences (I presume) my “scriptures”?
Or are you going to claim that this bit was not meant as an insult and an accusation? I think it very clearly was.