Almost surely world class academic AI experts do “know something you do not” about the future possibilities of AI. To declare that topic to be your field and them to be “outside” it seems hubris of the first order.
This conversation seems to be following what appears to me to be a trend in Robin and Eliezer’s disagreements (those observable by me). This is one reason I would be fascinated if Eliezer did cover Robin’s initial question, informed somewhat by Eliezer’s interpretation.
I recall Eliezer mentioning in a tangential comment that he disagreed with Robin not just on the particular conclusion but more foundationally on how much weight should be given to certain types of evidence or argument. (Excuse my paraphrase from hazy memory; my googling failed me.) This is a difference that extends far beyond just R & E, and Eliezer has hinted at insights that intrigue me.
How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not? Surely they considered the fact that people like you think you know a lot about this topic, and they nevertheless thought it reasonable to form a disagreeing opinion based on the attention they had given it. You want to dismiss their judgment as “snap” because they did not spend many hours considering your arguments, but they clearly disagree with that assessment of how much consideration your arguments deserve. Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians, even when such authorities do not review contrarian arguments in as much detail as contrarians think best. You want to dismiss the rationality-of-disagreement literature as irrelevant because you don’t think those you disagree with are rational, but they probably don’t think you are rational either, and you are probably both right. But the same essential logic also says that irrational people should take seriously the fact that other irrational people disagree with them.
“Almost surely world class academic AI experts do ‘know something you do not’ about the future possibilities of AI.”
Does Daphne Koller know more than I do about the future possibilities of object-oriented Bayes Nets? Almost certainly. And, um… there are various complicated ways I could put this… but, well, so what?
(No disrespect intended to Koller, and OOBN/probabilistic relational models/lifted Bayes/etcetera is on my short-list of things to study next.)
“How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not?”
You changed what I said into a bizarre absolute. I am assuming no such thing. I am just assuming that, by default, world class experts on various topics in narrow AI produce their beliefs about the Singularity by snap judgment rather than detailed modular analysis. This is a prior and hence an unstable probability—as soon as I see contrary evidence, as soon as I see the actual analysis, it gets revoked.
“but they clearly disagree with that assessment of how much consideration your arguments deserve.”
They have no such disagreement. They have no idea I exist. On the rare occasion when I encounter such a person who is physically aware of my existence, we often manage to have interesting though brief conversations despite their having read none of my stuff.
“Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians”
Science only works when you use it; scientific authority derives from science. If you’ve got Lord Kelvin running around saying that you can’t have flying machines because it’s ridiculous, the problem isn’t that he’s an Authority, the problem is that he’s running on naked snap intuitive judgments of absurdity and the Wright Brothers are using actual math. The asymmetry in this case is not that pronounced but, even so, the default unstable prior is to assume that experts in narrow AI algorithms are not doing anything more complicated than this to produce their judgments about the probability of intelligence explosion—both the ones with negative affect who say “Never, you religious fear-monger!” and the ones with positive affect who say “Yes! Soon! And they shall do no wrong!” As soon as I see actual analysis, then we can talk about the actual analysis!
Added: In this field, what happens by default is that people talk complete nonsense. I spent my first years talking complete nonsense. In a situation like that, everyone has to show their work! Or at least show that they did some work! No exceptions!
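To make the “unstable prior” framing above concrete, here is a minimal toy Bayesian update; the setup and the numbers are illustrative assumptions of this sketch, not anything claimed in the exchange. Let H be the hypothesis that a given expert’s verdict on intelligence explosion is a snap judgment rather than a detailed modular analysis; start with a high default prior for H, and revise it by Bayes’ rule the moment evidence of actual analysis appears.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1.0 - prior)
    return p_evidence_given_h * prior / p_evidence

# H: "this expert's Singularity verdict is a snap judgment, not a detailed analysis"
prior_snap = 0.9               # default assumption before seeing any of their reasoning (toy number)

# E: the expert publishes a step-by-step technical analysis
p_analysis_if_snap = 0.05      # snap-judgment experts rarely produce one (toy number)
p_analysis_if_careful = 0.80   # careful analysts usually do (toy number)

print(posterior(prior_snap, p_analysis_if_snap, p_analysis_if_careful))
# prints ~0.36: the 0.9 default is largely "revoked" as soon as the actual analysis shows up

The only point of the sketch is that the 0.9 does no further work once evidence arrives; it is a default prior, not a verdict.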
This conversation is probably reaching diminishing returns, so let me sum up. I propose that it would be instructive to you and many others if you would discuss what your dispute looks like from an outside view—what uninformed, neutral, but intelligent and rational observers should conclude about this topic from the features of this dispute they can observe from the outside. Such features include the various credentials of each party, and the effort he or she has spent on the topic and on engaging the other parties. If you think that a reasonable outside viewer would have far less confidence in your conclusions than you do, then you must think that you possess hidden info, such as that your arguments are in fact far more persuasive than one could reasonably expect knowing only the outside features of the situation. Then you might ask why the usual sorts of clues that tend to leak out about argument persuasiveness have failed to do so in this case.
Robin, why do most academic experts (e.g. in biology) disagree with you (and Eliezer) about cryonics? Perhaps a few have detailed theories on why it’s hopeless, or simply have higher priorities than maximizing their expected survival time; but mostly it seems they’ve simply never given it much consideration, either because they’re entirely unaware of it or assume it’s some kind of sci-fi cult practice, and they don’t take cult practices seriously as a rule. But clearly people in this situation can be wrong, as you yourself believe in this instance.
Similarly, I think most of the apparent “disagreement” about the Singularity is nothing more than unawareness of Yudkowsky and his arguments. As far as I can tell, academics who come into contact with him tend to take him seriously, and their disagreements are limited to matters of detail, such as how fast AI is approaching (decades vs. centuries) and the exact form it will take (uploads/enhancement vs. de novo). They mainly agree that SIAI’s work is worth doing by somebody. Examples include yourself, Scott Aaronson, and David Chalmers.
Cryonics is also a good case to analyze what an outsider should think, given what they can see. But of course “they laughed at Galileo too” is hardly a strong argument for contrarian views. Yes sometimes contrarians are right—the key question is how outside observers, or self-doubting insiders, can tell when contrarians are right.
Outsiders can tell when contrarians are right by assessing their arguments, once they’ve decided the contrarians are worth listening to. This in turn can be ascertained through the usual means, such as association with credentialed or otherwise high-status folks. So for instance, you are affiliated with a respectable institution, Bostrom with an even more respectable institution, and the fact that EY was co-blogging at Overcoming Bias thus implied that if your and Bostrom’s arguments were worth listening to, so were his. (This is more or less my own story; I started reading Overcoming Bias because it appeared on Scott Aaronson’s blogroll.)
Hence it seems that Yudkowsky’s affiliations are already strong enough to signal competence to those academics interested in the subjects he deals with, in which case we should expect to see detailed, inside-view analyses from insiders who disagree. In the absence of that, we have to conclude that insiders either agree or are simply unaware—and the latter, if I understand correctly, is a problem whose solution falls more under the responsibility of people like Vassar rather than Yudkowsky.
No, for most people it is infeasible to evaluate who is right by working through the details of the arguments. The fact that Eliezer wrote on a blog affiliated with Oxford is very surely not enough to lead one to expect detailed rebuttal analyses from academics who disagree with him.
Well, for most people on most topics it is infeasible to evaluate who is right, period. At the end of the day, some effort is usually required to obtain reliable information. Even surveys of expert opinion may be difficult to conduct if the field is narrow and non-“traditional”. As for whatever few specialists there may be in Singularity issues, I think you expect too little of them if you don’t think Eliezer currently has enough status to expect rebuttals.
I figure cryonics serves mainly a signaling role.
The message probably reads something like:
“I’m a geek, I think I am really important—and I’m loaded”.
So, despite the fact that we (human phenotypes) are endowed with a powerful self-preservation instinct, you find a signaling explanation more likely than a straightforward application of self-preservation to a person’s concept of their own mind?
Given your peculiar preferences which value your DNA more highly than your brain, it’s tempting to chalk your absurd hypothesis up to the typical mind fallacy. But I think you’re well aware of the difference in values responsible for the split between your assessment of cryonics and Eliezer’s or Robin’s.
So I think you’re value sniping. I think your comment was made in bad faith as a roundabout way of signaling your values in a context where explicitly mentioning them would be seen as inappropriate or off-topic. I don’t know what your motivation would be—did mention of cryonics remind you that many here do not share your values, and thereby motivate you to plant your flag in the discussion?
Please feel free to provide evidence to the contrary by explaining in more detail why self-preservation is an unlikely motivation for cryonics relative to signaling.
An over-generalisation of self-preservation instincts certainly seems to be part of it.
On the other hand, one of my interests is in the spread of ideas. Without cryonic medallions, cryonic bracelets, cryonic advertising and cryonic preachers there wouldn’t be any cryonics movement. There seems to be a “show your friends how much you care—freeze them!” dynamic.
I have a similar theory about the pyramids. Not so much a real voyage to the afterlife, but a means of reinforcing the pecking order in everyone’s minds.
I am contrasting this signaling perspective with Robin’s views—in part because I am aware that he is sympathetic to signaling theories in other contexts.
I do think signaling is an important part of cryonics—but I was probably rash to attempt to quantify the effect. I don’t pretend to have any good way of measuring its overall contribution relative to other factors.
Re: “They have no idea I exist.”
Are you sure? You may be underestimating your own fame in this instance.
Say that “Yudkowsky has no real clue” and that those “AI academics are right”? Then he is just another crackpot among many “well educated” ones; no big thing, hardly worth mentioning.
But say that this crackpot is of the Edisonian kind! In that case he is something well worth mentioning.
Important enough to at least discuss with him ON THE TOPICS, and not on some meta level. Meta-level discussion is sometimes (as here, IMHO) just a waste of time.
I’m not sure what you mean by your first few sentences. But I disagree with your last two. It is good for me to see this debate.
You get zilch in the case where Hanson (and the Academia) is right. Zero in the informative sense. You get quite a bit if Yudkowsky is right.
Verifying Hanson (and the so-called Academia) means no new information.
You get not needing to run around trying to save the world, and a pony, if Hanson is right. It’s not useful to be deluded.
IFF he is right. Probably he is, and nothing dramatic will happen. From their historic perspective, Edison and the Wright brothers and many others probably also looked wrong.
Note that if the official Academia (Hanson’s guys) is correct, the amount of new information is exactly zero. Nothing interesting to talk about or to expect.
I am after the cases where they were and are wrong. I am after the new context that misfits like Yudkowsky or Edison might provide and “the Hansons” can’t, by definition.
You are confused.
P.S. I don’t want to get into a discussion; I believe it’s better to just state a judgment, even without a useful explanation, than to not state a judgment at all. However, it may be perceived negatively for those obscure status-related reasons (see “offense” on the wiki), so I predict that this comment would’ve been downvoted without this addendum, and not impossibly still will be with it. This “P.S.” is dedicated to all the relevant occasions, not just this one, where I could’ve used the time to actually address the topic.
And a simple downvote isn’t sufficient?
If I’m reading the conversation correctly, Vladimir Nesov is indicating with his remark that he is no longer interested in continuing. If he were not a major participant in the thread, a downvote would be appropriate, but as a major participant, more is required of him.
I downvoted it. If it included two quotes from the context followed by ‘You are confused’ I would have upvoted it.
I initially tried that, but simple citation didn’t make the point any more rigorous.
I am not confused, and I don’t want a discussion either. I only state that new content and a new context usually come from outside the kosher set of views.
Of course, most of the outsiders are deluded poor devils. Yet they are almost the only source of new information.