This conversation is probably reaching diminishing returns, so let me sum up. I propose that it would be instructive to you and many others if you would discuss what your dispute looks like from an outside view—what uninformed but neutral, intelligent, and rational observers should conclude about this topic from the features of this dispute they can observe from the outside. Such features include the various credentials of each party, and the effort he or she has spent on the topic and on engaging the other parties. If you think that a reasonable outside observer would have far less confidence in your conclusions than you do, then you must think that you possess hidden info, such as that your arguments are in fact far more persuasive than one could reasonably expect knowing only the outside features of the situation. Then you might ask why the usual sorts of clues that tend to leak out about argument persuasiveness have failed to do so in this case.
Robin, why do most academic experts (e.g. in biology) disagree with you (and Eliezer) about cryonics? Perhaps a few have detailed theories on why it’s hopeless, or simply have higher priorities than maximizing their expected survival time; but mostly it seems they’ve simply never given it much consideration, either because they’re entirely unaware of it or assume it’s some kind of sci-fi cult practice, and they don’t take cult practices seriously as a rule. But clearly people in this situation can be wrong, as you yourself believe in this instance.
Similarly, I think most of the apparent “disagreement” about the Singularity is nothing more than unawareness of Yudkowsky and his arguments. As far as I can tell, academics who come into contact with him tend to take him seriously, and their disagreements are limited to matters of detail, such as how fast AI is approaching (decades vs. centuries) and the exact form it will take (uploads/enhancement vs. de novo). They mainly agree that SIAI’s work is worth doing by somebody. Examples include yourself, Scott Aaronson, and David Chalmers.
Cryonics is also a good case for analyzing what an outsider should think, given what they can see. But of course “they laughed at Galileo too” is hardly a strong argument for contrarian views. Yes, sometimes contrarians are right—the key question is how outside observers, or self-doubting insiders, can tell when contrarians are right.
Outsiders can tell when contrarians are right by assessing their arguments, once they’ve decided the contrarians are worth listening to. This in turn can be ascertained through the usual means, such as association with credentialed or otherwise high-status folks. So for instance, you are affiliated with a respectable institution, Bostrom with an even more respectable institution, and the fact that EY was co-blogging at Overcoming Bias thus implied that if your and Bostrom’s arguments were worth listening to, so were his. (This is more or less my own story; and I started reading Overcoming Bias because it appeared on Scott Aaronson’s blogroll.)
Hence it seems that Yudkowsky’s affiliations are already strong enough to signal competence to those academics interested in the subjects he deals with, in which case we should expect to see detailed, inside-view analyses from insiders who disagree. In the absence of that, we have to conclude that insiders either agree or are simply unaware—and the latter, if I understand correctly, is a problem whose solution falls more under the responsibility of people like Vassar than of Yudkowsky.
No, for most people it is infeasible to evaluate who is right by working through the details of the arguments. The fact that Eliezer wrote on a blog affiliated with Oxford is surely not enough to lead one to expect detailed rebuttal analyses from academics who disagree with him.
Well, for most people on most topics it is infeasible to evaluate who is right, period. At the end of the day, some effort is usually required to obtain reliable information. Even surveys of expert opinion may be difficult to conduct if the field is narrow and non-“traditional”. As for whatever few specialists there may be in Singularity issues, I think you expect too little of them if you don’t think Eliezer currently has enough status to expect rebuttals.
I figure cryonics serves mainly a signaling role.
The message probably reads something like:
“I’m a geek, I think I am really important—and I’m loaded”.
So, despite the fact that we (human phenotypes) are endowed with a powerful self-preservation instinct, you find a signaling explanation more likely than a straightforward application of self-preservation to a person’s concept of their own mind?
Given your peculiar preferences which value your DNA more highly than your brain, it’s tempting to chalk your absurd hypothesis up to the typical mind fallacy. But I think you’re well aware of the difference in values responsible for the split between your assessment of cryonics and Eliezer’s or Robin’s.
So I think you’re value sniping. I think your comment was made in bad faith as a roundabout way of signaling your values in a context where explicitly mentioning them would be seen as inappropriate or off-topic. I don’t know what your motivation would be—did mention of cryonics remind you that many here do not share your values, and thereby motivate you to plant your flag in the discussion?
Please feel free to provide evidence to the contrary by explaining in more detail why self-preservation is an unlikely motivation for cryonics relative to signaling.
An over-generalisation of self-preservation instincts certainly seems to be part of it.
On the other hand, one of my interests is in the spread of ideas. Without cryonic medallions, cryonic bracelets, cryonic advertising, and cryonic preachers there wouldn’t be any cryonics movement. There seems to be a “show your friends how much you care—freeze them!” dynamic.
I have a similar theory about the pyramids. Not so much a real voyage to the afterlife, but a means of reinforcing the pecking order in everyone’s minds.
I am contrasting this signaling perspective with Robin’s views—in part because I am aware that he is sympathetic to signaling theories in other contexts.
I do think signaling is an important part of cryonics—but I was probably rash to attempt to quantify the effect. I don’t pretend to have any good way of measuring its overall contribution relative to other factors.