You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact that they are world class AI experts who disagree with your conclusions?
I’m sorry, but I don’t really have a proper lesson plan laid out—although the ongoing work of organizing LW into sequences may certainly help with that. It would depend on the specific issue and what I thought needed to be understood about that issue.
If they drew my feedback cycle of an intelligence explosion and then drew a different feedback cycle and explained why it fit the historical evidence equally well, then I would certainly sit up and take notice. It wouldn’t matter if they’d done it on their own or by reading my stuff.
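As a purely illustrative aside (not part of the original exchange): the disagreement about "feedback cycles" can be made concrete with a toy growth model. In the sketch below, capability grows as dI/dt = k * I**a, and everything in it (the constants, the exponent, the cap) is invented; the point is only that the qualitative shape of the takeoff curve hinges on the assumed feedback exponent, which is exactly the kind of modular piece one could argue about against historical evidence.

```python
# Toy comparison of feedback cycles: hypothetical model dI/dt = k * I**a,
# where I is "capability" and the exponent a encodes how strongly the rate
# of improvement feeds back on the current level. All constants are invented.

def simulate(a, k=0.05, i0=1.0, dt=0.01, t_max=100.0, cap=1e9):
    """Euler-integrate dI/dt = k * I**a until t_max or until the level hits cap."""
    t, level = 0.0, i0
    while t < t_max and level < cap:
        level += k * level ** a * dt
        t += dt
    return t, level

for a in (0.5, 1.0, 1.5):   # sub-linear, linear, super-linear feedback
    t_end, level = simulate(a)
    outcome = "runaway growth (hit the cap)" if level >= 1e9 else f"level {level:.3g}"
    print(f"a = {a}: at t = {t_end:.1f}, {outcome}")
```

With these made-up numbers, a = 0.5 and a = 1.0 give tame curves while a = 1.5 runs away in finite time; arguing about which regime the historical evidence supports is what "drawing a different feedback cycle" would amount to.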
E.g. Chalmers at the Singularity Summit is an example of an outsider who wandered in and started doing a modular analysis of the issues, who would certainly have earned the right of serious consideration and serious reply if, counterfactually, he had reached different conclusions about takeoff… with respect to only the parts that he gave a modular analysis of, though, not necessarily e.g. the statement that de novo AI is unlikely because no one will understand intelligence. If Chalmers did a modular analysis of that part, it wasn’t clear from the presentation.
Roughly, what I expect to happen by default is no modular analysis at all—just snap consideration and snap judgment. I feel little need to explain such.
You, or somebody anyway, could still offer a modular causal model of that snap consideration and snap judgment. For example:
What cached models of the planning abilities of future machine intelligences did the academics have available when they made the snap judgment?
What fraction of the academics are aware of any current published AI architectures which could reliably reason over plans at the level of abstraction of “implement a proxy intelligence”?
What fraction of them have thought carefully about when there might be future practical AI architectures that could do this?
What fraction use a process for answering questions about the category distinctions that will be known in the future, which uses as an unconscious default the category distinctions known in the present?
What false claims have been made about AI in the past? What decision rules might academics have learned to use, to protect themselves from losing prestige for being associated with false claims like those?
How much do those decision rules refer to modular causal analyses of the object of a claim and of the fact that people are making the claim?
How much do those decision rules refer to intuitions about other people’s states of mind and social category memberships?
How much do those decision rules refer to intuitions about other people’s intuitive decision rules?
Historically, have people’s own abilities to do modular causal analyses been good enough to make them reliably safe from losing prestige by being associated with false claims? What fraction of academics have the intuitive impression that their own ability to do analysis isn’t good enough to make them reliably safe from losing prestige by association with a false claim, so that they can only be safe if they use intuitions about the states of mind and social category memberships of a claim’s proponents?
Of those AI academics who believe that a machine intelligence could exist which could outmaneuver humans if motivated, how do they think about the possible motivations of a machine intelligence?
What fraction of them think about AI design in terms of a formalism such as approximating optimal sequential decision theory under a utility function? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?
What fraction of them think about AI design in terms of intuitively justified decision heuristics? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?
What fraction of them understand enough evolutionary psychology and/or cognitive psychology to recognize moral evaluations as algorithmically caused, so that they can reject the default intuitive explanation of the cause of moral evaluations, which seems to be: “there are intrinsic moral qualities attached to objects in the world, and when any intelligent agent apprehends an object with a moral quality, the action of the moral quality on the agent’s intelligence is to cause the agent to experience a moral evaluation”?
What combination of specializations in AI, moral philosophy, and cognitive psychology would an academic need to have, to be an “expert” whose disagreements about the material causes and implementation of moral evaluations were significant?
On the question of takeoff speeds, what fraction of the AI academics have a good enough intuitive understanding of decision theory to see that a point estimate or default scenario should not be substituted for a marginal posterior distribution, even in a situation where it would be socially costly in the default scenario to take actions which prevent large losses in one tail of the distribution? (A toy numerical illustration of this point appears below, after these questions.)
What fraction recognized that they had a prior belief distribution over possible takeoff speeds at all?
What fraction understood that, regarding a variable which is underconstrained by evidence, “other people would disapprove of my belief distribution about this variable” is not an indicator for “my belief distribution about this variable puts mass in the wrong places”, except insofar as there is some causal reason to expect that disapproval would be somehow correlated with falsehood?
What other popular concerns have academics historically needed to dismiss? What decision rules have they learned to decide whether they need to dismiss a current popular concern?
After they make a decision to dismiss a popular concern, what kinds of causal explanations of the existence of that concern do they make reference to, when arguing to other people that they should agree with the decision?
How much do the true decision rules depend on those causal explanations?
How much do the decision rules depend on intuitions about the concerned people’s states of mind and social category memberships?
How much do the causal explanations use concepts which are implicitly defined by reference to hidden intuitions about states of mind and social category memberships?
Can these intuitively defined concepts carry the full weight of the causal explanations they are used to support, or does their power to cause agreement come from their ability to activate social intuitions?
Which people are the AI academics aware of, who have argued that intelligence explosion is a concern? What social categories do they intuit those people to be members of? What arguments are they aware of? What states of mind do they intuit those arguments to be indicators of (e.g. as in intuitively computed separating equilibria)?
What people and arguments did the AI academics think the other AI academics were thinking of? If only a few of the academics were thinking of people and arguments who they intuited to come from credible social categories and rational states of mind, would they have been able to communicate this to the others?
When the AI academics made the decision to dismiss concern about an intelligence explosion, what kinds of causal explanations of the existence of that concern did they intuitively expect that they would be able to make reference to, if they later had to argue to other people that they should agree with the decision?
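Here is a minimal numerical sketch of the takeoff-speed question above (substituting a point estimate or default scenario for a marginal posterior distribution). The scenario probabilities and loss numbers are entirely made up; the sketch only shows how the two decision procedures can come apart when one tail of the distribution carries large losses.

```python
# Invented posterior over takeoff scenarios and invented losses, purely to
# illustrate why a point estimate / default scenario is not a substitute for
# integrating over the whole marginal posterior.

posterior = {            # P(scenario): made-up numbers
    "slow takeoff": 0.80,
    "moderate":     0.15,
    "fast takeoff": 0.05,
}

losses = {               # losses[scenario][action], in arbitrary units
    "slow takeoff": {"prepare": 10, "ignore": 0},
    "moderate":     {"prepare": 10, "ignore": 50},
    "fast takeoff": {"prepare": 20, "ignore": 10_000},
}

def expected_loss(action):
    return sum(p * losses[s][action] for s, p in posterior.items())

default = max(posterior, key=posterior.get)   # the single most probable scenario
for action in ("prepare", "ignore"):
    print(f"{action:>8}: loss in default scenario = {losses[default][action]:>5}, "
          f"expected loss over posterior = {expected_loss(action):7.1f}")
# The default scenario alone favours "ignore" (0 vs 10); the full posterior
# favours "prepare" (10.5 vs 507.5), because the tail dominates the expectation.
```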
It is also possible to model the social process in the panel:
Are there factors that might make a joint statement by a panel of AI academics reflect different conclusions than they would have individually reached if they had been outsiders to the AI profession with the same AI expertise?
One salient consideration would be that agreeing with popular concern about an intelligence explosion would result in their funding being cut. What effects would this have had?
Would it have affected the order in which they became consciously aware of lines of argument that might make an intelligence explosion seem less or more deserving of concern?
Would it have made them associate concern about an intelligence explosion with unpopularity? In doubtful situations, unpopularity of an argument is one cue for its unjustifiability. Would they associate unpopularity with logical unjustifiability, and then lose willingness to support logically justifiable lines of argument that made an intelligence explosion seem deserving of concern, just as if they had felt those lines of argument to be logically unjustifiable, but without any actual unjustifiability?
There are social norms to justify taking prestige away from people who push a claim that an argument is justifiable while knowing that other prestigious people think the argument to be a marker of a non-credible social category or state of mind. How would this have affected the discussion?
If there were panelists who personally thought the intelligence explosion argument was plausible, and they were in the minority, would the authors of the panel’s report mention it?
Would the authors know about it?
If the authors knew about it, would they feel any justification or need to mention those opinions in the report, given that the other panelists may have imposed on the authors an implicit social obligation to not write a report that would “unfairly” associate them with anything they think will cause them to lose prestige?
If panelists in such a minority knew that the report would not mention their opinions, would they feel any need or justification to object, given the existence of that same implicit social obligation?
How good are groups of people at making judgments about arguments that unprecedented things will have grave consequences?
How common is a reflective, causal understanding of the intuitions people use when judging popular concerns and arguments about unprecedented things, of the sort that would be needed to compute conditional probabilities like “Pr( we would decide that concern is not justified | we made our decision according to intuition X ∧ concern was justified )”?
How common is the ability to communicate the epistemic implications of that understanding in real-time while a discussion is happening, to keep it from going wrong?
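The conditional probability mentioned a couple of questions above can likewise be illustrated with a toy simulation. All parameters are invented: a base rate for the concern being justified, and an "intuition X" that keys off the perceived social category of a claim's proponents rather than off the object-level evidence. The printed number is the analogue of Pr( we would decide that concern is not justified | we made our decision according to intuition X ∧ concern was justified ).

```python
import random

random.seed(0)

# Toy Monte Carlo model with made-up parameters: a panel judges a popular
# concern by "intuition X", i.e. it dismisses the concern whenever the
# concern's visible proponents pattern-match to a non-credible social
# category, without looking at the object-level evidence at all.

P_JUSTIFIED = 0.10           # assumed base rate that the concern is justified
P_LOOKS_NONCREDIBLE = 0.70   # assumed chance the loudest proponents look non-credible
                             # (assumed independent of whether the concern is justified)

def false_dismissal_rate(trials=100_000):
    dismissed_and_justified = justified = 0
    for _ in range(trials):
        concern_justified = random.random() < P_JUSTIFIED
        looks_noncredible = random.random() < P_LOOKS_NONCREDIBLE
        dismiss = looks_noncredible          # "intuition X" is the whole decision rule
        if concern_justified:
            justified += 1
            dismissed_and_justified += dismiss
    return dismissed_and_justified / justified

print("Pr( decide concern is not justified | used intuition X and concern was justified )"
      f" ~ {false_dismissal_rate():.2f}")
# With these assumptions the answer is ~0.70: a decision rule that never looks
# at the object level dismisses justified concerns as readily as unjustified ones.
```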
From that AAAI panel’s interim report:
Participants reviewed prior writings and thinking about the possibility of an “intelligence explosion” where computers one day begin designing computers that are more intelligent than themselves. … There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems.
Given this description it is hard to imagine they haven’t imagined the prospect of the rate of intelligence growth depending on the level of system intelligence.
I don’t see any arguments listed, though. I know there’s at least some smart people on that panel (e.g. Horvitz) so I could be wrong, but experience has taught me to be pessimistic, and pessimism says I have no particular evidence that anyone started breaking the problem down into modular pieces, as opposed to, say, stating a few snap perceptual judgments at each other and then moving on.
Why are you so optimistic about this sort of thing, Robin? You’re usually more cynical about what would happen when academics have no status incentive to get it right and every status incentive to dismiss the silly. We both have experience with novices encountering these problems and running straight into the brick wall of policy proposals without even trying a modular analysis. Why on this one occasion do you turn around and suppose that the case we don’t know will be so unlike the cases we do know?
The point is that this is a subtle and central issue to engage, so I was suggesting that you consider describing your analysis more explicitly. Is there never any point in listening to academics on “silly” topics? Is there never any point in listening to academics who haven’t explicitly told you how they’ve broken a problem down into modular parts, no matter how distinguished they are on related topics? Are people who have a modular parts analysis always a more reliable source than people who don’t, no matter what their other features? And so on.
I confess, it doesn’t seem to me on a gut level like this is either healthy to obsess about, or productive to obsess about. It seems more like worrying that my status isn’t high enough to do work, than actually working. If someone shows up with amazing analyses I haven’t considered, I can just listen to the analyses then. Why spend time trying to guess who might have a hidden deep analysis I haven’t seen, when the prior is so much in favor of them having made a snap judgment, and it’s not clear why if they’ve got a deep analysis they wouldn’t just present it?
I think that on a purely pragmatic level there’s a lot to be said for the Traditional Rationalist concept of demanding that Authority show its work, even if it doesn’t seem like what ideal Bayesians would do.
You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it. It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement. If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?
It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement.
...and I’ve held and stated this same position pretty much from the beginning, no? E.g. http://lesswrong.com/lw/gr/the_modesty_argument/
I was under the impression that my verbal analysis matched and cleverly excused my concrete behavior.
If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?
Well (and I’m pretty sure this matches what I’ve been saying to you over the last few years) just because two ideal Bayesians would do something naturally, doesn’t mean you can singlehandedly come closer to Bayesianism by imitating the surface behavior of agreement. I’m not sure that doing elaborate analyses to excuse your disagreement helps much either. http://wiki.lesswrong.com/wiki/Occam%27s_Imaginary_Razor
I’d spend much more time worrying about the implications of Aumann agreement, if I thought the other party actually knew my arguments, took my arguments very seriously, took the Aumann problem seriously with respect to me in particular, and in general had a sense of immense gravitas about the possible consequences of abusing their power to make me update. This begins to approach the conditions for actually doing what ideal Bayesians do. Michael Vassar and I have practiced Aumann agreement a bit; I’ve never literally done the probability exchange-and-update thing with anyone else. (Edit: Actually on recollection I played this game a couple of times at a Less Wrong meetup.)
No such condition is remotely approached by disagreeing with the AAAI panel, so I don’t think I could, in real life, improve my epistemic position by pretending that they were ideal Bayesians who were fully informed about my reasons and yet disagreed with me anyway (in which case I ought to just update to match their estimates, rather than coming up with elaborate excuses to disagree with them!)
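For readers who haven't seen it, here is a minimal sketch of the "probability exchange-and-update thing" mentioned above, in the Aumann / Geanakoplos-Polemarchakis style. The state space, common prior, event, and the two agents' information partitions are all invented; the point is only the mechanic: each announced posterior is itself evidence, both parties condition on it, and the announcements cannot remain different forever.

```python
from fractions import Fraction

# Toy "exchange posteriors and update" protocol. All states, the prior, the
# partitions and the event of interest are invented for illustration.

OMEGA = range(8)                              # possible worlds
PRIOR = {w: Fraction(1, 8) for w in OMEGA}    # common prior (uniform, assumed)
EVENT = {1, 3, 5, 6}                          # proposition the agents disagree about
PART_A = [{0, 1, 2, 3}, {4, 5, 6, 7}]         # agent A's information partition
PART_B = [{0, 4}, {1, 5}, {2, 6}, {3, 7}]     # agent B's information partition

def cell_of(partition, state):
    return next(c for c in partition if state in c)

def posterior(partition, state):
    cell = cell_of(partition, state)
    return sum(PRIOR[w] for w in cell & EVENT) / sum(PRIOR[w] for w in cell)

def refine(partition, public_set):
    """Split every cell by a publicly announced set of states."""
    pieces = [(c & public_set, c - public_set) for c in partition]
    return [part for pair in pieces for part in pair if part]

def exchange(true_state, part_a, part_b, max_rounds=10):
    for rnd in range(max_rounds):
        for name in ("A", "B"):
            announcer = part_a if name == "A" else part_b
            q = posterior(announcer, true_state)
            print(f"round {rnd}: {name} announces P(event) = {q}")
            # The announcement reveals the union of the announcer's cells that
            # would have produced exactly this number; both sides condition on it.
            public = set().union(*(c for c in announcer
                                   if posterior(announcer, min(c)) == q))
            part_a, part_b = refine(part_a, public), refine(part_b, public)
        if posterior(part_a, true_state) == posterior(part_b, true_state):
            print("posteriors now agree at", posterior(part_a, true_state))
            return

exchange(true_state=5, part_a=PART_A, part_b=PART_B)
```

In this made-up example A starts at 1/2, hears B announce 1, infers which states B could be in, and agrees at 1 after one exchange; with other partitions the back-and-forth can take several rounds before the posteriors meet.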
Well I disagree with you strongly that there is no point in considering the views of others if you are not sure they know the details of your arguments, or of the disagreement literature, or that those others are “rational.” Guess I should elaborate my view in a separate post.
There’s certainly always a point in considering specific arguments. But to be nervous merely that someone else has a different view, one ought, generally speaking, to suspect (a) that they know something you do not or at least (b) that you know no more than them (or far more rarely (c) that you are in a situation of mutual Aumann awareness and equal mutual respect for one another’s meta-rationality). As far as I’m concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It’s not that I have specific reason to distrust these people—the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.
I don’t actually spend time obsessing about that sort of thing except when you’re asking me those sorts of questions—putting so much energy into self-justification and excuses would just slow me down if Horvitz showed up tomorrow with an argument I hadn’t considered.
I’ll say again: I think there’s much to be said for the Traditional Rationalist ideal of—once you’re at least inside a science and have enough expertise to evaluate the arguments—paying attention only when people lay out their arguments on the table, rather than trying to guess authority (or arguing over who’s most meta-rational). That’s not saying “there’s no point in considering the views of others”. It’s focusing your energy on the object level, where your thought time is most likely to be productive.
Is it that awful to say: “Show me your reasons”? Think of the prior probabilities!
You admit you have not done much to make it easy to show them your reasons. You have not written up your key arguments in a compact form using standard style and terminology and submitted it to standard journals. You also admit you have not contacted any of them to ask them for their reasons; Horvitz would have to “show up” for you to listen to him. This looks a lot like a status pissing contest; the obvious interpretation: Since you think you are better than them, you won’t ask them for their reasons, and you won’t make it easy for them to understand your reasons, as that would admit they are higher status. They will instead have to acknowledge your higher status by coming to you and doing things your way. And of course they won’t since by ordinary standards they have higher status. So you ensure there will be no conversation, and with no conversation you can invoke your “traditional” (non-Bayesian) rationality standard to declare you have no need to consider their opinions.
You’re being slightly silly. I simply don’t expect them to pay any attention to me one way or another. As it stands, if e.g. Horvitz showed up and asked questions, I’d immediately direct him to http://singinst.org/AIRisk.pdf (the chapter I did for Bostrom), and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries. Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.
FYI, I’ve talked with Peter Norvig a bit. He was mostly interested in the CEV / FAI-spec part of the problem—I don’t think we discussed hard takeoffs much per se. I certainly wouldn’t have brushed him off if he’d started asking!
“and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries.”
Why? No one in the academic community would spend that much time reading all that blog material for answers that would be best given in a concise form in a published academic paper. So why not spend the time? Unless you think you are that much of an expert in the field as to not need the academic community. If that be the case where are your publications and where are your credentials, where is the proof of this expertise (expert being a term that is applied based on actual knowledge and accomplishments)?
“Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.”
Why? If you expect to make FAI you will undoubtedly need people in the academic communities’ help; unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has a 0 probability of creating FAI first. That being said your best hope is to convince others that the cause is worthwhile and if that be the case you are looking at the professional and academic AI community.
I am sorry I prefer to be blunt.. that way there is no mistaking meanings...
Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.
Why? If you expect to make FAI you will undoubtedly need people in the academic communities’ help; unless you plan to do this whole project by yourself or with purely amateur help. …
No.
That ‘probably not even then’ part is significant.
That being said your best hope is to convince others that the cause is worthwhile and if that be the case you are looking at the professional and academic AI community.
Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied ‘1’ and probably more than ‘0’ too.
“Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.
Why? If you expect to make FAI you will undoubtedly need people in the academic communities’ help; unless you plan to do this whole project by yourself or with purely amateur help. …”
“That ‘probably not even then’ part is significant.”
My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when you’re speaking from an organization like SIAI which does not have the people or money to get the job done. In fact SIAI doesn’t have enough money to pay for the computing hardware to make human level AI.
“Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied ‘1’ and probably more than ‘0’ too.”
If he doesn’t agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI or not having enough time in the day to solve the problems that will be associated with constructing the AI. Not even mentioning the fact that when you close yourself to outside influence that much, you often end up with ideas that are riddled with problems, problems that someone on the outside would have pointed out if they had looked at the idea.
If you have never taken an idea from idea to product this can be hard to understand.
No need to disclaim, your figures are sound enough and I took them as a demonstration of another rather significant difference between the assumptions of Eliezer and mormon2 (or mormon2’s sources).
If there is a status pissing contest, they started it! ;-)
“On the latter, some panelists believe that the AAAI study was held amidst a perception of urgency by non-experts (e.g., a book and a forthcoming movie titled “The Singularity is Near”), and focus of attention, expectation, and concern growing among the general population.”
Agree with them that there is much scaremongering going on in the field—but disagree with them about there not being much chance of an intelligence explosion.
I wondered why these folk got so much press. My guess is that the media probably thought the “AAAI Presidential Panel on Long-Term AI Futures” had something to do with a report commissioned indirectly for the country’s president. In fact it just refers to the president of their organisation. A media-savvy move—though it probably represents deliberately misleading information.
Almost surely world class academic AI experts do “know something you do not” about the future possibilities of AI. To declare that topic to be your field and them to be “outside” it seems hubris of the first order.
This conversation seems to be following what appears to me to be a trend in Robin and Eliezer’s (observable by me) disagreements. This is one reason I would be fascinated if Eliezer did cover Robin’s initial question, informed somewhat by Eliezer’s interpretation.
I recall Eliezer mentioning in a tangential comment that he disagreed with Robin not just on the particular conclusion but more foundationally on how much weight should be given to certain types of evidence or argument. (Excuse my paraphrase from hazy memory, my googling failed me.) This is a difference that extends far beyond just R & E and Eliezer has hinted at insights that intrigue me.
Almost surely world class academic AI experts do “know something you do not” about the future possibilities of AI.
Does Daphne Koller know more than I do about the future possibilities of object-oriented Bayes Nets? Almost certainly. And, um… there are various complicated ways I could put this… but, well, so what?
(No disrespect intended to Koller, and OOBN/probabilistic relational models/lifted Bayes/etcetera is on my short-list of things to study next.)
How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not? Surely they considered the fact that people like you think you know a lot about this topic, and they nevertheless thought it reasonable to form a disagreeing opinion based on the attention they had given it. You want to dismiss their judgment as “snap” because they did not spend many hours considering your arguments, but they clearly disagree with that assessment of how much consideration your arguments deserve. Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians, even when such authorities do not review contrarian arguments in as much detail as contrarians think best. You want to dismiss the rationality of disagreement literature as irrelevant because you don’t think those you disagree with are rational, but they probably don’t think you are rational either, and you are probably both right. But the same essential logic also says that irrational people should take seriously the fact that other irrational people disagree with them.
How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not?
You changed what I said into a bizarre absolute. I am assuming no such thing. I am just assuming that, by default, world class experts on various topics in narrow AI, produce their beliefs about the Singularity by snap judgment rather than detailed modular analysis. This is a prior and hence an unstable probability—as soon as I see contrary evidence, as soon as I see the actual analysis, it gets revoked.
but they clearly disagree with that assessment of how much consideration your arguments deserve.
They have no such disagreement. They have no idea I exist. On the rare occasion when I encounter such a person who is physically aware of my existence, we often manage to have interesting though brief conversations despite their having read none of my stuff.
Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians
Science only works when you use it; scientific authority derives from science. If you’ve got Lord Kelvin running around saying that you can’t have flying machines because it’s ridiculous, the problem isn’t that he’s an Authority, the problem is that he’s running on naked snap intuitive judgments of absurdity and the Wright Brothers are using actual math. The asymmetry in this case is not that pronounced but, even so, the default unstable prior is to assume that experts in narrow AI algorithms are not doing anything more complicated than this to produce their judgments about the probability of intelligence explosion—both the ones with negative affect who say “Never, you religious fear-monger!” and the ones with positive affect who say “Yes! Soon! And they shall do no wrong!” As soon as I see actual analysis, then we can talk about the actual analysis!
Added: In this field, what happens by default is that people talk complete nonsense. I spent my first years talking complete nonsense. In a situation like that, everyone has to show their work! Or at least show that they did some work! No exceptions!
This conversation is probably reaching diminishing returns, so let me sum up. I propose that it would be instructive to you and many others if you would discuss what your dispute looks like from an outside view—what uninformed neutral but intelligent and rational observers should conclude about this topic from the features of this dispute they can observe from the outside. Such features include the various credentials of each party, and the effort he or she has spent on the topic and on engaging the other parties. If you think that a reasonable outsider viewer would have far less confidence in your conclusions than you do, then you must think that you possess hidden info, such as that your arguments are in fact far more persuasive than one could reasonably expect knowing only the outside features of the situation. Then you might ask why the usual sorts of clues that tend to leak out about argument persuasiveness have failed to do so in this case.
Robin, why do most academic experts (e.g. in biology) disagree with you (and Eliezer) about cryonics? Perhaps a few have detailed theories on why it’s hopeless, or simply have higher priorities than maximizing their expected survival time; but mostly it seems they’ve simply never given it much consideration, either because they’re entirely unaware of it or assume it’s some kind of sci-fi cult practice, and they don’t take cult practices seriously as a rule. But clearly people in this situation can be wrong, as you yourself believe in this instance.
Similarly, I think most of the apparent “disagreement” about the Singularity is nothing more than unawareness of Yudkowsky and his arguments. As far as I can tell, academics who come into contact with him tend to take him seriously, and their disagreements are limited to matters of detail, such as how fast AI is approaching (decades vs. centuries) and the exact form it will take (uploads/enhancement vs. de novo). They mainly agree that SIAI’s work is worth doing by somebody. Examples include yourself, Scott Aaronson, and David Chalmers.
Cryonics is also a good case to analyze what an outsider should think, given what they can see. But of course “they laughed at Galileo too” is hardly a strong argument for contrarian views. Yes sometimes contrarians are right—the key question is how outside observers, or self-doubting insiders, can tell when contrarians are right.
Outsiders can tell when contrarians are right by assessing their arguments, once they’ve decided the contrarians are worth listening to. This in turn can be ascertained through the usual means, such as association with credentialed or otherwise high-status folks. So for instance, you are affiliated with a respectable institution, Bostrom with an even more respectable institution, and the fact that EY was co-blogging at Overcoming Bias thus implied that if your and Bostrom’s arguments were worth listening to, so were his. (This is more or less my own story; and I started reading Overcoming Bias because it appeared on Scott Aaronson’s blogroll.)
Hence it seems that Yudkowsky’s affiliations are already strong enough to signal competence to those academics interested in the subjects he deals with, in which case we should expect to see detailed, inside-view analyses from insiders who disagree. In the absence of that, we have to conclude that insiders either agree or are simply unaware—and the latter, if I understand correctly, is a problem whose solution falls more under the responsibility of people like Vassar rather than Yudkowsky.
No, for most people it is infeasible to evaluate who is right by working through the details of the arguments. The fact that Eliezer wrote on a blog affiliated with Oxford is very surely not enough to lead one to expect detailed rebuttal analyses from academics who disagree with him.
Well, for most people on most topics it is infeasible to evaluate who is right, period. At the end of the day, some effort is usually required to obtain reliable information. Even surveys of expert opinion may be difficult to conduct if the field is narrow and non-”traditional”. As for whatever few specialists there may be in Singularity issues, I think you expect too little of them if you don’t think Eliezer currently has enough status to expect rebuttals.
So, despite the fact that we (human phenotypes) are endowed with a powerful self-preservation instinct, you find a signaling explanation more likely than a straightforward application of self-preservation to a person’s concept of their own mind?
Given your peculiar preferences which value your DNA more highly than your brain, it’s tempting to chalk your absurd hypothesis up to the typical mind fallacy. But I think you’re well aware of the difference in values responsible for the split between your assessment of cryonics and Eliezer’s or Robin’s.
So I think you’re value sniping. I think your comment was made in bad faith as a roundabout way of signaling your values in a context where explicitly mentioning them would be seen as inappropriate or off-topic. I don’t know what your motivation would be—did mention of cryonics remind you that many here do not share your values, and thereby motivate you to plant your flag in the discussion?
Please feel free to provide evidence to the contrary by explaining in more detail why self-preservation is an unlikely motivation for cryonics relative to signaling.
An over-generalisation of self-preservation instincts certainly seems to be part of it.
On the other hand, one of my interests is in the spread of ideas. Without cryonic medallions, cryonic bracelets, cryonic advertising and cryonic preachers there wouldn’t be any cryonics movement. There seems to be a “show your friends how much you care—freeze them!” dynamic.
I have a similar theory about the pyramids. Not so much a real voyage to the afterlife, but a means of reinforcing the pecking order in everyone’s minds.
I am contrasting this signaling perspective with Robin’s views—in part because I am aware that he is sympathetic to signaling theories in other contexts.
I do think signaling is an important part of cryonics—but I was probably rash to attempt to quantify the effect. I don’t pretend to have any good way of measuring its overall contribution relative to other factors.
Say that “Yudkowsky has no real clue” and that those “AI academics are right”? Just another crackpot among many “well educated” ones, no big thing. Almost not worth mentioning.
Say that this crackpot is of the Edisonian kind! In that case it is something well worth mentioning.
Important enough to at least discuss with him ON THE TOPICS, and not on some meta level. Meta-level discussion is sometimes (as here, IMHO) just a waste of time.
IF he is right. Probably he isn’t, and nothing dramatic will happen. Probably Edison and the Wright brothers and many others also looked wrong from the perspective of their time.
Note that if the official Academia (Hanson’s guys) is correct, the amount of new information is exactly zero. Nothing interesting to talk about or to expect.
I am after the cases where they were and are wrong. I am after the new context that misfits like Yudkowsky or Edison might provide and that “the Hansons” can’t, by definition.
P.S. I don’t want to get into a discussion; I believe it’s better to just state a judgment even if without a useful explanation than to not state a judgment at all; however it may be perceived negatively for those obscure status-related reasons (see “offense” on the wiki), so I predict that this comment would’ve been downvoted without this addendum, and not impossibly still will be with it. This “P.S.” is dedicated to all the relevant occasions, not this one alone where I could’ve used the time to actually address the topic.
If I’m reading the conversation correctly, Vladimir Nesov is indicating with his remark that he is no longer interested in continuing. If he were not a major participant in the thread, a downvote would be appropriate, but as a major participant, more is required of him.
I am not confused and I don’t want a discussion either. I only state that new content and a new context usually come from outside the kosher set of views.
Of course, most of the outsiders are deluded poor devils. Yet they are almost the only source of new information.
“The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes”.
“Radical outcomes” seems like a case of avoiding refutation by being vague. However, IMO, they will need to establish the truth of their assertion before they will get very far there. Good luck to them with that.
The AAAI interim report is really too vague to bother much with—but I suspect they are making another error.
Many robot enthusiasts pour scorn on the idea that robots will take over the world. How To Survive A Robot Uprising is a classic presentation on this theme. A hostile takeover is a pretty unrealistic scenario—but these folk often ignore the possibility of a rapid robot rise from within society driven by mutual love. One day robots will be smart, sexy, powerful and cool—and then we will want to become more like them.
Why will we witness an intelligence explosion? Because nature has a long history of favouring big creatures with brains—and because the capability to satisfy those selection pressures has finally arrived.
The process has already resulted in enormous data-centres, the size of factories.
Thinking about it, they are probably criticising the (genuinely dud) idea that an intelligence explosion will start suddenly at some future point with the invention of some machine—rather than gradually arising out of the growth of today’s already self-improving economies and industries.
I think both ways are still open: the intelligence explosion from a self-improving economy, and the intelligence explosion from a fringe of this process.
Re: “overall skepticism about the prospect of an intelligence explosion”...?
My guess would be that they are unfamiliar with the issues or haven’t thought things through very much. Or maybe they don’t have a good understanding of what that concept refers to (see link to my explanation—hopefully above). They present no useful analysis of the point—so it is hard to know why they think what they think.
The AAAI seems to have publicly come to these issues later than many in the community—and it seems to be playing catch-up.
It must be possible to engage at least some of these people in some sort of conversation to understand their positions, whether a public dialog as with Scott Aaronson or in private.
You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact they are world class AI experts and disagree with your conclusions?
I’m sorry, but I don’t really have a proper lesson plan laid out—although the ongoing work of organizing LW into sequences may certainly help with that. It would depend on the specific issue and what I thought needed to be understood about that issue.
If they drew my feedback cycle of an intelligence explosion and then drew a different feedback cycle and explained why it fit the historical evidence equally well, then I would certainly sit up and take notice. It wouldn’t matter if they’d done it on their own or by reading my stuff.
E.g. Chalmers at the Singularity Summit is an example of an outsider who wandered in and started doing a modular analysis of the issues, who would certainly have earned the right of serious consideration and serious reply if, counterfactually, he had reached different conclusions about takeoff… with respect to only the parts that he gave a modular analysis of, though, not necessarily e.g. the statement that de novo AI is unlikely because no one will understand intelligence. If Chalmers did a modular analysis of that part, it wasn’t clear from the presentation.
Roughly, what I expect to happen by default is no modular analysis at all—just snap consideration and snap judgment. I feel little need to explain such.
You, or somebody anyway, could still offer a modular causal model of that snap consideration and snap judgment. For example:
What cached models of the planning abilities of future machine intelligences did the academics have available when they made the snap judgment?
What fraction of the academics are aware of any current published AI architectures which could reliably reason over plans at the level of abstraction of “implement a proxy intelligence”?
What fraction of them have thought carefully about when there might be future practical AI architectures that could do this?
What fraction use a process for answering questions about the category distinctions that will be known in the future, which uses as an unconscious default the category distinctions known in the present?
What false claims have been made about AI in the past? What decision rules might academics have learned to use, to protect themselves from losing prestige for being associated with false claims like those?
How much do those decision rules refer to modular causal analyses of the object of a claim and of the fact that people are making the claim?
How much do those decision rules refer to intuitions about other peoples’ states of mind and social category memberships?
How much do those decision rules refer to intuitions about other peoples’ intuitive decision rules?
Historically, have peoples’ own abilities to do modular causal analyses been good enough to make them reliably safe from losing prestige by being associated with false claims? What fraction of academics have the intuitive impression that their own ability to do analysis isn’t good enough to make them reliably safe from losing prestige by association with a false claim, so that they can only be safe if they use intuitions about the states of mind and social category memberships of a claim’s proponents?
Of those AI academics who believe that a machine intelligence could exist which could outmaneuver humans if motivated, how do they think about the possible motivations of a machine intelligence?
What fraction of them think about AI design in terms of a formalism such as approximating optimal sequential decision theory under a utility function? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?
What fraction of them think about AI design in terms of intuitively justified decision heuristics? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?
What fraction of them understand enough evolutionary psychology and/or cognitive psychology to recognize moral evaluations as algorithmically caused, so that they can reject the default intuitive explanation of the cause of moral evaluations, which seems to be: “there are intrinsic moral qualities attached to objects in the world, and when any intelligent agent apprehends an object with a moral quality, the action of the moral quality on the agent’s intelligence is to cause the agent to experience a moral evaluation”?
What combination of specializations in AI, moral philosophy, and cognitive psychology would an academic need to have, to be an “expert” whose disagreements about the material causes and implementation of moral evaluations were significant?
On the question of takeoff speeds, what fraction of the AI academics have a good enough intuitive understanding of decision theory to see that a point estimate or default scenario should not be substituted for a marginal posterior distribution, even in a situation where it would be socially costly in the default scenario to take actions which prevent large losses in one tail of the distribution?
What fraction recognized that they had a prior belief distribution over possible takeoff speeds at all?
What fraction understood that, regarding a variable which is underconstrained by evidence, “other people would disapprove of my belief distribution about this variable” is not an indicator for “my belief distribution about this variable puts mass in the wrong places”, except insofar as there is some causal reason to expect that disapproval would be somehow correlated with falsehood?
What other popular concerns have academics historically needed to dismiss? What decision rules have they learned to decide whether they need to dismiss a current popular concern?
After they make a decision to dismiss a popular concern, what kinds of causal explanations of the existence of that concern do they make reference to, when arguing to other people that they should agree with the decision?
How much do the true decision rules depend on those causal explanations?
How much do the decision rules depend on intuitions about the concerned peoples’ states of mind and social category memberships?
How much do the causal explanations use concepts which are implicitly defined by reference to hidden intuitions about states of mind and social category memberships?
Can these intuitively defined concepts carry the full weight of the causal explanations they are used to support, or does their power to cause agreement come from their ability to activate social intuitions?
Which people are the AI academics aware of, who have argued that intelligence explosion is a concern? What social categories do they intuit those people to be members of? What arguments are they aware of? What states of mind do they intuit those arguments to be indicators of (e.g. as in intuitively computed separating equilibria)?
What people and arguments did the AI academics think the other AI academics were thinking of? If only a few of the academics were thinking of people and arguments who they intuited to come from credible social categories and rational states of mind, would they have been able to communicate this to the others?
When the AI academics made the decision to dismiss concern about an intelligence explosion, what kinds of causal explanations of the existence of that concern did they intuitively expect that they would be able make reference to, if they later had to argue to other people that they should agree with the decision?
It is also possible to model the social process in the panel:
Are there factors that might make a joint statement by a panel of AI academics reflect different conclusions than they would have individually reached if they had been outsiders to the AI profession with the same AI expertise?
One salient consideration would be that agreeing with popular concern about an intelligence explosion would result in their funding being cut. What effects would this have had?
Would it have affected the order in which they became consciously aware of lines of argument that might make an intelligence explosion seem less or more deserving of concern?
Would it have made them associate concern about an intelligence explosion with unpopularity? In doubtful situations, unpopularity of an argument is one cue for its unjustifiability. Would they associate unpopularity with logical unjustifiability, and then lose willingness to support logically justifiable lines of argument that made an intelligence explosion seem deserving of concern, just as if they had felt those lines of argument to be logically unjustifiable, but without any actual unjustifiability?
There are social norms to justify taking prestige away from people who push a claim that an argument is justifiable while knowing that other prestigious people think the argument to to be a marker of a non-credible social category or state of mind. How would this have affected the discussion?
If there were panelists who personally thought the intelligence explosion argument was plausible, and they were in the minority, would the authors of the panel’s report mention it?
Would the authors know about it?
If the authors knew about it, would they feel any justification or need to mention those opinions in the report, given that the other panelists may have imposed on the authors an implicit social obligation to not write a report that would “unfairly” associate them with anything they think will cause them to lose prestige?
If panelists in such a minority knew that the report would not mention their opinions, would they feel any need or justification to object, given the existence of that same implicit social obligation?
How good are groups of people at making judgments about arguments that unprecedented things will have grave consequences?
How common is a reflective, causal understanding of the intuitions people use when judging popular concerns and arguments about unprecedented things, of the sort that would be needed to compute conditional probabilities like “Pr( we would decide that concern is not justified | we made our decision according to intuition X ∧ concern was justified )”?
How common is the ability to communicate the epistemic implications of that understanding in real-time while a discussion is happening, to keep it from going wrong?
From that AAAI panel’s interim report:
Given this description it is hard to imagine they haven’t imagined the prospect of the rate of intelligence growth depending on the level of system intelligence.
I don’t see any arguments listed, though. I know there’s at least some smart people on that panel (e.g. Horvitz) so I could be wrong, but experience has taught me to be pessimistic, and pessimism says I have no particular evidence that anyone started breaking the problem down into modular pieces, as opposed to, say, stating a few snap perceptual judgments at each other and then moving on.
Why are you so optimistic about this sort of thing, Robin? You’re usually more cynical about what would happen when academics have no status incentive to get it right and every status incentive to dismiss the silly. We both have experience with novices encountering these problems and running straight into the brick wall of policy proposals without even trying a modular analysis. Why on this one occasion do you turn around and suppose that the case we don’t know will be so unlike the cases we do know?
The point is that this is a subtle and central issue to engage, so I was suggesting that you to consider describing your analysis more explicitly. Is there is never any point in listening to academics on “silly” topics? Is there never any point in listening to academics who haven’t explicitly told you how they’ve broken a problem down into modular parts, no matter now distinguished the are on related topics? Are people who have a modular parts analysis always a more reliable source than people who don’t, no matter what else their other features? And so on.
I confess, it doesn’t seem to me on a gut level like this is either healthy to obsess about, or productive to obsess about. It seems more like worrying that my status isn’t high enough to do work, than actually working. If someone shows up with amazing analyses I haven’t considered, I can just listen to the analyses then. Why spend time trying to guess who might have a hidden deep analysis I haven’t seen, when the prior is so much in favor of them having made a snap judgment, and it’s not clear why if they’ve got a deep analysis they wouldn’t just present it?
I think that on a purely pragmatic level there’s a lot to be said for the Traditional Rationalist concept of demanding that Authority show its work, even if it doesn’t seem like what ideal Bayesians would do.
You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it. It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement. If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?
...and I’ve held and stated this same position pretty much from the beginning, no? E.g. http://lesswrong.com/lw/gr/the_modesty_argument/
I was under the impression that my verbal analysis matched and cleverly excused my concrete behavior.
Well (and I’m pretty sure this matches what I’ve been saying to you over the last few years) just because two ideal Bayesians would do something naturally, doesn’t mean you can singlehandedly come closer to Bayesianism by imitating the surface behavior of agreement. I’m not sure that doing elaborate analyses to excuse your disagreement helps much either. http://wiki.lesswrong.com/wiki/Occam%27s_Imaginary_Razor
I’d spend much more time worrying about the implications of Aumann agreement, if I thought the other party actually knew my arguments, took my arguments very seriously, took the Aumann problem seriously with respect to me in particular, and in general had a sense of immense gravitas about the possible consequences of abusing their power to make me update. This begins to approach the conditions for actually doing what ideal Bayesians do. Michael Vassar and I have practiced Aumann agreement a bit; I’ve never literally done the probability exchange-and-update thing with anyone else. (Edit: Actually on recollection I played this game a couple of times at a Less Wrong meetup.)
No such condition is remotely approached by disagreeing with the AAAI panel, so I don’t think I could, in real life, improve my epistemic position by pretending that they were ideal Bayesians who were fully informed about my reasons and yet disagreed with me anyway (in which case I ought to just update to match their estimates, rather than coming up with elaborate excuses to disagree with them!)
Well I disagree with you strongly that there is no point in considering the views of others if you are not sure they know the details of your arguments, or of the disagreement literature, or that those others are “rational.” Guess I should elaborate my view in a separate post.
There’s certainly always a point in considering specific arguments. But to be nervous merely that someone else has a different view, one ought, generally speaking, to suspect (a) that they know something you do not or at least (b) that you know no more than them (or far more rarely (c) that you are in a situation of mutual Aumann awareness and equal mutual respect for one another’s meta-rationality). As far as I’m concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It’s not that I have specific reason to distrust these people—the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.
I don’t actually spend time obsessing about that sort of thing except when you’re asking me those sorts of questions—putting so much energy into self-justification and excuses would just slow me down if Horvitz showed up tomorrow with an argument I hadn’t considered.
I’ll say again: I think there’s much to be said for the Traditional Rationalist ideal of—once you’re at least inside a science and have enough expertise to evaluate the arguments—paying attention only when people lay out their arguments on the table, rather than trying to guess authority (or arguing over who’s most meta-rational). That’s not saying “there’s no point in considering the views of others”. It’s focusing your energy on the object level, where your thought time is most likely to be productive.
Is it that awful to say: “Show me your reasons”? Think of the prior probabilities!
You admit you have not done much to make it easy to show them your reasons. You have not written up your key arguments in a compact form using standard style and terminology and submitted it to standard journals. You also admit you have not contacted any of them to ask them for their reasons; Horvitz would have to “show up” for you to listen to him. This looks a lot like a status pissing contest; the obvious interpretation: since you think you are better than them, you won’t ask them for their reasons, and you won’t make it easy for them to understand your reasons, as that would admit they are higher status. They will instead have to acknowledge your higher status by coming to you and doing things your way. And of course they won’t, since by ordinary standards they have higher status. So you ensure there will be no conversation, and with no conversation you can invoke your “traditional” (non-Bayesian) rationality standard to declare you have no need to consider their opinions.
You’re being slightly silly. I simply don’t expect them to pay any attention to me one way or another. As it stands, if e.g. Horvitz showed up and asked questions, I’d immediately direct him to http://singinst.org/AIRisk.pdf (the chapter I did for Bostrom), and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries. Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.
FYI, I’ve talked with Peter Norvig a bit. He was mostly interested in the CEV / FAI-spec part of the problem—I don’t think we discussed hard takeoffs much per se. I certainly wouldn’t have brushed him off if he’d started asking!
“and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries.”
Why? No one in the academic community would spend that much time reading all that blog material for answers that would be best given in concise form in a published academic paper. So why not spend the time? Unless you think you are so much of an expert in the field that you do not need the academic community. If that is the case, where are your publications, where are your credentials, and where is the proof of this expertise (expert being a term applied based on actual knowledge and accomplishments)?
“Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.”
Why? If you expect to make FAI you will undoubtedly need people in the academic communities’ help; unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has a 0 probability of creating FAI first. That being said, your best hope is to convince others that the cause is worthwhile, and if that is the case you are looking at the professional and academic AI community.
I am sorry; I prefer to be blunt... that way there is no mistaking meanings.
No.
That ‘probably not even then’ part is significant.
Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied ‘1’ and probably more than ‘0’ too.
“Since I don’t expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven’t, well, expended a huge amount of effort to get it.”
“Why? If you expect to make FAI you will undoubtedly need people in the academic communities’ help; unless you plan to do this whole project by yourself or with purely amateur help. …”
“That ‘probably not even then’ part is significant.”
My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when you’re speaking from an organization like SIAI, which does not have the people or money to get the job done. In fact, SIAI doesn’t have enough money to pay for the computing hardware to make human-level AI.
“Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied ‘1’ and probably more than ‘0’ too.”
If he doesn’t agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI, or of not having enough time in the day to solve the problems associated with constructing it. Not to mention the fact that when you close yourself off to outside influence that much, you often end up with ideas riddled with problems that someone on the outside would have pointed out, had they looked at the idea.
If you have never taken an idea from idea to product this can be hard to understand.
And so the utter difference of working assumptions is revealed.
Back-of-a-napkin math (sanity-checked in the sketch below):
10^4 neurons per supercomputer
10^11 neurons per brain
10^7 supercomputers per brain
1.3*10^6 dollars per supercomputer
1.3*10^13 dollars per brain
Edit: Disclaimer: Edit: NOT!
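A quick check of the arithmetic above, taking the stated per-supercomputer neuron count and price as the thread’s own assumptions rather than established figures:

```python
# Back-of-a-napkin check: the per-supercomputer neuron count (10^4) and price
# ($1.3M) are the figures stated in the thread, not independently verified.
neurons_per_supercomputer = 1e4
neurons_per_brain = 1e11
dollars_per_supercomputer = 1.3e6

supercomputers_per_brain = neurons_per_brain / neurons_per_supercomputer  # 1e7
dollars_per_brain = supercomputers_per_brain * dollars_per_supercomputer  # 1.3e13

print(f"{supercomputers_per_brain:.0e} supercomputers per brain")   # 1e+07
print(f"${dollars_per_brain:.2e} per brain")                        # $1.30e+13
```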
Another difference in working assumptions.
It’s a fact stated by the guy in the video, not an assumption.
No need to disclaim, your figures are sound enough and I took them as a demonstration of another rather significant difference between the assumptions of Eliezer and mormon2 (or mormon2’s sources).
I have. I’ve also failed to take other ideas to products and so agree with that part of your position, just not the argument as it relates to context.
If there is a status pissing contest, they started it! ;-)
“On the latter, some panelists believe that the AAAI study was held amidst a perception of urgency by non-experts (e.g., a book and a forthcoming movie titled “The Singularity is Near”), and focus of attention, expectation, and concern growing among the general population.”
Agree with them that there is much scaremongering going on in the field—but disagree with them about there not being much chance of an intelligence explosion.
I wondered why these folk got so much press. My guess is that the media probably thought the “AAAI Presidential Panel on Long-Term AI Futures” had something to do with a report commissioned indirectly for the country’s president. In fact it just refers to the president of their organisation. A media-savvy move—though it probably represents deliberately misleading information.
Almost surely world class academic AI experts do “know something you do not” about the future possibilities of AI. To declare that topic to be your field and them to be “outside” it seems hubris of the first order.
This conversation seems to be following what appears to me to be a trend in Robin and Eliezer’s disagreements (those observable by me). This is one reason I would be fascinated if Eliezer did cover Robin’s initial question, informed somewhat by Eliezer’s interpretation.
I recall Eliezer mentioning in a tangential comment that he disagreed with Robin not just on the particular conclusion but more foundationally on how much weight should be given to certain types of evidence or argument. (Excuse my paraphrase from hazy memory, my googling failed me.) This is a difference that extends far beyond just R & E and Eliezer has hinted at insights that intrigue me.
How can you be so confident that you know so much about this topic that no world class AI expert could know something relevant that you do not? Surely they considered the fact that people like you think you know a lot about this topic, and they nevertheless thought it reasonable to form a disagreeing opinion based on the attention they had given it. You want to dismiss their judgment as “snap” because they did not spend many hours considering your arguments, but they clearly disagree with that assessment of how much consideration your arguments deserve. Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians, even when such authorities do not review contrarian arguments in as much detail as contrarians think best. You want to dismiss the rationality of disagreement literature as irrelevant because you don’t think those you disagree with are rational, but they probably don’t think you are rational either, and you are probably both right. But the same essential logic also says that irrational people should take seriously the fact that other irrational people disagree with them.
Does Daphne Koller know more than I do about the future possibilities of object-oriented Bayes Nets? Almost certainly. And, um… there are various complicated ways I could put this… but, well, so what?
(No disrespect intended to Koller, and OOBN/probabilistic relational models/lifted Bayes/etcetera is on my short-list of things to study next.)
How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not? Surely they considered the fact that people like you think you know a lot about this topic, and they nevertheless thought it reasonable to form a disagreeing opinion based on the attention they had given it. You want to dismiss their judgment as “snap” because they did not spend many hours considering your arguments, but they clearly disagree with that assessment of how much consideration your arguments deserve. Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians, even when such authorities do not review contrarian arguments in as much detail as contrarians think best. You want to dismiss the rationality of disagreement literature as irrelevant because you don’t think those you disagree with are rational, but they probably don’t think you are rational either, and you are probably both right. But the same essential logic also says that irrational people should take seriously the fact that other irrational people disagree with them.
You changed what I said into a bizarre absolute. I am assuming no such thing. I am just assuming that, by default, world class experts on various topics in narrow AI, produce their beliefs about the Singularity by snap judgment rather than detailed modular analysis. This is a prior and hence an unstable probability—as soon as I see contrary evidence, as soon as I see the actual analysis, it gets revoked.
They have no such disagreement. They have no idea I exist. On the rare occasion when I encounter such a person who is physically aware of my existence, we often manage to have interesting though brief conversations despite their having read none of my stuff.
Science only works when you use it; scientific authority derives from science. If you’ve got Lord Kelvin running around saying that you can’t have flying machines because it’s ridiculous, the problem isn’t that he’s an Authority, the problem is that he’s running on naked snap intuitive judgments of absurdity and the Wright Brothers are using actual math. The asymmetry in this case is not that pronounced but, even so, the default unstable prior is to assume that experts in narrow AI algorithms are not doing anything more complicated than this to produce their judgments about the probability of intelligence explosion—both the ones with negative affect who say “Never, you religious fear-monger!” and the ones with positive affect who say “Yes! Soon! And they shall do no wrong!” As soon as I see actual analysis, then we can talk about the actual analysis!
Added: In this field, what happens by default is that people talk complete nonsense. I spent my first years talking complete nonsense. In a situation like that, everyone has to show their work! Or at least show that they did some work! No exceptions!
This conversation is probably reaching diminishing returns, so let me sum up. I propose that it would be instructive to you and many others if you would discuss what your dispute looks like from an outside view—what uninformed neutral but intelligent and rational observers should conclude about this topic from the features of this dispute they can observe from the outside. Such features include the various credentials of each party, and the effort he or she has spent on the topic and on engaging the other parties. If you think that a reasonable outsider viewer would have far less confidence in your conclusions than you do, then you must think that you possess hidden info, such as that your arguments are in fact far more persuasive than one could reasonably expect knowing only the outside features of the situation. Then you might ask why the usual sorts of clues that tend to leak out about argument persuasiveness have failed to do so in this case.
Robin, why do most academic experts (e.g. in biology) disagree with you (and Eliezer) about cryonics? Perhaps a few have detailed theories on why it’s hopeless, or simply have higher priorities than maximizing their expected survival time; but mostly it seems they’ve simply never given it much consideration, either because they’re entirely unaware of it or assume it’s some kind of sci-fi cult practice, and they don’t take cult practices seriously as a rule. But clearly people in this situation can be wrong, as you yourself believe in this instance.
Similarly, I think most of the apparent “disagreement” about the Singularity is nothing more than unawareness of Yudkowsky and his arguments. As far as I can tell, academics who come into contact with him tend to take him seriously, and their disagreements are limited to matters of detail, such as how fast AI is approaching (decades vs. centuries) and the exact form it will take (uploads/enhancement vs. de novo). They mainly agree that SIAI’s work is worth doing by somebody. Examples include yourself, Scott Aaronson, and David Chalmers.
Cryonics is also a good case to analyze what an outsider should think, given what they can see. But of course “they laughed at Galileo too” is hardly a strong argument for contrarian views. Yes sometimes contrarians are right—the key question is how outside observers, or self-doubting insiders, can tell when contrarians are right.
Outsiders can tell when contrarians are right by assessing their arguments, once they’ve decided the contrarians are worth listening to. This in turn can be ascertained through the usual means, such as association with credentialed or otherwise high-status folks. So for instance, you are affiliated with a respectable institution, Bostrom with an even more respectable institution, and the fact that EY was co-blogging at Overcoming Bias thus implied that if your and Bostrom’s arguments were worth listening to, so were his. (This is more or less my own story; and I started reading Overcoming Bias because it appeared on Scott Aaronson’s blogroll.)
Hence it seems that Yudkowsky’s affiliations are already strong enough to signal competence to those academics interested in the subjects he deals with, in which case we should expect to see detailed, inside-view analyses from insiders who disagree. In the absence of that, we have to conclude that insiders either agree or are simply unaware—and the latter, if I understand correctly, is a problem whose solution falls more under the responsibility of people like Vassar rather than Yudkowsky.
No, for most people it is infeasible to evaluate who is right by working through the details of the arguments. The fact that Eliezer wrote on a blog affiliated with Oxford is surely not enough to lead one to expect detailed rebuttal analyses from academics who disagree with him.
Well, for most people on most topics it is infeasible to evaluate who is right, period. At the end of the day, some effort is usually required to obtain reliable information. Even surveys of expert opinion may be difficult to conduct if the field is narrow and non-”traditional”. As for whatever few specialists there may be in Singularity issues, I think you expect too little of them if you don’t think Eliezer currently has enough status to expect rebuttals.
I figure cryonics serves mainly a signaling role.
The message probably reads something like:
“I’m a geek, I think I am really important—and I’m loaded”.
So, despite the fact that we (human phenotypes) are endowed with a powerful self-preservation instinct, you find a signaling explanation more likely than a straightforward application of self-preservation to a person’s concept of their own mind?
Given your peculiar preferences which value your DNA more highly than your brain, it’s tempting to chalk your absurd hypothesis up to the typical mind fallacy. But I think you’re well aware of the difference in values responsible for the split between your assessment of cryonics and Eliezer’s or Robin’s.
So I think you’re value sniping. I think your comment was made in bad faith as a roundabout way of signaling your values in a context where explicitly mentioning them would be seen as inappropriate or off-topic. I don’t know what your motivation would be—did mention of cryonics remind you that many here do not share your values, and thereby motivate you to plant your flag in the discussion?
Please feel free to provide evidence to the contrary by explaining in more detail why self-preservation is an unlikely motivation for cryonics relative to signaling.
An over-generalisation of self-preservation instincts certainly seems to be part of it.
On the other hand, one of my interests is in the spread of ideas. Without cryonic medallions, cryonic bracelets, cryonic advertising and cryonic preachers there wouldn’t be any cryonics movement. There seems to be a “show your friends how much you care—freeze them!” dynamic.
I have a similar theory about the pyramids. Not so much a real voyage to the afterlife, but a means of reinforcing the pecking order in everyone’s minds.
I am contrasting this signaling perspective with Robin’s views—in part because I am aware that he is sympathetic to signaling theories in other contexts.
I do think signaling is an important part of cryonics—but I was probably rash to attempt to quantify the effect. I don’t pretend to have any good way of measuring its overall contribution relative to other factors.
Re: “They have no idea I exist.”
Are you sure? You may be underestimating your own fame in this instance.
Say that “Yudkowsky has no real clue” and that those “AI academics are right”? Then he is just another crackpot among many “well educated” ones; no big thing, hardly worth mentioning.
But say that this crackpot is of the Edisonian kind! In that case it is something well worth mentioning.
Important enough to at least discuss with him ON THE TOPICS, and not on some meta level. Meta-level discussion is sometimes (as here, IMHO) just a waste of time.
I’m not sure what you mean by your first few sentences. But I disagree with your last two. It is good for me to see this debate.
You get zilch in the case where Hanson (and the Academia) is right: zero in the informative sense. You get quite a bit if Yudkowsky is right.
Verifying Hanson (and the so-called Academia) means no new information.
You get not needing to run around trying to save the world and a pony if Hanson is right. It’s not useful to be deluded.
IFF he is right. Probably he is, and nothing dramatic will happen. But Edison and the Wright brothers and many others were also “probably wrong,” looking from their historical perspective.
Note that if the official Academia (Hanson’s guys) is correct, the amount of new information is exactly zero. Nothing interesting to talk about or to expect.
I am after the cases where they were and are wrong. I am after the new context that misfits like Yudkowsky or Edison might provide and that “the Hansons” can’t, by definition.
You are confused.
P.S. I don’t want to get into a discussion; I believe it’s better to just state a judgment, even without a useful explanation, than to not state a judgment at all; however, it may be perceived negatively for those obscure status-related reasons (see “offense” on the wiki), so I predict that this comment would’ve been downvoted without this addendum, and not impossibly still will be with it. This “P.S.” is dedicated to all the relevant occasions, not this one alone, where I could’ve used the time to actually address the topic.
And a simple downvote isn’t sufficient?
If I’m reading the conversation correctly, Vladimir Nesov is indicating with his remark that he is no longer interested in continuing. If he were not a major participant in the thread, a downvote would be appropriate, but as a major participant, more is required of him.
I downvoted it. If it included two quotes from the context followed by ‘You are confused’ I would have upvoted it.
I initially tried that, but simple citation didn’t make the point any more rigorous.
I am not confused, and I don’t want a discussion either. I only state that new content and a new context usually come from outside the kosher set of views.
Of course, most of the outsiders are deluded poor devils. Yet they are almost the only source of new information.
From that AAAI document:
“The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes”.
“Radical outcomes” seems like a case of avoiding refutation by being vague. However, IMO, they will need to establish the truth of their assertion before they will get very far there. Good luck to them with that.
The AAAI interim report is really too vague to bother much with—but I suspect they are making another error.
Many robot enthusiasts pour scorn on the idea that robots will take over the world. How To Survive A Robot Uprising is a classic presentation on this theme. A hostile takeover is a pretty unrealistic scenario—but these folk often ignore the possibility of a rapid robot rise from within society driven by mutual love. One day robots will be smart, sexy, powerful and cool—and then we will want to become more like them.
Why will we witness an intelligence explosion? Because nature has a long history of favouring big creatures with brains—and because the capability to satisfy those selection pressures has finally arrived.
The process has already resulted in enormous data centres the size of factories. As I have said:
http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
Thinking about it, they are probably criticising the (genuinely dud) idea that an intelligence explosion will start suddenly at some future point with the invention of some machine—rather than gradually arising out of the growth of today’s already self-improving economies and industries.
I think both ways are still open: an intelligence explosion from a self-improving economy, and an intelligence explosion from a fringe of this process.
Did you take a look at my “The Intelligence Explosion Is Happening Now”? The point is surely a matter of history—not futurism.
Yes and you are right.
Great—thanks for your effort and input.
Re: “overall skepticism about the prospect of an intelligence explosion”...?
My guess would be that they are unfamiliar with the issues or haven’t thought things through very much. Or maybe they don’t have a good understanding of what that concept refers to (see link to my explanation—hopefully above). They present no useful analysis of the point—so it is hard to know why they think what they think.
The AAAI seems to have come to these issues publicly later than much of the community—and it seems to be playing catch-up.
It looks as though we will be hearing more from these folk soon:
“Futurists’ report reviews dangers of smart robots”
http://www.pittsburghlive.com/x/pittsburghtrib/news/pittsburgh/s_651056.html
It doesn’t sound much better than the first time around.
It must be possible to engage at least some of these people in some sort of conversation to understand their positions, whether a public dialog as with Scott Aaronson or in private.
Chalmers reached some odd conclusions. Probably not as odd as his material about zombies and consciousness, though.