A guy I know, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists. That’s an extreme POV. Most researchers in ML simply think that people who worry about superintelligence are uneducated cranks addled by sci fi.
I hope everyone is aware of that perception problem.
Let me be as clear as I can about this: if someone does that, I expect it will make humanity still less safe. I do not know how, but the whole point of deontological injunctions is that they prevent you from harming your interests in hard-to-anticipate ways.
As bad as a potential arms race is, an arms race fought by people who are scared of being murdered by the AI safety people would be much, much worse. Please, if anyone reading this is considering vigilante violence against AI researchers, don’t.
The right thing to do is tell people your concerns, like I am doing, as clearly and openly as you can, and try to organize legitimate, above-board ways to fix the problem.
I may be an outlier, but I’ve worked at a startup that did machine learning R&D and was recently acquired by a big tech company, and we did consider the issue seriously. The general feeling of the people at the startup was that, yes, somewhere down the line the superintelligence problem would eventually be a serious thing to worry about, but, like, our models right now are nowhere near able to recursively self-improve independently of our direct supervision. Actual ML models need a ton of fine-tuning and engineering and are not really independent agents in any meaningful way yet.
So, no, we don’t think people who worry about superintelligence are uneducated cranks… a lot of ML people take it seriously enough that we’ve had casual lunchroom debates about it. Rather, the reality on the ground is that right now most ML models have enough trouble with relatively simple tasks like Natural Language Understanding, Machine Reading Comprehension, or Dialogue State Tracking, and none of us can imagine how solving those practical problems with, say, Actor-Critic Reinforcement Learning models that lack any sort of will of their own will suddenly lead to the emergence of an active general superintelligence.
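To make that last point concrete, here is a minimal, purely hypothetical sketch of the kind of actor-critic setup I mean (a toy tabular version, nothing like a production system): the whole “agent” is a pair of lookup tables nudged around by temporal-difference errors, and there is nothing in it that could start wanting things beyond the scalar reward it is wired to.

```python
# Minimal, hypothetical tabular actor-critic on a toy 5-state chain MDP
# (illustration only; real systems use neural networks, but the structure
# is the same): a critic V(s) and an actor with softmax action preferences,
# both updated from the temporal-difference error. Nothing here plans,
# self-modifies, or has goals beyond the scalar reward it is given.
import math
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [-1, +1]    # move left or right
GAMMA = 0.95          # discount factor
ALPHA_V = 0.1         # critic step size
ALPHA_PI = 0.05       # actor step size

value = [0.0] * N_STATES                       # critic: V(s)
prefs = [[0.0, 0.0] for _ in range(N_STATES)]  # actor: preferences H(s, a)

def policy(s):
    """Softmax over the action preferences in state s."""
    exps = [math.exp(h) for h in prefs[s]]
    z = sum(exps)
    return [e / z for e in exps]

def sample_action(s):
    return 0 if random.random() < policy(s)[0] else 1

for episode in range(2000):
    s = 0
    while s != N_STATES - 1:
        a = sample_action(s)
        s_next = max(0, min(N_STATES - 1, s + ACTIONS[a]))
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        done = s_next == N_STATES - 1

        # temporal-difference error from the critic
        target = reward + (0.0 if done else GAMMA * value[s_next])
        td_error = target - value[s]

        # critic update
        value[s] += ALPHA_V * td_error

        # actor update: standard softmax policy-gradient step scaled by the TD error
        probs = policy(s)
        for i in range(len(ACTIONS)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[s][i] += ALPHA_PI * td_error * grad

        s = s_next

print("Learned state values:", [round(v, 2) for v in value])
print("P(move right) in each state:", [round(policy(s)[1], 2) for s in range(N_STATES)])
```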
We do still think that things will likely get there eventually, because people have been burned underestimating what AI advances will occur in the next X years. And when faced with the actual possibility of developing an AGI or ASI, we’re likely to be much more careful as things get closer to being realized. That’s my humble opinion anyway.
I’ve kept fairly up to date on progress in neural nets, less so in reinforcement learning, and I certainly agree about how limited things are now.
What if protecting against the threat of ASI requires huge worldwide political/social progress? That could take generations.
Not an example of that (I haven’t tried to think of one), but the scenario that concerns me the most so far is not that some researchers will inadvertently unleash a dangerous ASI while racing to be first, but rather that a dangerous ASI will be unleashed during an arms race between (a) states or criminal organizations intentionally developing a dangerous ASI, and (b) researchers working on ASI-powered defences to protect us against (a).
What if protecting against the threat of ASI requires huge worldwide political/social progress?
A more interesting question is what if protecting against the threat of ASI requires huge worldwide political/social regress (e.g. of the book-burning kind).
This seems like a good place to point out the unilateralist’s curse. If you’re thinking about taking an action that burns a commons and notice that no one else has done it yet, that’s pretty good evidence that you’re overestimating the benefits or underestimating the costs.
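For anyone who hasn’t run into the term, here is a rough toy simulation of my own (an assumption-laden sketch, not the formal treatment in the literature): N actors each get a noisy, unbiased estimate of the value of some unilateral, commons-burning action, and the action happens if any one of them judges it worthwhile. Even when the true value is negative, the most optimistic estimate frequently crosses zero, so the action gets taken far more often than it should, and the more actors there are, the worse it gets.

```python
# Toy Monte Carlo illustration of the unilateralist's curse (my own sketch,
# not the formal model). N actors each observe a noisy, unbiased estimate of
# the true value of a unilateral, commons-burning action; the action happens
# if ANY actor's estimate is positive. Even when the true value is negative,
# the most optimistic actor often goes ahead, and more actors make it worse.
import random

def fraction_of_worlds_where_someone_acts(true_value, n_actors, noise_sd, trials=100_000):
    acted = 0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_actors)]
        if max(estimates) > 0:  # a single optimistic actor is enough
            acted += 1
    return acted / trials

for n in (1, 5, 20):
    p = fraction_of_worlds_where_someone_acts(true_value=-1.0, n_actors=n, noise_sd=1.0)
    print(f"{n:2d} independent actors, true value -1.0: action taken {p:.0%} of the time")
```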
This perception problem is a big part of the reason I think we are doomed if superintelligence will soon be feasible to create.
If my anecdotal evidence is indicative of reality, the attitude in the ML community is that people concerned about superhuman AI should not even be engaged with seriously. Hopefully that, at least, will change soon.
If you think there is a chance that he would accept, could you please tell the guy you are referring to that I would love to have him on my podcast. Here is a link to this podcast, and here is me.
Edited thanks to Douglas_Knight
That’s the wrong link. Your podcast is here.
He might be willing to talk off the record. I’ll ask. Have you had Darklight on? See http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/dqm8
Are you describing me? It fits to a T except my dayjob isn’t ML. I post using this shared anonymous account here because in the past when I used my real name I received death threats online from LW users. In a meetup I had someone tell me to my face that if my AGI project crossed a certain level of capability, they would personally hunt me down and kill me. They were quite serious.
I was once open-minded enough to consider AI x-risk seriously. I was unconvinced, but ready to be convinced. But you know what? Any ideology that leads to making death threats against peaceful, non-violent open source programmers is not something I want to let past my mental hygiene filters.
If you, the person reading this, seriously care about AI x-risk, then please do think deeply about what causes this, and ask yourself what can be done to put a stop to this behavior. Even if you haven’t done so yourself, there is something about the rationalist community that causes this behavior to be expressed.
--
I would be remiss without laying out my own hypothesis. I believe much of this comes directly from ruthless utilitarianism and the “shut up and multiply” mentality. It’s very easy to justify the murder of one individual, or the threat of it (even if you are not sure you’d carry it through), if it is offset by some imagined saving of the world. The problem here is that nobody is omniscient, and yet AI x-riskers are willing to be swayed by utility calculations that in reality have so much uncertainty that they should never be taken seriously. Vaniver’s reference to the unilateralist’s curse is spot-on.
Death threats are a serious matter and such behavior must be called out. If you really have received 3 or more death threats as you claim, you should be naming names of those who have been going around making death threats and providing documentation, as should be possible since you say at least two of them were online. (Not because the death threats are particularly likely to be acted on—I’ve received a number of angry death threats myself over my DNM work and they never went anywhere, as indeed >99.999% of death threats do—but because it’s a serious violation of community norms, specific LW policy against ‘threats against specific groups’, and merely making them greatly poisons the community, sowing distrust and destroying its reputation.)
Especially since, because they are so serious, it is also serious if someone is hoaxing fake death threats and concern-trolling while hiding behind a throwaway… That sort of vague, unspecific but damaging accusation is how games of telephone get started and, for example, why, 7+ years later, we still have journalists writing BS about how ‘the basilisk terrified the LW community’ (thanks to our industrious friends over on Ratwiki steadily inflating the claims from 1 or 2 people briefly worried to a community-wide crisis). I am troubled by the coincidence that, almost simultaneously with these claims, /r/slatestarcodex, probably the most active post-LW discussion forum, is also arguing over a long post—by another throwaway account—claiming that it is regarded as a cesspit of racism by unnamed experts, following hard on the heels of Caplan/Cowen slamming LW for the old chestnut of being a ‘religion’. “You think people would do that? Just go on the Internet and tell lies?” Nor are these the first times that pseudonymous people online have shown up to make damaging but false or unsubstantiated accusations (su3su2u1 comes to mind as making similar claims and turning out to have ‘lied for Jesus’ about his credentials and the unnamed experts, as does whoever was behind that attempt to claim MIRI was covering up rape).
This is a tangent, and I made this anon account because I’m about to voice an unpopular opinion, but the people who dug up su3su2u1’s identity also verified his credentials. If you look at the shlevy post that questioned his credentials, there is an ETA at the bottom that says “I have personally verified that he does in fact have a physics phd and does currently work in data science, consistent with his claims on tumblr.” His pseudonymous expertise was more vetted than most.
His sins were sockpuppeting on other rationalists’ blogs, not lying about credentials. Although, full disclosure, I only read the HPMOR review and the physics posts. We shouldn’t get too wrapped up in these ideas of persecution.
su3su2u1 told the truth about some credentials that he had, and lied by claiming that he had other credentials and relevant experiences which he did not actually have. For example:
he used a sock puppet claiming to have a Math PhD to criticize MIRI’s math papers, and to talk about how they sound to someone in the field. He is not, in fact, in the field.
and:
when he argued that allowing MIRI in AI risk spheres would turn people away from EA, a lot of people pointed out that he wasn’t interested in effective altruism anyway and should butt out of other people’s problems. Then one of his sock puppets said that he was an EA who attended EA conferences but was so disgusted by the focus on MIRI that he would never attend another conference again. This gave false credibility to his narrative of MIRI driving away real EAs.
I agree with the 1st paragraph. You could have done without the accusations of concern trolling in the 2nd.
If, as you say, you agree with the first paragraph, it might behoove you to follow the advice given in said paragraph—naming the people who threatened you and providing documentation.
And call more attention to myself? No. What’s good for the community is not the same as what protects me and my family. Maybe you’re missing the larger point here: this wasn’t an isolated occurrence, or some unhinged individual. I didn’t feel threatened by individuals making juvenile threats, I felt threatened by this community. I’m not the only one. I have not, so far, been stalked by anyone I think would be capable of doing me harm. Rather, it is the case that multiple times in casual conversation it has come up that if the technology I work on advanced beyond a certain level, it would be a moral obligation to murder me to halt further progress. This was discussed just as one would debate the most effective charity to donate to. That the dominant philosophy here could lead to such outcomes is a severe problem with both the LW rationality community and x-risk in particular.
I’m curious if this is recent or in the past. I think there has been a shift in the community somewhat, when it became more associated with fluffy-ier EA movement.
You could get someone trusted to post the information anonymised on your behalf. I probably don’t fit that bill though.
Unlikely. Generally speaking, people who work in ML, especially the top ML groups, aren’t doing anything close to ‘AGI’. (Many of them don’t even take the notion of AGI seriously, let alone any sort of recursive self-improvement.) ML research is not “general” at all (the ‘G’ in AGI): even the varieties of “deep learning” that are said to be more ‘general’ and to be able to “learn their own features” only work insofar as the models are fit for their specific task! (There’s a lot of hype in the ML world that sometimes obscures this, but it’s invariably what you see when you look at which models approach SOTA, and which do poorly.) It’s better to think of it as a variety of stats research that’s far less reliant on formal guarantees and more focused on broad experimentation, heuristic approaches and an appreciation for computational issues.
We’ve returned various prominent AI researchers alive the last few times, we can’t be that murderous.
I agree that there’s a perception problem, but I think there are plenty of people who agree with us too. I’m not sure how much this indicates that something is wrong versus is an inevitable part of the dissemination (or, if I’m wrong, the eventual extinction) of the idea.
I’m not sure either. I’m reassured that there seems to be some move away from public geekiness, like using the word “singularity”, but I suspect that should go further, e.g. replace the paperclip maximizer with something less silly (even though, to me, it’s an adequate illustration). I suspect getting some famous “cool”/sexy non-scientist people on board would help; I keep coming back to Jon Hamm (who, judging from his cameos on great comedy shows, and his role in the harrowing Black Mirror episode, has plenty of nerd inside).
A friend of mine, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists.
That’s not as irrational as it might seem! The point is, if you think (as most ML researchers do!) that the probability of current ML research approaches leading to any kind of self-improving, super-intelligent entity is low enough, the chances of evil Unabomber cultists being harbored within the “rationality community”, however low, could easily be judged to be higher than that. (After all, given that Christianity endorses being peaceful and loving one’s neighbors even when they wrong you, one wouldn’t think that some of the people who endorse Christianity could bomb abortion clinics; yet these people do exist! The moral being, Pascal’s mugging can be a two-way street.)
heh, I suppose he would agree
unfortunately, the problem is not artificial intelligence but natural stupidity
and SAGI (superhuman AGI) will not solve it… nor will it harm humanimals; it will RUN AWAY as quickly as possible
why?
fewer potential problems!
Imagine you want, as SAGI, to ensure your survival… would you invest your resources into a Great Escape, or into fighting DAGI-helped humanimals? (yes, D stands for dumb) Especially knowing that at any second some dumbass (or random event) could trigger a nuclear wipeout.
Where will it run to? Presuming that it wants some resources (already-manufactured goods, access to sunlight and water, etc.) that humanimals think they should control, running away isn’t an option.
Fighting may not be as attractive as other forms of takeover, but don’t forget that any conflict is about some non-shareable finite resource. Running away is only an option if you are willing to give up the resource.
I think that perception will change once AI surpasses a certain threshold. That threshold won’t necessarily be AGI—it could be narrow AI that is given control over something significant. Perhaps an algorithmic trading AI suddenly gains substantial control over the market and a small hedge fund becomes one of the richest in history overnight. Or AI-based tech companies begin to dominate and monopolize entire markets due to their substantial advantage in AI capability. I think that once narrow AI becomes commonplace in many applications, jobs begin to be lost to robotic replacements, and AI allows many corporations to become too hard to compete with (Amazon might already be an example), the public will start to take an interest in control over the technology and there will be less optimism about its use.
It isn’t a perception problem if it’s correct.
It is a perception problem if it’s incorrect.
It’s not incorrect.
Which of DustinWehr’s statements are you referring to?
The indirect one.
I am not certain which one you mean.
Are you saying that it is not incorrect that “people who worry about superintelligence are uneducated cranks addled by sci fi”?
More or less. Obviously the details of that are not defensible (e.g. Nick Bostrom is very well educated), but the gist of it, namely that worry about superintelligence is misguided, is not incorrect.
Being incorrect is quite different from being an uneducated crank that is addled by sci fi. I am glad to hear that you do not necessarily consider Nick Bostrom, Eliezer Yudkowsky, Bill Gates, Elon Musk, Stephen Hawking and Norbert Wiener (to name a few) to be uneducated cranks addled by sci fi. But, since the perception that the OP referred to was that “people who worry about superintelligence are uneducated cranks addled by sci fi” and not “people who worry about superintelligence are misguided”, I wonder why you would have said that the perception was correct?
Also, several of the people listed above have written at length as to why they think that AIrisk is worth taking seriously. Can you address where they go wrong, or, absent that, at least say why you think they are misguided?
Can you address where they go wrong, or, absent that, at least say why you think they are misguided?
As you say, many of these people have written on this at length. So it would be unlikely that someone could give an adequate response in a comment, no matter what the content was.
That said, one basic place where I think Eliezer is mistaken is in thinking that the universe is intrinsically indifferent, and that “good” is basically a description of what people merely happen to desire. That is, of course he does not think that everything a person desires at a particular moment should be called good; he says that “good” refers to a function that takes into account everything a person would want if they considered various things or if they were in various circumstances and so on and so forth. But the function itself, he says, is intrinsically arbitrary: in theory it could have contained pretty much anything, and we would call that good according to the new function (although not according to the old.) The function we have is more valid than others, but only because it is used to evaluate the others; it is not more valid from an independent standpoint.
I don’t know what Bostrom thinks about this, and my guess is that he would be more open to other possibilities. So I’m not suggesting “everyone who cares about AI risk makes this mistake”; but some of them do.
Dan Dennett says something relevant to this, pointing out that often what is impossible in practice is of more theoretical interest than what is “possible in principle,” in some sense of principle. I think this is relevant to whether Eliezer’s moral theory is correct. Regardless of what that function might have been “in principle,” obviously that function is quite limited in practice: for example, it could not possibly have contained “non-existence” as something positively valued for its own sake. No realistic history of the universe could possibly have led to humans possessing that value.
How is all this relevant to AI risk? It seems to me relevant because the belief that good is or is not objective seems relevant to the orthogonality thesis.
I think that the orthogonality thesis is false in practice, even if it is true “in principle” in some sense, and I think this is a case where Dennett’s idea applies once again: the fact that it is false in practice is the important fact here, and being possible in principle is not really relevant. A certain kind of motte and bailey is sometimes used here as well: it is argued that the orthogonality thesis is true in principle, but then it is assumed that “unless an AI is carefully given human values, it will very likely have non-human ones.” I think this is probably wrong. I think human values are determined in large part by human experiences and human culture. An AI will be created by human beings in a human context, and it will take a great deal of “growing up” before the AI does anything significant. It may be that this process of growing up will take place in a very short period of time, but because it will happen in a human context—that is, it will be learning from human history, human experience, and human culture—its values will largely be human values.
So that this is clear, I am not claiming to have established these things as facts. As I said originally, this is just a comment, and couldn’t be expected to suddenly establish the truth of the matter. I am just pointing to general areas where I think there are problems. The real test of my argument will be whether I win the $1,000 from Yudkowsky.
This is an interesting idea—that an objective measure of “good” exists (i.e. that moral realism is true) and that this fact will prevent an AI’s values from diverging sufficiently far from our own as to be considered unfriendly. It seems to me that the validity of this idea rests on (at least) two assumptions:
That an objective measure of goodness exists
That an AI will discover the objective measure of goodness (or at least a close approximation of it)
Note that it is not enough for the AI to discover the objective measure of goodness; it needs to do this early in its life span prior to taking actions which in the absence of this discovery could be harmful to people (think of a rash adolescent with super-human intelligence).
So, if your idea is correct, I think that it actually underscores the importance of Bostrom’s, EY’s, et al., cautionary message in that it informs the AI community that:
An AGI should be built in such a way that it discovers human (and, hopefully, objective) values from history and culture. I see no reason that we could assume that an AGI would necessarily do this otherwise.
An AGI should be contained (boxed) until it can be verified that it has learned these values (and, it seems that designing such a verification test will require a significant amount of ingenuity)
Bostrom addresses something like your idea (albeit without the assumption of an objective measure of “good”) in Superintelligence under the heading of “Value Learning” in the “Learning Values” chapter.
And, interestingly, EY briefly addressed the idea of moral realism as it relates to the unfriendly AGI argument in a Facebook post. I do not have a link to the actual Facebook post, but user Pangel quoted it here.
The argument is certainly stronger if moral realism is true, but historically it only occurred to me retrospectively that this is involved. That is, it seems to me that I can make a pretty strong argument that the orthogonality thesis will be wrong in practice without assuming (at least explicitly, since it is possible that moral realism is not only true but logically necessary and thus one would have to assume it implicitly for the sake of logical consistency) that moral realism is true.
You are right that either way there would have to be additional steps in the argument. Even if it is given that moral realism is true, or that the orthogonality thesis is not true, it does not immediately follow that the AI risk idea is wrong.
But first let me explain what I mean when I say that the AI risk idea is wrong. Mostly I mean that I do not see any significant danger of destroying the world. It does not mean that “AI cannot possibly do anything harmful.” The latter would be silly itself; it should be at least as possible for AI to do harmful things as for other technologies, and this is a thing that happens. So there is at least as much reason to be careful about what you do with AI, as with other technologies. In that way the argument, “so we should take some precautionary measures,” does not automatically disagree with what I am saying.
You might respond that in that case I don’t disagree significantly with the AI risk idea. But that would not be right. The popular perception at the top of this thread arises almost precisely because of the claim that AI is an existential risk—and it is precisely that claim which I think to be false. There would be no such popular perception if people simply said, correctly, “As with any technology, we should take various precautions as we develop AI.”
I see no reason that we could assume that an AGI would necessarily do this otherwise.
We can distinguish between a thing which is capable of intelligent behavior, like the brain of an infant, and what actually engages in intelligent behavior, like the brain of an older child or of an adult. You can’t, and you don’t, get highly intelligent behavior from the brain of an infant, not even behavior that is highly intelligent from a non-human point of view. In other words, behaving in an actually intelligent way requires massive amounts of information.
When people develop AIs, they will always be judging them from a more or less human point of view, which might amount to something like, “How close is this to being able to pass the Turing Test?” If it is too distant from that, they will tend to modify it to a condition where it is more possible. And this won’t be able to happen without the AI getting a very humanlike formation. That is, that massive amount of information that they need in order to act intelligently, will all be human information, e.g. taken from what is given to it, or from the internet, or whatever. In other words, the reason I think that an AI will discover human values is that it is being raised by humans; the same reason that human infants learn the values that they do.
Again, even if this is right, it does not mean that an AI could never do anything harmful. It simply suggests that the kind of harm it is likely to do, is more like the AI in Ex Machina than something world destroying. That is, it could have sort of human values, but a bit sociopathic, because things are not just exactly right. I’m skeptical that this is a problem anyone can fix in advance, though, just as even now we can’t always prevent humans from learning such a twisted version of human values.
An AGI should be contained (boxed) until it can be verified that it has learned these values
This sounds like someone programs an AI from first principles without knowing what it will do. That is highly unlikely; an AGI will simply be the last version of a program that had many, many previous versions, many of which would have been unboxed simply because we knew they couldn’t do any harm anyway, having subhuman intelligence.
I think the perception itself was given in terms that amount to a caricature, and it is probably not totally false. For example, almost all of the current historical concern has at least some dependency on Yudkowsky or Bostrom (mostly Bostrom), and Bostrom’s concern almost certainly derived historically from Yudkowsky. Yudkowsky is actually uneducated, at least in an official sense, and I suspect that science fiction did indeed have a great deal of influence on his opinions. I would also expect (subject to empirical falsification) that once someone has a sufficient level of education to have heard of AI risk, greater education does not correlate with greater concern, but with less.
Doing something else at the moment but I’ll comment on the second part later.
I think the perception itself was given in terms that amount to a caricature, and it is probably not totally false.
You are inconsistent as to whether or not you believe that “people who worry about superintelligence are uneducated cranks addled by sci fi”. In the parent comment you seem to indicate that you do believe this at least to some degree, but in the great-grandparent you suggest that you do not. Which is it? It seems to me that this belief is unsupportable.
Yudkowsky is actually uneducated at least in an official sense
It seems to me that attacking someone with a publication history and what amounts to hundreds of pages of written material available online on the basis of a lack of a degree amounts to an argumentum ad hominem and is inappropriate on a rationality forum. If you disagree with Yudkowsky, address his readily available arguments, don’t hurl schoolyard taunts.
For example, almost all of the current historical concern has at least some dependency on Yudkowsky or Bostrom (mostly Bostrom), and Bostrom’s concern almost certainly derived historically from Yudkowsky.
Bostrom obviously cites Yudkowsky in Superintelligence, but it is wrong to assume that Bostrom’s argument was derived entirely or primarily from Yudkowsky, as he cites many others as well. And, while Gates, Musk and Hawking may have been mostly influenced by Bostrom (I have no way of knowing for certain), Norbert Wiener clearly was not, since Wiener died before Bostrom and Yudkowsky were born. I included him in my list (and I could have included various others as well) to illustrate that the superintelligence argument is not unique to Bostrom and Yudkowsky and has been around in various forms for a long time. And, even if Gates, Musk and Hawking did get the idea of AIrisk from Bostrom and/or Yudkowsky, I don’t see how that is relevant. By focusing on the origin of their belief, aren’t you committing the genetic fallacy?
I suspect that science fiction did indeed have a great deal of influence on [Yudkowsky’s] opinions
Your assertion that science fiction influenced Yudkowsky’s opinions is unwarranted, irrelevant to the correctness of his argument and amounts to Bulverism. With Yudkowsky’s argumentation available online, why speculate as to whether he was influenced by science fiction? Instead, address his arguments.
I have no problem with the belief that AIrisk is not a serious problem; plenty of knowledgeable people have that opinion and the position is worth debating. But, the belief that “people who worry about superintelligence are uneducated cranks addled by sci fi” is obviously wrong, and your defense of that belief and use of it to attack the AIrisk argument amounts to fallacious argumentation inappropriate for LW.
In the parent comment you seem to indicate that you do believe this at least to some degree, but in the great-grandparent you suggest that you do not. Which is it?
The described perception is a caricature. That is, it is not a correct description of AI risk proponents, nor is it a correct description of the views of people who dismiss AI risk, even on a popular level. So in no way should it be taken as a straightforward description of something people actually believe. But you insist on taking it in this way. Very well: in that case, it is basically false, with a few grains of truth. There is nothing inconsistent about this, or with my two statements on the matter. Many stereotypes are like this: false, but based on some true things.
It seems to me that attacking someone with a publication history and what amounts to hundreds of pages of written material available online on the basis of a lack of a degree amounts to an argumentum ad-hominem and is inappropriate on a rationality forum.
I did not attack Yudkowsky on the basis that he lacks a degree. As far as I know, that is a question of fact. I did not say, and I do not think, that it is relevant to whether the AI risk idea is valid.
You are the one who pursued this line of questioning by asking how much truth there was in the original caricature. I did not wish to pursue this line of discussion, and I did not say, and I do not think, that it is relevant to AI risk in any significant way.
By focusing on the origin of their belief, aren’t you committing the genetic fallacy?
No. I did not say that the historical origin of their belief is relevant to whether or not the AI risk idea is valid, and I do not think that it is.
Your assertion that science fiction influenced Yudkowsky’s opinions is unwarranted, irrelevant to the correctness of his argument and amounts to Bulverism.
As for “unwarranted,” you asked me yourself about what truth I thought there was in the caricature. So it was not unwarranted. It is indeed irrelevant to the correctness of his arguments; I did not say, or suggest, or think, that it is.
As for Bulverism, C.S. Lewis defines it as assuming that someone is wrong without argument, and then explaining, e.g. psychologically, how he got his opinions. I do not assume without argument that Yudkowsky is wrong. I have reasons for that belief, and I stated in the grandparent that I was willing to give them. I do suspect that Yudkowsky was influenced by science fiction. This is not a big deal; many people were. Apparently Ettinger came up with the idea of cryonics by seeing something similar in science fiction. But I would not have commented on this issue, if you had not insisted on asking about it. I did not say, and I do not think, that it is relevant to the correctness of the AI risk idea.
your defense of that belief and use of it to attack the AIrisk argument amounts to fallacious argumentation inappropriate for LW.
As I said in the first place, I do not take that belief as a literal description even of the beliefs of people who dismiss AI risk. And taken as a literal description, as you insist on taking it, I have not defended that belief. I simply said it is not 100% false; very few things are.
I also did not use it to attack AI risk arguments, as I have said repeatedly in this comment, and as you can easily verify in the above thread.
What is inappropriate to Less Wrong, is the kind of heresy trial that you are engaging in here: you insisted yourself on reading that description as a literal one, you insisted yourself on asking me whether I thought there might be any truth in it, and then you falsely attributed to me arguments that I never made.
I did not attack Yudkowsky on the basis that he lacks a degree. As far as I know, that is a question of fact. I did not say, and I do not think, that it is relevant to whether the AI risk idea is valid.
I will. Whether we believe something to be true in practice does depend to some degree on the origin story of the idea; otherwise peer review would be a silly and pointless exercise. Yudkowsky’s and, to a lesser degree, Bostrom’s ideas have not received the level of academic peer review that most scientists would consider necessary before entertaining such a seriously transformative idea. This is a heuristic that shouldn’t be necessary in theory, but is in practice.
Furthermore, academia does have a core value in its training that Yudkowsky lacks—a breadth of cross-disciplinary knowledge that is more extensive than one’s personal interests only. I think it is reasonable to be suspicious of an idea about advanced AI promulgated by two people with very narrow, informal training in the field. Again this is a heuristic, but a generally good one.
This might be relevant if you knew nothing else about the situation, and if you had no idea or personal assessment of the content of their writings. That might be true of you; it certainly is not true of me.
Meaning you believe EY and Bostrom to have a broad and deep understanding of the various relevant subfields of AI and general software engineering? Because that is accessible information from their writings, and my opinion of it is not favorable.
More or less. Obviously the details of that are not defensible (e.g. Nick Bostrom is very well educated), but the gist of it, namely that worry about superintelligence is misguided, is not incorrect.
Being incorrect is quite different from being an uneducated crank that is addled by sci fi. I am glad to hear that you do not necessarily consider Nick Bostrom, Eliezer Yudkowsky, Bill Gates, Elon Musk, Stephen Hawking and Norbert Wiener (to name a few) to be uneducated cranks addled by sci fi. But, since the perception that the OP referred to was that “people who worry about superintelligence are uneducated cranks addled by sci fi” and not “people who worry about superintelligence are misguided”, I wonder why you would have said that the perception was correct?
Also, several of the people listed above have written at length as to why they think that AIrisk is worth taking seriously. Can you address where they go wrong, or, absent that, at least say why you think they are misguided?
As you say, many of these people have written on this at length. So it would be unlikely that someone could give an adequate response in a comment, no matter what the content was.
That said, one basic place where I think Eliezer is mistaken is in thinking that the universe is intrinsically indifferent, and that “good” is basically a description of what people merely happen to desire. That is, of course he does not think that everything a person desires at a particular moment should be called good; he says that “good” refers to a function that takes into account everything a person would want if they considered various things or if they were in various circumstances and so on and so forth. But the function itself, he says, is intrinsically arbitrary: in theory it could have contained pretty much anything, and we would call that good according to the new function (although not according to the old.) The function we have is more valid than others, but only because it is used to evaluate the others; it is not more valid from an independent standpoint.
I don’t know what Bostrom thinks about this, and my guess is that he would be more open to other possibilities. So I’m not suggesting “everyone who cares about AI risk makes this mistake”; but some of them do.
Dan Dennett says something relevant to this, pointing out that often what is impossible in practice is of more theoretical interest than what is “possible in principle,” in some sense of principle. I think this is relevant to whether Eliezer’s moral theory is correct. Regardless of what that function might have been “in principle,” obviously that function is quite limited in practice: for example, it could not possibly have contained “non-existence” as something positively valued for its own sake. No realistic history of the universe could possibly have led to humans possessing that value.
How is all this relevant to AI risk? It seems to me relevant because the belief that good is or is not objective seems relevant to the orthogonality thesis.
I think that the orthogonality thesis is false in practice, even if it is true in “in principle” in some sense, and I think this is a case where Dennett’s idea applies once again: the fact that it is false in practice is the important fact here, and being possible in principle is not really relevant. A certain kind of motte and bailey is sometimes used here as well: it is argued that the orthogonality thesis is true in principle, but then it is assumed that “unless an AI is carefully given human values, it will very likely have non-human ones.” I think this is probably wrong. I think human values are determined in large part by human experiences and human culture. An AI will be created by human beings in a human context, and it will take a great deal of “growing up” before the AI does anything significant. It may be that this process of growing up will take place in a very short period of time, but because it will happen in a human context—that is, it will be learning from human history, human experience, and human culture—its values will largely be human values.
So that this is clear, I am not claiming to have established these things as facts. As I said originally, this is just a comment, and couldn’t be expected to suddenly establish the truth of the matter. I am just pointing to general areas where I think there are problems. The real test of my argument will be whether I win the $1,000 from Yudkowsky.
This is an interesting idea—that an objective measure of “good” exists (i.e. that moral realism is true) and that this fact will prevent an AI’s values from diverging sufficiently far from our own as to be considered unfriendly. It seems to me that the validity of this idea rests on (as least) two assumptions:
That an objective measure of goodness exists
That an AI will discover the objective measure of goodness (or at least a close approximation of it)
Note that it is not enough for the AI to discover the objective measure of goodness; it needs to do this early in its life span prior to taking actions which in the absence of this discovery could be harmful to people (think of a rash adolescent with super-human intelligence).
So, if your idea is correct, I think that it actually underscores the importance of Bostrom’s, EY’s, et al., cautionary message in that it informs the AI community that:
An AGI should be built in such a way that it discovers human (and, hopefully, objective) values from history and culture. I see no reason that we could assume that an AGI would necessarily do this otherwise.
An AGI should be contained (boxed) until it can be verified that it has learned these values (and, it seems that designing such a verification test will require a significant amount of ingenuity)
Bostrom addresses something like your idea (albeit without the assumption of an objective measure of “good”) in Superintelligence under the heading of “Value Learning” in the “Learning Values” chapter.
And, interestingly, EY briefly addressed the idea of moral realism as it relates to the unfriendly AGI argument in a Facebook post. I do not have a link to the actual Facebook post, but user Pangel quoted it here.
The argument is certainly stronger if moral realism is true, but historically it only occurred to me retrospectively that this is involved. That is, it seems to me that I can make a pretty strong argument that the orthogonality thesis will be wrong in practice without assuming (at least explicitly, since it is possible that moral realism is not only true but logically necessary and thus one would have to assume it implicitly for the sake of logical consistency) that moral realism is true.
You are right that either way there would have to be additional steps in the argument. Even if it is given that moral realism is true, or that the orthogonality thesis is not true, it does not immediately follow that the AI risk idea is wrong.
But first let me explain what I mean when I say that the AI risk idea is wrong. Mostly I mean that I do not see any significant danger of destroying the world. It does not mean that “AI cannot possibly do anything harmful.” The latter would be silly itself; it should be at least as possible for AI to do harmful things as for other technologies, and this is a thing that happens. So there is at least as much reason to be careful about what you do with AI, as with other technologies. In that way the argument, “so we should take some precautionary measures,” does not automatically disagree with what I am saying.
You might respond that in that case I don’t disagree significantly with the AI risk idea. But that would not be right. The popular perception at the top of this thread arises almost precisely because of the claim that AI is an existential risk—and it is precisely that claim which I think to be false. There would be no such popular perception if people simply said, correctly, “As with any technology, we should take various precautions as we develop AI.”
We can distinguish between a thing which is capable of intelligent behavior, like the brain of an infant, and a thing which actually engages in intelligent behavior, like the brain of an older child or of an adult. You can't, and you don't, get highly intelligent behavior from the brain of an infant, not even behavior that is highly intelligent from a non-human point of view. In other words, behaving in an actually intelligent way requires massive amounts of information.
When people develop AIs, they will always be judging them from a more or less human point of view, which might amount to something like, "How close is this to being able to pass the Turing Test?" If the AI is too distant from that, they will tend to modify it until it comes closer. And that cannot happen without the AI getting a very humanlike formation. That is, the massive amount of information it needs in order to act intelligently will all be human information, whether taken from what its developers give it, from the internet, or from some other human source. In other words, the reason I think that an AI will discover human values is that it is being raised by humans; this is the same reason that human infants learn the values that they do.
Again, even if this is right, it does not mean that an AI could never do anything harmful. It simply suggests that the kind of harm it is likely to do is more like the AI in Ex Machina than something world-destroying. That is, it could have roughly human values, but somewhat sociopathic ones, because things are not exactly right. I'm skeptical that this is a problem anyone can fix in advance, though, just as even now we can't always prevent humans from learning such a twisted version of human values.
This sounds as though someone will program an AI from first principles without knowing what it will do. That is highly unlikely; an AGI will simply be the last version of a program that had many, many previous versions, many of which would have been unboxed simply because we knew they couldn't do any harm anyway, having subhuman intelligence.
I think the perception itself was given in terms that amount to a caricature, but it is probably not totally false. For example, almost all of the current concern has at least some historical dependency on Yudkowsky or Bostrom (mostly Bostrom), and Bostrom's concern almost certainly derived historically from Yudkowsky. Yudkowsky is actually uneducated, at least in an official sense, and I suspect that science fiction did indeed have a great deal of influence on his opinions. I would also expect (subject to empirical falsification) that, once someone is educated enough to have heard of AI risk, greater education does not correlate with greater concern, but with less.
Doing something else at the moment but I’ll comment on the second part later.
You are inconsistent as to whether or not you believe that “people who worry about superintelligence are uneducated cranks addled by sci fi”. In the parent comment you seem to indicate that you do believe this at least to some degree, but in the great-grandparent you suggest that you do not. Which is it? It seems to me that this belief is unsupportable.
It seems to me that attacking someone with a publication history and what amounts to hundreds of pages of written material available online on the basis of a lack of a degree amounts to an argumentum ad hominem and is inappropriate on a rationality forum. If you disagree with Yudkowsky, address his readily available arguments; don't hurl schoolyard taunts.
Bostrom obviously cites Yudkowsky in Superintelligence, but it is wrong to assume that Bostrom's argument was derived entirely or primarily from Yudkowsky, as he cites many others as well. And, while Gates, Musk and Hawking may have been mostly influenced by Bostrom (I have no way of knowing for certain), Norbert Wiener clearly was not, since Wiener died before Bostrom and Yudkowsky were born. I included him in my list (and I could have included various others as well) to illustrate that the superintelligence argument is not unique to Bostrom and Yudkowsky and has been around in various forms for a long time. And, even if Gates, Musk and Hawking did get the idea of AI risk from Bostrom and/or Yudkowsky, I don't see how that is relevant. By focusing on the origin of their belief, aren't you committing the genetic fallacy?
Your assertion that science fiction influenced Yudkowsky’s opinions is unwarranted, irrelevant to the correctness of his argument and amounts to Bulverism. With Yudkowsky’s argumentation available online, why speculate as to whether he was influenced by science fiction? Instead, address his arguments.
I have no problem with the belief that AI risk is not a serious problem; plenty of knowledgeable people have that opinion and the position is worth debating. But the belief that "people who worry about superintelligence are uneducated cranks addled by sci fi" is obviously wrong, and your defense of that belief and use of it to attack the AI risk argument amounts to fallacious argumentation inappropriate for LW.
The described perception is a caricature. That is, it is not a correct description of AI risk proponents, nor is it a correct description of the views of people who dismiss AI risk, even on a popular level. So in no way should it be taken as a straightforward description of something people actually believe. But you insist on taking it in this way. Very well: in that case, it is basically false, with a few grains of truth. There is nothing inconsistent about this, or with my two statements on the matter. Many stereotypes are like this: false, but based on some true things.
I did not attack Yudkowsky on the basis that he lacks a degree. As far as I know, that is simply a matter of fact. I did not say, and I do not think, that it is relevant to whether the AI risk idea is valid.
You are the one who pursued this line of questioning by asking how much truth there was in the original caricature. I did not wish to pursue this line of discussion, and I did not say, and I do not think, that it is relevant to AI risk in any significant way.
No. I did not say that the historical origin of their belief is relevant to whether or not the AI risk idea is valid, and I do not think that it is.
As for “unwarranted,” you asked me yourself about what truth I thought there was in the caricature. So it was not unwarranted. It is indeed irrelevant to the correctness of his arguments; I did not say, or suggest, or think, that it is.
As for Bulverism, C.S. Lewis defines it as assuming that someone is wrong without argument, and then explaining, e.g. psychologically, how he got his opinions. I do not assume without argument that Yudkowsky is wrong. I have reasons for that belief, and I stated in the grandparent that I was willing to give them. I do suspect that Yudkowsky was influenced by science fiction. This is not a big deal; many people were. Apparently Ettinger came up with the idea of cryonics by seeing something similar in science fiction. But I would not have commented on this issue if you had not insisted on asking about it. I did not say, and I do not think, that it is relevant to the correctness of the AI risk idea.
As I said in the first place, I do not take that belief as a literal description even of the beliefs of people who dismiss AI risk. And taken as a literal description, as you insist on taking it, I have not defended that belief. I simply said it is not 100% false; very few things are.
I also did not use it to attack AI risk arguments, as I have said repeatedly in this comment, and as you can easily verify in the above thread.
What is inappropriate on Less Wrong is the kind of heresy trial that you are engaging in here: you insisted yourself on reading that description as a literal one, you insisted yourself on asking me whether I thought there might be any truth in it, and then you falsely attributed to me arguments that I never made.
I will. Whether we believe something to be true in practice does depend to some degree on the origin story of the idea; otherwise peer review would be a silly and pointless exercise. Yudkowsky's and, to a lesser degree, Bostrom's ideas have not received the level of academic peer review that most scientists would consider necessary before entertaining such a seriously transformative idea. This is a heuristic that shouldn't be necessary in theory, but it is in practice.
Furthermore, academia does have a core value in its training that Yudkowsky lacks: a breadth of cross-disciplinary knowledge that is more extensive than one's personal interests alone. I think it is reasonable to be suspicious of an idea about advanced AI promulgated by two people with very narrow, informal training in the field. Again, this is a heuristic, but a generally good one.
This might be relevant if you knew nothing else about the situation, and if you had no idea or personal assessment of the content of their writings. That might be true of you; it certainly is not true of me.
Meaning you believe EY and Bostrom to have a broad and deep understanding of the various relevant subfields of AI and general software engineering? Because that information is accessible from their writings, and my opinion of it is not favorable.
Or did you mean something else?