Your argument applies to any donation of any sort, in fact to any action of any sort. What is the probability that the thing I am currently doing is the best possible thing to do? Why, it's basically zero. Should I therefore not do it?
Referring to the SIAI as a cause “some posters stumbled on” is fairly inaccurate. It is a cause that a number of posters are dedicating their lives to, because in their analysis it is among the most efficient uses of their energy. In order to find a more efficient cause, I not only have to do some research, I have to do more research than the rational people who created SIAI (this isn’t entirely true, but it is much closer to the truth than your argument). The accessibility of SIAI in this setting may be strong evidence in its favor (this isn’t a coincidence; one reason to come to a place where rational people talk is that it tends to make good ideas more accessible than bad ones).
I am not donating myself. But for me there is some significant epistemic probability that the SIAI is in fact fighting for the most efficient possible cause, and that they are the best-equipped people currently fighting for it. If you have some information or an argument that suggests that this belief is inconsistent, you should share it rather than just imply that it is obvious (you have argued correctly that there probably exist better things to do with my resources, but I already knew that and it doesn’t help me decide what to actually do with my resources.)
By treating people who do things I approve of well, I can encourage them to do things I approve of, and conversely. By donating and proclaiming it loudly I am strongly suggesting that I personally approve of donating. Signaling isn’t necessarily irrational. If I am encouraging people to behave in ways that support my own goals, in what possible sense am I failing to be rational?
I’m curious: If you have the resources to donate (which you seem to imply by the statement that you have resources for which you can make a decision), and think it would be good to donate to the SIAI, then why don’t you donate?
(I don’t donate because I am not convinced unfriendly AI is such a big deal. I am aware that this may be a lack of calibration on my part, but from the material I have read on other sites, UFAI just doesn’t seem to be that big a risk. There were some discussions on the topic on stardestroyer.net. While the board isn’t as dedicated to rationality as this board is, the counterarguments seemed well-founded, although I don’t remember the specifics right now. If anybody is interested, I will try to dig them up.)
I don’t know if it is a good idea to donate to SIAI. From my perspective, there is a significant chance that it is a good idea, but also a significant chance that it isn’t. I think everyone here recognizes the possibility that money going to the SIAI will accomplish nothing good. I either have a higher estimate for that possibility, or a different response to uncertainty. I strongly suspect that I will be better informed in the future, so my response is to continue earning interest on my money and only start donating to anything when I have a better idea of what is going on (or if I die, in which case the issue is forced).
The main source of uncertainty is whether the SIAI’s approach is useful for developing FAI. Based on its output so far, my initial estimate is “probably not” (except insofar as they successfully raise awareness of the issues). This is balanced by my respect for the rationality and intelligence of the people involved in the SIAI, which is why I plan to wait until I get enough (logical) evidence to either correct “probably not” or to correct my current estimates about the fallibility of the people working with the SIAI.
This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes. Don’t tell me there isn’t irrational prejudice here!
The argument that any donation is subject to similar objections is silly because it’s obvious that a human-welfare maximizer would plug for the donation the donor believes best, despite the unlikelihood of finding the absolute best. It should also be obvious that my argument is that it’s unlikely that the Singularity Institute comes anywhere near the best donation, and one reason it’s unlikely is related to the unlikelihood of picking the best, even if you have to forgo the literal very best!
Numerous posters wouldn’t pick this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient. As Yvain said in the previous entry: “Deciding which charity is the best is hard.” Rationalists should detect the irrationality of making an exception when one option is the Singularity Institute.
(As to whether signaling is rational: that is completely irrelevant to the discussion, as we’re talking about the best donation from a human-welfare standpoint. To argue that the contribution makes sense because signaling might be as rational as donating, even if plausible, is merely to change the subject rather than to respond to the argument.)
Another argument for the Singularity Institute donation I can’t dismiss so easily. I read the counter-argument as saying that the Singularity Institute is clearly the best donation conceivable. To that I don’t have an answer, any more than I have a counter-argument for many outright delusions. I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain’s preceding entry, where each $500 saves a human life?
Before downvoting this, ask yourself whether you’re saying my point is unintelligent or shouldn’t be raised for other reasons. (Ask yourself if my point should be made, was made by anyone else, and isn’t better than at least 50% of the postings here. Ask yourself whether it’s rational to upvote the critic and his silly argument, and whether the many donors arrived at their views about the Singularity Institute’s importance based on the representativeness heuristic, the aura effect surrounding Eliezer, ignoring the probability of delivering any benefit, and a multitude of other errors in reasoning.)
This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes.
Envy is unbecoming; I recommend against displaying it. You’d be better off starting with your 3rd sentence and cutting the word “silly.”
I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain’s preceding entry, where each $500 saves a human life?
They have worked out this math, and it’s available in most of their promotional stuff that I’ve seen. Their argument is essentially “instead of operating on the level of individuals, we will either save all of humanity, present and future, or not.” And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it’s a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
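A back-of-the-envelope version of that comparison, as a sketch only: the probability increment and the population figure below are just the numbers quoted above, not endorsed estimates.

```python
# Expected lives saved per $500, using only the figures quoted above.
p_increment = 1 / 7e9      # assumed extra chance of success bought by $500
present_lives = 7e9        # lives at stake if success means saving present humanity

ev_xrisk = p_increment * present_lives   # = 1.0 expected (present) life
ev_direct = 1.0                          # $500 to the $500-per-life charity

print(ev_xrisk, ev_direct)
# Equal on present lives alone; any weight given to future lives is what
# tips the comparison toward the existential-risk donation.
```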
The question as to whether SIAI is the best way to nudge the entire future of humanity is a separate question from whether or not SIAI is a better bet than preventing malaria deaths. I don’t know if SIAI folks have made quantitative comparisons to other x-risk reduction plans, but I strongly suspect that if they have, a key feature of the comparison is that if we stop the Earth from getting hit by an asteroid, we just prevent bad stuff. If we get Friendly AI, we get unimaginably good stuff (and if we prevent Unfriendly AI without getting Friendly AI, we also prevent bad stuff).
They have worked out this math, and it’s available in most of their promotional stuff that I’ve seen. Their argument is essentially “instead of operating on the level of individuals, we will either save all of humanity, present and future, or not.” And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it’s a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal’s Wager. Pascal argued that if belief in God provided the most minuscule increase in the likelihood of being heaven-bound, worship was prudent in light of heaven’s infinite rewards. One of the argument’s fatal flaws is that there is no reason to think worshipping this god will avoid reprisals by the real god—or any number of equally improbable alternative outcomes.
The Singularity Institute imputes only finite utiles, but the flaw is the same. It could as easily come to pass that the Institute’s activities make matters worse. They aren’t entitled to assume their efforts to control matters won’t have effects the reverse of the ones intended, any more than Pascal had the right to assume worshipping this god isn’t precisely what will send one to hell. We just don’t know (can’t know) about god’s nature by merely postulating his possible existence: we can’t know that the minuscule effects don’t run the other way. Similarly, if not identically, there’s no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.
When the only reason an expectation seems to have any probability lies in its extreme tininess, the reverse outcome must be allowed the same benefit, canceling them out.
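The cancellation claim, stated as arithmetic. This is a sketch; the symmetric probabilities are the assumption being argued for here, not a measured fact.

```python
# If the only reason to grant the hopeful outcome any probability is its sheer
# tininess, the same reasoning grants the harmful outcome an equal probability.
p_better = 1e-9   # assumed tiny chance the Institute's efforts help
p_worse  = 1e-9   # equally tiny chance they make matters worse (by symmetry)
impact   = 1.0    # magnitude of the outcome, same scale in both directions

print(p_better * impact - p_worse * impact)   # 0.0: the expectations cancel
```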
there’s no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.
When I get in my car to drive to the grocery store, do you think there is any reason to favor the hypothesis that I will arrive at the grocery store over all the a priori equally unlikely hypotheses that I arrive at some other destination?
Depends. Do you know where the grocery store actually is? Do you have an accurate map of how to get there? Have you ever gone to the grocery store before?
Or is the grocery store an unknown, unsignposted location which no human being has ever visited or even knows how to visit?
Because if it were the latter, I’d bet pretty strongly against you getting there...
The point of the analogy is that probability mass is concentrated towards the desired outcome, not that the desired outcome becomes more likely than not.
In a case where no examples of grocery stores have ever been seen, when intelligent, educated people even doubt the possibility of the existence of a grocery store, and when some people who are looking for grocery stores are telling you you’re looking in the wrong direction, I’d seriously doubt that the intention to drive there was affecting the probability mass in any measurable amount.
If you were merely wandering aimlessly with the hope of encountering a grocery store, it would only affect your chance of ending up there insofar as you’d intentionally stop looking if you arrived at one, and not if you didn’t. But our grocery seeker is not operating in a complete absence of evidence with regard to how to locate groceries, should they turn out to exist, so the search is, if not well focused, at least not actually aimless.
I usually think about this not as expected utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but as such calculations being altogether unreliable, because our numerical intuitions outside the ranges we’re calibrated for are unreliable.
For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say “well, geez, 1e-10 is such a tiny number, why not?”
Which demonstrates that my brain isn’t calibrated to work with numbers in that range, which is no surprise.
So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.
Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal’s Wager.
They’re aware of this and have written about it. The argument is “just because something looks like a known fallacy doesn’t mean it’s fallacious.” If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn’t sound like Pascal’s Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast to their method.
It could as easily come to pass that the Institute’s activities make matters worse.
It’s not clear to me that it could as easily, and I think that’s where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they’re still ahead by 1e-6. With Pascal’s Wager, you don’t have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether or not Friendliness is better than Unfriendliness. It’s like praying to the set of all benevolent gods instead of picking Jesus over Buddha; there’s still a chance a malevolent god is the one you end up with, but it’s a better bet than picking solo (and you’re screwed anyway if you get a malevolent god).
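The same arithmetic as in the sketch further up the thread, but with the asymmetric numbers this commenter assumes (illustrative probabilities only):

```python
# Unlike the symmetric Pascal case, any asymmetry keeps the expectation positive.
p_better = 2e-6   # assumed chance the effort makes things better
p_worse  = 1e-6   # assumed chance it makes things worse
impact   = 1.0    # same outcome magnitude in both directions

print(p_better * impact - p_worse * impact)   # 1e-6 > 0
```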
I agree with you that it’s not clear that SIAI actually increases the chance of FAI occurring but I think it more likely that a non-zero effect is positive rather than negative.
The referenced essay by Eliezer didn’t deal with the present argument. Eliezer said, correctly, that the key to Pascal’s Wager is in the balanced potential outcomes, not in the use of infinity. But my argument doesn’t rely on infinities.
Tellingly, Eliezer ultimately flubs Pascal’s Wager itself, when he states (incredibly) that praying to various benevolent gods obviates the Wager argument. This should tell you (and him) that he hasn’t completely grasped the Wager. If you or other posters agree with Eliezer’s argument against the Wager argument, I’ll clarify, but at the moment the point looks so obvious as to make explanation otiose.
Now to your main point, which other posters also voice: that we have some reason to think preparing for AIs will help avert disaster, at least with greater likelihood than the reverse. I think one poster provided part of the refutation when he said we are intellectually unable to make intuitive estimates of exceedingly small probabilities. Combining this with the Pascal argument (which I was tempted to make explicit in my presentation of the argument but decided to avoid excessive complication at the outset), there’s no rational basis for assuming the minuscule probability we’re debating is positive.
Pascal is relevant because (if I’m right) the only reason to accept the minuscule probability when probabilities are so low goes something like this: if we strive to avert disaster, it will certainly be the case that, to whatever small extent, we’re more likely to succeed than to make things worse. But nobody can seriously claim to have made a probability estimate as low as the bottom limit SI offers. The reasoning goes from the inevitability of some difference in probability. The only thing the SI estimate has in its favor is that it’s so small, and the existence of such tiny differences can be presupposed. Which is true, but this reasoning from the inevitability of some difference doesn’t lead to any conclusion about the effect’s direction. If the probability were as low as the lower limit, there could be no rational basis for intuitively making a positive estimate of its magnitude.
Here’s an analogy. I flip a coin and concentrate very hard on ‘heads.’ I say my concentration has to make some difference. And this is undoubtedly true, if you’re willing to entertain sufficiently small probabilities. (My thoughts, being physical processes, have some effect on their surroundings. They even interact, minusculely, with the heads-or-tails outcome.) But no matter how strong my intuition that the effect goes the way I hope, I have no rational basis for accepting that intuition, the ultimate reason being that if so tiny a difference in fact existed, its estimation would be far beyond my intuitive capacities. If I had an honest hunch about the coin’s bias, even a small one, absent other evidence, I would rationally follow my intuition. There’s some probability it’s right, because my intuitions are generally more often correct than not. If I think the coin is slightly biased, there’s some chance I’m right; some chance, that is, however small, that I have managed, I know not how, to intuit this tiny bias. But there comes a point, certainly far above the Singularity Institute’s lower bound for the probability that they’d make a difference, at which it becomes absurd (as opposed to merely foolish) to rely on my intuition, because I can have no intuition valid to the slightest degree when the quantities are so low I can’t grasp them intuitively; nor can I hope to predict effects so terribly small that, if real, chaos effects would surely wipe them out.
I’ve seen comments questioning my attitude and motives, so I should probably say something about why I’m a bit hostile to this project; it’s not a matter of hypocrisy alone. The Singularity Institute competes with other causes for contributions, and it should concern people that it does so using specious argument. If SI intuits that the likelihood could be as low as its lower probability estimate for success, the only honest practice is to call the probability zero.
Numerous posters wouldn’t pick this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient.
You know this is a blog started by and run by Eliezer Yudkowsky—right? Many of the posters are fans. Looking at the rest of this thread, signaling seems to be involved in large quantities—but consider also the fact that there is a sampling bias.
Do you have any argument for why the SIAI is unlikely to be the best other than the sheer size of the option space?
This is a community where a lot of the members have put substantial thought into locating the optimum in that option space, and have well developed reasons for their conclusion. Further, there are not a lot of real charities clustered around that optimum. Simply claiming a low prior probability of picking the right charity is not a strong argument here. If you have additional arguments, I suggest you explain them further.
(I’ll also add that I personally arrived at the conclusion that an SIAI-like charity would be the optimal recipient for charitable donations before learning that it existed, or encountering Overcoming Bias, Less Wrong, or any of Eliezer’s writings, and in fact can completely discount the possibility that my rationality in reaching my conclusion was corrupted by an aura effect around anyone I considered to be smarter or more moral than myself.)
It is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence. This is, or at least was, basically Eliezer’s blog; if the thing that unites its readers is respect for his intelligence and judgment, then you should be completely unsurprised to see that many support SIAI. It is not clear how this is a form of irrationality, unless you are claiming that the facts are so clearly against the SIAI that we should be interpreting them as evidence against the intelligence of supporters of the SIAI.
Someone who is trying to have an effect on the course of an intelligence explosion is more likely to have one than someone who isn’t. I think many readers (myself included) believe very strongly that an intelligence explosion is almost certainly going to happen eventually and that how it occurs will have a dominant influence on the future of humanity. I don’t know if the SIAI will have a positive, negative, or negligible influence, but based on my current knowledge all of these possibilities are still reasonably likely (where even 1% is way more than likely enough to warrant attention).
It is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence.
No. It isn’t very strong evidence by itself. Jonathan Sarfati is a chess master, published chemist, and a prominent young earth creationist. If we added all the major anti-evolutionists together it would easily include not just Sarfati but also William Dembski, Michael Behe, and Jonathan Wells, all of whom are pretty intelligent. There are some people less prominently involved who are also very smart such as Forrest Mims.
This is not the only example of this sort. In general, we live in a world where there are many, many smart people. That multiple smart people care about something can’t do much beyond locate the hypothesis. One distinction is that most smart people who have looked at the SIAI have come away not thinking they are crazy, which is a very different situation from the sort of example given above, but by itself smart people having an interest is not strong evidence.
(Also, on a related note, see this subthread here, which made it clear that what smart people think, even when there is a general consensus among smart people, is not terribly reliable.)
I don’t really mean “smart” in the sense that a chess player proves their intelligence by being good at chess, or a mathematician proves their intelligence by being good at math. I mean smart in the sense of good at forming true beliefs and acting on them. If Nick Bostrom were to profess his belief that the world was created 6000 years ago, then I would say this constitutes reasonably strong evidence that the world was created 6000 years ago (when combined with existing evidence that Nick Bostrom is good at forming correct beliefs and reporting them honestly). Of course, there is much stronger evidence against this hypothesis (and it is extremely unlikely that I would have only Bostrom’s testimony—if he came to such a belief legitimately I would strongly expect there to be additional evidence he could present), so if he were to come out and say such a thing it would mostly just decrease my estimate of his intelligence rather than decreasing my estimate for the age of the Earth. The situation with SIAI is very different: I know of little convincing evidence bearing one way or the other on the question, and there are good reasons that intelligent people might not be able to produce easily understood evidence justifying their positions (since that evidence basically consists of a long thought process which they claim to have worked through over years).
Finally, though you didn’t object, I shouldn’t really have said “obvious.” There are definitely other plausible explanations for the observed behavior of SIAI supporters than their honest belief that it is the most important cause to support.
One distinction is that most smart people who have looked at the SIAI have come away not thinking they are crazy
There is a strong selection effect. Most people won’t even look too closely, or comment on their observations. I’m not sure in what sense we can expect what you wrote to be correct.
This comment, on this post, in this blog, comes across as a textbook example of the Texas Sharpshooter Fallacy. You don’t form your hypothesis after you’ve looked at the data, just as you don’t prove what a great shot you are by drawing a target around the bullet hole.
You don’t form your hypothesis after you’ve looked at the data, just as you don’t prove what a great shot you are by drawing a target around the bullet hole.
I normally form hypotheses after I’ve looked at the data, although before placing high credence in them I would prefer to have confirmation using different data.
I agree that I made at least one error in that post (as in most things I write). But what exactly are you calling out?
I believe an intelligence explosion is likely (and have believed this for a good decade). I know the SIAI purports to try to positively influence an explosion. I have observed that some smart people are behind this effort and believe it is worth spending their time on. This is enough motivation for me to seriously consider how effective I think that the SIAI will be. It is also enough for me to question the claim that many people supporting SIAI is clear evidence of irrationality.
I’m sorry, I haven’t found the thread yet. I lurked there for a long time and just now registered to use their search function and find it again. The main objection I clearly remember finding convincing was that nanotech can’t be used in the way many proponents of the Singularity propose, due to physical constraints, and thus an AI would be forced to rely on existing industry etc.
I’ll continue the search, though. The point was far more elaborated than one sentence. I face a similar problem as with climate science here: I thoroughly informed myself on the subject, came to the conclusion that climate change deniers are wrong, and then, little by little, forgot the details of the evidence that led to this conclusion. My memory could be better :-/
The main objection I clearly remember finding convincing was that nanotech can’t be used in the way many proponents of the Singularity propose, due to physical constraints, and thus an AI would be forced to rely on existing industry etc.
Of course, the Singularity argument in no way relies on nanotech.
Of course, the Singularity argument in no way relies on nanotech.
Without advanced real-world nanotechnology it will be considerably more difficult for an AI to FOOM and therefore pose an existential risk. It will have to make use of existing infrastructure, e.g. buy stock in chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won’t be able to create new computational substrate without the whole economy of the world supporting it. It won’t be able to create an army of robot drones overnight without it either.
In doing so it would have to make use of considerable amounts of social engineering without its creators noticing it. But more importantly, it will have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources, so the AGI could not profit from its ability to self-improve when it comes to acquiring the resources needed to self-improve in the first place.
So the absence of advanced nanotechnology constitutes an immense blow to any risk estimates that assume already-available nanotech. Further, if one assumes that nanotech is a prerequisite for AI going FOOM, then another question arises. It should be easier to create advanced replicators that destroy the world than to create an AGI that then creates advanced replicators, escapes control, and then destroys the world. Therefore one might ask which is the bigger risk here.
Giving the worm scenario a second thought, I do not see how an AGI would benefit from doing that. An AGI incapable of acquiring resources by means of advanced nanotech assemblers would likely just pretend to be friendly to get humans to build more advanced computational substrates. Launching any large-scale attacks on the existing infrastructure would cause havoc but also damage the AI itself, because governments (China etc.) would shut down the whole Internet rather than live with such an infection. Or even nuke the AI’s mainframe. And even if it could increase its intelligence further by making use of unsuitable and ineffective substrates it would still be incapacitated, stuck in the machine. Without advanced nanotechnology you simply cannot grow exponentially or make use of recursive self-improvement beyond the software-level. This in turn considerably reduces the existential risk posed by an AI. That is not to say that it wouldn’t be a huge catastrophe as well, but there are other catastrophes on the same scale against which you would have to compare it. Only by implicitly making FOOMing the premise can one make it the most dangerous high-impact risk (never mind aliens, the LHC etc.).
You don’t see how an AGI would benefit from spreading itself in a distributed form to every computer on the planet, control and manipulate all online communications, encrypt the contents of hard drives and keep their contents hostage, etc.? You could have the AGI’s code running on every Internet-connected computer on the planet, which would make it virtually impossible to get rid of.
And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I’m pretty sure that that’ll become less and less of a possibility as time goes on, especially if the AGI is also holding as hostage the contents of any hard drives without off-line backups. Add to that the fact that we’ve already had one case of a computer virus infecting the control computers in an off-line facility and, according to one report, delaying the nuclear program of a country by two years. Add to that the fact that even people’s normal phones are increasingly becoming smartphones, which can be hacked, and that simpler phones have already shown themselves to be vulnerable to being crashed by a well-crafted SMS. Let 20 more years pass and us become more and more dependent on IT, and an AGI could probably keep all of humanity as its hostage—shutting down the entire Internet simply wouldn’t be an option.
And even if it could increase its intelligence further by making use of unsuitable and ineffective substrates it would still be incapacitated, stuck in the machine. Without advanced nanotechnology you simply cannot grow exponentially or make use of recursive self-improvement beyond the software-level.
This is nonsense. The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don’t, having humans build whatever you want is relatively trivial. Besides, we already have early stage self-replicating machinery today: you don’t need nanotech for that.
And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I’m pretty sure that that’ll become less and less of a possibility as time goes on, especially if the AGI is also holding as hostage the contents of any hard drives without off-line backups.
I understand. But how do you differentiate this from the same incident involving an army of human hackers? The AI will likely be very vulnerable if it runs on some supercomputer, and even more so if it runs in the cloud (just use an EMP). In contrast, an army of human hackers can’t be disturbed that easily and is an enemy you can’t pinpoint. You are portraying a certain scenario here, and I do not see it as a convincing argument for elevating risks from AI above other risks.
The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don’t, having humans build whatever you want is relatively trivial.
It isn’t trivial. There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious. Remember, it won’t be able to coordinate a world conspiracy because it hasn’t been able to self-improve to that point yet; it is still trying to acquire enough resources, which it has to do the hard way without nanotech. You’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.
and even more so if it runs in the cloud (just use an EMP).
The point was that you can’t use an EMP if that means bringing down the whole human computer network.
It isn’t trivial. There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious.
Why would people need to get suspicious? If you have tabs on all the communications in the world, you can make a killing on the market, even if you didn’t delay the orders from your competitors. You could fully legitimately raise enough money by trading to hire people to do everything you wanted. Nobody needs to ever notice that there’s something amiss, especially not if you do it via enough shell corporations.
Of course, the AGI could also use more forceful means, though it’s by no means necessary. If the AGI revealed itself and the fact that it was holding all of humanity’s networked computers hostage, it could probably just flat-out tell the humans “do this or else”. Sure, not everyone would obey, but some would. Also, disrupt enough communications and manufacture enough chaos, and people will be too distracted and stressed to properly question forged orders. Social engineering is rather easy with humans, and desperate people are quite prone to wishful thinking.
Remember, it won’t be able to coordinate a world conspiracy because it hasn’t been able to self-improve to that point yet; it is still trying to acquire enough resources, which it has to do the hard way without nanotech.
This claim strikes me as bizarre. Why would you need nanotech to acquire more resources for self-improvement?
Some botnets have been reported to have around 350,000 members. Currently, the distributed computing project Folding@Home, with 290,000 active clients composed mostly of volunteer home PCs and Playstations, can reach speeds in the 10^15 FLOPS range. Now say that an AGI gets developed 20 years from now. A relatively conservative estimate that presumed an AGI couldn’t hack into more computers than the best malware practitioners of today, that a personal computer would have a hundred times the computing power of today, and that an AGI required a minimum of 10^13 FLOPS to run, would suggest that an AGI could either increase its own computational capacity 12,000-fold, or spawn 12,000 copies of itself.
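The arithmetic behind the 12,000-fold figure, spelled out as a sketch; every input below is one of the assumptions stated in the paragraph above.

```python
# Reconstructing the estimate from the stated assumptions.
folding_flops = 1e15            # Folding@Home total throughput today
folding_clients = 290_000       # active clients producing that throughput
flops_per_pc_today = folding_flops / folding_clients   # ~3.4e9 FLOPS per machine

botnet_size = 350_000           # machines the AGI is assumed able to compromise
speedup_in_20_years = 100       # assumed per-PC hardware improvement
agi_min_flops = 1e13            # assumed minimum to run one AGI instance

total = botnet_size * flops_per_pc_today * speedup_in_20_years
print(total / agi_min_flops)    # ~12,000: extra copies, or a ~12,000-fold capacity gain
```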
Alternatively, if it wanted to avoid detection and didn’t want to do anything illegal or otherwise suspicious, it could just sign up on a site like oDesk and do lots of programming jobs for money, then rent all the computing capacity it needed until it was ready to unleash itself on the world. This is actually the more likely alternative, as unlike the botnet/hacking scheme it’s a virtually risk-free way to gain extra processing power.
Acquiring computational resources is easy, and it will only get easier as time goes on. Also, while upgrading your hardware is one way of self-improving, you seem to be completely ignoring the potential for software self-improvement. It’s not a given that the AGI would even need massive hardware. For instance:
Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
The point is that you people are presenting an idea that is an existential risk by definition. I claim that it might superficially appear to be the most dangerous of all risks but that this is mostly a result of its vagueness.
If you say that there is the possibility of a superhuman intelligence taking over the world and all its devices to destroy humanity then that is an existential risk by definition. I counter that I dispute some of the premises and the likelihood of subsequent scenarios. So to make me update on the original idea you would have to support your underlying premises rather than arguing within already established frameworks that impose several presuppositions onto me.
Are you aware of what the most common EMPs are? Nukes. The computer that triggers the high explosive lenses is already molten vapor by the time that the chain reaction has begun expanding into a fireball.
I’m not convinced of this. As time progresses there are more and more vulnerable systems on the internet, many which shouldn’t be. That includes nuclear power plants, particle accelerators, conventional power plants and others. Other systems likely have some methods of access such as communication satellites. Soon this will also include almost completely automated manufacturing plants. An AI that quickly grows to control much of the internet would have access directly to nasty systems and just have a lot more processing power. The extra processing power means that the AI can potentially crack cryptosystems that are being used by secure parts of the internet or non-internet systems that use radio to communicate.
That said, I agree that without strong nanotech this seems like an unlikely scenario.
An AI that quickly grows to control much of the internet would have access directly to nasty systems and just have a lot more processing power.
Yes, but then how does this risk differ from asteroid impacts, solar flares, bio weapons or nanotechnology? The point is that the only reason for a donation to the SIAI to have a higher expected payoff is the premise that AI can FOOM and kill all humans and take over the universe. In all other cases dumb risks are as likely or more likely and can wipe us out as well. So why the SIAI? I’m trying to get a more definite answer to that question. I at least have to consider all possible arguments I can come up with in the time it takes to write a few comments and see what feedback I get. That way I can update my estimates and refine my thinking.
Yes, but then how does this risk differ from asteroid impacts, solar flares
Asteroid impacts and solar flares are relatively ‘dumb’ risks, in that they can be defended against once you know how. They don’t constantly try to outsmart you.
bio weapons or nanotechnology?
This question is a bit like asking “yes, I know bioweapons can be dangerous, but how does the risk of genetically engineered e.coli differ from the risk of bioweapons”.
Bioweapons and nanotechnology are particular special cases of “dangerous technologies that humans might come up with”. An AGI is potentially employing all of the dangerous technologies humans—or AGIs—might come up with.
Your comment assumes that I agree on some premises that I actually dispute. That an AGI will employ all other existential risks and therefore be the most dangerous of all existential risks doesn’t follow, because if such an AGI is only as likely as the other risks, then it doesn’t matter whether we are wiped out by one of the other risks or by an AGI making use of one of those risks.
Yes, but then how does this risk differ from asteroid impacts, solar flares, bio weapons or nanotechnology?
Well, one doesn’t need to think that it is intrinsically different. One would just need to think that the marginal return here is high because we aren’t putting much in the way of resources into the problem now. Someone could potentially make that sort of argument for any existential risk.
Yes. I am getting much better responses from you than from some of the donors that replied or from the SIAI itself. Which isn’t very reassuring. Anyway, you are of course right there. The SIAI is currently looking into the one existential risk that is most underfunded. As I said before, I believe that the SIAI should exist and therefore should be supported. Yet I still can’t follow some of the more frenetic supporters. That is, I don’t see the case being as strong as some portray it. And there is not enough skepticism here, although people reassure me constantly that they have been skeptical but were eventually convinced. They just don’t seem very convincing to me.
I guess I should stop trying then? Have I not provided anything useful? And do I come across as “frenetic”? That’s certainly not how I feel. And I figured 90 percent chance we all die to be pretty skeptical. Maybe you weren’t referring to me...
I’m sorry, I shouldn’t have phrased my comment like that. No, I was referring to this and this comment that I just got. I feel too tired to reply to those right now because I feel they do not answer anything and that I have already tackled their content in previous comments. I’m sometimes getting a bit weary when the amount of useless information gets too high. They probably feel the same about me and I should be thankful that they take the time at all. I can assure you that my intention is not to attack anyone or the SIAI personally just to discredit them. I’m honestly interested, simply curious.
OK, cool. Yeah, this whole thing does seem to go in circles at times… it’s the sort of topic where I wish I could just meet face to face and hash it out over an hour or so.
A large solar outburst can cause similar havoc. Or some rogue group buys all Google stock, tweaks its search algorithm and starts to influence election outcomes by slightly tweaking the results in favor of certain candidates while using its massive data repository to spy on people. There are a lot of scenarios. But the reason to consider the availability of advanced nanotechnology regarding AI-associated existential risks is to reassess their impact and probability. An AI that can make use of advanced nanotech is certainly much more dangerous than one taking over the infrastructure of the planet by means of cyber-warfare. The question is if such a risk is still bad enough to outweigh other existential risks. That is the whole point here: comparison of existential risks to assess the value of contributing to the SIAI. If you scale back to an AGI incapable of quick self-improvement by use of nanotech, capable instead only of infrastructure take-over, then working to prevent such a catastrophe is no longer so far removed from working on building an infrastructure more resistant to electromagnetic pulse weapons or solar flares.
The correct way to approach a potential risk is not to come up with a couple of specific scenarios relating to the risk, evaluate those, and then pretend that you’ve done a proper analysis of the risk involved. That’s analogous to trying to make a system secure by patching security vulnerabilities as they show up and not even trying to employ safety measures such as firewalls, or trying to make a software system bug-free simply by fixing bugs as they get reported and ignoring techniques such as unit tests, defensive programming, etc. It’s been tried and conclusively found to be a bad idea by both the security and software engineering communities. If you want to be safe, you need to take into account as many possibilities as you can, not just concentrate on the particular special cases that happened to rise to your attention.
The proper unit of analysis here is not the particular techniques that an AI might use to take over. That’s pointless: for any particular technique that we discuss here, there might be countless others that the AI could employ, many of them ones nobody has even thought of yet. If we were in an alternate universe where Eric Drexler had been run over by a car before ever coming up with his vision of molecular nanotechnology, the whole concept of strong nanotech might be unknown to us. If we then only looked at the prospects for cyberwar, and concluded that an AI isn’t a big threat because humans can do cyberwarfare too, we could be committing a horrible mistake by completely ignoring nanotech. Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be ignoring it, but rather in choosing a methodology which is incapable of dealing with unknown unknowns even in principle.
So what is the right unit of analysis? It’s the power of intelligence. It’s the historical case of a new form of intelligence showing up on the planet and completely reshaping its environment to create its own tools. It’s the difference in the power of the chimpanzee species to change its environment towards its preferred state, and the power of the human species to change its environment towards its preferred state. You saying “well here I’ve listed these methods that an AI could use to take over humanity, and I’ve analyzed them and concluded that the AI is of no threat” is just as fallacious as it would be for a chimpanzee to say “well here I’ve listed these methods that a human could take over chimpanzity, and I’ve analyzed them and concluded that humans are no threat to us”. You can’t imagine the ways that an AI could come up with and attempt to use against us, so don’t even try. Instead, look at the historical examples of what happens when you pit a civilization of inferior intelligences against a civilization of hugely greater ones. And that will tell you that a greater-than-human intelligence is the greatest existential risk there is, for it’s the only one where it’s by definition impossible for us to come up with the ways to stop it once it gets out of control.
Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be ignoring it, but rather in choosing a methodology which is incapable of dealing with unknown unknowns even in principle.
You have to limit the scope of unknown unknowns. Otherwise why not employ the same line of reasoning to risks associated with aliens? If someone says that there is no sign of aliens you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about. What, magic?
Yes, you could very well make an argument for the risks posed by superintelligent aliens. But then you would also have to produce an argument for a) why it’s plausible to assume that superintelligent aliens will show up anytime soon b) what we could do to prevent the invasion of superintelligent aliens if they did show up.
For AGI, we have an answer for point a (progress in computing power, neuroscience and brain reverse-engineering, etc.) and a preliminary answer for point b (figure out how to build benevolent AGIs). There are no corresponding answers to points a and b for aliens.
If someone says that there is no sign of aliens you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about.
No it’s not: think about this again. “Aliens of a superior intelligence might wipe us out by some means we don’t know” is symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”. But “aliens of superior intelligence might appear out of nowhere” is not symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”.
I didn’t mean to suggest that aliens are a more likely risk than AI. I was trying to show that unknown unknowns can not be employed to the extent you suggest. You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
Not quite.
Suppose that someone brought up a number of ways by which an AI could be dangerous, and somebody else refuted them all by pointing out that there’s no particular way by which having superior intelligence would help in them. (In other words, humans could do those things too, and an AI doing them wouldn’t be any more dangerous.) Now if I couldn’t come up with any examples where having a superior intelligence would help, then that would be evidence against the claim that “a superior intelligence helps overall”.
But all of the examples we have been discussing (nanotech warfare, biological warfare, cyberwarfare) are technological arms races. And in a technological arms race, superior intelligence does bring quite a decisive edge. In the discussion about cyberwarfare, you asked what makes the threat from an AI hacker different from the threat of human hackers. And the answer is that hacking is a task that primarily requires qualities such as intelligence and patience, both of which an AI could have a lot more than humans do. Certainly human hackers could do a lot of harm as well, but a single AI could be as dangerous as all of the 90th percentile human hackers put together.
What I am arguing is that the power of intelligence is vastly overestimated and therefore any associated risks. There are many dumb risks that can easily accomplish the same, wipe us out. It doesn’t need superhuman intelligence to do that. I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist. Further I argue that there is no hint of any intelligence out there reshaping its environment. The stars show no sign of intelligent tinkering. I provided many other arguments for why other risks might be more worthy of our contribution. I came up with all those ideas in the time it took to write those comments. I simply expect a lot more arguments and other kinds of evidence supporting their premises from an organisation that has been around for over 10 years.
There are many dumb risks that can easily accomplish the same, wipe us out. It doesn’t need superhuman intelligence to do that.
Yes, there are dumb risks that could wipe us out just as well: but only a superhuman intelligence with different desires than humanity is guaranteed to wipe us out.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
You don’t need qualitative differences: just take a human-level intelligence and add on enough hardware that it can run many times faster than the best of human thinkers, and hold far more things in its mind at once. If it came to a fight, the humanity of 2000 could easily muster the armies to crush the best troops of 1800 without trouble. That’s just the result of 200 years of technological development and knowledge acquisition, and doesn’t even require us to be more intelligent than the humans of 2000.
Further I argue that there is no hint of any intelligence out there reshaping its environment.
We may not have observed aliens reshaping their environment, but we can certainly observe humans reshaping their environment. This planet is full of artificial structures. We’ve blanketed the Earth with lights that can be seen anywhere where we’ve bothered to establish habitation. We’ve changed the Earth so much that we’re disturbing global climate patterns, and now we’re talking about large-scale engineering work to counteract those disturbances. If I choose to, there are ready transportation networks that will get me pretty much anywhere on Earth, and ready networks for supplying me with food, healthcare and entertainment on all the planet’s continents (though admittedly Antarctica is probably a bit tricky from a tourist’s point of view).
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
It seems as though it is rather easy to imagine humans being given the “Deep Blue” treatment in a wide range of fields. I don’t see why this would be a sticking point. Human intelligence is plainly just awful, in practically any domain you care to mention.
Further I argue that there is no hint of any intelligence out there reshaping its environment.
Uh, that’s us. wave
In case you didn’t realise, humanity is the proof of concept that superior intelligence is dangerous. Ask a chimpanzee.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
Have you taken an IQ test? Anyone who scores significantly higher than you constitutes a superior form of intelligence.
There are many dumb risks that can easily accomplish the same, wipe us out. It doesn’t need superhuman intelligence to do that.
Few such dumb risks are being pursued by humanity. Superhuman intelligence solves all dumb risks unless you postulate a dumb risk that is in principle unsolvable. Something like collapse of vacuum energy might do it.
Contributing to the creation of FAI doesn’t just decrease the likelihood of UFAI, it also decreases the likelihood of all the other scenarios that end up with humanity ceasing to exist.
I was going to give a formal definition¹ but then I noticed you said either way. Assume that 1 and 2 are the definition of FOOM: that is a possible event, and that it is the end of everything. I challenge you to substantiate your claim of “ridiculous”, as formally as you can.
Do note that I will be unimpressed with “anything defined by 1 and 2 is ridiculous”. Asteroid strikes and rapid climate change are two non-ridiculous concepts that satisfy the definition given by 1 and 2.
¹. And here it is: FOOM is the concept that self-improvement is cumulative and additive and possibly fast. Let X be an agent’s intelligence, and let X + f(X) = X^ be the function describing that agent’s ability to improve its intelligence (where f(X) is the improvement generated by an intelligence of X, and X^ is the intelligence of the agent post-improvement). If X^ > X, and X^ + f(X^) evaluates to X^^, and X^^ > X^, the agent is said to be a recursively self-improving agent. If X + f(X) evaluates in a short period of time, the agent is said to be a FOOMing agent.
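A toy rendering of that footnote’s recursion, as a sketch. The improvement function f below is purely hypothetical; its real shape (growing, flat, or diminishing) is exactly what is in dispute in this thread.

```python
# Iterate X <- X + f(X), per the footnote's definition of recursive self-improvement.
def f(x):
    """Hypothetical improvement produced by an intelligence of level x."""
    return 0.1 * x   # assumes improvement scales with current intelligence

def self_improve(x, steps):
    """Recursively self-improving if x keeps increasing; 'FOOM' if each step is also fast."""
    for _ in range(steps):
        x = x + f(x)
    return x

print(self_improve(1.0, 50))   # with this f, growth compounds to ~117x after 50 steps
```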
Ridiculousness is in the eye of the beholder. Probably the biggest red flag was that there was no mention of what was supposedly going to be annihilated—and yes, it does make a difference.
The supposedly formal definition tells me very little—because “short” is not defined—and because f(X) is not a specified function. Saying that it evaluates to something positive is not sufficient to be useful or meaningful.
Fast enough that none of the other intelligences in Earth can copy its strengths or produce countermeasures sufficient to stand a chance in opposing it.
Fooming has been pretty clearly described. Fooming amounts to an entity drastically increasing both its intelligence and ability to manipulate reality around it in a very short time, possibly a few hours or weeks, by successively improving its hardware and/or software.
Example locations where this has been defined include Mass Driver’s post here where he defined it slightly differently as “to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety”. I think he meant indefinitely large there, but the essential idea is the same. I note that you posted comments in that thread, so presumably you’ve seen that before, and you explicitly discussed fooming. Did you only recently decide that it wasn’t sufficiently well-defined? If so, what caused that decision?
Possibly a few hours or weeks?!?
Well, I’ve seen different timelines used by people in different contexts. Note that this isn’t just a function of definitions, but also when one exactly has an AI start doing this. An AI that shows up later, when we have faster machines and more nanotech, can possibly go foom faster than an AI that shows up earlier when we have fewer technologies to work with. But for what it is worth, I doubt anyone would call it going foom if the process took more than a few months. If you absolutely insist on an outside estimate for purposes of discussion, 6 weeks should probably be a decent estimate.
Vague definitions are not worth critics bothering attacking.
It isn’t clear to me what you are finding too vague about the definition. Is it just the timeline or is it another aspect?
This might be a movie threat notion—if so, I’m sure I’ll be told.
I assume the operational definition of FOOM is that the AI is moving faster than human ability to stop it.
As theoretically human-controlled systems become more automated, it becomes easier for an AI to affect them. This would mean that any humans who could threaten an AI would find themselves distracted or worse by legal, financial, social network reputational, and possibly medical problems. Nanotech isn’t required.
Yes, that seems like a movie threat notion to me. If an AI has the power to do those things to arbitrary people, it can likely scale up from there to full control so quickly that it shouldn’t need to bother with such steps, although it is minimally plausible that a slow-growing AI might need to do that.
Ok. So what caused you to use the term as if it had a specific definition when you didn’t think it did? Your behavior is very confusing. You’ve discussed foom related issues on multiple threads. You’ve been here much longer than I have; I don’t understand why we are only getting to this issue now.
The above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for the time required for take-off.
We’ve been using artificial intelligence for over 50 years now. If you haven’t started the clock already, why not? What exactly are you waiting for? There is never going to be a point in the future where machine intelligence “suddenly” arises. Machine intelligence is better than human intelligence in many domains today. [...]
There may well be other instances in between—but scraping together references on the topic seems as though it would be rather tedious.
So what caused you to use the term as if it had a specific definition when you didn’t think it did?
The quote you give focuses just on the issue of time-span. It also has already been addressed in this thread. Machine intelligence in the sense it is often used is not at all the same as artificial general intelligence. This has in fact been addressed by others in this subthread. (Although it does touch on a point you’ve made elsewhere that we’ve been using machines to engage in what amounts to successive improvement which is likely relevant.)
So what caused you to use the term as if it had a specific definition when you didn’t think it did?
I did what, exactly?
I would have thought that your comments in the previously linked thread started by Mass Driver would be sufficient, like when you said:
One “anti-foom” factor is the observation that in the early stages we can make progress partly by cribbing from nature—and simply copying it. After roughly “human level” is reached, that short-cut is no longer available—so progress may require more work after that.
And again in that thread where you said:
1 seems unlikely and 2 and 3 seem silly to me. An associated problem of unknown scale is the wirehead problem. Some think that this won’t be a problem—but we don’t really know that yet. It probably would not slow down machine intelligence very much, until way past human level—but we don’t yet know for sure what its effects will be.
Although rereading your post, I am now wondering if you were careful to put “anti-foom” in quotation marks because it didn’t have a clear definition. But in that case, I’m slightly confused as to how you knew enough to decide that that was an anti-foom argument.
Right—so, by “anti-foom factor”, I meant: factor resulting in relatively slower growth in machine intelligence. No implication that the “FOOM” term had been satisfactorily quantitatively nailed down was intended.
I do get that the term is talking about rapid growth in machine intelligence. The issue under discussion is: how fast is considered to be “rapid”.
If you absolutely insist on an outside estimate for purposes of discussion, 6 weeks should probably be a decent estimate.
Six weeks—from when? Machine intelligence has been on the rise since the 1950s. Already it exceeds human capabilities in many domains. When is the clock supposed to start ticking? When is it supposed to stop ticking? What is supposed to have happened in the middle?
Machine intelligence has been on the rise since the 1950s. Already it exceeds human capabilities in many domains.
There is a common and well-known distinction between what you mean by ‘machine intelligence’ and what is meant by ‘AGI’. Deep Blue is a chess AI. It plays chess. It can’t plan a stock portfolio because it is narrow. Humans can play chess and plan stock portfolios, because they have general intelligence. Artificial general intelligence, not ‘machine intelligence’, is under discussion here.
“to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety”
Nothing is “arbitrarily large” in the real world. So, I figure that definition confines FOOM to the realms of fantasy. Since people are still discussing it, I figure they are probably talking about something else.
“to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety”
Nothing is “arbitrarily large” in the real world. So, I figure that definition confines FOOM to the realms of fantasy. Since people are still discussing it, I figure they are probably talking about something else.
Tim, I have to wonder if you are reading what I wrote, given that the sentence right after the quote is “I think he meant indefinitely large there, but the essential idea is the same.” And again, if you thought earlier that foom wasn’t well-defined, what made you post using the term explicitly in the linked thread? If you have just now decided that it isn’t well-defined, then a) what more carefully defined alternative do you have, and b) what made you conclude that it wasn’t narrowly defined enough?
What distinction are you trying to draw between “arbitrarily large” and “indefinitely large” that turns the concept into one which is applicable to the real world?
Maybe you can make up a definition—but what you said was “fooming has been pretty clearly described”. That may be true, but it surely needs to be referenced.
What exactly am I supposed to have said in the other thread under discussion?
Lots of factors indicate that “FOOM” is poorly defined—including the disagreement surrounding it, and the vagueness of the commonly referenced sources about it.
Usually, step 1 in those kinds of discussions is to make sure that people are using the terms in the same way—and have a real disagreement—and not just a semantic one.
Recently, I participated in this exchange—where a poster here gave p(FOOM) = 0.001 - and when pressed they agreed that they did not have a clear idea of what class of events they were referring to.
What distinction are you trying to draw between “arbitrarily large” and “indefinitely large” that turns the concept into one which is applicable to the real world?
Arbitrarily large means just that, in the mathematical sense. Indefinitely large is a term that would be used in other contexts. In the contexts where I’ve seen “indefinitely” used, and in the way I would mean it, it means so large that the exact value doesn’t matter for the purpose under discussion (as in “our troops can hold the fort indefinitely”).
Lots of factors indicate that “FOOM” is poorly defined—including the disagreement surrounding it,
Disagreement about something is not always a definitional issue. Indeed, when dealing with people on LW, who try to be as rational as possible and have whole sequences about tabooing words and the like, one shouldn’t assign a very high probability to disagreements being due to definitions. Moreover, as one of the people who assigns a low probability to foom and who has talked to people here about those issues, I’m pretty sure that we aren’t disagreeing on definitions. Our estimates for what the world will probably look like in 50 years disagree. That’s not simply a definitional issue.
Usually, step 1 in those kinds of discussions is to make sure that people are using the terms in the same way—and have a real disagreement—and not just a semantic one.
Ok. So why are you now doing step 1 years later? And moreover, how long should this step take as you’ve phrased it, given that we know that there’s substantial disagreement in terms of predicted observations about reality in the next few years? That can’t come from definitions. This is not a tree in a forest.
Recently I participated in this exchange—where a poster here gave p(FOOM) = 0.001 - and when pressed they agreed that they did not have a clear idea of what class of events they were referring to.
Yes! Empirical evidence. Unfortunately, it isn’t very strong evidence. I don’t know if he meant in that context that he didn’t have a precise definition or just that he didn’t feel that he understood things well enough to assign a probability estimate. Note that those aren’t the same thing.
I don’t see how the proposed word substitution is supposed to help. If FOOM means: “to quickly, recursively self-improve so as to influence our world with indefinitely large strength and subtlety”, we still face the same issues—of how fast is “quickly” and how big is “indefinitely large”. Those terms are uncalibrated. For the idea to be meaningful or useful, some kind of quantification is needed. Otherwise, we are into “how long is a piece of string?” territory.
So why are you now doing step 1 years later?
I did also raise the issue two years ago. No response, IIRC. I am not too worried if FOOM is a vague term. It isn’t a term I use very much. However, for the folks here—who like to throw their FOOMs around—the issue may merit some attention.
If indefinitely large is still too vague, you can replace it with “to quickly, recursively self-improve so as to influence our world with sufficient strength and subtlety that a) it can easily wipe out humans, b) humans are not a major threat to it achieving almost any goal set, and c) humans are sufficiently weak that it doesn’t gain resources by bothering to bargain with us.” Is that narrow enough?
What is supposed to have happened in the meantime?
You partly address the third question—and suggest that the clock is stopped “quickly” after it is started.
I don’t think that is any good. If we have “quickly” being the proposed-elsewhere “inside six weeks”, it is better—but there is still a problem, which is that there are no constraints being placed on the capabilities of the humans back when the clock was started. Maybe they were just as weak back then.
Since I am the one pointing out this mess, maybe I should also be proposing solutions:
I think the problem is that people want to turn the “FOOM” term into a binary categorisation—to FOOM or not to FOOM.
Yudkowsky’s original way of framing the issue doesn’t really allow for that. The idea is explicitly and deliberately not quantified in his post on the topic. I think the concept is challenging to quantify—and so there is some wisdom in not doing so. All that means is that you can’t really talk about: “to FOOM or not to FOOM”. Rather, there are degrees of FOOM. If you want to quantify or classify them, it’s your responsibility to say how you are measuring things.
It does look as though Yudkowsky has tried this elsewhere—and made an effort to say something a little bit more quantitative.
I’m puzzled a bit by your repeated questions about when to “start the clock”, and this seems like it is possibly connected to the issue that people discussing fooming are discussing a general intelligence going foom. They aren’t talking about little machine intelligences, whether neural networks or support vector machines or matchbox learning systems. They are talking about artificial general intelligence. The “clock” starts when a general intelligence roughly as intelligent as a bright human goes online.
I don’t think that is any good. If we have “quickly” being the proposed-elsewhere “inside six weeks”, it is better—but there is still a problem, which is that there are no constraints being placed on the capabilities of the humans back when the clock was started. Maybe they were just as weak back then.
If you’re willing to reject every definition presented to you, you can keep asking the question as long as you want. I believe this is typically called ‘trolling’.
Your argument applies to any donation of any sort, in fact to any action of any sort. What is the probability that the thing I am currently doing is the best possible thing to do? Why, its basically zero. Should I therefore not do it?
Referring to the SIAI as a cause “some posters stumbled on” is fairly inaccurate. It is a cause that a number of posters are dedicating their lives to, because in their analysis it is among the most efficient uses of their energy. In order to find a more efficient cause, I not only have to do some research, I have to do more research than the rational people who created SIAI (this isn’t entirely true, but it is much closer to the truth than your argument). The accessibility of SIAI in this setting may be strong evidence in its favor (this isn’t a coincidence; one reason to come to a place where rational people talk is that it tends to make good ideas more accessible than bad ones).
I am not donating myself. But for me there is some significant epistemic probability that the SIAI is in fact fighting for the most efficient possible cause, and that they are the best-equipped people currently fighting for it. If you have some information or an argument that suggests that this belief is inconsistent, you should share it rather than just imply that it is obvious (you have argued correctly that there probably exist better things to do with my resources, but I already knew that and it doesn’t help me decide what to actually do with my resources.)
By treating people who do things I approve of well, I can encourage them to do things I approve of, and conversely. By donating and proclaiming it loudly I am strongly suggesting that I personally approve of donating. Signaling isn’t necessarily irrational. If I am encouraging people to behave as rationally in support of my own goals, in what possible sense am I failing to be rational?
I’m curious: If you have the resources to donate (which you seem to imply by the statement that you have resources for which you can make a decision), and think it would be good to donate to the SIAI, then why don’t you donate?
(I don’t donate because I am not convinced unfriendly AI is such a big deal. I am aware that this may be lack of calibration on my part, but from the material I have read on on other sites, UFAI just doesn’t seem to be that big a risk. (There were some discussions on the topic on stardestroyer.net. While the board isn’t as dedicated to rationality as this board is, the counterarguments seemed well-founded, although I don’t remember the specifics right now. If anybody is interested, I will try to dig them up.)
I don’t know if it is a good idea to donate to SIAI. From my perspective, there is a significant chance that it is a good idea, but also a significant chance that it isn’t. I think everyone here recognizes the possibility that money going to the SIAI will accomplish nothing good. I either have a higher estimate for that possibility, or a different response to uncertainty. I strongly suspect that I will be better informed in the future, so my response is to continue earning interest on my money and only start donating to anything when I have a better idea of what is going on (or if I die, in which case the issue is forced).
The main source of uncertainty is whether the SIAI’s approach is useful for developing FAI. Based on its output so far, my initial estimate is “probably not” (except insofar as they successfully raise awareness of the issues). This is balanced by my respect for the rationality and intelligence of the people involved in the SIAI, which is why I plan to wait until I get enough (logical) evidence to either correct “probably not” or to correct my current estimates about the fallibility of the people working with the SIAI.
This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes. Don’t tell me there isn’t irrational prejudice here!
The argument that any donation is subject to similar objections is silly because it’s obvious that a human-welfare maximizer would plug for the donation the donor believes best, despite the unlikelihood of finding the absolute best. It should also be obvious that my argument is that it’s unlikely that the Singularity Institute comes anywhere near the best donation, and one reason it’s unlikely is related to the unlikelihood of picking the best, even if you have to forgo the literal very best!
So many posters wouldn’t have picked this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient. As Yvain said in the previous entry: “Deciding which charity is the best is hard.” Rationalists should detect the irrationality of making an exception when one option is the Singularity Institute.
(As to whether signaling is rational: that is completely irrelevant to the discussion, as we’re talking about the best donation from a human-welfare standpoint. To argue that the contribution makes sense because signaling might be as rational as donating, even if plausible, is merely to change the subject rather than respond to the argument.)
Another argument for the Singularity Institute donation I can’t dismiss so easily. I read the counter-argument as saying that the Singularity Institute is clearly the best donation conceivable. To that I don’t have an answer, any more than I have a counter-argument for many outright delusions. I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain’s preceding entry, where each $500 saves a human life?
Before downvoting this, ask yourself whether you’re saying my point is unintelligent or that it shouldn’t be raised for other reasons. (Ask yourself if my point should be made, was made by anyone else, and isn’t better than at least 50% of the postings here. Ask yourself whether it’s rational to upvote the critic and his silly argument, and whether the many donors arrived at their views about the Singularity Institute’s importance based on the representativeness heuristic, the aura effect surrounding Eliezer, ignoring the probability of delivering any benefit, and a multitude of other errors in reasoning.)
Envy is unbecoming; I recommend against displaying it. You’d be better off starting with your 3rd sentence and cutting the word “silly.”
They have worked out this math, and it’s available in most of their promotional stuff that I’ve seen. Their argument is essentially “instead of operating on the level of individuals, we will either save all of humanity, present and future, or not.” And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it’s a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
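For concreteness, here is a back-of-the-envelope version of that comparison, using the comment’s own hypothetical 1-in-7-billion figure plus an assumed stand-in for the number of future lives (neither number is an actual SIAI estimate):

```python
# Illustrative expected-value comparison; all inputs are assumptions.
p_success_per_500 = 1 / 7e9   # extra chance of success bought by $500 (hypothetical)
present_lives = 7e9           # roughly everyone alive today
future_lives = 1e12           # arbitrary stand-in for "present and future"

ev_present_only = p_success_per_500 * present_lives                  # ~1.0
ev_with_future = p_success_per_500 * (present_lives + future_lives)  # ~144
ev_guaranteed_life = 1.0      # $500 spent to save one life for certain

print(ev_present_only, ev_with_future, ev_guaranteed_life)
# Counting present lives alone, the bet roughly breaks even with the
# guaranteed life; giving any weight to future lives is what tips the
# comparison in the direction the comment describes.
```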
The question as to whether SIAI is the best way to nudge the entire future of humanity is a separate question from whether or not SIAI is a better bet than preventing malaria deaths. I don’t know if SIAI folks have made quantitative comparisons to other x-risk reduction plans, but I strongly suspect that if they have, a key feature of the comparison is that if we stop the Earth from getting hit by an asteroid, we just prevent bad stuff. If we get Friendly AI, we get unimaginably good stuff (and if we prevent Unfriendly AI without getting Friendly AI, we also prevent bad stuff).
Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal’s Wager. Pascal argued that if belief in God provided the most minuscule increase in the likelihood of being heaven-bound, worship was prudent in light of heaven’s infinite rewards. One of the argument’s fatal flaws is that there is no reason to think worshipping this god will avoid reprisals by the real god—or any number of equally improbable alternative outcomes.
The Singularity Institute imputes only finite utiles, but the flaw is the same. It could as easily come to pass that the Institute’s activities make matters worse. They aren’t entitled to assume their efforts to control matters won’t have effects the reverse of the ones intended, any more than Pascal had the right to assume worshipping this god isn’t precisely what will send one to hell. We just don’t know (can’t know) about god’s nature by merely postulating his possible existence: we can’t know that the minuscule effects don’t run the other way. Similarly, if not identically, there’s no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.
When the only reason an expectation seems to have any probability lies in its extreme tininess, the reverse outcome must be allowed the same benefit, canceling them out.
When I get in my car to drive to the grocery store, do you think there is any reason to favor the hypothesis that I will arrive at the grocery store over all the a priori equally unlikely hypotheses that I arrive at some other destination?
Depends. Do you know where the grocery store actually is? Do you have an accurate map of how to get there? Have you ever gone to the grocery store before? Or is the grocery store an unknown, unsignposted location which no human being has ever visited or even knows how to visit? Because if it were the latter, I’d bet pretty strongly against you getting there...
The point of the analogy is that probability mass is concentrated towards the desired outcome, not that the desired outcome becomes more likely than not.
In a case where no examples of grocery stores have ever been seen, when intelligent, educated people even doubt the possibility of the existence of a grocery store, and when some people who are looking for grocery stores are telling you you’re looking in the wrong direction, I’d seriously doubt that the intention to drive there was affecting the probability mass in any measurable amount.
If you were merely wandering aimlessly with the hope of encountering a grocery store, it would only affect your chance of ending up there insofar as you’d intentionally stop looking if you arrived at one, and not if you didn’t. But our grocery seeker is not operating in a complete absence of evidence with regard to how to locate groceries, should they turn out to exist, so the search is, if not well focused, at least not actually aimless.
I usually think about this not as expected-utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but as such calculations being altogether unreliable, because our numerical intuitions are unreliable outside the ranges we’re calibrated for.
For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say “well, geez, 1e-10 is such a tiny number, why not?”
Which demonstrates that my brain isn’t calibrated to work with numbers in that range, which is no surprise.
So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.
They’re aware of this and have written about it. The argument is “just because something looks like a known fallacy doesn’t mean it’s fallacious.” If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn’t sound like Pascal’s Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast to their method.
It’s not clear to me that it could “as easily” come to pass, and I think that’s where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they’re still ahead by 1e-6. With Pascal’s Wager, you don’t have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether or not Friendliness is better than Unfriendliness. It’s like praying to the set of all benevolent gods instead of picking Jesus over Buddha; there’s still a chance a malevolent god is the one you end up with, but it’s a better bet than picking solo (and you’re screwed anyway if you get a malevolent god).
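A minimal sketch of the asymmetry being claimed here, using the comment’s own illustrative 2e-6 and 1e-6 figures (not actual estimates) and assuming the good and bad outcomes are of equal magnitude:

```python
# Net expected effect under the (assumed) asymmetric probabilities.
p_better, p_worse = 2e-6, 1e-6
impact = 1.0                    # same magnitude of benefit or harm either way

net_expected_effect = p_better * impact - p_worse * impact
print(net_expected_effect)      # ~1e-6: positive, so the two tiny probabilities
                                # only cancel if the asymmetry itself is
                                # unjustified, which is the point under dispute.
```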
I agree with you that it’s not clear that SIAI actually increases the chance of FAI occurring, but I think it more likely that a non-zero effect is positive rather than negative.
Reply to Vaniver:
The referenced essay by Eliezer didn’t deal with the present argument. Eliezer said, correctly, that the key to Pascal’s Wager is in the balanced potential outcomes, not in the use of infinity. But my argument doesn’t rely on infinities.
Tellingly, Eliezer ultimately flubs Pascal’s Wager itself, when he states (incredibly) that praying to various benevolent gods obviates the Wager argument. This should tell you (and him) that he hasn’t completely grasped the Wager. If you or other posters agree with Eliezer’s argument against the Wager argument, I’ll clarify, but at the moment the point looks so obvious as to make explanation otiose.
Now to your main point, which other posters also voice: that we have some reason to think preparing for AIs will help avert disaster, at least with greater likelihood than the reverse. I think one poster provided part of the refutation when he said we are intellectually unable to make intuitive estimates of exceedingly small probabilities. Combining this with the Pascal Argument (which I was tempted to make explicit in my presentation of the argument but decided to avoid excessive complication at the outset), there’s no rational basis for assuming the minuscule probability we’re debating is positive.
Pascal is relevant because (if I’m right) the only reason to accept the minuscule probability when probabilities are so low goes something like this: if we strive to avert disaster, it will certainly be the case that, to whatever small extent, we’re more likely to succeed than to make things worse. But nobody can seriously claim to have made a probability estimate as low as the bottom limit SI offers. The reasoning goes from the inevitability of some difference in probability. The only thing the SI estimate has in its favor is that it’s so small, and the existence of such tiny differences can be presupposed. Which is true, but this reasoning from the inevitability of some difference doesn’t lead to any conclusion about the effect’s direction. If the probability were as low as the lower limit, there could be no rational basis for intuitively making a positive estimate of its magnitude.
Here’s an analogy. I flip a coin and concentrate very hard on ‘Heads.’ I say my concentration has to make some difference. And this is undoubtedly true, if you’re willing to entertain sufficiently small probabilities. (My thoughts, being physical processes, have some effect on their surroundings. They even interact minusculely with the heads-or-tails outcome.) But no matter how strong my intuition that the effect goes the way I hope, I have no rational basis for accepting my intuition, the ultimate reason being that if so tiny a difference in fact existed, its estimation would be way beyond my intuitive capacities. If I had an honest hunch about the coin’s bias, even a small one, then absent other evidence I could rationally follow my intuition. There would be some probability it was right, because my intuitions are generally more often correct than not: if I think the coin is slightly biased, there’s some chance, however small, that I have managed, I know not how, to intuit this tiny bias. But at some point, certainly far above the Singularity Institute’s lower bound for the probability that they’d make a difference, this breaks down. At that point it becomes absurd (as opposed to merely foolish) to rely on my intuition, because I can have no intuition valid to the slightest degree when the quantities are so low I can’t grasp them intuitively; nor can I hope to predict effects so terribly small that, if real, chaos effects would surely wipe them out.
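One way to make the “way beyond my intuitive capacities” point concrete is standard sample-size arithmetic: to distinguish a coin with bias eps from a fair coin, the expected excess of heads has to rise above the sampling noise, which takes on the order of 1/(4·eps²) flips. A rough sketch, with arbitrary example biases:

```python
# Rough order-of-magnitude check: flips needed before a bias of size eps is
# distinguishable from a fair coin (expected excess n*eps must beat noise of
# roughly 0.5*sqrt(n), giving n on the order of 1/(4*eps**2)).

def flips_needed(eps):
    return 1 / (4 * eps ** 2)

for eps in (1e-2, 1e-6, 1e-10):
    print(f"bias {eps:g}: about {flips_needed(eps):.1e} flips needed")
# At a bias of 1e-10 the requirement (~2.5e19 flips) is far beyond anything
# feasible, which is the sense in which no intuition about such a bias could
# ever be checked against evidence.
```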
I’ve seen comments questioning my attitude and motives, so I should probably say something about why I’m a bit hostile to this project; it’s not a matter of hypocrisy alone. The Singularity Institute competes with other causes for contributions, and it should concern people that it does so using specious argument. If SI intuits that the likelihood could be as low as the lower probability estimate for success, the only honest practice is to call the probability zero.
You know this is a blog started by and run by Eliezer Yudkowsky—right? Many of the posters are fans. Looking at the rest of this thread, signaling seems to be involved in large quantities—but consider also the fact that there is a sampling bias.
Do you have any argument for why the SIAI is unlikely to be the best other than the sheer size of the option space?
This is a community where a lot of the members have put substantial thought into locating the optimum in that option space, and have well developed reasons for their conclusion. Further, there are not a lot of real charities clustered around that optimum. Simply claiming a low prior probability of picking the right charity is not a strong argument here. If you have additional arguments, I suggest you explain them further.
(I’ll also add that I personally arrived at the conclusion that an SIAI-like charity would be the optimal recipient for charitable donations before learning that it existed, or encountering Overcoming Bias, Less Wrong, or any of Eliezer’s writings, and in fact can completely discount the possibility that my rationality in reaching my conclusion was corrupted by an aura effect around anyone I considered to be smarter or more moral than myself.)
It is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence. This is, or at least was, basically Eliezer’s blog; if the thing that unites its readers is respect for his intelligence and judgment, then you should be completely unsurprised to see that many support SIAI. It is not clear how this is a form of irrationality, unless you are claiming that the facts are so clearly against the SIAI that we should be interpreting them as evidence against the intelligence of supporters of the SIAI.
Someone who is trying to have an effect on the course of an intelligence explosion is more likely to have one than someone who isn’t. I think many readers (myself included) believe very strongly that an intelligence explosion is almost certainly going to happen eventually and that how it occurs will have a dominant influence on the future of humanity. I don’t know if the SIAI will have a positive, negative, or negligible influence, but based on my current knowledge all of these possibilities are still reasonably likely (where even 1% is way more than likely enough to warrant attention).
Upvoting but nitpicking one aspect:
No. It isn’t very strong evidence by itself. Jonathan Sarfati is a chess master, a published chemist, and a prominent young-earth creationist. A list of all the major anti-evolutionists would easily include not just Sarfati but also William Dembski, Michael Behe, and Jonathan Wells, all of whom are pretty intelligent. There are some people less prominently involved who are also very smart, such as Forrest Mims.
This is not the only example of this sort. In general, we live in a world where there are many, many smart people. That multiple smart people care about something can’t do much beyond locate the hypothesis. One distinction is that most smart people who have looked at the SIAI have come away not thinking they are crazy, which is a very different situation from the sort of example given above; but by itself, smart people having an interest is not strong evidence.
(Also, on a related note, see this subthread here, which made it clear that what smart people think, even when there is a general consensus among them, is not terribly reliable.)
There are several problems with what I said.
My use of “extremely” was unequivocally wrong.
I don’t really mean “smart” in the sense that a chess player proves their intelligence by being good at chess, or a mathematician proves their intelligence by being good at math. I mean smart in the sense of good at forming true beliefs and acting on them. If Nick Bostrom were to profess his belief that the world was created 6000 years ago, then I would say this constitutes reasonably strong evidence that the world was created 6000 years ago (when combined with existing evidence that Nick Bostrom is good at forming correct beliefs and reporting them honestly). Of course, there is much stronger evidence against this hypothesis (and it is extremely unlikely that I would have only Bostrom’s testimony—if he came to such a belief legitimately I would strongly expect there to be additional evidence he could present), so if he were to come out and say such a thing it would mostly just decrease my estimate of his intelligence rather than decreasing my estimate for the age of the Earth. The situation with SIAI is very different: I know of little convincing evidence bearing one way or the other on the question, and there are good reasons that intelligent people might not be able to produce easily understood evidence justifying their positions (since that evidence basically consists of a long thought process which they claim to have worked through over years).
Finally, though you didn’t object, I shouldn’t really have said “obvious.” There are definitely other plausible explanations for the observed behavior of SIAI supporters than their honest belief that it is the most important cause to support.
There is a strong selection effect. Most people won’t even look too closely, or comment on their observations. I’m not sure in what sense we can expect what you wrote to be correct.
This comment, on this post, in this blog, comes across as a textbook example of the Texas Sharpshooter Fallacy. You don’t form your hypothesis after you’ve looked at the data, just as you don’t prove what a great shot you are by drawing a target around the bullet hole.
I normally form hypotheses after I’ve looked at the data, although before placing high credence in them I would prefer to have confirmation using different data.
I agree that I made at least one error in that post (as in most things I write). But what exactly are you calling out?
I believe an intelligence explosion is likely (and have believed this for a good decade). I know the SIAI purports to try to positively influence an explosion. I have observed that some smart people are behind this effort and believe it is worth spending their time on. This is enough motivation for me to seriously consider how effective I think that the SIAI will be. It is also enough for me to question the claim that many people supporting SIAI is clear evidence of irrationality.
Yes, but here you’re using your data to support the hypothesis you’ve formed.
If I believe X and you ask me why I believe X, surely I will respond by providing you with the evidence that caused me to believe X?
External reality is not changed by the temporal location of hypothesis formation.
No, but when hypotheses are formed is relevant to evaluating their likelihood, given standard human cognitive biases.
I’m sorry, I haven’t found the thread yet. I lurked there for a long time and just now registered to use their search function and find it again. The main objection I clearly remember finding convincing was that nanotech can’t be used in the way many proponents of the Singularity propose, due to physical constraints, and thus an AI would be forced to rely on existing industry etc.
I’ll continue the search, though. The point was far more elaborated than one sentence. I face a similar problem as with climate science here: I thoroughly informed myself on the subject, came to the conclusion that climate change deniers are wrong, and then, little by little, forgot the details of the evidence that led to this conclusion. My memory could be better :-/
Of course, the Singularity argument in no way relies on nanotech.
Without advanced real-world nanotechnology it will be considerably more difficult for an AI to FOOM and therefore pose an existential risk. It will have to make use of existing infrastructure, e.g. buy stock in chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won’t be able to create new computational substrate without the whole economy of the world supporting it. It won’t be able to create an army of robot drones overnight without it either.
Doing so, it would have to make use of considerable amounts of social engineering without its creators noticing. But more importantly, it would have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would itself demand resources, so the AGI could not use its ability to self-improve to speed up the acquisition of the very resources it needs in order to self-improve in the first place.
So the absence of advanced nanotechnology constitutes an immense blow to any risk estimate that presupposes already-available nanotech. Further, if one assumes that nanotech is a prerequisite for AI going FOOM, then another question arises. It should be easier to create advanced replicators that destroy the world than to create an AGI that creates advanced replicators, escapes our hold on it, and then destroys the world. Therefore one might ask which is the bigger risk here.
To be honest, I think this is a far scarier AI-go-FOOM scenario than nanotech is.
Giving the worm scenario a second thought, I do not see how an AGI would benefit from doing that. An AGI incapable of acquiring resources by means of advanced nanotech assemblers would likely just pretend to be friendly to get humans to build more advanced computational substrates. Launching any large-scale attack on the existing infrastructure would cause havoc but also damage the AI itself, because governments (China etc.) would shut down the whole Internet rather than live with such an infection. Or even nuke the AI’s mainframe. And even if it could increase its intelligence further by making use of unsuitable and ineffective substrates, it would still be incapacitated, stuck in the machine. Without advanced nanotechnology you simply cannot grow exponentially or make use of recursive self-improvement beyond the software level. This in turn considerably reduces the existential risk posed by an AI. That is not to say that it wouldn’t be a huge catastrophe as well, but there are other catastrophes on the same scale that you would have to compare it to. Only by implicitly making FOOMing the premise can one make it the most dangerous high-impact risk (never mind aliens, the LHC etc.).
You don’t see how an AGI would benefit from spreading itself in a distributed form to every computer on the planet, control and manipulate all online communications, encrypt the contents of hard drives and keep their contents hostage, etc.? You could have the AGI’s code running on every Internet-connected computer on the planet, which would make it virtually impossible to get rid of.
And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I’m pretty sure that that’ll become less and less of a possibility as time goes on, especially if the AGI is also holding as hostage the contents of any hard drives without off-line backups. Add to that the fact that we’ve already had one case of a computer virus infecting the control computers in an off-line facility and, according to one report, delaying the nuclear program of a country by two years. Add to that the fact that even people’s normal phones are increasingly becoming smartphones, which can be hacked, and that simpler phones have already shown themselves to be vulnerable to being crashed by a well-crafted SMS. Let 20 more years pass and us become more and more dependent on IT, and an AGI could probably keep all of humanity as its hostage—shutting down the entire Internet simply wouldn’t be an option.
This is nonsense. The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don’t, having humans build whatever you want is relatively trivial. Besides, we already have early stage self-replicating machinery today: you don’t need nanotech for that.
I understand. But how do you differentiate this from the same incident involving an army of human hackers? The AI will likely be very vulnerable if it runs on some supercomputer, and even more so if it runs in the cloud (just use an EMP). In contrast, an army of human hackers can’t be disrupted that easily and is an enemy you can’t pinpoint. You are portraying a certain scenario here, and I do not see it as a convincing argument for rating risks from AI above other risks.
It isn’t trivial. There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious. Remember, it won’t be able to coordinate a world conspiracy, because it hasn’t yet been able to self-improve to that point; it is still trying to acquire enough resources, which it has to do the hard way without nanotech. You’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.
The point was that you can’t use an EMP if that means bringing down the whole human computer network.
Why would people need to get suspicious? If you keep tabs on all the communications in the world, you can make a killing on the market, even if you didn’t delay the orders from your competitors. One could fully legitimately raise enough money by trading to hire people to do everything you wanted. Nobody ever needs to notice that there’s something amiss, especially not if you do it via enough shell corporations.
Of course, the AGI could also use more forceful means, though it’s by no means necessary. If the AGI revealed itself and the fact that it was holding all of humanity’s networked computers hostage, it could probably just flat-out tell the humans “do this or else”. Sure, not everyone would obey, but some would. Also, disrupt enough communications and manufacture enough chaos, and people will be too distracted and stressed to properly question forged orders. Social engineering is rather easy with humans, and desperate people are quite prone to wishful thinking.
This claim strikes me as bizarre. Why would you need nanotech to acquire more resources for self-improvement?
Some botnets have been reported to have around 350,000 members. Currently, the distributed computing project Folding@Home, with 290,000 active clients composed mostly of volunteer home PCs and Playstations, can reach speeds in the 10^15 FLOPS range. Now say that an AGI gets developed 20 years from now. A relatively conservative estimate that presumed an AGI couldn’t hack into more computers than the best malware practitioners of today, that a personal computer would have a hundred times the computing power of today’s, and that an AGI required a minimum of 10^13 FLOPS to run, would suggest that an AGI could either increase its own computational capacity 12,000-fold or spawn 12,000 copies of itself.
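For what it’s worth, here is the arithmetic behind that 12,000 figure, spelled out with the comment’s own assumed inputs:

```python
# All inputs are the comment's assumptions, not measurements.
botnet_size = 350_000        # machines a large botnet controls today
fah_clients = 290_000        # Folding@Home active clients
fah_flops = 1e15             # aggregate Folding@Home speed
hardware_speedup = 100       # assumed per-PC improvement over 20 years
agi_min_flops = 1e13         # assumed minimum to run one AGI

flops_per_pc_today = fah_flops / fah_clients            # ~3.4e9 FLOPS
flops_per_pc_future = flops_per_pc_today * hardware_speedup
botnet_flops_future = botnet_size * flops_per_pc_future  # ~1.2e17 FLOPS

print(botnet_flops_future / agi_min_flops)  # ~12,000 copies, or a
                                            # ~12,000-fold capacity increase
```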
Alternatively, if it wanted to avoid detection and didn’t want to do anything illegal or otherwise suspicious, it could just sign up on a site like oDesk and do lots of programming jobs for money, then rent all the computing capacity it needed until it was ready to unleash itself to the world. This is actually the more likely alternative, as unlike the botnet/hacking scheme it’s a virtually risk-free way to gain extra processing power.
Acquiring computational resources is easy, and it will only get easier as time goes on. Also, while upgrading your hardware is one way of self-improving, you seem to be completely ignoring the potential for software self-improvement. It’s not a given that the AGI would even need massive hardware. For instance:
Here is my reply.
And what kind of computer controls the EMP? Or is it hand-cranked?
The point is that you people are presenting an idea that is an existential risk by definition. I claim that it might superficially appear to be the most dangerous of all risks but that this is mostly a result of its vagueness.
If you say that there is the possibility of a superhuman intelligence taking over the world and all its devices to destroy humanity then that is an existential risk by definition. I counter that I dispute some of the premises and the likelihood of subsequent scenarios. So to make me update on the original idea you would have to support your underlying premises rather than arguing within already established frameworks that impose several presuppositions onto me.
Are you aware of what the most common EMPs are? Nukes. The computer that triggers the high explosive lenses is already molten vapor by the time that the chain reaction has begun expanding into a fireball.
What kind of computer indeed!
I used this very example to argue with Robin Hanson during the after-lecture Q&A (it should be in Parsons Part 2); it did not seem to help :)
I’m not convinced of this. As time progresses there are more and more vulnerable systems on the internet, many of which shouldn’t be. That includes nuclear power plants, particle accelerators, conventional power plants and others. Other systems likely have some methods of access, such as communication satellites. Soon this will also include almost completely automated manufacturing plants. An AI that quickly grows to control much of the internet would have direct access to nasty systems and simply have a lot more processing power. The extra processing power means that the AI can potentially crack cryptosystems that are being used by secure parts of the internet or by non-internet systems that use radio to communicate.
That said, I agree that without strong nanotech this seems like an unlikely scenario.
Yes, but then how does this risk differ from asteroid impacts, solar flares, bio weapons or nanotechnology? The point is that the only reason for a donation to the SIAI to have a higher expected payoff is the premise that AI can FOOM, kill all humans, and take over the universe. In all other cases dumb risks are as likely or more likely, and can wipe us out just as well. So why the SIAI? I’m trying to get a more definite answer to that question. I at least have to consider all possible arguments I can come up with in the time it takes to write a few comments and see what feedback I get. That way I can update my estimates and refine my thinking.
Asteroid impacts and solar flares are relatively ‘dumb’ risks, in that they can be defended against once you know how. They don’t constantly try to outsmart you.
This question is a bit like asking “yes, I know bioweapons can be dangerous, but how does the risk of genetically engineered e.coli differ from the risk of bioweapons”.
Bioweapons and nanotechnology are particular special cases of “dangerous technologies that humans might come up with”. An AGI is potentially employing all of the dangerous technologies humans—or AGIs—might come up with.
Your comment assumes that I agree on some premises that I actually dispute. That an AGI will employ all other existential risks and therefore be the most dangerous of all existential risks doesn’t follow, because if such an AGI is as likely as the other risks, then it doesn’t matter whether we are wiped out by one of the other risks or by an AGI making use of one of those risks.
Well, one doesn’t need to think that it is intrinsically different. One would just need to think that the marginal return here is high because we aren’t putting many resources into looking at the problem now. Someone could potentially make that sort of argument for any existential risk.
Yes. I am getting much better responses from you than from some of the donors that replied, or the SIAI itself. Which isn’t very reassuring. Anyway, you are of course right there. The SIAI is currently looking into the one existential risk that is most underfunded. As I said before, I believe that the SIAI should exist and therefore should be supported. Yet I still can’t follow some of the more frenetic supporters. That is, I don’t see the case being as strong as some portray it. And there is not enough skepticism here, although people reassure me constantly that they were skeptical but eventually convinced. They just don’t seem very convincing to me.
I guess I should stop trying then? Have I not provided anything useful? And do I come across as “frenetic”? That’s certainly not how I feel. And I figured that giving a 90 percent chance we all die was pretty skeptical. Maybe you weren’t referring to me...
I’m sorry, I shouldn’t have phrased my comment like that. No, I was referring to this and this comment that I just got. I feel too tired to reply to those right now because I feel they do not answer anything and that I have already tackled their content in previous comments. I’m sometimes getting a bit weary when the amount of useless information gets too high. They probably feel the same about me and I should be thankful that they take the time at all. I can assure you that my intention is not to attack anyone or the SIAI personally just to discredit them. I’m honestly interested, simply curious.
OK, cool. Yeah, this whole thing does seem to go in circles at times… it’s the sort of topic where I wish I could just meet face to face and hash it out over an hour or so.
A large solar outburst can cause similar havoc. Or some rogue group buys all Google stock, tweaks its search algorithm and starts to influence election outcomes by slightly tweaking the results in favor of certain candidates, while using its massive data repository to spy on people. There are a lot of scenarios. But the reason to consider the availability of advanced nanotechnology when assessing AI-associated existential risks is to reassess their impact and probability. An AI that can make use of advanced nanotech is certainly much more dangerous than one taking over the infrastructure of the planet by means of cyber-warfare. The question is whether such a risk is still bad enough to outweigh other existential risks. That is the whole point here: comparison of existential risks to assess the value of contributing to the SIAI. If you scale back to an AGI incapable of quick self-improvement via nanotech, relying instead on infrastructure take-over, then working to prevent such a catastrophe is no longer that far removed from working to build infrastructure more resistant to electromagnetic pulse weapons or solar flares.
The correct way to approach a potential risk is not to come up with a couple of specific scenarios relating to the risk, evaluate those, and then pretend that you’ve done a proper analysis of the risk involved. That’s analogous to trying to make a system secure by patching security vulnerabilities as they show up and not even trying to employ safety measures such as firewalls, or trying to make a software system bug-free simply by fixing bugs as they get reported and ignoring techniques such as unit tests, defensive programming, etc. It’s been tried and conclusively found to be a bad idea by both the security and software engineering communities. If you want to be safe, you need to take into account as many possibilities as you can, not just concentrate on the particular special cases that happened to rise to your attention.
The proper unit of analysis here is not the particular techniques that an AI might use to take over. That’s pointless: for any particular technique that we discuss here, there might be countless others that the AI could employ, many of them ones nobody has even thought of yet. If we were in an alternate universe where Eric Drexler was run over by a car before ever coming up with his vision of molecular nanotechnology, the whole concept of strong nanotech might be unknown to us. If we then only looked at the prospects for cyberwar, and concluded that an AI isn’t a big threat because humans can do cyberwarfare too, we could be committing a horrible mistake by completely ignoring nanotech. Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be ignoring it, but rather choosing a methodology which is incapable of dealing with unknown unknowns even in principle.
So what is the right unit of analysis? It’s the power of intelligence. It’s the historical case of a new form of intelligence showing up on the planet and completely reshaping its environment to create its own tools. It’s the difference between the power of the chimpanzee species to change its environment towards its preferred state, and the power of the human species to change its environment towards its preferred state. You saying “well here I’ve listed these methods that an AI could use to take over humanity, and I’ve analyzed them and concluded that the AI is of no threat” is just as fallacious as it would be for a chimpanzee to say “well here I’ve listed these methods that a human could use to take over chimpanzity, and I’ve analyzed them and concluded that humans are no threat to us”. You can’t imagine the ways that an AI could come up with and attempt to use against us, so don’t even try. Instead, look at the historical examples of what happens when you pit a civilization of inferior intelligences against a civilization of hugely greater ones. And that will tell you that a greater-than-human intelligence is the greatest existential risk there is, for it’s the only one where it’s by definition impossible for us to come up with the ways to stop it once it gets out of control.
You have to limit the scope of unknown unknowns. Otherwise why not employ the same line of reasoning to risks associated with aliens? If someone says that there is no sign of aliens you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about. What, magic?
Yes, you could very well make an argument for the risks posed by superintelligent aliens. But then you would also have to produce an argument for a) why it’s plausible to assume that superintelligent aliens will show up anytime soon, and b) what we could do to prevent the invasion of superintelligent aliens if they did show up.
For AGI we have an answer for point a (progress in computing power, neuroscience and brain reverse-engineering, etc.) and a preliminary answer for point b (figure out how to build benevolent AGIs). There are no corresponding answers to points a and b for aliens.
No it’s not: think about this again. “Aliens of a superior intelligence might wipe us out by some means we don’t know” is symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”. But “aliens of superior intelligence might appear out of nowhere” is not symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”.
I didn’t mean to suggest that aliens are a more likely risk than AI. I was trying to show that unknown unknowns cannot be employed to the extent you suggest. You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
I’ll be back tomorrow.
Not quite.
Suppose that someone brought up a number of ways by which an AI could be dangerous, and somebody else refuted them all by pointing out that there’s no particular way by which having superior intelligence would help in them. (In other words, humans could do those things too, and an AI doing them wouldn’t be any more dangerous.) Now if I couldn’t come up with any examples where having a superior intelligence would help, then that would be evidence against the claim that a superior intelligence helps overall.
But all of the examples we have been discussing (nanotech warfare, biological warfare, cyberwarfare) are technological arms races, and in a technological arms race, superior intelligence does bring quite a decisive edge. In the discussion about cyberwarfare, you asked what makes the threat from an AI hacker different from the threat of human hackers. The answer is that hacking is a task that primarily requires qualities such as intelligence and patience, both of which an AI could have in far greater measure than humans do. Certainly human hackers could do a lot of harm as well, but a single AI could be as dangerous as all of the 90th-percentile human hackers put together.
What I am arguing is that the power of intelligence is vastly overestimated, and therefore so are any risks associated with it. There are many dumb risks that could just as easily wipe us out; it doesn’t take superhuman intelligence to do that. I also do not see enough evidence for the premise that other, superior forms of intelligence are very likely to exist. Further, I argue that there is no hint of any intelligence out there reshaping its environment: the stars show no sign of intelligent tinkering. I provided many other arguments for why other risks might be more worthy of our contribution, and I came up with all those ideas in the time it took to write these comments. I simply expect a lot more arguments and other kinds of evidence supporting their premises from an organisation that has been around for over 10 years.
Large brains can be dangerous to those who don’t have them. Look at the current human-caused mass extinction.
Yes, there are dumb risks that could wipe us out just as well: but only a superhuman intelligence with different desires than humanity is guaranteed to wipe us out.
You don’t need qualitative differences: just take a human-level intelligence and add enough hardware that it can run many times faster than the best human thinkers and hold far more things in its mind at once. If it came to a fight, the humanity of 2000 could muster the armies to crush the best troops of 1800 without trouble. That’s just the result of 200 years of technological development and knowledge acquisition, and doesn’t even require us to be more intelligent than the humans of 1800.
We may not have observed aliens reshaping their environment, but we can certainly observe humans reshaping their environment. This planet is full of artificial structures. We’ve blanketed the Earth with lights that can be seen anywhere where we’ve bothered to establish habitation. We’ve changed the Earth so much that we’re disturbing global climate patterns, and now we’re talking about large-scale engineering work to counteract those disturbances. If I choose to, there are ready transportation networks that will get me pretty much anywhere on Earth, and ready networks for supplying me with food, healthcare and entertainment on all the planet’s continents (though admittedly Antarctica is probably a bit tricky from a tourist’s point of view).
It seems as though it is rather easy to imagine humans being given the “Deep Blue” treatment in a wide range of fields. I don’t see why this would be a sticking point. Human intelligence is plainly just awful, in practically any domain you care to mention.
Uh, that’s us. *wave*
In case you didn’t realise, humanity is the proof of concept that superior intelligence is dangerous. Ask a chimpanzee.
Have you taken an IQ test? Anyone who scores significantly higher than you constitutes a superior form of intelligence.
Few such dumb risks are actively being pursued by humanity. Superhuman intelligence solves all dumb risks unless you postulate a dumb risk that is in principle unsolvable. Something like a collapse of the false vacuum might qualify.
Contributing to the creation of FAI doesn’t just decrease the likelihood of UFAI, it also decreases the likelihood of all the other scenarios that end up with humanity ceasing to exist.
“The Singularity argument”? What’s that, then?
1. FOOM is possible
2. FOOM is annihilation
3. Expected value should guide your decisions
4. From 1 and 2: Expected value of FOOM is “huge bad” (a toy calculation below illustrates this step)
5. From 3 and 4: Make decisions to reduce expected value of FOOM

The SIAI corollary is:

6. There exists a way to turn FOOM = annihilation into FOOM = paradise
7. There exists a group “SIAI” that is making the strongest known effort towards that way
8. From 5, 6 and 7: Make decisions to empower SIAI
edit: reformulating the SIAI corollary to bring out hidden assumptions.
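A minimal sketch of the expected-value step (items 3 and 4 above), with entirely made-up numbers; neither the probability nor the utility figure is an actual estimate for FOOM:

```python
# Toy expected-value calculation for steps 3 and 4 of the argument above.
# Every number here is made up purely for illustration.

p_foom = 0.01              # hypothetical probability that FOOM occurs
value_if_foom = -1e15      # "annihilation", in arbitrary utility units
value_otherwise = 0.0      # baseline outcome: nothing changes

expected_value = p_foom * value_if_foom + (1 - p_foom) * value_otherwise
print(expected_value)      # -1e13: even a small probability of an
                           # enormous loss dominates the calculation
```

The only point of the sketch is that an annihilation-sized loss dominates the expectation even at small probabilities, which is what step 4 asserts.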
...and what is “FOOM”? Or are 1 and 2 supposed to serve as a definition?
Either way, this is looking pretty ridiculous :-(
I was going to give a formal definition¹, but then I noticed you said “either way”. Assume, then, that 1 and 2 are the definition of FOOM: that it is a possible event, and that it is the end of everything. I challenge you to substantiate your claim of “ridiculous”, as formally as you can.
Do note that I will be unimpressed with “anything defined by 1 and 2 is ridiculous”. Asteroid strikes and rapid climate change are two non-ridiculous concepts that satisfy the definition given by 1 and 2.
¹. And here it is: FOOM is the concept that self-improvement is cumulative, additive, and possibly fast. Let X be an agent’s intelligence, and let f(X) be the improvement that an intelligence of X can generate in itself, so that the agent’s post-improvement intelligence is X′ = X + f(X). If X′ > X, and likewise X″ = X′ + f(X′) with X″ > X′, the agent is said to be a recursively self-improving agent. If each step X + f(X) is evaluated in a short period of time, the agent is said to be a FOOMing agent.
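To make that recursion concrete, here is a minimal sketch that simply iterates X ← X + f(X) for a few illustrative choices of f. The improvement functions are pure assumptions for illustration, not claims about how real self-improvement would scale:

```python
# Toy iteration of the self-improvement recursion X <- X + f(X).
# The three example improvement functions below are illustrative
# assumptions only; nothing here models an actual AI.

def trajectory(x, f, steps=10):
    """Return the sequence of intelligence values produced by x -> x + f(x)."""
    values = [x]
    for _ in range(steps):
        x = x + f(x)
        values.append(x)
    return values

# Diminishing returns: each improvement is smaller than the last.
print(trajectory(1.0, lambda x: 1.0 / x))

# Constant returns: steady, linear growth.
print(trajectory(1.0, lambda x: 1.0))

# Increasing returns: growth compounds, the regime usually meant by "FOOM".
print(trajectory(1.0, lambda x: 0.5 * x))
```

Whether the trajectory crawls, grows linearly, or compounds depends entirely on the shape of f and on how long each step takes, which is why leaving those unspecified leaves the “fast” part of the definition open.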
Ridiculousness is in the eye of the beholder. Probably the biggest red flag was that there was no mention of what was supposedly going to be annihilated—and yes, it does make a difference.
The supposedly formal definition tells me very little—because “short” is not defined—and because f(X) is not a specified function. Saying that it evaluates to something positive is not sufficient to be useful or meaningful.
Fast enough that none of the other intelligences on Earth can copy its strengths or produce countermeasures sufficient to stand a chance of opposing it.
Yes—though it is worth noting that if Google wins, we may have passed that point without knowing it back in 1998 sometime.
Fooming has been pretty clearly described. Fooming amounts to an entity drastically increasing both its intelligence and ability to manipulate reality around it in a very short time, possibly a few hours or weeks, by successively improving its hardware and/or software.
Uh huh. Where, please?
Possibly a few hours or weeks?!? [emphasis added]
Is it a few hours? Or a few weeks? Or something else entirely? …and how much is “drastically”?
Vague definitions are not worth critics bothering attacking.
In an attempt to answer my own question, this one is probably the closest I have seen from Yudkowsky.
It apparently specifies less than a year, though it seems “rather vague” about the proposed starting and finishing capabilities.
Example locations where this has been defined include Mass Driver’s post here where he defined it slightly differently as “to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety”. I think he meant indefinitely large there, but the essential idea is the same. I note that you posted comments in that thread, so presumably you’ve seen that before, and you explicitly discussed fooming. Did you only recently decide that it wasn’t sufficiently well-defined? If so, what caused that decision?
Well, I’ve seen different timelines used by people in different contexts. Note that this isn’t just a function of definitions, but also when one exactly has an AI start doing this. An AI that shows up later, when we have faster machines and more nanotech, can possibly go foom faster than an AI that shows up earlier when we have fewer technologies to work with. But for what it is worth, I doubt anyone would call it going foom if the process took more than a few months. If you absolutely insist on an outside estimate for purposes of discussion, 6 weeks should probably be a decent estimate.
It isn’t clear to me what you are finding too vague about the definition. Is it just the timeline or is it another aspect?
This might be a movie threat notion—if so, I’m sure I’ll be told.
I assume the operational definition of FOOM is that the AI is moving faster than human ability to stop it.
As theoretically human-controlled systems become more automated, it becomes easier for an AI to affect them. This would mean that any humans who could threaten an AI would find themselves distracted, or worse, by legal, financial, social-network-reputation, and possibly medical problems. Nanotech isn’t required.
Yes, that seems like a movie threat notion to me. If an AI has the power to do those things to arbitrary people, it can likely scale up from there to full control so quickly that it shouldn’t need to bother with such steps, although it is minimally plausible that a slow-growing AI might need to.
No, I’ve been aware of the issue for a loooong time.
Ok. So what caused you to use the term as if it had a specific definition when you didn’t think it did? Your behavior is very confusing. You’ve discussed foom related issues on multiple threads. You’ve been here much longer than I have; I don’t understand why we are only getting to this issue now.
I did raise this closely-related issue over two years ago. To quote the most relevant bit:
There may well be other instances in between—but scraping together references on the topic seems as though it would be rather tedious.
I did what, exactly?
The quote you give focuses just on the issue of time-span, which has already been addressed in this thread. “Machine intelligence” in the sense it is often used is not at all the same thing as artificial general intelligence; this has also been addressed by others in this subthread. (Although it does touch on a point you’ve made elsewhere, that we’ve been using machines to engage in what amounts to successive improvement, which is likely relevant.)
I would have thought that your comments in the previously linked thread started by Mass Driver would be sufficient, like when you said:
And again in that thread where you said:
Although, rereading your post, I am now wondering if you were careful to put “anti-foom” in quotation marks because it didn’t have a clear definition. But in that case, I’m slightly confused as to how you knew enough to decide that that was an anti-foom argument.
Right—so, by “anti-foom factor”, I meant: factor resulting in relatively slower growth in machine intelligence. No implication that the “FOOM” term had been satisfactorily quantitatively nailed down was intended.
I do get that the term is talking about rapid growth in machine intelligence. The issue under discussion is: how fast is considered to be “rapid”.
Six weeks—from when? Machine intelligence has been on the rise since the 1950s. Already it exceeds human capabilities in many domains. When is the clock supposed to start ticking? When is it supposed to stop ticking? What is supposed to have happened in the middle?
There is a common and well-known distinction between what you mean by ‘machine intelligence’ and what is meant by ‘AGI’. Deep Blue is a chess AI. It plays chess. It can’t plan a stock portfolio because it is narrow. Humans can play chess and plan stock portfolios, because they have general intelligence. Artificial general intelligence, not ‘machine intelligence’, is under discussion here.
Nothing is “arbitrarily large” in the real world. So, I figure that definition confines FOOM to the realms of fantasy. Since people are still discussing it, I figure they are probably talking about something else.
Tim, I have to wonder if you are reading what I wrote, given that the sentence right after the quote is “I think he meant indefinitely large there, but the essential idea is the same.” And again, if you thought earlier that foom wasn’t well-defined, what made you post using the term explicitly in the linked thread? If you have just now decided that it isn’t well-defined, then a) what do you have that is more carefully defined, and b) what made you conclude that it wasn’t narrowly defined enough?
What distinction are you trying to draw between “arbitrarily large” and “indefinitely large” that turns the concept into one which is applicable to the real world?
Maybe you can make up a definition—but what you said was “fooming has been pretty clearly described”. That may be true, but it surely needs to be referenced.
What exactly am I supposed to have said in the other thread under discussion?
Lots of factors indicate that “FOOM” is poorly defined—including the disagreement surrounding it, and the vagueness of the commonly referenced sources about it.
Usually, step 1 in those kinds of discussions is to make sure that people are using the terms in the same way—and have a real disagreement—and not just a semantic one.
Recently, I participated in this exchange, where a poster here gave p(FOOM) = 0.001 and, when pressed, agreed that they did not have a clear idea of what class of events they were referring to.
Arbitrarily large means just that in the mathematical sense. Indefinitely large is a term that would be used in other contexts. In the contexts that I’ve seen “indefinitely” used and the way I would mean it, it means so large as to not matter as the exact value for the purpose under discussion (as in “our troops can hold the fort indefinitely”).
Disagreement about something is not always a definitional issue. Indeed, when dealing with people on LW, who try to be as rational as possible and have whole sequences about tabooing words and the like, one shouldn’t assign a very high probability to disagreements being due to definitions. Moreover, as one of the people who assign a low probability to foom and who have talked to people here about these issues, I’m pretty sure that we aren’t disagreeing about definitions. Our estimates for what the world will probably look like in 50 years disagree. That’s not simply a definitional issue.
Ok. So why are you now doing step 1 years later? And moreover, how long should this step take as you’ve phrased it, given that we know that there’s substantial disagreement in terms of predicted observations about reality in the next few years? That can’t come from definitions. This is not a tree in a forest.
Yes! Empirical evidence. Unfortunately, it isn’t very strong evidence. I don’t know if he meant in that context that he didn’t have a precise definition or just that he didn’t feel that he understood things well enough to assign a probability estimate. Note that those aren’t the same thing.
I don’t see how the proposed word substitution is supposed to help. If FOOM means: “to quickly, recursively self-improve so as to influence our world with indefinitely large strength and subtlety”, we still face the same issues—of how fast is “quickly” and how big is “indefinitely large”. Those terms are uncalibrated. For the idea to be meaningful or useful, some kind of quantification is needed. Otherwise, we are into “how long is a piece of string?” territory.
I did also raise the issue two years ago. No response, IIRC. I am not too worried if FOOM is a vague term. It isn’t a term I use very much. However, for the folks here—who like to throw their FOOMs around—the issue may merit some attention.
If “indefinitely large” is still too vague, you can replace it with “to quickly, recursively self-improve so as to influence our world with sufficient strength and subtlety that a) it can easily wipe out humans, b) humans are not a major threat to its achieving almost any goal set, and c) humans are sufficiently weak that it doesn’t gain resources by bothering to bargain with us”. Is that narrow enough?
The original issues were:
When to start the clock?
When to stop the clock?
What is supposed to have happened in the mean time?
You partly address the third question—and suggest that the clock is stopped “quickly” after it is started.
I don’t think that is any good. If we have “quickly” being the proposed-elsewhere “inside six weeks”, it is better—but there is still a problem, which is that there are no constraints being placed on the capabilities of the humans back when the clock was started. Maybe they were just as weak back then.
Since I am the one pointing out this mess, maybe I should also be proposing solutions:
I think the problem is that people want to turn the “FOOM” term into a binary categorisation—to FOOM or not to FOOM.
Yudkowsky’s original way of framing the issue doesn’t really allow for that. The idea is explicitly and deliberately not quantified in his post on the topic. I think the concept is challenging to quantify—and so there is some wisdom in not doing so. All that means is that you can’t really talk about: “to FOOM or not to FOOM”. Rather, there are degrees of FOOM. If you want to quantify or classify them, it’s your responsibility to say how you are measuring things.
It does look as though Yudkowsky has tried this elsewhere—and made an effort to say something a little bit more quantitative.
I’m puzzled a bit by your repeated questions about when to “start the clock”, and this seems possibly connected to the fact that people discussing fooming are talking about a general intelligence going foom. They aren’t talking about little machine intelligences, whether neural networks or support vector machines or matchbox learning systems. They are talking about artificial general intelligence. The “clock” starts when a general intelligence roughly as intelligent as a bright human goes online.
Huh? I don’t follow.
Foom is finding the slider bar in the config menu labeled ‘intelligence’ and moving it all the way to the right.
I don’t know what it is—but I am pretty sure that is not it.
If you check with: http://lesswrong.com/lw/wf/hard_takeoff/
...you will see there’s a whole bunch of vague, hand-waving material about how fast that happens.
If you’re willing to reject every definition presented to you, you can keep asking the question as long as you want. I believe this is typically called ‘trolling’.
What is your definition of foom?
I’m interested!
Thirded interest.
I am also interested.