I'm currently preparing for the Summit, so I'm not going to hunt down links myself. Those of you who said you wanted to see me do this should hunt down the links and reply with a list of them.
Given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for themselves is important. But when they talk about "putting all eggs in one basket" as an abstract argument, we will generally point out that this is, in fact, the diametrically wrong direction in which abstract argument should be pushing.
Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)
Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).
Read Eric Drexler’s Nanosystems. (Someone find an introduction by Foresight and link to it, that sort of thing is their job.) Also the term you want is not “grey goo”, but never mind.
The likelihood of exponential growth versus a slow development over many centuries.
Exponentials are Kurzweil’s thing. They aren’t dangerous. See the Yudkowsky-Hanson Foom Debate.
That it is worth it to spend most of one's resources on a future whose likelihood I cannot judge.
Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Things you spend on charitable efforts that just make you feel good should be considered selfish. If you are entirely selfish but you can think past a hyperbolic discount rate then it’s still possible you can get more hedons per dollar by donating to existential risk projects.
Your difficulties in judgment should be factored into a probability estimate. Your sense of aversion to ambiguity may interfere with warm glows, but we can demonstrate preference reversals and inconsistent behaviors that result from ambiguity aversion which doesn’t cash out as a probability estimate and factor straight into expected utility.
That Eliezer Yudkowsky (or the SIAI) is the right and only person who should be leading, or the right and only institution that should be working, to mitigate the above.
Michael Vassar is leading. I’m writing a book. When I’m done writing the book I plan to learn math for a year. When I’m done with that I’ll swap back to FAI research hopefully forever. I’m “leading” with respect to questions like “What is the form of the AI’s goal system?” but not questions like “Do we hire this guy?”
My judgement of and attitude towards a situation is necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is not sufficiently clear to me to give it top priority. Therefore I perceive it as unreasonable to put all my eggs in one basket.
Someone link to relevant introductions of ambiguity aversion as a cognitive bias and do the detailed explanation on the marginal utility thing.
What I mean to say by using that idiom is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify making the SIAI a prime priority. That is, I'm donating to the SIAI but also spending considerable resources on maximizing utility at present. Enjoying life, so to speak, is therefore a safety net in case my inability to judge the probability of a positive payoff is eventually resolved in the negative.
Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can’t be viewed as expected utility maximization?
Many of the arguments on this site involve a few propositions and the use of probability to justify action if those propositions are accurate. Here so much is uncertain that I'm unable to judge the nested probability estimates. I'm already unable to judge the likelihood of something like the existential risk of exponentially evolving superhuman AI compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimates on?
Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don’t look at just one side and think about how much you doubt it and can’t guess. Look at both of them. Also, read the FOOM debate.
And this is what I'm having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what this is called in English, but in German I'd call it a castle in the air.
Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you’re looking for supporting arguments on FOOM they’re in the referenced debate.
You could tell me to learn about Solomonoff induction etc.; I know that what I'm saying may simply be due to a lack of education. But that's what I'm arguing and inquiring about here. And I dare bet that many who support the SIAI cannot articulate the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate the estimates with any kind of evidence other than a coherent internal logic of mutually supporting probability estimates.
Nobody’s claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)
I can, however, follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?
It sounds like you haven’t done enough reading in key places to expect to be able to judge the overall credence out of your own estimates.
There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.
You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There’s no prestige payoff for them in it, so why would they?
I'm concerned that the LW community, although consistent in doing so, is updating on fictional evidence. This post is meant to inquire into the basic principles, the foundation of the arguments, and the basic premises that they are based upon. That is, are you creating models to treat subsequent models, or are the propositions based on fact?
You have a sense of inferential distance. That’s not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.
An example here is the treatment and use of MWI (a.k.a. the "many-worlds interpretation") and the conclusions, arguments and further estimates based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's it, an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that's beside the point. The problem here is that such conclusions are, I believe, widely considered to be weak evidence to base further speculations and estimates on.
Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.
What I'm trying to argue here is that if the cornerstone of your argumentation, if one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.
Actually, now that I read this paragraph, it sounds like you think that “exponential”, “evolving” AI is an unsupported premise, rather than “AI go FOOM” being the conclusion of a lot of other disjunctive lines of reasoning. That explains a lot about the tone of this post. And if you’re calling it “exponential” or “evolving”, which are both things the reasoning would specifically deny (it’s supposed to be faster-than-exponential and have nothing to do with natural selection), then you probably haven’t read the supporting arguments. Read the FOOM debate.
Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who has figured all this out? The only person who’s aware of something that might shatter the utility of the universe, if not multiverse? Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least give all they have to the SIAI?
After reading enough sequences you'll pick up enough of a general sense of what it means to treat a thesis analytically, analyze it modularly, and regard every detail of a thesis as burdensome, that you'll understand why people here would mention Bostrom or Hanson instead. The sort of thinking where you take things apart into pieces and analyze each piece is very rare, and anyone who doesn't do it isn't treated by us as a commensurable voice with those who do. Also, someone link an explanation of pluralistic ignorance and bystander apathy.
I'm talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to worry about the possibilities depicted on this site and by the SIAI rather than other near-term risks that might very well wipe us out.
An argument which makes sense emotionally (ambiguity aversion, someone link to hyperbolic discounting, link to scope insensitivity for the concept of warm glow) but not analytically (the expected utility intervals are huge, research often has long lead times).
I believe that hard-SF authors certainly know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have picked Greg Egan. That's beside the point though; it's not just Stross or Egan but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths, or do they perhaps teach but not use their own methods of reasoning and decision making?
Good reasoning is very rare, and it only takes a single mistake to derail. "Teach but not use" is extremely common. You might as well ask "Why aren't there other sites with the same sort of content as LW?" Read enough, and either you'll pick up a visceral sense of the quality of reasoning being higher than anything you've ever seen before, or you'll be able to follow the object-level arguments well enough that you don't worry about other sources casually contradicting them based on shallower examinations, or, well, you won't.
What do you expect me to do? Just believe Eliezer Yudkowsky? Like I believed so much in the past which made sense but turned out to be wrong? And besides, my psychological condition wouldn't allow me to devote all my resources to the SIAI, or even a substantial amount of my income. The thought makes me reluctant to give anything at all.
Start out with a recurring PayPal donation that doesn't hurt, let it fade into the background, and consider doing more after the first stream no longer takes a psychological effort; don't try to make any commitment now or think about it now, in order to avoid straining your willpower.
Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.
I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.
I haven’t done the work to understand MWI yet, but if this FAQ is accurate, almost nobody likes the Copenhagen interpretation (observers are SPECIAL) and a supermajority of “cosmologists and quantum field theorists” think MWI is true.
Since MWI seems to have no practical impact on my decision making, this is good enough for me. Also, Feynman likes it :)
Thanks for taking the time to give a direct answer. I enjoyed reading this, and these replies will likely serve as useful material to point to when people ask similar questions in the future.
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.
Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who’s already convinced, preferably of someone within the SIAI?
That is part of what I call transparency and a foundational and reproducible corroboration of one’s first principles.
Awesome, I never came across this until now. It's not widely mentioned? Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive improvement, that might directly support the claims of existential risks posed by AI, is still missing. All this might be featured in the debate, hopefully with reference to substantial third-party research papers; I don't know yet.
Read Eric Drexler’s Nanosystems.
The whole point of the grey goo example was to exemplify the speed and sophistication of nanotechnology that would have to be around to either allow an AI to be built in the first place or to be of considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is at all possible without advanced nanotechnology.
This is an open question and I’m inquiring about how exactly the uncertainties regarding these problems are accounted for in your probability estimations of the dangers posed by AI.
Exponentials are Kurzweil’s thing. They aren’t dangerous.
What I was inquiring about is the likelihood of slow versus fast development of AI. That is, how soon after we get AGI will we see the rise of superhuman AI? The means of development by which a quick transcendence might happen is incidental to my question.
Where are your probability estimates that account for these uncertainties? Where are your variables and references that allow you to make any kind of estimate to balance the risks of a hard rapture against a somewhat controllable development?
Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility.
You misinterpreted my question. What I meant by asking if it is even worth the effort is, as exemplified in my link, the question for why to choose the future over the present. That is: “What do we actually do all day, if things turn out well?,” “How much fun is there in the universe?,” “Will we ever run out of fun?”.
Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense.
When I said that I cannot follow the chain of reasoning depicted on this site, I didn't mean that I was unable to do so due to intelligence or education. I believe I am intelligent enough, and I am trying to close the education gap. What I meant is that the chain of reasoning is not transparent.
Take the case of evolution: there you are more likely to be able to follow the chain of subsequent conclusions. In the case of evolution the evidence isn't far away; it isn't buried beneath 14 years of ideas built on some hypothesis. In the case of the SIAI it rather seems that there are hypotheses based on other hypotheses that are not yet tested.
Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you’re looking for supporting arguments on FOOM they’re in the referenced debate.
What if someone came along making coherent arguments that some sort of particle collider poses an existential risk and might destroy the universe? I would ask what the experts think who are not associated with the person who makes the claims. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"?
Nobody’s claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)
I'm not sure what you are trying to say here. What I said was simply that if you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask how you came up with that estimate. I'll ask you to provide more than a consistent internal logic; I'll ask for some evidence-based prior.
...realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.
If your antiprediction is not as informed as the original prediction, how does it not merely weaken the original prediction but actually overthrow it, to the extent that the SIAI bases its risk estimates on it?
Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is at all possible without advanced nanotechnology.
Um… yes? Superhuman is a low bar and, more importantly, a completely arbitrary bar.
I'm not sure what you are trying to say here. What I said was simply that if you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask how you came up with that estimate. I'll ask you to provide more than a consistent internal logic; I'll ask for some evidence-based prior.
Evidence based? By which you seem to mean ‘some sort of experiment’? Who would be insane enough to experiment with destroying the world? This situation is exactly where you must understand that evidence is not limited to ‘reference to historical experimental outcomes’. You actually will need to look at ‘consistent internal logic’… just make sure the consistent internal logic is well grounded on known physics.
What if someone came along making coherent arguments that some sort of particle collider poses an existential risk and might destroy the universe? I would ask what the experts think who are not associated with the person who makes the claims. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"?
And that, well, that is actually a reasonable point. You have been given some links (regarding human behavior) that are a good answer to the question, but it is nevertheless non-trivial. Unfortunately now you are actually going to have to do the work and read them.
… just make sure the consistent internal logic is well grounded on known physics.
Is it? That smarter(faster)-than-human intelligence is possible is well grounded on known physics? If that is the case, how does it follow that intelligence can be applied to itself effectively, to the extent that one could realistically talk about “explosive” recursive self-improvement?
That smarter(faster)-than-human intelligence is possible is well grounded on known physics?
Some still seem sceptical—and you probably also need some math, compsci and philosophy to best understand the case for superhuman intelligence being possible.
Not only is there evidence that smarter than human intelligence is possible; it is something that should be trivial given a vaguely sane reductionist model. Moreover, you specifically have been given evidence on previous occasions when you have asked similar questions.
What you have not been given and what are not available are empirical observations of smarter than human intelligences existing now. That is evidence to which you would not be entitled.
Moreover you specifically have been given evidence on previous occasions when you have asked similar questions.
Please provide a link to this effect? (Going off topic, I would suggest that a “show all threads with one or more comments by users X, Y and Z” or “show conversations between users X and Y” feature on LW might be useful.)
It is currently not possible for me to either link or quote. I do not own a computer in this hemisphere and my Android does not seem to have keys for brackets or greater-than symbols. Workarounds welcome.
The solution varies by model, but on mine, alt-shift-letter physical key combinations do special characters that aren’t labelled. You can also use the on-screen keyboard, and there are more onscreen keyboards available for download if the one you’re currently using is badly broken.
Uhm...yes? It’s just something I would expect to be integrated into any probability estimates of suspected risks. More here.
Who would be insane enough to experiment with destroying the world?
Check the point that you said is a reasonable one. And I have read a lot without coming across any evidence yet. I do expect an organisation like the SIAI to have detailed references and summaries of its decision procedures and probability estimates transparently available, not hidden beneath thousands of posts and comments. "It's somewhere in there, line 10020035, +/- a million lines...." is not transparency! Especially for an organisation that is concerned with something taking over the universe and asks for your money; an organisation, I'm told, some of whose members get nightmares just from reading about evil AI...
I think you just want a brochure. We keep telling you to read archived articles explaining many of the positions and you only read the comment where we gave the pointers, pretending as if that’s all that’s contained in our answers. It’d be more like him saying, “I have a bunch of good arguments right over there,” and then you ignore the second half of the sentence.
I'm not asking for arguments. I know them. I donate. I'm asking for more now. I'm using the same kind of anti-argumentation that academics would use against your arguments, which I've encountered myself a few times while trying to convince them to take a look at the inscrutable archives of posts and comments that is LW. What do they say? "I skimmed over it, but there were no references besides some sound argumentation, an internal logic." "You make strong claims; mere arguments and conclusions extrapolated from a few premises are insufficient to get what you ask for."
Pardon my bluntness, but I don’t believe you, and that disbelief reflects positively on you. Basically, if you do know the arguments then a not insignificant proportion of your discussion here would amount to mere logical rudeness.
For example if you already understood the arguments for, or basic explanation of why ‘putting all your eggs in one basket’ is often the rational thing to do despite intuitions to the contrary then why on earth would you act like you didn’t?
Oh crap, the SIAI was just a punching bag. Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out to all hundred babies but feed the strongest 10. Otherwise you'd end up with a hundred dead babies, in which case you might as well have eaten the food yourself before wasting it like that. It's obvious, I don't see how someone wouldn't get this.
I used that idiom to illustrate that, given my preferences and current state of evidence, I might as well eat all the food myself rather than waste it on something I don't care to save or that doesn't need to be saved in the first place because I missed the fact that all the babies are puppets and not real.
I asked, are the babies real babies that need food and is the expected utility payoff of feeding them higher than eating the food myself right now?
I’m starting to doubt that anyone actually read my OP...
Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out to all hundred babies but feed the strongest 10. Otherwise you'd end up with a hundred dead babies, in which case you might as well have eaten the food yourself before wasting it like that. It's obvious, I don't see how someone wouldn't get this.
I know this is just a tangent… but that isn’t actually the reason.
I used that idiom to illustrate that, given my preferences and current state of evidence, I might as well eat all the food myself rather than waste it on something I don't care to save or that doesn't need to be saved in the first place because I missed the fact that all the babies are puppets and not real.
Just to be clear, I’m not objecting to this. That’s a reasonable point.
Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW? I’ve missed the reason then. Seriously, I’d love to read up on it now.
As a result, sober calculations suggest that the lifetime risk of dying from an asteroid strike is about the same as the risk of dying in a commercial airplane crash. Yet we spend far less on avoiding the former risk than the latter.
Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW?
Good question. If not, there should be. It is just basic maths when handling expected utilities, but it crops up often enough. Eliezer gave you a partial answer:
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain.
… but unfortunately only asked for a link for the 'scope insensitivity' part, not a link to a 'marginal utility' tutorial. I've had a look and I actually can't find such a reference on LW. A good coverage of the subject can be found in an external paper, Heuristics and biases in charity. Section 1.1.3 Diversification covers the issue well.
You should just be discounting expected utilities by the probability of the claims being true...
That’s another point. As I asked, what are the variables, where do I find the data? How can I calculate this probability based on arguments to be found on LW?
This IS NOT sufficient to scare people to the point of having nightmares and then ask them for most of their money.
I’m not trying to be a nuisance here, but it is the only point I’m making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.
I’m sorry, I perceived your comment to be mainly about decision making regarding charities. Which is completely marginal since the SIAI is the only charity concerned with the risk I’m inquiring about. Is the risk in question even real and does its likelihood justify the consequences and arguments for action?
I inquired about the decision making regarding charities because you claimed that what I stated about egg allocation is not the point being made. But I do not particularly care about that question, as it is secondary.
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.
Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who’s already convinced, preferably of someone within the SIAI?
That is part of what I call transparency and a foundational and reproducible corroboration of one’s first principles.
Leave aside SIAI-specific claims here. The point Eliezer was making was about 'all your eggs in one basket' claims in general. In situations like this (your contribution doesn't drastically change the payoff at the margin, etc.) putting all your eggs in the best basket is the right thing to do.
You can understand that insight completely independently of your position on existential risk mitigation.
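For concreteness, here is a minimal sketch of that allocation rule in Python. The option names, probabilities and payoffs are invented placeholders, not anyone's actual estimates; the point is only the shape of the calculation: discount each option's payoff per dollar by the probability that its underlying claims are true, and if your donation is too small to move the margin, give the whole budget to the single option with the highest discounted value.

```python
# A toy sketch of "discount by probability, then fund the best marginal option".
# The numbers below are invented placeholders, not anyone's actual estimates.
options = {
    "option_A": {"p_claims_true": 0.01, "utility_per_dollar_if_true": 1000.0},
    "option_B": {"p_claims_true": 0.30, "utility_per_dollar_if_true": 10.0},
    "option_C": {"p_claims_true": 0.90, "utility_per_dollar_if_true": 1.0},
}

def discounted_value(opt):
    # Expected utility per dollar = P(claims true) * utility per dollar if true.
    return opt["p_claims_true"] * opt["utility_per_dollar_if_true"]

budget = 500.0  # dollars available to donate

# If the budget is too small to change any option's marginal utility,
# the expected-utility-maximizing allocation puts all of it on the single
# best option rather than spreading it across several.
best = max(options, key=lambda name: discounted_value(options[name]))
allocation = {name: (budget if name == best else 0.0) for name in options}
print(best, allocation)
```

Spreading the budget only becomes optimal once a donation is large enough to push the leading option's marginal utility per dollar below the runner-up's.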
Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive improvement, that might directly support the claims of existential risks posed by AI, is still missing.
...and “FOOM” means way the hell smarter than anything else around...
Questionable. Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.
Not, “ooh, it’s a little Einstein but it doesn’t have any robot hands, how cute”.
Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions), but how is it going to make use of the things it orders?
Optimizing yourself is a special case, but it’s one we’re about to spend a lot of time talking about.
I believe that self-optimization is prone to be very limited. Changing anything substantial might lead Gandhi to swallow the pill that will make him want to hurt people, so to speak.
...humans developed the idea of science, and then applied the idea of science...
A sound argument, but one that gives no justification for extrapolating it to the point where you could apply it to the shaky idea of a superhuman intellect coming up with something better than science and applying it again to come up...
In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it’s often possible to distinguish cognitive algorithms and cognitive content.
All those ideas about the possible advantages of being an entity that can reflect upon itself, to the extent of being able to pinpoint its own shortcomings, are again highly speculative. This could be a disadvantage.
Much of the rest is about the plateau argument: once you've got a firework, you can go to the moon. Well yes, I've been aware of that argument. But that's weak; the idea that there are many hidden mysteries about reality that we have completely missed so far is highly speculative. I think even EY admits that whatever happens, quantum mechanics will be a part of it. Is the AI going to invent FTL travel? I doubt it, and it's already based on the assumption that superhuman intelligence, not just faster intelligence, is possible.
Insights are items of knowledge that tremendously decrease the cost of solving a wide range of problems.
Like the discovery that P ≠ NP? Oh wait, that would be limiting. This argument runs in both directions.
If you go to a sufficiently sophisticated AI—more sophisticated than any that currently exists...
Assumption.
But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.
Nice idea, but recursion does not imply performance improvement.
You can’t draw detailed causal links between the wiring of your neural circuitry, and your performance on real-world problems.
How can he then make any assumptions about the possibility of improving them recursively, given this insight, to an extent that empowers an AI to transcend into superhuman realms?
Well, we do have one well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of natural selection, our alien god.
Did he just attribute intention to natural selection?
Questionable. Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.
What would you accept as evidence?
Would you accept sophisticated machine learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can't work with high-dimensional data?
Would you accept a circuit designed by a genetic algorithm, which doesn’t work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
Would you accept a chess program which could crush any human chess player who ever lived? Kasparov at ELO 2851, Rybka at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now even further beyond Kasparov at his peak as Kasparov was beyond a new grandmaster. And it’s not like Rybka or the other chess AIs will weaken with age.
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
I think it at least possible that much-smarter-than-human intelligence might turn out to be impossible. There exist some problem domains where there appear to be a large number of solutions, but where the quality of the solutions saturates quickly as more and more resources are thrown at them. A toy example is how often records are broken in a continuous 1-D domain, with attempts drawn from a constant probability distribution: the number of records broken goes as the log of the number of attempts. If some of the tasks an AGI must solve are like this, then it might not do much better than humans, not because evolution did a wonderful job of optimizing humans for perfect intelligence, but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
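A quick way to see the logarithmic behaviour of record-breaking is to simulate it; the sketch below is my own illustration of the toy example above, not something from the original comment. It draws i.i.d. attempts from a fixed distribution, counts how often a new record is set, and compares the average against ln(n) + 0.577, the harmonic-number growth rate.

```python
import math
import random

def count_records(n_attempts, rng):
    # Draw i.i.d. attempts from a fixed distribution and count how many
    # times a new best value ("record") is set.
    best = float("-inf")
    records = 0
    for _ in range(n_attempts):
        x = rng.random()
        if x > best:
            best = x
            records += 1
    return records

rng = random.Random(0)
for n in (10, 100, 1_000, 10_000, 100_000):
    trials = 100
    avg = sum(count_records(n, rng) for _ in range(trials)) / trials
    # The expected number of records after n i.i.d. attempts is the
    # harmonic number H_n, which grows like ln(n) + 0.577...
    print(f"n={n:>6}  avg records={avg:5.2f}  ln(n)+0.577={math.log(n) + 0.577:5.2f}")
```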
One (admittedly weak) piece of evidence, a real example of saturation, is an optimizing compiler being used to recompile itself. It is a recursive optimizing system and, if there is a knob to allow more effort to be spent on the optimization, the speed-up from the first pass can be used to allow a bit more effort to be applied to a second pass for the same CPU time. Nonetheless, the results for this specific recursion are not FOOM.
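The saturation point can be illustrated with a toy model of that compiler recursion, using an invented speed-up curve (the numbers are placeholders; only the shape matters): each pass lets the next pass afford more optimization effort in the same CPU time, but if returns to effort saturate, the recursion converges to a fixed point instead of running away.

```python
import math

def speedup_from_effort(effort):
    # Invented toy curve: more optimization effort buys more speed-up,
    # but with diminishing returns that saturate below 2x.
    s_max = 2.0
    return 1.0 + (s_max - 1.0) * (1.0 - math.exp(-effort))

base_effort = 1.0
speedup = 1.0
for generation in range(10):
    # A faster compiler can afford proportionally more optimization
    # effort in the same amount of CPU time...
    effort = base_effort * speedup
    speedup = speedup_from_effort(effort)
    print(f"pass {generation}: speedup {speedup:.4f}")
# ...yet the sequence converges to a fixed point just under 2x
# rather than growing without bound.
```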
The evidence in the other direction is basically existence proofs from the most intelligent people or groups of people that we know of. Something as intelligent as Einstein must be possible, since Einstein existed. Given an AI Einstein working on improving its own intelligence, it isn't clear whether it could make a little progress or a great deal.
but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
This goes for your compilers as well, doesn’t it? There are still major speed-ups available in compilation technology (the closely connected areas of whole-program compilation+partial evaluation+supercompilation), but a compiler is still expected to produce isomorphic code, and that puts hard information-theoretic bounds on output.
Would you accept a circuit designed by a genetic algorithm, which doesn’t work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
“This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems—a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way—yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997).”
The analogy that AGI can be to us as we are to chimps. This is the part that needs the focus.
We could have said in the 1950s that machines beat us at arithmetic by orders of magnitude. Classical AI researchers clearly were deluded by success at easy problems. The problem with winning on easy problems is that it says little about hard ones.
What I see is that in the domain of problems for which human-level performance is difficult to replicate, computers are capable of catching us and likely beating us, but gaining a great distance on us in performance is difficult. After all, a human can still beat the best chess programs with a mere pawn handicap. This may never get to two pawns, ever. Certainly the second pawn is massively harder than the first. It's the nature of the problem space. In terms of runaway AGI control of the planet, we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
BTW, is ELO supposed to have that kind of linear interpretation?
The analogy that AGI can be to us as we are to chimps. This is the part that needs the focus.
Yes, this is the important part. Chimps lag behind humans in 2 distinct ways—they differ in degree, and in kind. Chimps can do a lot of human-things, but very minimally. Painting comes to mind. They do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not in the recursive way that modern linguistics (pace Chomsky) seems to regard as key, not recursive at all. (Kind.)
What can we do with this distinction? How does it apply to my three examples?
After all, a human can still beat the best chess programs with a mere pawn handicap.
Ever is a long time. Would you like to make this a concrete prediction I could put on PredictionBook, perhaps something along the lines of 'no FIDE grandmaster will lose a 2-pawns-odds chess match to a computer by 2050'?
BTW, is ELO supposed to have that kind of linear interpretation?
I’m not an expert on ELO by any means (do we know any LW chess experts?), but reading through http://en.wikipedia.org/wiki/Elo_rating_system#Mathematical_details doesn’t show me any warning signs—ELO point differences are supposed to reflect probabilistic differences in winning, or a ratio, and so the absolute values shouldn’t matter. I think.
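As a sanity check on that reading, the standard Elo expected-score formula depends only on the rating difference, not on the absolute ratings, so a given gap implies the same win probability wherever it sits on the scale. A small sketch using the rating figures mentioned upthread (assumed here purely for illustration):

```python
def elo_expected_score(rating_a, rating_b):
    # Standard Elo expected score for A against B:
    # E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Rating figures mentioned upthread: new grandmaster ~2500,
# Kasparov at his peak ~2851, Rybka ~3265.
print(round(elo_expected_score(2851, 2500), 3))  # Kasparov vs. a new GM
print(round(elo_expected_score(3265, 2851), 3))  # Rybka vs. peak Kasparov
# Only the rating difference enters the formula, so a given gap implies
# the same expected score at any absolute rating level.
```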
we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
This is a possibility (made more plausible if we’re talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it’s greatly complicated by international competition: to the extent that careful limitation and restriction of AI capabilities and access to potential sources of power reduces economic, scientific, and military productivity it will be tough to coordinate. Not to mention that existing economic, political, and legal structures are not very reliably stable: electorates and governing incumbents often find themselves unable to retain power.
BTW, is ELO supposed to have that kind of linear interpretation?
It seems that whether or not it's supposed to, in practice it does. From the just-released "Intrinsic Chess Ratings", which takes Rybka and does exhaustive evaluations (deep enough to be 'relatively omniscient') of many thousands of modern chess games, page 9:
We conclude that there is a smooth relationship between the actual players’ Elo ratings and the intrinsic quality of the move choices as measured by the chess program and the agent fitting. Moreover, the final s-fit values obtained are nearly the same for the corresponding entries of all three time periods. Since a lower s indicates higher skill, we conclude that there has been little or no ‘inflation’ in ratings over time—if anything there has been deflation. This runs counter to conventional wisdom, but is predicted by population models on which rating systems have been based [Gli99].
The results also support a no answer to question 2 [“Were the top players of earlier times as strong as the top players of today?”]. In the 1970’s there were only two players with ratings over 2700, namely Bobby Fischer and Anatoly Karpov, and there were years as late as 1981 when no one had a rating over 2700 (see [Wee00]). In the past decade there have usually been thirty or more players with such ratings. Thus lack of inflation implies that those players are better than all but Fischer and Karpov were. Extrapolated backwards, this would be consistent with the findings of [DHMG07], which however (like some recent competitions to improve on the Elo system) are based only on the results of games, not on intrinsic decision-making.
You are getting much closer than any of the commenters before you to providing some other form of evidence to substantiate one of the primary claims here.
You have to list the primary propositions on which you base further argumentation, from which you draw conclusions, and which you use to come up with probability estimates stating the risks associated with the foregoing premises. You have to list these main principles so that anyone who comes across claims of existential risks and a plea for donations can get an overview. Then you have to provide the references you listed above, if you believe they give credence to the ideas, so that people see that all you say isn't made up but is based on previous work and evidence by people who are not associated with your organisation.
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
No; although I have heard about all of the achievements, I'm not yet able to judge if they provide evidence supporting the possibility of strong superhuman AI, the kind that would pose an existential risk. Although in the case of chess I'm pretty much of the opinion that this is no strong evidence, as it is not sufficiently close to being able to overpower humans to the extent of posing an existential risk when extrapolated into other areas.
It would be good if you could provide links to the mentioned examples. Especially the genetic algorithm (ETA: Here.). It is still questionable however if this could lead to the stated recursive improvements or will shortly hit a limit. To my knowledge genetic algorithms are merely used for optimization, based on previous design spaces and are not able to come up with something unique to the extent of leaving their design space.
Whether sophisticated machine learning algorithms are able to discover valuable insights beyond statistical inferences within higher-dimensional data-sets is a very interesting idea though. As I just read, the 2009 prize of the Netflix contest was given to a team that achieved a 10.05% improvement over the previous algorithm. I'll have to examine further whether this might bear evidence that such a complicated mesh of algorithms could lead to quick self-improvement.
One of the best comments so far, thanks. Although your last sentence was, to my understanding, simply showing that you are reluctant to offer further critique.
I am reluctant because you seem to ask for magical programs when you write things like:
“To my knowledge genetic algorithms are merely used for optimization, based on previous design spaces and are not able to come up with something unique to the extent of leaving their design space.”
I was going to link to AIXI and approximations thereof; full AIXI is as general as an intelligence can be if you accept that there are no uncomputable phenomena, and the approximations are already pretty powerful (from nothing to playing Pac-Man).
But then it occurred to me that anyone invoking a phrase like ‘leaving their design space’ might then just say ‘oh, those designs and models can only model Turing machines, and so they’re stuck in their design space’.
But then it occurred to me that anyone invoking a phrase like ‘leaving their design space’...
I've no idea (formally) of what a 'design space' actually is. This is a tactic I frequently use against strongholds of argumentation that are seemingly based on expertise. I use their own terminology and rearrange it into something that sounds superficially clever. I like to call it a Chinese room approach. Sometimes it turns out that all they were doing was trying to sound smart, and they cannot explain themselves when faced with their own terminology used to inquire about their pretences.
I thank you however for taking the time to actually link to further third party information that will substantiate given arguments for anyone not trusting the whole of LW without it.
I see. Does that actually work for you? (Note that your answer will determine whether I mentally re-categorize you from ‘interested open-minded outsider’ to ‘troll’.)
It works against cults and religion in general. I don’t argue with them about their religion being not even wrong but rather accept their terms and highlight inconsistencies within their own framework by going as far as I can with one of their arguments and by inquiring about certain aspects based on their own terminology until they are unable to consistently answer or explain where I am wrong.
This also works with the anti-GM-food bunch, data protection activists, hippies and many other fringe groups. For example, the data protection bunch concerned with information disclosure on social networks or Google Streetview. Yes, I say, that's bad, burglars could use such services to check out your house! I wonder what evidence there is of an increase in burglary in the countries where Streetview has already been available for many years?
Or I tell the anti-gun lobbyists how I support their cause. It’s really bad if anyone can buy a gun. Can you point me to the strong correlation between gun ownership and firearm homicides? Thanks.
Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions),
Any specific scenario is going to have burdensome details, but that’s what you get if you ask for specific scenarios rather than general pressures, unless one spends a lot of time going through detailed possibilities and vulnerabilities. With respect to the specific example, regular human criminals routinely swindle or earn money anonymously online, and hack into and control millions of computers in botnets. Cloud computing resources can be rented with ill-gotten money.
but how is it going to make use of the things it orders?
In the unlikely event of a powerful human-indifferent AI appearing in the present day, a smartphone held by a human could provide sensors and communication to use humans for manipulators (as computer programs direct the movements of some warehouse workers today). Humans can be paid, blackmailed, deceived (intelligence agencies regularly do these things) to perform some tasks. An AI that leverages initial capabilities could jury-rig a computer-controlled method of coercion [e.g. a cheap robot arm holding a gun, a tampered-with electronic drug-dispensing implant, etc]. And as time goes by and the cumulative probability of advanced AI becomes larger, increasing quantities of robotic vehicles and devices will be available.
Thanks, yes I know about those arguments. One of the reasons I’m actually donating and accept AI to be one existential risk. I’m inquiring about further supporting documents and transparency. More on that here, especially check the particle collider analogy.
With respect to transparency, I agree about a lack of concise, exhaustive, accessible treatments. Reading some of the linked comments about marginal evidence from hypotheses I’m not quite sure what you mean, beyond remembering and multiplying by the probability that particular premises are false. Consider Hanson’s “Economic Growth Given Machine Intelligence”. One might support it with generalizations from past population growth in plants and animals, from data on capital investment and past market behavior and automation, but what would you say would license drawing probabilistic inferences using it?
Note that such methods might not result in the destruction of the world within a week (the guaranteed result of a superhuman non-Friendly AI according to Eliezer.)
The linked bet doesn’t reference “a week,” and the “week” reference in the main linked post is about going from infrahuman to superhuman, not using that intelligence to destroy humanity.
That bet seems underspecified. Does attention to “Friendliness” mean any attention to safety whatsoever, or designing an AI with a utility function such that it’s trustworthy regardless of power levels? Is “superhuman” defined relative to the then-current level of human (or upload, or trustworthy less intelligent AI) capacity with any enhancements (or upload speedups, etc)? What level of ability counts as superhuman? You two should publicly clarify the terms.
A few comments later on the same comment thread someone asked me how much time was necessary, and I said I thought a week was enough, based on Eliezer’s previous statements, and he never contradicted this, so it seems to me that he accepted it by default, since some time limit will be necessary in order for someone to win the bet.
I defined superhuman to mean that everyone will agree that it is more intelligent than any human being existing at that time.
I agree that the question of whether there is attention to Friendliness might be more problematic to determine. But “any attention to safety whatsoever” seems to me to be clearly stretching the idea of Friendliness—for example, someone could pay attention to safety by trying to make sure that the AI was mostly boxed, or whatever, and this wouldn’t satisfy Eliezer’s idea of Friendliness.
Right. And if this scenario happened, there would be a good chance that it would not be able to foom, or at least not within a week. Eliezer’s opinion seems to be that this scenario is extremely unlikely, in other words that the first AI will already be far more intelligent than the human race, and that even if it is running on an immense amount of hardware, it will have no need to acquire more hardware, because it will be able to construct nanotechnology capable of controlling the planet through actions originating on the internet as you suggest. And as you can see, he is very confident that all this will happen within a very short period of time.
Have you tried asking yourself non-rhetorically what an AI could do without MNT? That doesn’t seem to me to be a very great inferential distance at all.
Have you tried asking yourself non-rhetorically what an AI could do without MNT?
I believe that in this case an emulation would be the bigger risk because it would be sufficiently obscure and could pretend to be friendly for a long time while secretly strengthening its power. A purely artificial intelligence would be too alien and therefore would have a hard time acquiring the necessary power to transcend to a superhuman level without someone figuring out what it does, either by its actions or by looking at its code. It would also likely not have the intention to increase its intelligence indefinitely anyway. I just don't see that AGI implies self-improvement beyond learning what it can while staying within the scope of its resources. You'd have to deliberately implement such an intention. It would generally require its creators to solve a lot of problems much more difficult than limiting its scope. That is why I do not see runaway self-improvement as a likely failure mode.
I could imagine all kinds of scenarios indeed. But I also have to assess their likelihood given my epistemic state. And my conclusion is that a purely artificial intelligence wouldn’t and couldn’t do much. I estimate the worst-case scenario to be on par with a local nuclear war.
I simply can’t see where the above beliefs might come from. I’m left assuming that you just don’t mean the same thing by AI as I usually mean. My guess is that you are implicitly thinking of a fairly complicated story but are not spelling that out.
I simply can’t see where the above beliefs might come from. I’m left assuming that you just don’t mean the same thing by AI as I usually mean.
And I can't see where your beliefs might come from. What are you telling potential donors or AGI researchers? That AI is dangerous by definition? Well, what if they have a different definition; what should make them update in favor of your definition? That you have thought about it for more than a decade now? I can perceive serious flaws in any of the replies I have gotten so far in under a minute, and I am a nobody. There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven't thought about. If that kind of intelligence is as likely as other risks, then it doesn't matter what it comes up with anyway, because those other risks will wipe us out just as well and with the same probability.
There are already many people criticizing the SIAI right now, even on LW. Soon, once you are more popular, other people than me will scrutinize everything you ever wrote. And what do you expect them to conclude if even a professional AGI researcher, who has been a member of the SIAI, writes the following:
Every AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence—are people associated with SIAI.
But I have never heard any remotely convincing arguments in favor of this odd, outlier view !!!
BTW the term “self-modifying” is often abused in the SIAI community. Nearly all learning involves some form of self-modification. Distinguishing learning from self-modification in a rigorous formal way is pretty tricky.
Why would I disregard his opinion in favor of yours? Can you present any novel achievements that would make me conclude that you people are actually experts when it comes to intelligence? The LW sequences are well written but do not showcase some deep comprehension of the potential of intelligence. Yudkowsky was able to compile previously available knowledge into a coherent framework of rational conduct. That isn’t sufficient to prove that he has enough expertise on the topic of AI to make me believe him regardless of any antipredictions being made that weaken the expected risks associated with AI. There is also insufficient evidence to conclude that Yudkowsky, or someone within the SIAI, is smart enough to be able to tackle the problem of friendliness mathematically.
If only you would at least let some experts take a look at your work and assess its effectiveness and general potential. But there is no peer review at all. Some prominent people have attended the Singularity Summit. Have you asked them why they do not contribute to the SIAI? Have you for example asked Douglas Hofstadter why he isn’t doing everything he can to mitigate risks from AI? Sure, you got some people to donate a lot of money to the SIAI, but to my knowledge they are far from being experts and contribute to other organisations as well. Congratulations on that, but even cults get rich people to support them. I’ll update on donors once they say why they support you and their arguments are convincing, or once they turn out to be actual experts or people able to showcase certain achievements.
My guess is that you are implicitly thinking of a fairly complicated story but are not spelling that out.
Intelligence is powerful, intelligence doesn’t imply friendliness, therefore intelligence is dangerous. Is that the line of reasoning on the basis of which I am supposed to neglect other risks? If you think so then you are making it more complicated than necessary. You do not need intelligence to invent stuff to kill us if there’s already enough dumb stuff around that is more likely to kill us. And I do not think that it is reasonable to come up with a few weak arguments on how intelligence could be dangerous and conclude that their combined probability beats any good argument against one of the premises or in favor of other risks. The problems are far too diverse; you can’t combine them and proclaim that you are going to solve all of them by simply defining friendliness mathematically. I just don’t see that right now because it is too vague. You could just as well replace friendliness with magic as the solution to the many disjoint problems of intelligence.
Intelligence is also not the solution to all other problems we face. As I argued several times, I just do not see that recursive self-improvement will happen any time soon and cause an intelligence explosion. What evidence is there against a gradual development? As I see it we will have to painstakingly engineer intelligent machines. There won’t be some meta-solution that outputs meta-science to subsequently solve all other problems.
Have you for example asked Douglas Hofstadter why he isn’t doing everything he can to mitigate risks from AI?
Douglas Hofstadter and Daniel Dennett both seem to think these issues are probably still far away.
The reason I have injected myself into that world, unsavory though I find it in many ways, is that I think that it’s a very confusing thing that they’re suggesting. If you read Ray Kurzweil’s books and Hans Moravec’s, what I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.
...
Kelly said to me, “Doug, why did you not talk about the singularity and things like that in your book?” And I said, “Frankly, because it sort of disgusts me, but also because I just don’t want to deal with science-fiction scenarios.” I’m not talking about what’s going to happen someday in the future; I’m not talking about decades or thousands of years in the future. I’m talking about “What is a human being? What is an ‘I’?” This may be an outmoded question to ask 30 years from now. Maybe we’ll all be floating blissfully in cyberspace, there won’t be any human bodies left, maybe everything will be software living in virtual worlds, it may be science-fiction city. Maybe my questions will all be invalid at that point. But I’m not writing for people 30 years from now, I’m writing for people right now. We still have human bodies. We don’t yet have artificial intelligence that is at this level. It doesn’t seem on the horizon.
And I do not think that it is reasonable to come up with a few weak arguments on how intelligence could be dangerous and conclude that their combined probability beats any good argument against one of the premises or in favor of other risks.
I’m not sure who is doing that. Being hit by an asteroid, nuclear war and biological war are other possible potentially major setbacks. Being eaten by machines should also have some probability assigned to it—though it seems pretty challenging to know how to do that. It’s a bit of an unknown unknown. Anyway, this material probably all deserves some funding.
There is also insufficient evidence to conclude that Yudkowsky, or someone within the SIAI, is smart enough to be able to tackle the problem of friendliness mathematically.
The short-term goal seems more modest—prove that self-improving agents can have stable goal structures.
If true, that would be fascinating—and important. I don’t know what the chances of success are, but Yudkowsky’s pitch is along the lines of: look this stuff is pretty important, and we are spending less on it than we do on testing lipstick.
That’s a pitch which it is hard to argue with, IMO. Machine intelligence research does seem important and currently-underfunded. Yudkowsky is—IMHO—a pretty smart fellow. If he will work on the problem for $80K a year (or whatever) it seems as though there is a reasonable case for letting him get on with it.
I’m not sure you’re looking at the probability of other extinction risks with the proper weighting. The timescales are vastly different. Supervolcanoes: one every 350,000 years. Major asteroid strikes: one every 700,000 years. Gamma ray bursts: hundreds of millions of years, etc. There’s a reason the word ‘astronomical’ means huge beyond imagining.
Contrast that with the current human-caused mass extinction event: 10,000 years and accelerating. Humans operate on obscenely fast timescales compared to nature. Just with nukes we’re able to take out huge chunks of Earth’s life forms in 24 hours, most or all of it if we detonated everything we have in an intelligent, strategic campaign to end life. And that’s today, rather than tomorrow.
Regarding your professional AGI researcher and recursive self-improvement, I don’t know, I’m not an AGI researcher, but it seemed to me that a prerequisite to successful AGI is an understanding and algorithmic implementation of intelligence. Therefore, any AGI will know what intelligence is (since we do), and be able to modify it. Once you’ve got a starting point, any algorithm that can be called ‘intelligent’ at all, you’ve got a huge leap toward mathematical improvement. Algorithms have been getting faster at a higher rate than Moore’s Law has been speeding up computer chips.
I’m not sure you’re looking at the probability of other extinction risks with the proper weighting.
That might be true. But most of them have one solution that demands research in many areas: space colonization. It is true that intelligent systems, if achievable in due time, play a significant role here, but not an exceptional role if you disregard the possibility of an intelligence explosion, of which I am very skeptical. Further, it appears to me that donating to the SIAI would rather impede research on such systems, given their position that such systems themselves pose an existential risk. Therefore, at the moment, the possibility of risks from AI is partially outweighed: the SIAI may deserve support, yet it doesn’t hold an exceptional position that would necessarily make it the one charity with the highest expected impact per donation. I am unable to pinpoint another charity at the moment, e.g. space elevator projects, because I haven’t looked into them. Nor do I know of any comparative analysis; although you and many other people claim to have calculated it, nobody has ever published their efforts. As you know, I am unable to do such an analysis myself at this point as I am still learning the math. But I am eager to get the best information by means of feedback anyhow. This is not intended as an excuse, of course.
Once you’ve got a starting point, any algorithm that can be called ‘intelligent’ at all, you’ve got a huge leap toward mathematical improvement. Algorithms have been getting faster at a higher rate than Moore’s Law has been speeding up computer chips.
That would surely be a very good argument if I were able to judge it. But can intelligence be captured by a discrete algorithm, or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution? Also, can algorithms that could be employed in real-world scenarios be sped up enough to warrant superhuman power? Take photosynthesis: could that particular algorithm be improved considerably, to an extent that it would be vastly better than the evolutionary one? Further, would such improvements be accomplished fast enough to outpace human progress or the adaptation of the given results? My problem is that I do not believe that intelligence is fathomable as a solution that can be applied to itself effectively. I see a fundamental dependency on unintelligent processes. Intelligence merely recapitulates prior discoveries, altering what is already known by means of natural methods. If ‘intelligence’ is shorthand for ‘problem-solving’ then it is also the solution, which would mean that there was no problem to be solved. This can’t be true; we still have to solve problems, and we are only able to do so more effectively when we are dealing with similar problems that can be subjected to known and merely altered solutions. In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, ‘intelligence’ has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.
Nonetheless I believe your reply was very helpful as an impulse to look at it from a different perspective. Although I might not be able to judge it in detail at this point I’ll have to incorporate it.
That would surely be a very good argument if I were able to judge it. But can intelligence be captured by a discrete algorithm, or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution?
This seems backwards—if intelligence is modular, that makes it more likely to be subject to overall improvements, since we can upgrade the modules one at a time. I’d also like to point out that we currently have two meta-algorithms, bagging and boosting, which can improve the performance of any other machine learning algorithm at the cost of using more CPU time.
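To make the bagging/boosting point concrete, here is a minimal sketch using scikit-learn (my assumption; the toy dataset and hyperparameters are arbitrary illustrations, not anything the commenter specified). It wraps a single decision tree in both meta-algorithms and compares cross-validated accuracy; whatever gain appears is paid for with the CPU time of training a hundred trees instead of one.

    # Minimal sketch: bagging and boosting as meta-algorithms wrapping a weak learner.
    # Assumes scikit-learn is installed; dataset and settings are arbitrary.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    models = {
        "single tree": DecisionTreeClassifier(random_state=0),
        # Bagging: many trees on bootstrap resamples, majority vote (reduces variance).
        "bagged trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
        # Boosting: shallow trees trained sequentially, reweighting past mistakes (reduces bias).
        "boosted stumps": AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=100, random_state=0),
    }

    for name, clf in models.items():
        print(name, cross_val_score(clf, X, y, cv=5).mean())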
It seems to me that, if we reach a point where we can’t improve an intelligence any further, it won’t be because it’s fundamentally impossible to improve, but because we’ve hit diminishing returns. And there’s really no way to know in advance where the point of diminishing returns will be. Maybe there’s one breakthrough point, after which it’s easy until you get to the intelligence of an average human, then it’s hard again. Maybe it doesn’t become difficult until after the AI’s smart enough to remake the world. Maybe the improvement is gradual the whole way up.
But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, ‘intelligence’ has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.
In a sense, all thoughts are just the same words and symbols rearranged in different ways. But that is not the type of newness that matters. New software algorithms, concepts, frameworks, and programming languages are created all the time. And one new algorithm might be enough to birth an artificial general intelligence.
But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
The AI will be much bigger than a virus. I assume this will make propagation much harder.
And one new algorithm might be enough to birth an artificial general intelligence.
Anything could be possible—though the last 60 years of the machine intelligence field are far more evocative of the “blood-out-of-a-stone” model of progress.
If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
Smart human programmers can make dark nets too. Relatively few of them want to trash their own reputations and appear in the cross-hairs of the world’s security services and law-enforcement agencies, though.
Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (ie, for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.
Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it’s because they try to sell use of them for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.
Maybe the state of computer security will be better in 20 years, and this won’t be as much of a risk anymore. I certainly hope so. But we can’t count on it.
Mafia superintelligence, spyware superintelligence—it’s all the forces of evil. The forces of good are much bigger, more powerful and better funded.
Sure, we should continue to be vigilant about the forces of evil—but surely we should also recognise that their chances of success are pretty slender—while still keeping up the pressure on them, of course.
You seem to be seriously misinformed about the present state of computer security. The resources on the side of good are vastly insufficient because offense is inherently easier than defense.
Your unfounded supposition seems pretty obnoxious—and you aren’t even right :-(
You can’t really say something is “vastly insufficient”—unless you have an intended purpose in mind—as a guide to what would qualify as being sufficient.
There’s a huge population of desktop and office computers doing useful work in the world—we evidently have computer security enough to support that.
Perhaps you are presuming some other criteria. However, projecting that presumption on to me—and then proclaiming that I am misinformed—seems out of order to me.
You can’t really say something is “vastly insufficient” unless you have an intended purpose in mind. There’s a huge population of desktop and office computers doing useful work in the world—we have computer security enough to support that.
The purpose I had in mind (stated directly in that post’s grandparent, which you replied to) was to stop an artificial general intelligence from stealing vast computational resources. Since exploits in major software packages are still commonly discovered, including fairly frequent 0-day exploits which anyone can get for free just by monitoring a few mailing lists, the computer security we have is quite obviously not sufficient for that purpose. Not only that, humans do in fact steal vast computational resources pretty frequently. The fact that no one has tried to or wants to stop people from getting work done on their office computers is completely irrelevant.
You sound bullish—when IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed”—when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed”—when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I’m sorry if I offended you. But I stand by my original statement, because it was true.
I am not sure which statement you stand by. The one about me being “seriously misinformed” about computer security? Let’s not go back to that—pulease!
The “adjusted” one—about the resources on the side of good being vastly insufficient to prevent a nasty artificial general intelligence from stealing vast computational resources? I think that is much too speculative for a true/false claim to be made about it.
The case against it is basically the case for good over evil. In the future, it seems reasonable that there will be much more ubiquitous government surveillance. Crimes will be trickier to pull off. Criminals will have more powerful weapons—but the government will know what colour socks they are wearing. Similarly, medicine will be better—and the life of pathogens will become harder. Positive forces look set to win, or at least dominate. Matt Ridley makes a similar case in his recent “The Rational Optimist”.
Is there a correspondingly convincing case that the forces of evil will win out—and that the mafia machine intelligence—or the spyware-maker’s machine intelligence—will come out on top? That seems about as far-out to me as the SIAI contention that a bug is likely to take over the world. It seems to me that you have to seriously misunderstand evolution’s drive to build large-scale cooperative systems to entertain such ideas for very long.
I don’t have much inclination to think about my attitude towards Crocker’s Rules just now—sorry. My initial impression is not favourable, though. Maybe it would work with infrastructure—or on a community level. Otherwise the overhead of tracking people’s “Crocker status” seems considerable. You can take that as a “no”.
I believe your reply was very helpful as an impulse to look at it from a different perspective. Although I might not be able to judge it in detail at this point I’ll have to incorporate it.
Thank you for continuing to engage my point of view, and offering your own.
I do not believe that intelligence is fathomable as a solution that can [be] applied to itself effectively.
That’s an interesting hypothesis which easily fits into my estimated 90+ percent bucket of failure modes. I’ve got all kinds of such events in there, including things such as, there’s no way to understand intelligence, there’s no way to implement intelligence in computers, friendliness isn’t meaningful, CEV is impossible, they don’t have the right team to achieve it, hardware will never be fast enough, powerful corporations or governments will get there first, etc. My favorite is: no matter whether it’s possible or not, we won’t get there in time; basically, that it will take too long to be useful. I don’t believe any of them, but I do think they have solid probabilities which add up to a great amount of difficulty.
But the future isn’t set, they’re just probabilities, and we can change them. I think we need to explore this as much as possible, to see what the real math looks like, to see how long it takes, to see how hard it really is. Because the payoffs or results of failure are in that same realm of ‘astronomical’.
There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven’t thought about.
To my knowledge, SIAI does not actually endorse neglecting all potential x-risks besides UFAI. (Analysis might recommend discounting the importance of fighting them head-on, but that analysis should still be done when resources are available.)
Intelligence is also not the solution to all other problems we face.
Not all of them—most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones—and so on. It probably won’t fix the speed of light limit, though.
Not all of them—most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones—and so on. It probably won’t fix the speed of light limit, though.
What makes you reach this conclusion? How can you think any of these problems can be solved by intelligence when none of them have been solved? I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don’t see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don’t see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
Violence has been declining on (pretty much) every timescale: Steven Pinker: Myth of Violence. I think one could argue that this is because of the greater collective intelligence of the human race.
I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don’t see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
War won’t be solved by making everyone smarter, but it will be solved if a sufficiently powerful friendly AI takes over, as a singleton, because it would be powerful enough to stop everyone else from using force.
Yes, that makes sense, but in context I don’t think that’s what was meant, since Tim is one of the people here who is more skeptical of that sort of result.
How can you think any of these problems can be solved by intelligence when none of them have been solved?
War has already been solved to some extent by intelligence (negotiations and diplomacy significantly decreased instances of war), hunger has been solved in large chunks of the world by intelligence, energy limits have been solved several times by intelligence, resource shortages ditto, intelligence has made a good first attempt at space travel (the moon is quite far away), and intelligence has made huge bounds towards solving the problem of loss of loved ones (vaccination, medical intervention, surgery, lifespans in the high 70s, etc).
Many wars are due to ideological priorities.
This is a constraint satisfaction problem (give as many ideologies as much of what they want as possible). Intelligence solves those problems.
I have my doubts about war, although I don’t think most wars really come down to conflicts of terminal values. I’d hope not, anyway.
However as for the rest, if they’re solvable at all, intelligence ought to be able to solve them. Solvable means there exists a way to solve them. Intelligence is to a large degree simply “finding ways to get what you want”.
Do you think energy limits really couldn’t be solved by simply producing, through thought, working designs for safe and efficient fusion power plants?
ETA: ah, perhaps replace “intelligence” with “sufficient intelligence”. We haven’t solved all these problems already in part because we’re not really that smart. I think fusion power plants are theoretically possible, and at our current rate of progress we should reach that goal eventually, but if we were smarter we should obviously achieve it faster.
As various people have said, the original context was not making everybody more intelligent and thereby changing their inclinations, but rather creating an arbitrarily powerful superintelligence that makes their inclinations irrelevant. (The presumption here is typically that we know which current human inclinations such a superintelligence would endorse and which ones it would reject.)
But I’m interested in the context you imply (of humans becoming more intelligent).
My $0.02: I think almost all people who value war do so instrumentally. That is, I expect that most warmongers (whether ideologues or not) want to achieve some goal (spread their ideology, or amass personal power, or whatever) and they believe starting a war is the most effective way for them to do that. If they thought something else was more effective, they would do something else.
I also expect that intelligence is useful for identifying effective strategies to achieve a goal. (This comes pretty close to being true-by-definition.)
So I would only expect smarter ideologues (or anyone else) to remain warmongers if starting a war really was the most effective way to achieve their goals. And if that’s true, everyone else gets to decide whether we’d rather have wars, or modify the system so that the ideologues have more effective options than starting wars (either by making other options more effective, or by making warmongering less effective, whichever approach is more efficient).
So, yes, if we choose to incentivize wars, then we’ll keep getting wars. It follows from this scenario that war is the least important problem we face, so we should be OK with that.
Conversely, if it turns out that war really is an important problem to solve, then I’d expect fewer wars.
And what do you expect them to conclude if even a professional AGI researcher, who has been a member of the SIAI, writes the following:
Every AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence—are people associated with SIAI.
Is that really the idea? My impression is that the SIAI think machines without morals are dangerous, and that until there is more machine morality research, it would be “nice” if progress in machine intelligence was globally slowed down. If you believe that, then any progress—including constructing machine toddlers—could easily seem rather negative.
I just do not see that recursive self-improvement will happen any time soon and cause an intelligence explosion. What evidence is there against a gradual development?
Darwinian gradualism doesn’t forbid evolution taking place rapidly. I can see evolutionary progress accelerating over the course of my own lifespan—which is pretty incredible considering that evolution usually happens on a scale of millions of years. More humans in parallel can do more science and engineering. The better their living standard, the more they can do. Then there are the machines...
Maybe some of the pressures causing the speed-up will slack off—but if they don’t then humanity may well face a bare-knuckle ride into inner-space—and fairly soon.
Most toddlers can’t program, but many teenagers can. The toddler is a step towards the teenager—and teenagers are notorious for being difficult to manage.
I just don’t see that AGI implies self-improvement beyond learning what it can while staying in scope of its resources. You’d have to deliberately implement such an intention.
It suggests that open-ended goal-directed systems will tend to improve themselves—and to grab resources to help them fulfill their goals—even if their goals are superficially rather innocent-looking and make no mention of any such thing.
The paper starts out like this:
AIs will want to self-improve—One kind of action a system can take is to alter either its own software or its own physical structure. Some of these changes would be very damaging to the system and cause it to no longer meet its goals. But some changes would enable it to reach its goals more effectively over its entire future. Because they last forever, these kinds of self-changes can provide huge benefits to a system. Systems will therefore be highly motivated to discover them and to make them happen. If they do not have good models of themselves, they will be strongly motivated to create them though learning and study. Thus almost all AIs will have drives towards both greater self-knowledge and self-improvement.
It would also likely not have the intention to increase its intelligence infinitely anyway. I just don’t see that AGI implies self-improvement beyond learning what it can while staying in scope of its resources. You’d have to deliberately implement such an intention.
Well, some older posts had a guy praising “goal system zero”, which meant a plan to program an AI with the minimum goals it needs to function as a ‘rational’ optimization process and no more. I’ll quote his list directly:
(1) Increasing the security and the robustness of the goal-implementing process. This will probably entail the creation of machines which leave Earth at a large fraction of the speed of light in all directions and the creation of the ability to perform vast computations.
(2) Refining the model of reality available to the goal-implementing process. Physics and cosmology are the two disciplines most essential to our current best model of reality. Let us call this activity “physical research”.
(End of list.)
This seems plausible to me as a set of necessary conditions. It also logically implies the intention to convert all matter the AI doesn’t lay aside for other purposes (of which it has none, here) into computronium and research equipment. Unless humans for some reason make incredibly good research equipment, the zero AI would thus plan to kill us all. This would also imply some level of emulation as an initial instrumental goal. Note that sub-goal (1) implies a desire not to let instrumental goals like simulated empathy get in the way of our demise.
I believe that in this case an emulation would be the bigger risk because it would be sufficiently obscure and could pretend to be friendly for a long time while secretly strengthening its power.
Perhaps, though if we can construct such a thing in the first place we may be able to deep-scan its brain and read its thoughts pretty well—or at least see if it is lying to us and being deceptive.
IMO, the main problem there is with making such a thing in the first place before we have engineered intelligence. Brain emulations won’t come first—even though some people seem to think they will.
This was a short critique of one of the links given. The first one I skimmed over. I wasn’t impressed yet, at least not to the extent of having nightmares when someone tells me about bad AIs.
I like how Nick Bostrom put it re: probabilities and interesting future phenomena:
I see philosophy and science as overlapping parts of a continuum. Many of my interests lie in the intersection. I tend to think in terms of probability distributions rather than dichotomous epistemic categories. I guess that in the far future the human condition will have changed profoundly, for better or worse. I think there is a non-trivial chance that this “far” future will be reached in this century. Regarding many big picture questions, I think there is a real possibility that our views are very wrong. Improving the ways in which we reason, act, and prioritize under this uncertainty would have wide relevance to many of our biggest challenges.
I’m currently preparing for the Summit so I’m not going to hunt down and find links. Those of you who claimed they wanted to see me do this should hunt down the links and reply with a list of them.
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don’t emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool, the main thing is the variance in the tiny fraction of their income people donate to charity in the first place and so the amount of warm glow people generate for themselves is important; but when they talk about “putting all eggs in one basket” as an abstract argument we will generally point out that this is, in fact, the diametrically wrong direction in which abstract argument should be pushing.
Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)
Read Eric Drexler’s Nanosystems. (Someone find an introduction by Foresight and link to it, that sort of thing is their job.) Also the term you want is not “grey goo”, but never mind.
Exponentials are Kurzweil’s thing. They aren’t dangerous. See the Yudkowsky-Hanson Foom Debate.
Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Things you spend on charitable efforts that just make you feel good should be considered selfish. If you are entirely selfish but you can think past a hyperbolic discount rate then it’s still possible you can get more hedons per dollar by donating to existential risk projects.
Your difficulties in judgment should be factored into a probability estimate. Your sense of aversion to ambiguity may interfere with warm glows, but we can demonstrate preference reversals and inconsistent behaviors that result from ambiguity aversion which doesn’t cash out as a probability estimate and factor straight into expected utility.
Michael Vassar is leading. I’m writing a book. When I’m done writing the book I plan to learn math for a year. When I’m done with that I’ll swap back to FAI research hopefully forever. I’m “leading” with respect to questions like “What is the form of the AI’s goal system?” but not questions like “Do we hire this guy?”
Someone link to relevant introductions of ambiguity aversion as a cognitive bias and do the detailed explanation on the marginal utility thing.
Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can’t be viewed as expected utility maximization?
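One standard illustration of ambiguity aversion failing to cash out as a probability estimate is the Ellsberg urn; I offer it here only as a sketch, not as the specific demonstration being requested. An urn holds 30 red balls and 60 balls that are black or yellow in an unknown proportion. People typically prefer betting on red over black, yet also prefer betting on black-or-yellow over red-or-yellow, and no single probability for black makes both choices expected-utility maximizing:

    # Ellsberg urn sketch: 30 red balls, 60 black-or-yellow balls in unknown proportion.
    # Typical pattern: prefer "red" to "black", but prefer "black or yellow" to "red or yellow".
    # The loop checks whether any P(black) rationalizes both preferences under expected utility.
    def consistent_probabilities():
        p_red = 30 / 90
        ok = []
        for blacks in range(0, 61):          # 0..60 black balls among the 60 unknown ones
            p_black = blacks / 90
            p_yellow = 60 / 90 - p_black
            prefers_red_to_black = p_red > p_black
            prefers_by_to_ry = (p_black + p_yellow) > (p_red + p_yellow)
            if prefers_red_to_black and prefers_by_to_ry:
                ok.append(p_black)
        return ok

    print(consistent_probabilities())        # prints [] : no probability fits both choices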
Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don’t look at just one side and think about how much you doubt it and can’t guess. Look at both of them. Also, read the FOOM debate.
Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you’re looking for supporting arguments on FOOM they’re in the referenced debate.
Nobody’s claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)
It sounds like you haven’t done enough reading in key places to expect to be able to judge the overall credence out of your own estimates.
You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There’s no prestige payoff for them in it, so why would they?
You have a sense of inferential distance. That’s not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.
Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.
Actually, now that I read this paragraph, it sounds like you think that “exponential”, “evolving” AI is an unsupported premise, rather than “AI go FOOM” being the conclusion of a lot of other disjunctive lines of reasoning. That explains a lot about the tone of this post. And if you’re calling it “exponential” or “evolving”, which are both things the reasoning would specifically deny (it’s supposed to be faster-than-exponential and have nothing to do with natural selection), then you probably haven’t read the supporting arguments. Read the FOOM debate.
After reading enough sequences you’ll pick up enough of a general sense of what it means to treat a thesis analytically, analyze it modularly, and regard every detail of a thesis as burdensome, that you’ll understand people here would mention Bostrom or Hanson instead. The sort of thinking where you take things apart into pieces and analyze each piece is very rare, and anyone who doesn’t do it isn’t treated by us as a commensurable voice with those who do. Also, someone link an explanation of pluralistic ignorance and bystander apathy.
An argument which makes sense emotionally (ambiguity aversion, someone link to hyperbolic discounting, link to scope insensitivity for the concept of warm glow) but not analytically (the expected utility intervals are huge, research often has long lead times).
Good reasoning is very rare, and it only takes a single mistake to derail. “Teach but not use” is extremely common. You might as well ask “Why aren’t there other sites with the same sort of content as LW?” Reading enough, and either you’ll pick up a visceral sense of the quality of reasoning being higher than anything you’ve ever seen before, or you’ll be able to follow the object-level arguments well enough that you don’t worry about other sources casually contradicting them based on shallower examinations, or, well, you won’t.
Start out with a recurring Paypal donation that doesn’t hurt, let it fade into the background, consider doing more after the first stream no longer takes a psychic effort, don’t try to make any commitment now or think about it now in order to avoid straining your willpower.
I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.
Quantum Mechanics Sequence
Pluralistic Ignorance
Bystander Apathy
Scope Insensitivity
No bystander apathy here!
The relevant fallacy in ‘Aristotelian’ logic is probably false dilemma, though there are a few others in the neighborhood.
Probably black and white thinking.
I haven’t done the work to understand MWI yet, but if this FAQ is accurate, almost nobody likes the Copenhagen interpretation (observers are SPECIAL) and a supermajority of “cosmologists and quantum field theorists” think MWI is true.
Since MWI seems to have no practical impact on my decision making, this is good enough for me. Also, Feynman likes it :)
Thanks for taking the time to give a direct answer. I enjoyed reading this, and these replies will likely serve as useful comments to point to when people ask similar questions in the future.
Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who’s already convinced, preferably someone within the SIAI?
That is part of what I call transparency and a foundational and reproducible corroboration of one’s first principles.
Awesome, I never came across this until now. It’s not widely mentioned? Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive improvement, that might directly support the claims of existential risks posed by AI, is still missing. All this might be featured in the debate, hopefully with reference to substantial third-party research papers, I don’t know yet.
The whole point of the grey goo example was to exemplify the speed and sophistication of the nanotechnology that would have to be around to either allow an AI to be built in the first place or to be of considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is at all possible without advanced nanotechnology.
This is an open question and I’m inquiring about how exactly the uncertainties regarding these problems are accounted for in your probability estimations of the dangers posed by AI.
What I was inquiring about is the likelihood of slow versus fast development of AI. That is, how fast after we get AGI will we see the rise of superhuman AI? The means of development by which a quick transcendence might happen are incidental to the meaning of my question.
Where are your probability estimates that account for these uncertainties? Where are your variables and references that allow you to make any kind of estimate to balance the risks of a hard rapture against a somewhat controllable development?
You misinterpreted my question. What I meant by asking if it is even worth the effort is, as exemplified in my link, the question of why to choose the future over the present. That is: “What do we actually do all day, if things turn out well?,” “How much fun is there in the universe?,” “Will we ever run out of fun?”.
When I said that I already cannot follow the chain of reasoning depicted on this site, I didn’t mean to say that I was unable to because of intelligence or education. I believe I am intelligent enough and am trying to close the education gap. What I meant is that the chain of reasoning is not transparent.
Take the case of evolution; you are more likely to be able to follow the chain of subsequent conclusions. In the case of evolution the evidence isn’t far away; it’s not buried beneath 14 years of ideas based on some hypothesis. In the case of the SIAI it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.
What if someone came along making coherent arguments about some existential risk about how some sort of particle collider might destroy the universe? I would ask what the experts think who are not associated with the person who makes the claims. What would you think if he simply said, “do you have better data than me”? Or, “I have a bunch of good arguments”?
I’m not sure what you are trying to say here. What I said was simply that if you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I’ll ask you how you came up with that estimate. I’ll ask you to provide more than a consistent internal logic: some evidence-based prior.
If your antiprediction is not as informed as the original prediction, how does it not merely reduce the original prediction but actually overthrow it, to the extent on which the SIAI is basing its risk estimates?
Um… yes? Superhuman is a low bar and, more importantly, a completely arbitrary bar.
Evidence based? By which you seem to mean ‘some sort of experiment’? Who would be insane enough to experiment with destroying the world? This situation is exactly where you must understand that evidence is not limited to ‘reference to historical experimental outcomes’. You actually will need to look at ‘consistent internal logic’… just make sure the consistent internal logic is well grounded on known physics.
And that, well, that is actually a reasonable point. You have been given some links (regarding human behavior) that are a good answer to the question, but it is nevertheless non-trivial. Unfortunately now you are actually going to have to do the work and read them.
Is it? That smarter(faster)-than-human intelligence is possible is well grounded on known physics? If that is the case, how does it follow that intelligence can be applied to itself effectively, to the extent that one could realistically talk about “explosive” recursive self-improvement?
Some still seem sceptical—and you probably also need some math, compsci and philosophy to best understand the case for superhuman intelligence being possible.
Not only is there evidence that smarter than human intelligence is possible, it is something that should be trivial given a vaguely sane reductionist model. Moreover, you specifically have been given evidence on previous occasions when you have asked similar questions.
What you have not been given and what are not available are empirical observations of smarter than human intelligences existing now. That is evidence to which you would not be entitled.
Please provide a link to this effect? (Going off topic, I would suggest that a “show all threads with one or more comments by users X, Y and Z” or “show conversations between users X and Y” feature on LW might be useful.)
(First reply below)
Please provide such a link. (Going off-topic, I additionally suggest that a “show all conversations between user X and user Y” feature on Less Wrong might be useful.)
It is currently not possible for me to either link or quote. I do not own a computer in this hemisphere and my Android does not seem to have keys for brackets or greater-than symbols. Workarounds welcome.
The solution varies by model, but on mine, alt-shift-letter physical key combinations do special characters that aren’t labelled. You can also use the on-screen keyboard, and there are more onscreen keyboards available for download if the one you’re currently using is badly broken.
SwiftKey X beta. Brilliant!
OK, can I have my quote(s) now? It might just be hidden somewhere in the comments to this very article.
Can you copy and paste characters?
Uhm...yes? It’s just something I would expect to be integrated into any probability estimates of suspected risks. More here.
Check the point that you said is a reasonable one. And I have read a lot without coming across any evidence yet. I do expect an organisation like the SIAI to have detailed references and summaries of their decision procedures and probability estimates transparently available, not hidden beneath thousands of posts and comments. “It’s somewhere in there, line 10020035, +/- a million lines....” is not transparency! Especially not for an organisation that is concerned with something taking over the universe and that asks for your money. An organisation, I’m told, of which some members get nightmares just from reading about evil AI...
I think you just want a brochure. We keep telling you to read archived articles explaining many of the positions, and you only read the comment where we gave the pointers, pretending that’s all that’s contained in our answers. It’d be more like him saying, “I have a bunch of good arguments right over there,” and then you ignoring the second half of the sentence.
I’m not asking for arguments. I know them. I donate. I’m asking for more now. I’m using the same kind of anti-argumentation that academics would use against your arguments, which I’ve encountered myself a few times while trying to convince them to take a look at the inscrutable archives of posts and comments that is LW. What do they say? “I skimmed over it, but there were no references besides some sound argumentation, an internal logic.”, “You make strong claims; mere arguments and conclusions extrapolated from a few premises are insufficient to get what you ask for.”
Pardon my bluntness, but I don’t believe you, and that disbelief reflects positively on you. Basically, if you do know the arguments then a not insignificant proportion of your discussion here would amount to mere logical rudeness.
For example if you already understood the arguments for, or basic explanation of why ‘putting all your eggs in one basket’ is often the rational thing to do despite intuitions to the contrary then why on earth would you act like you didn’t?
Oh crap, the SIAI was just a punching bag. Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out to all hundred babies but feed the strongest 10. Otherwise you’d end up with a hundred dead babies, in which case you could as well have eaten the food yourself before wasting it like that. It’s obvious; I don’t see how someone wouldn’t get this.
I used that idiom to illustrate that, given my preferences and current state of evidence, I could as well eat all the food myself rather than waste it on something I don’t care to save, or that doesn’t need to be saved in the first place because I missed the fact that all the babies are puppets and not real.
I asked, are the babies real babies that need food and is the expected utility payoff of feeding them higher than eating the food myself right now?
I’m starting to doubt that anyone actually read my OP...
I know this is just a tangent… but that isn’t actually the reason.
Just to be clear, I’m not objecting to this. That’s a reasonable point.
Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW? I’ve missed the reason then. Seriously, I’d love to read up on it now.
Here is an example of what I want:
Good question. If not there should be. It is just basic maths when handling expected utilities but it crops up often enough. Eliezer gave you a partial answer:
… but unfortunately only asked for a link for the ‘scope insensitivity’ part, not a link to a ‘marginal utility’ tutorial. I’ve had a look and I actually can’t find such a reference on LW. A good coverage of the subject can be found in an external paper, Heuristics and biases in charity. Section 1.1.3 Diversification covers the issue well.
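For what it is worth, the arithmetic behind that diversification point fits in a few lines. This is only a toy sketch with made-up numbers (two hypothetical charities with constant expected utility per dollar); it is not anyone’s actual estimate. With non-diminishing marginal utility, every split of the budget does worse than giving everything to the better charity:

    # Toy sketch with made-up numbers: constant expected utility per dollar.
    BUDGET = 1000.0
    EU_PER_DOLLAR = {"charity_a": 0.012, "charity_b": 0.010}   # hypothetical values

    def total_eu(dollars_to_a):
        """Expected utility from splitting BUDGET between the two charities."""
        dollars_to_b = BUDGET - dollars_to_a
        return (dollars_to_a * EU_PER_DOLLAR["charity_a"]
                + dollars_to_b * EU_PER_DOLLAR["charity_b"])

    for split in (0.0, 250.0, 500.0, 750.0, 1000.0):
        print(f"give {split:6.0f} to A: expected utility = {total_eu(split):.2f}")

    # The maximum sits at the corner (everything to A), and this holds whenever
    # marginal expected utility per dollar does not diminish over the amounts involved.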
That’s another point. As I asked, what are the variables, where do I find the data? How can I calculate this probability based on arguments to be found on LW?
This is NOT sufficient to scare people to the point of having nightmares and then ask them for most of their money.
I’m not trying to be a nuisance here, but it is the only point I’m making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.
I’m sorry, I perceived your comment to be mainly about decision making regarding charities. Which is completely marginal since the SIAI is the only charity concerned with the risk I’m inquiring about. Is the risk in question even real and does its likelihood justify the consequences and arguments for action?
I inquired about the decision making regarding charities because you claimed that what I stated about egg allocation is not the point being made. But I do not particularly care about that question, as it is secondary.
Leave aside SIAI-specific claims here. The point Eliezer was making was about ‘all your eggs in one basket’ claims in general. In situations like this (your contribution doesn’t drastically change the payoff at the margin, etc.), putting all your eggs in the best basket is the right thing to do.
You can understand that insight completely independently of your position on existential risk mitigation.
Er, there’s a post by that title.
Questionable. Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.
Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions), but how is it going to make use of the things it orders?
I believe that self-optimization is prone to be very limited. Changing anything substantial might lead Gandhi to swallow the pill that will make him want to hurt people, so to say.
Sound argumentation that gives no justification to extrapolate it to an extent that you could apply it to the shaky idea of a superhuman intellect coming up with something better than science and applying it again to come up...
All those ideas about the possible advantages of being an entity that can reflect upon itself, to the extent of being able to pinpoint its own shortcomings, are again highly speculative. This could be a disadvantage.
Much of the rest is about the plateau argument: once you’ve got a firework you can go to the moon. Well yes, I’ve been aware of that argument. But it’s weak; the claim that there are many hidden mysteries about reality that we have completely missed so far is highly speculative. I think even EY admits that whatever happens, quantum mechanics will be a part of it. Is the AI going to invent FTL travel? I doubt it, and it’s already based on the assumption that superhuman intelligence, not just faster intelligence, is possible.
Like the discovery that P ≠ NP? Oh wait, that would be limiting. This argument runs in both directions.
Assumption.
Nice idea, but recursion does not imply performance improvement.
How can he then make any assumptions about the possibility of improving them recursively, given this insight, to an extent that would empower an AI to transcend into superhuman realms?
Did he just attribute intention to natural selection?
What would you accept as evidence?
Would you accept sophisticated machine learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can’t work with such high-dimensional data?
Would you accept a circuit designed by a genetic algorithm, which doesn’t work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?
Would you accept a chess program which could crush any human chess player who ever lived? Kasparov at ELO 2851, Rybka at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now even further beyond Kasparov at his peak as Kasparov was beyond a new grandmaster. And it’s not like Rybka or the other chess AIs will weaken with age.
Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
I think it at least possible that much-smarter-than human intelligence might turn out to be impossible. There exist some problem domains where there appear to be a large number of solutions, but where the quality of the solutions saturate quickly as more and more resources are thrown at them. A toy example is how often records are broken in a continuous 1-D domain, with attempts drawn from a constant probability distribution: The number of records broken goes as the log of the number of attempts. If some of the tasks an AGI must solve are like this, then it might not do much better than humans—not because evolution did a wonderful job of optimizing humans for perfect intelligence, but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
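The record-breaking toy model just described is easy to simulate; a rough sketch (standard library only, with arbitrary sample sizes) shows the average number of records after n draws tracking the harmonic number, i.e. growing roughly like ln(n):

    # Records in n i.i.d. draws from a fixed distribution: the expected count is
    # 1 + 1/2 + ... + 1/n, which grows like ln(n) -- quality saturates quickly.
    import math
    import random

    def count_records(n, rng):
        best = float("-inf")
        records = 0
        for _ in range(n):
            x = rng.random()
            if x > best:
                best = x
                records += 1
        return records

    rng = random.Random(0)
    for n in (10, 100, 1000, 10000):
        avg = sum(count_records(n, rng) for _ in range(200)) / 200
        print(f"n={n:6d}  average records={avg:5.2f}  ln(n)+0.577={math.log(n) + 0.577:5.2f}")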
One (admittedly weak) piece of evidence, a real example of saturation, is an optimizing compiler being used to recompile itself. It is a recursive optimizing system, and, if there is a knob to allow more effort to be spent on optimization, the speed-up from the first pass can be used to allow a bit more effort to be applied to a second pass for the same CPU time. Nonetheless, the results for this specific recursion are not FOOM.
The evidence in the other direction consists basically of existence proofs from the most intelligent people or groups of people that we know of. Something as intelligent as Einstein must be possible, since Einstein existed. Given an AI Einstein working on improving its own intelligence, it isn’t clear whether it could make a little progress or a great deal.
This goes for your compilers as well, doesn’t it? There are still major speed-ups available in compilation technology (the closely connected areas of whole-program compilation+partial evaluation+supercompilation), but a compiler is still expected to produce isomorphic code, and that puts hard information-theoretic bounds on output.
Can you provide details / link on this?
I should’ve known someone would ask for the cite rather than just do a little googling. Oh well. Turns out it wasn’t a radio, but a voice-recognition circuit. From http://www.talkorigins.org/faqs/genalg/genalg.html#examples :
The analogy that AGI can be to us as we are to chimps. This is the part that needs the focus.
We could have said in the 1950s that machines beat us at arithmetic by orders of magnitude. Classical AI researchers clearly were deluded by success at easy problems. The problem with winning on easy problems is that it says little about hard ones.
What I see is that in the domain of problems for which human-level performance is difficult to replicate, computers are capable of catching us and likely beating us, but gaining a great distance on us in performance is difficult. After all, a human can still beat the best chess programs with a mere pawn handicap. This may never get to two pawns, ever. Certainly the second pawn is massively harder than the first. It's the nature of the problem space. In terms of runaway AGI control of the planet, we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
BTW, is ELO supposed to have that kind of linear interpretation?
Yes, this is the important part. Chimps lag behind humans in 2 distinct ways—they differ in degree, and in kind. Chimps can do a lot of human-things, but very minimally. Painting comes to mind. They do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not in the recursive way that modern linguistics (pace Chomsky) seems to regard as key, not recursive at all. (Kind.)
What can we do with this distinction? How does it apply to my three examples?
O RLY?
Ever is a long time. Would you like to make this a concrete prediction I could put on PredictionBook, perhaps something along the lines of 'no FIDE grandmaster will lose a 2-pawns-odds chess match (or matches) to a computer by 2050'?
I’m not an expert on ELO by any means (do we know any LW chess experts?), but reading through http://en.wikipedia.org/wiki/Elo_rating_system#Mathematical_details doesn’t show me any warning signs—ELO point differences are supposed to reflect probabilistic differences in winning, or a ratio, and so the absolute values shouldn’t matter. I think.
This is a possibility (made more plausible if we’re talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it’s greatly complicated by international competition: to the extent that careful limitation and restriction of AI capabilities and access to potential sources of power reduces economic, scientific, and military productivity it will be tough to coordinate. Not to mention that existing economic, political, and legal structures are not very reliably stable: electorates and governing incumbents often find themselves unable to retain power.
It seems that whether or not it’s supposed to, in practice it does. From the just released “Intrinsic Chess Ratings”, which takes Rybka and does exhaustive evaluations (deep enough to be ‘relatively omniscient’) of many thousands of modern chess games; on page 9:
You are getting much closer than any of the commenters before you to providing some other form of evidence to substantiate one of the primary claims here.
You have to list the primary propositions on which you base further argumentation, from which you draw conclusions, and which you use to come up with probability estimates stating the risks associated with those premises. You have to list these main principles so that anyone who comes across claims of existential risks and a plea for donations can get an overview. Then you have to provide the references you listed above, if you believe they give credence to the ideas, so that people can see that what you say isn't made up but is based on previous work and evidence by people who are not associated with your organisation.
No. Although I have heard about all of the achievements, I am not yet able to judge whether they provide evidence supporting the possibility of strong superhuman AI, the kind that would pose an existential risk. In the case of chess I am pretty much of the opinion that it is not strong evidence, since it does not come close enough to the ability to overpower humans to the extent of posing an existential risk when extrapolated into other areas.
It would be good if you could provide links to the mentioned examples, especially the genetic algorithm (ETA: Here.). It is still questionable, however, whether this could lead to the stated recursive improvements or will quickly hit a limit. To my knowledge genetic algorithms are merely used for optimization within previously defined design spaces and are not able to come up with something unique to the extent of leaving their design space.
Whether sophisticated machine learning algorithms are able to discover valuable insights beyond statistical inferences within higher-dimensional data-sets is a very interesting idea though. As I just read, the 2009 prize of the Netflix contest went to a team that achieved a 10.05% improvement over the previous algorithm. I'll have to examine this further to see whether it bears evidence that this kind of complicated mesh of algorithms might lead to quick self-improvement.
One of the best comments so far, thanks. Although, as I understand it, your last sentence simply shows that you are reluctant to offer further critique.
I am reluctant because you seem to ask for magical programs when you write things like:
I was going to link to AIXI and approximations thereof; full AIXI is as general as an intelligence can be if you accept that there are no uncomputable phenomena, and the approximations are already pretty powerful (going from nothing to playing Pac-Man).
But then it occurred to me that anyone invoking a phrase like ‘leaving their design space’ might then just say ‘oh, those designs and models can only model Turing machines, and so they’re stuck in their design space’.
I’ve no idea (formally) what a ‘design space’ actually is. This is a tactic I frequently use against strongholds of argumentation that are seemingly based on expertise. I use their own terminology and rearrange it into something that sounds superficially clever. I like to call it a Chinese room approach. Sometimes it turns out that all they were doing was sounding smart, and they cannot explain themselves when faced with their own terminology used to inquire about their pretences.
I thank you however for taking the time to actually link to further third party information that will substantiate given arguments for anyone not trusting the whole of LW without it.
I see. Does that actually work for you? (Note that your answer will determine whether I mentally re-categorize you from ‘interested open-minded outsider’ to ‘troll’.)
It works against cults and religion in general. I don’t argue with them about their religion being not even wrong but rather accept their terms and highlight inconsistencies within their own framework by going as far as I can with one of their arguments and by inquiring about certain aspects based on their own terminology until they are unable to consistently answer or explain where I am wrong.
This also works with the anti-GM-food bunch, data protection activists, hippies and many other fringe groups. For example, the data protection bunch concerned with information disclosure on social networks or Google Streetview. Yes, I say, that’s bad, burglars could use such services to check out your house! I wonder what evidence there is for an increase in burglary in the countries where Streetview has already been available for many years?
Or I tell the anti-gun lobbyists how I support their cause. It’s really bad if anyone can buy a gun. Can you point me to the strong correlation between gun ownership and firearm homicides? Thanks.
Any specific scenario is going to have burdensome details, but that’s what you get if you ask for specific scenarios rather than general pressures, unless one spends a lot of time going through detailed possibilities and vulnerabilities. With respect to the specific example, regular human criminals routinely swindle or earn money anonymously online, and hack into and control millions of computers in botnets. Cloud computing resources can be rented with ill-gotten money.
In the unlikely event of a powerful human-indifferent AI appearing in the present day, a smartphone held by a human could provide sensors and communication to use humans for manipulators (as computer programs direct the movements of some warehouse workers today). Humans can be paid, blackmailed, deceived (intelligence agencies regularly do these things) to perform some tasks. An AI that leverages initial capabilities could jury-rig a computer-controlled method of coercion [e.g. a cheap robot arm holding a gun, a tampered-with electronic drug-dispensing implant, etc]. And as time goes by and the cumulative probability of advanced AI becomes larger, increasing quantities of robotic vehicles and devices will be available.
Thanks, yes I know about those arguments. One of the reasons I’m actually donating and accept AI to be one existential risk. I’m inquiring about further supporting documents and transparency. More on that here, especially check the particle collider analogy.
With respect to transparency, I agree about a lack of concise, exhaustive, accessible treatments. Reading some of the linked comments about marginal evidence from hypotheses I’m not quite sure what you mean, beyond remembering and multiplying by the probability that particular premises are false. Consider Hanson’s “Economic Growth Given Machine Intelligence”. One might support it with generalizations from past population growth in plants and animals, from data on capital investment and past market behavior and automation, but what would you say would license drawing probabilistic inferences using it?
Note that such methods might not result in the destruction of the world within a week (the guaranteed result of a superhuman non-Friendly AI according to Eliezer.)
What guarantee?
With a guarantee backed by $1000.
The linked bet doesn’t reference “a week,” and the “week” reference in the main linked post is about going from infrahuman to superhuman, not using that intelligence to destroy humanity.
That bet seems underspecified. Does attention to “Friendliness” mean any attention to safety whatsoever, or designing an AI with a utility function such that it’s trustworthy regardless of power levels? Is “superhuman” defined relative to the then-current level of human (or upload, or trustworthy less intelligent AI) capacity with any enhancements (or upload speedups, etc)? What level of ability counts as superhuman? You two should publicly clarify the terms.
A few comments later on the same comment thread someone asked me how much time was necessary, and I said I thought a week was enough, based on Eliezer’s previous statements. He never contradicted this, so it seems to me that he accepted it by default, since some time limit will be necessary in order for someone to win the bet.
I defined superhuman to mean that everyone will agree that it is more intelligent than any human being existing at that time.
I agree that the question of whether there is attention to Friendliness might be more problematic to determine. But “any attention to safety whatsoever” seems to me to be clearly stretching the idea of Friendliness—for example, someone could pay attention to safety by trying to make sure that the AI was mostly boxed, or whatever, and this wouldn’t satisfy Eliezer’s idea of Friendliness.
Ah. So an AI could, e.g. be only slightly superhuman and require immense quantities of hardware to generate that performance in realtime.
Right. And if this scenario happened, there would be a good chance that it would not be able to foom, or at least not within a week. Eliezer’s opinion seems to be that this scenario is extremely unlikely, in other words that the first AI will already be far more intelligent than the human race, and that even if it is running on an immense amount of hardware, it will have no need to acquire more hardware, because it will be able to construct nanotechnology capable of controlling the planet through actions originating on the internet as you suggest. And as you can see, he is very confident that all this will happen within a very short period of time.
Have you tried asking yourself non-rhetorically what an AI could do without MNT? That doesn’t seem to me to be a very great inferential distance at all.
I believe that in this case an emulation would be the bigger risk, because it would be sufficiently obscure and could pretend to be friendly for a long time while secretly strengthening its power. A purely artificial intelligence would be too alien and would therefore have a hard time acquiring the power necessary to transcend to a superhuman level without someone figuring out what it does, either by its actions or by looking at its code. It would also likely not have the intention to increase its intelligence indefinitely anyway. I just don’t see that AGI implies self-improvement beyond learning what it can within the scope of its resources. You’d have to deliberately implement such an intention, which would generally require its creators to solve problems much more difficult than limiting its scope. That is why I do not see runaway self-improvement as a likely failure mode.
I could imagine all kinds of scenarios indeed. But I also have to assess their likelihood given my epistemic state. And my conclusion is that a purely artificial intelligence wouldn’t and couldn’t do much. I estimate the worst-case scenario to be on par with a local nuclear war.
I simply can’t see where the above beliefs might come from. I’m left assuming that you just don’t mean the same thing by AI as I usually mean. My guess is that you are implicitly thinking of a fairly complicated story but are not spelling that out.
And I can’t see where your beliefs might come from. What are you telling potential donors or AGI researchers? That AI is dangerous by definition? Well, what if they have a different definition; what should make them update in favor of yours? That you have thought about it for more than a decade now? I perceive serious flaws in any of the replies I have got so far within a minute, and I am a nobody. There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven’t thought about. If that kind of intelligence is as likely as other risks, then it doesn’t matter what it comes up with anyway, because those other risks will wipe us out just as well and with the same probability.
There already are many people criticizing the SIAI right now, even on LW. Soon, once you are more popular, other people than me will scrutinize everything you ever wrote. And what do you expect them to conclude if even a professional AGI researcher, who has been a member of the SIAI, does write the following:
Why would I disregard his opinion in favor of yours? Can you present any novel achievements that would make me conclude that you people are actually experts when it comes to intelligence? The LW sequences are well written but do not showcase some deep comprehension of the potential of intelligence. Yudkowsky was able to compile previously available knowledge into a coherent framework of rational conduct. That isn’t sufficient to prove that he has enough expertise on the topic of AI to make me believe him regardless of any antipredictions being made that weaken the expected risks associated with AI. There is also insufficient evidence to conclude that Yudkowsky, or someone within the SIAI, is smart enough to be able to tackle the problem of friendliness mathematically.
If you would at least let some experts take a look at your work and assess its effectiveness and general potential. But there exists no peer review at all. Some well-known people have attended the Singularity Summit. Have you asked them why they do not contribute to the SIAI? Have you, for example, asked Douglas Hofstadter why he isn’t doing everything he can to mitigate risks from AI? Sure, you got some people to donate a lot of money to the SIAI. But to my knowledge they are far from being experts and contribute to other organisations as well. Congratulations on that, but even cults get rich people to support them. I’ll update on donors once they say why they support you and their arguments are convincing, or if they are actually experts or people able to showcase certain achievements.
Intelligence is powerful; intelligence doesn’t imply friendliness; therefore intelligence is dangerous. Is that the line of reasoning on the basis of which I shall neglect other risks? If so, you are making it more complicated than necessary. You do not need intelligence to invent stuff to kill us if there’s already enough dumb stuff around that is more likely to kill us. And I do not think it is reasonable to come up with a few weak arguments on how intelligence could be dangerous and conclude that their combined probability beats any good argument against one of the premises or in favor of other risks. The problems are far too diverse; you can’t combine them and proclaim that you are going to solve all of them by simply defining friendliness mathematically. I just don’t see that right now, because it is too vague. You could as well replace friendliness with magic as the solution to the many disjoint problems of intelligence.
Intelligence is also not the solution to all other problems we face. As I argued several times, I just do not see that recursive self-improvement will happen any time soon and cause an intelligence explosion. What evidence is there against a gradual development? As I see it we will have to painstakingly engineer intelligent machines. There won’t be some meta-solution that outputs meta-science to subsequently solve all other problems.
Douglas Hofstadter and Daniel Dennett both seem to think these issues are probably still far away.
...
http://www.americanscientist.org/bookshelf/pub/douglas-r-hofstadter
I’m not sure who is doing that. Being hit by an asteroid, nuclear war and biological war are other possible potentially major setbacks. Being eaten by machines should also have some probability assigned to it—though it seems pretty challenging to know how to do that. It’s a bit of an unknown unknown. Anyway, this material probably all deserves some funding.
The short-term goal seems more modest—prove that self-improving agents can have stable goal structures.
If true, that would be fascinating—and important. I don’t know what the chances of success are, but Yudkowsky’s pitch is along the lines of: look this stuff is pretty important, and we are spending less on it than we do on testing lipstick.
That’s a pitch which it is hard to argue with, IMO. Machine intelligence research does seem important and currently-underfunded. Yudkowsky is—IMHO—a pretty smart fellow. If he will work on the problem for $80K a year (or whatever) it seems as though there is a reasonable case for letting him get on with it.
I’m not sure you’re looking at the probability of other extinction risks with the proper weighting. The timescales are vastly different. Supervolcanoes: one every 350,000 years. Major asteroid strikes: one every 700,000 years. Gamma ray bursts: hundreds of millions of years, etc. There’s a reason the word ‘astronomical’ means huge beyond imagining.
Contrast that with the current human-caused mass extinction event: 10,000 years and accelerating. Humans operate on obscenely fast timescales compared to nature. Just with nukes we’re able to take out huge chunks of Earth’s life forms in 24 hours, most or all of it if we detonated everything we have in an intelligent, strategic campaign to end life. And that’s today, rather than tomorrow.
Regarding your professional AGI researcher and recursive self-improvement, I don’t know, I’m not an AGI researcher, but it seemed to me that a prerequisite to successful AGI is an understanding and algorithmic implementation of intelligence. Therefore, any AGI will know what intelligence is (since we do), and be able to modify it. Once you’ve got a starting point, any algorithm that can be called ‘intelligent’ at all, you’ve got a huge leap toward mathematical improvement. Algorithms have been getting faster at a higher rate than Moore’s Law and computer chips.
That might be true. But most of them have one solution that demands research in many areas: space colonization. It is true that intelligent systems, if achievable in due time, play a significant role here, but not an exceptional role if you disregard the possibility of an intelligence explosion, of which I am very skeptical. Further, it appears to me that donating to the SIAI would rather impede research on such systems, given their position that such systems themselves pose an existential risk. Therefore, at the moment, the possibility of risks from AI is partially outweighed: the SIAI should be supported, yet it does not hold an exceptional position that would necessarily make it the one charity with the highest expected impact per donation. I am unable to pinpoint another charity at the moment, e.g. space elevator projects, because I haven’t looked into it. But I do not know of any comparison analysis; although you and many other people claim to have calculated it, nobody has ever published their efforts. As you know, I am unable to do such an analysis myself at this point, as I am still learning the math. But I am eager to get the best information by means of feedback anyhow. Not intended as an excuse, of course.
That would surely be a very good argument if I were able to judge it. But can intelligence be captured by a discrete algorithm, or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution? Also, can algorithms that could be employed in real-world scenarios be sped up enough to warrant superhuman power? Take photosynthesis: could that particular algorithm be improved considerably, to an extent that it would be vastly better than the evolutionary one? Further, will such improvements be accomplished fast enough to outpace human progress or the adoption of the given results? My problem is that I do not believe that intelligence is fathomable as a solution that can be applied to itself effectively. I see a fundamental dependency on unintelligent processes. Intelligence merely recapitulates prior discoveries, altering what is already known by means of natural methods. If ‘intelligence’ is shorthand for ‘problem-solving’ then it is also the solution, which would mean that there was no problem to be solved. This can’t be true: we still have to solve problems, and we are only able to do so more effectively when we are dealing with similar problems that can be subjected to known and merely altered solutions. In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, ‘intelligence’ has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.
Nonetheless I believe your reply was very helpful as an impulse to look at it from a different perspective. Although I might not be able to judge it in detail at this point I’ll have to incorporate it.
This seems backwards—if intelligence is modular, that makes it more likely to be subject to overall improvements, since we can upgrade the modules one at a time. I’d also like to point out that we currently have two meta-algorithms, bagging and boosting, which can improve the performance of any other machine learning algorithm at the cost of using more CPU time.
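To make the meta-algorithm point concrete, here is a minimal sketch using scikit-learn (the library, dataset and parameters are my own choices for illustration, not anything from the thread): both wrappers accept an arbitrary base learner and typically buy accuracy at the cost of extra CPU time.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

tree = DecisionTreeClassifier(random_state=0)                 # high-variance learner
stump = DecisionTreeClassifier(max_depth=1, random_state=0)   # weak learner

models = {
    "single deep tree": tree,
    "bagged deep trees (50x)": BaggingClassifier(tree, n_estimators=50, random_state=0),
    "boosted stumps (50 rounds)": AdaBoostClassifier(stump, n_estimators=50, random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated accuracy; the ensembles cost roughly 50x the CPU.
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```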
It seems to me that, if we reach a point where we can’t improve an intelligence any further, it won’t be because it’s fundamentally impossible to improve, but because we’ve hit diminishing returns. And there’s really no way to know in advance where the point of diminishing returns will be. Maybe there’s one breakthrough point, after which it’s easy until you get to the intelligence of an average human, then it’s hard again. Maybe it doesn’t become difficult until after the AI’s smart enough to remake the world. Maybe the improvement is gradual the whole way up.
But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
In a sense, all thoughts are just the same words and symbols rearranged in different ways. But that is not the type of newness that matters. New software algorithms, concepts, frameworks, and programming languages are created all the time. And one new algorithm might be enough to birth an artificial general intelligence.
The AI will be much bigger than a virus. I assume this will make propagation much harder.
Harder, yes. Much harder, probably not, unless it’s on the order of tens of gigabytes; most Internet connections are quite fast.
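As rough arithmetic (the payload size and link speed below are assumptions for illustration, not figures anyone in the thread gave):

```python
size_gigabytes = 10        # assumed size of the propagating system
link_megabits_per_s = 100  # assumed connection speed

seconds = size_gigabytes * 8 * 1000 / link_megabits_per_s
print(f"about {seconds / 60:.0f} minutes per copy")  # ~13 minutes at these assumed numbers
```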
Anything could be possible, though the last 60 years of the machine intelligence field are far more evocative of the “blood-out-of-a-stone” model of progress.
Smart human programmers can make dark nets too. Relatively few of them want to trash their own reputations and appear in the cross-hairs of the world’s security services and law-enforcement agencies, though.
Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (ie, for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.
Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it’s because they try to sell use of them for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.
Maybe the state of computer security will be better in 20 years, and this won’t be as much of a risk anymore. I certainly hope so. But we can’t count on it.
Mafia superintelligence, spyware superintelligence—it’s all the forces of evil. The forces of good are much bigger, more powerful and better funded.
Sure, we should continue to be vigilant about the forces of evil—but surely we should also recognise that their chances of success are pretty slender—while still keeping up the pressure on them, of course.
Good is winning: http://www.google.com/insights/search/#q=good%2Cevil :-)
You seem to be seriously misinformed about the present state of computer security. The resources on the side of good are vastly insufficient because offense is inherently easier than defense.
Your unfounded supposition seems pretty obnoxious—and you aren’t even right :-(
You can’t really say something is “vastly insufficient”—unless you have an intended purpose in mind—as a guide to what would qualify as being sufficient.
There’s a huge population of desktop and office computers doing useful work in the world—we evidently have computer security enough to support that.
Perhaps you are presuming some other criteria. However, projecting that presumption on to me—and then proclaiming that I am misinformed—seems out of order to me.
The purpose I had in mind (stated directly in that post’s grandparent, which you replied to) was to stop an artificial general intelligence from stealing vast computational resources. Since exploits in major software packages are still commonly discovered, including fairly frequent 0-day exploits which anyone can get for free just by monitoring a few mailing lists, the computer security we have is quite obviously not sufficient for that purpose. Not only that, humans do in fact steal vast computational resources pretty frequently. The fact that no one has tried to or wants to stop people from getting work done on their office computers is completely irrelevant.
You sound bullish, when IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed” when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I’m sorry if I offended you. But I stand by my original statement, because it was true.
Crocker’s Rules for me. Will you do the same?
I am not sure which statement you stand by. The one about me being “seriously misinformed” about computer security? Let’s not go back to that—pulease!
The “adjusted” one—about the resources on the side of good being vastly insufficient to prevent a nasty artificial general intelligence from stealing vast computational resources? I think that is much too speculative for a true/false claim to be made about it.
The case against it is basically the case for good over evil. In the future, it seems reasonable that there will be much more ubiquitous government surveillance. Crimes will be trickier to pull off. Criminals will have more powerful weapons—but the government will know what colour socks they are wearing. Similarly, medicine will be better—and the life of pathogens will become harder. Positive forces look set to win, or at least dominate. Matt Ridley makes a similar case in his recent “Rational Optimism”.
Is there a correspondingly convincing case that the forces of evil will win out—and that the mafia machine intelligence—or the spyware-maker’s machine intelligence—will come out on top? That seems about as far-out to me as the SIAI contention that a bug is likely to take over the world. It seems to me that you have to seriously misunderstand evolution’s drive to build large-scale cooperative systems to entertain such ideas for very long.
I don’t have much inclination to think about my attitude towards Crocker’s Rules just now—sorry. My initial impression is not favourable, though. Maybe it would work with infrastructure—or on a community level. Otherwise the overhead of tracking people’s “Crocker status” seems considerable. You can take that as a “no”.
Thank you for continuing to engage my point of view, and offering your own.
That’s an interesting hypothesis which easily fits into my estimated 90+ percent bucket of failure modes. I’ve got all kinds of such events in there, including things such as, there’s no way to understand intelligence, there’s no way to implement intelligence in computers, friendliness isn’t meaningful, CEV is impossible, they don’t have the right team to achieve it, hardware will never be fast enough, powerful corporations or governments will get there first, etc. My favorite is: no matter whether it’s possible or not, we won’t get there in time; basically, that it will take too long to be useful. I don’t believe any of them, but I do think they have solid probabilities which add up to a great amount of difficulty.
But the future isn’t set, they’re just probabilities, and we can change them. I think we need to explore this as much as possible, to see what the real math looks like, to see how long it takes, to see how hard it really is. Because the payoffs or results of failure are in that same realm of ‘astronomical’.
A somewhat important correction:
To my knowledge, SIAI does not actually endorse neglecting all potential x-risks besides UFAI. (Analysis might recommend discounting the importance of fighting them head-on, but that analysis should still be done when resources are available.)
Not all of them—most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones—and so on. It probably won’t fix the speed of light limit, though.
What makes you reach this conclusion? How can you think any of these problems can be solved by intelligence when none of them have been solved? I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don’t see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
Violence has been declining on (pretty much) every timescale: Steven Pinker: Myth of Violence. I think one could argue that this is because of greater collective intelligence of human race.
War won’t be solved by making everyone smarter, but it will be solved if a sufficiently powerful friendly AI takes over, as a singleton, because it would be powerful enough to stop everyone else from using force.
Yes, that makes sense, but in context I don’t think that’s what was meant, since Tim is one of the people here who is more skeptical of that sort of result.
Tim on “one big organism”:
http://alife.co.uk/essays/one_big_organism/
http://alife.co.uk/essays/self_directed_evolution/
http://alife.co.uk/essays/the_second_superintelligence/
Thanks for clarifying (here and in the other remark).
War has already been solved to some extent by intelligence (negotiations and diplomacy significantly decreased instances of war), hunger has been solved in large chunks of the world by intelligence, energy limits have been solved several times by intelligence, resource shortages ditto, intelligence has made a good first attempt at space travel (the moon is quite far away), and intelligence has made huge bounds towards solving the problem of loss of loved ones (vaccination, medical intervention, surgery, lifespans in the high 70s, etc).
This is a constraint satisfaction problem (give as many ideologies as much of what they want as possible). Intelligence solves those problems.
I have my doubts about war, although I don’t think most wars really come down to conflicts of terminal values. I’d hope not, anyway.
However as for the rest, if they’re solvable at all, intelligence ought to be able to solve them. Solvable means there exists a way to solve them. Intelligence is to a large degree simply “finding ways to get what you want”.
Do you think energy limits really couldn’t be solved by simply producing through thought working designs for safe and efficient fusion power plants?
ETA: ah, perhaps replace “intelligence” with “sufficient intelligence”. We haven’t solved all these problems already in part because we’re not really that smart. I think fusion power plants are theoretically possible, and at our current rate of progress we should reach that goal eventually, but if we were smarter we would obviously achieve it faster.
As various people have said, the original context was not making everybody more intelligent and thereby changing their inclinations, but rather creating an arbitrarily powerful superintelligence that makes their inclinations irrelevant. (The presumption here is typically that we know which current human inclinations such a superintelligence would endorse and which ones it would reject.)
But I’m interested in the context you imply (of humans becoming more intelligent).
My $0.02: I think almost all people who value war do so instrumentally. That is, I expect that most warmongers (whether ideologues or not) want to achieve some goal (spread their ideology, or amass personal power, or whatever) and they believe starting a war is the most effective way for them to do that. If they thought something else was more effective, they would do something else.
I also expect that intelligence is useful for identifying effective strategies to achieve a goal. (This comes pretty close to being true-by-definition.)
So I would only expect smarter ideologues (or anyone else) to remain warmongers if starting a war really was the most effective way to achieve their goals. And if that’s true, everyone else gets to decide whether we’d rather have wars, or modify the system so that the ideologues have more effective options than starting wars (either by making other options more effective, or by making warmongering less effective, whichever approach is more efficient).
So, yes, if we choose to incentivize wars, then we’ll keep getting wars. It follows from this scenario that war is the least important problem we face, so we should be OK with that.
Conversely, if it turns out that war really is an important problem to solve, then I’d expect fewer wars.
I was about to reply—but jimrandomh said most of what I was going to say already—though he did so using that dreadful “singleton” terminology, spit.
I was also going to say that the internet should have got the 2010 Nobel peace prize.
Is that really the idea? My impression is that the SIAI think machines without morals are dangerous, and that until there is more machine morality research, it would be “nice” if progress in machine intelligence was globally slowed down. If you believe that, then any progress—including constructing machine toddlers—could easily seem rather negative.
Darwinian gradualism doesn’t forbid evolution taking place rapidly. I can see evolutionary progress accelerating over the course of my own lifespan—which is pretty incredible considering that evolution usually happens on a scale of millions of years. More humans in parallel can do more science and engineering. The better their living standard, the more they can do. Then there are the machines...
Maybe some of the pressures causing the speed-up will slack off—but if they don’t then humanity may well face a bare-knuckle ride into inner-space—and fairly soon.
Re: toddler-level machine intelligence.
Most toddlers can’t program, but many teenagers can. The toddler is a step towards the teenager—and teenagers are notorious for being difficult to manage.
The usual cite given in this area is the paper The Basic AI Drives.
It suggests that open-ended goal-directed systems will tend to improve themselves—and to grab resources to help them fulfill their goals—even if their goals are superficially rather innocent-looking and make no mention of any such thing.
The paper starts out like this:
Well, some older posts had a guy praising “goal system zero”, which meant a plan to program an AI with the minimum goals it needs to function as a ‘rational’ optimization process and no more. I’ll quote his list directly:
This seems plausible to me as a set of necessary conditions. It also logically implies the intention to convert all matter the AI doesn’t lay aside for other purposes (of which it has none, here) into computronium and research equipment. Unless humans for some reason make incredibly good research equipment, the zero AI would thus plan to kill us all. This would also imply some level of emulation as an initial instrumental goal. Note that sub-goal (1) implies a desire not to let instrumental goals like simulated empathy get in the way of our demise.
Perhaps, though if we can construct such a thing in the first place we may be able to deep-scan its brain and read its thoughts pretty well—or at least see if it is lying to us and being deceptive.
IMO, the main problem there is with making such a thing in the first place before we have engineered intelligence. Brain emulations won’t come first—even though some people seem to think they will.
Seconding this question.
Writing the word ‘assumption’ has its limits as a form of argument. At some stage you are going to have to read the links given.
This was a short critique of one of the links given. The first one I skimmed over; I wasn’t impressed yet, at least not to the extent of having nightmares when someone tells me about bad AIs.
I like how Nick Bostrom put it re: probabilities and interesting future phenomena:
Index to the FOOM debate
Antipredictions