I consider a runaway process by which any AI ascends into godhood through recursive self-improvement of its intelligence to be… vaguely magical, by which I mean that while every word in that sentence makes sense, as a whole that sentence doesn’t refer to anything. The heavy lifting is done by poorly-defined abstractions and assumptions.
Unfriendly AI, by the metrics I consider meaningful, already exists. It just isn’t taking over the world.
Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly.
But they are not arguably dangerous because they are intelligent.
Let us use the standard AI risk metric.
I feel that your sentence does refer to something: A hypothetical scenario.
(“Godhood” should be replaced with “Superintelligence”).
Is it correct that the sentence can be divided into these four claims?
1. An AI self-improves its intelligence.
2. The self-improvement becomes recursive.
3. An AI reaches superintelligence through 1 and 2.
4. This can happen in a process that can be called “runaway”.
Do you mean that one of the probabilities is extremely small? (E.g., p(4 | 1 and 2 and 3) = 0.02).
Or do you mean that the statement is not well-formed? (E.g., intelligence is poorly defined in AI Risk theory.)
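The decomposition into four claims can be made concrete. If the claims form a chain, the probability of the full scenario is the product of the conditional probabilities, so even moderately small factors compound quickly. A minimal sketch (all numbers below are illustrative placeholders, not actual estimates):

```python
# Toy decomposition of p(runaway superintelligence) into the four claims.
# All probabilities are illustrative placeholders, not estimates.
p1 = 0.9    # p(an AI self-improves its intelligence)
p2 = 0.5    # p(the self-improvement becomes recursive | 1)
p3 = 0.2    # p(superintelligence is reached | 1 and 2)
p4 = 0.02   # p(the process counts as "runaway" | 1, 2 and 3)

p_scenario = p1 * p2 * p3 * p4
print(f"p(full scenario) = {p_scenario:.4f}")  # 0.0018
```

The point of the sketch is that the two objections differ: one targets the size of a single factor, the other the well-formedness of the factors themselves.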
Intelligence is poorly defined, for a start, and artificial intelligence doubly so—think about the number of times we’ve redefined “AI” after achieving what we previously called “AI”.
“Recursive self-improvement” is also poorly-defined; as an example, we have recursive self-improving AIs right now, in the form of self-training neural nets.
Superintelligence is even less well-defined, which is why I prefer the term “godhood”, which I regard as more honest in its vagueness. It may also be illusory; most of us on Less Wrong are here in part because of boredom, because intelligence isn’t nearly as applicable in daily life as we’d need it to be to stay entertained; does intelligence have diminishing returns?
We can tell that some people are smarter than other people, but we’re not even certain what that means, except that they do better on whatever measurement we use.
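The diminishing-returns question can be sketched numerically. In this toy model (my construction, not anything from the AI risk literature), each step’s intelligence gain is a function of the current level; whether the sequence runs away or plateaus depends entirely on that function:

```python
# Toy model of recursive self-improvement (illustrative only).
# Each step, the system's "intelligence" x grows by gain(x).
# Whether this runs away or plateaus depends entirely on gain().

def improve(x0, gain, steps):
    x = x0
    for _ in range(steps):
        x += gain(x)
    return x

# Compounding returns: gain proportional to current level -> exponential growth.
runaway = improve(1.0, lambda x: 0.5 * x, steps=20)

# Diminishing returns: gain shrinks as the level rises -> approaches a ceiling.
plateau = improve(1.0, lambda x: 0.5 * (10 - x), steps=20)

print(runaway)   # grows without bound (1.5 ** 20)
print(plateau)   # converges toward 10
```

Nothing in the model says which gain() curve real systems follow; that is exactly the open question about diminishing returns.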
Intelligence, Artificial Intelligence and Recursive Self-improvement are likely poorly defined. But since we can point to concrete examples of all three, this is a problem in the map, not the territory. These things exist, and different versions of them will exist in the future.
Superintelligences do not exist, and it is an open question if they ever will. Bostrom defines superintelligences as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” While this definition has a lot of fuzzy edges, it is conceivable that we could one day point to a specific intellect, and confidently say that it is superintelligent. I feel that this too is a problem in the map, not the territory.
I was wrong to assume that you meant superintelligence when you wrote godhood, and I hope that you will forgive me for sticking with “superintelligence” for now.
You are missing an important claim: that the process of recursive self-improvement does not encounter any constraints, impediments, roadblocks, etc.
Consider the analogy of your 1. and 2. for human reproduction.
I meant claim number 3 to be a sharper version of your claim: The AI will meet constraints, impediments and roadblocks, but these are overcome, and the AI reaches superintelligence.
Could you explain the analogy with human reproduction?
Ah, so you meant the accent in 3. to be on “reaches”, not on “super”?
The analogy looks like this: 1. Humans multiply, they self-improve their numbers; 2. The reproduction is recursive—the larger a generation is, the yet larger will the next one be. Absent constraints, the growth of a population is exponential.
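The recursion in 1 and 2 can be written out. As a sketch (parameter values are arbitrary), each generation is a fixed multiple of the last, giving exponential growth; adding a carrying-capacity term to the same recursion makes it level off:

```python
# Unconstrained recursion: each generation is r times the previous one.
# With a carrying capacity K, the same recursion levels off (a discrete
# logistic-style model). Parameter values are arbitrary illustrations.

def generations(n0, r, steps, K=None):
    sizes = [n0]
    for _ in range(steps):
        n = sizes[-1]
        if K is None:
            sizes.append(r * n)                          # n_{t+1} = r * n_t
        else:
            sizes.append(n + (r - 1) * n * (1 - n / K))  # growth damped near K
    return sizes

free = generations(1000, r=2, steps=10)                  # doubles each generation
capped = generations(1000, r=2, steps=50, K=1_000_000)

print(free[-1])     # 1000 * 2**10 = 1024000
print(capped[-1])   # approaches the ceiling K
```

The “absent constraints” clause in the analogy is what selects between the two branches of the recursion.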
English is not my first language. I think I would put the accent on “reaches”, but I am unsure what would be implied by having the accent on “super”. I apologize for my failure to write clearly.
I now see the analogy with human reproduction. Could we stretch the analogy to claim 3, and call some increases in human numbers “super”?
The lowest estimate of the historical number of humans I have seen is from https://en.wikipedia.org/wiki/Population_bottleneck, claiming as few as 2,000 humans for 100,000 years. Human numbers will probably reach a (mostly cultural) limit of 10,000,000,000. I feel that this development in human numbers deserves to be called “super”.
The analogy could perhaps even be stretched to claim 4: some places at some times could be characterized by “runaway population growth”.
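To put that range in perspective (my arithmetic, using the figures quoted above): going from 2,000 individuals to 10,000,000,000 is a factor of five million, or about 22 doublings.

```python
import math

# Growth factor from the bottleneck estimate to the projected limit.
factor = 10_000_000_000 / 2_000
print(factor)             # 5000000.0
print(math.log2(factor))  # about 22.25 doublings
```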
I don’t know—it all depends on what you consider “super” :-) Populations of certain organisms oscillate with much greater magnitude than humans—see e.g. algae blooms.
Like Unfriendly AI, algae blooms are events that behave very differently from events we normally encounter.
I fear that the analogies have lost a crucial element. OrphanWilde considered Unfriendly AI “vaguely magical” in the post here. The algae bloom analogy also has very vague definitions, but the changes in population size of an algae bloom are something I would call “strongly non-magical”.
I realize that you introduced the analogies to help make my argument precise.
It’s “vaguely magical” in the sense that there is a large gap between what we have now and (U)FAI. We have no clear idea of how that gap could be crossed; we just wave our hands and say “and then magic happens and we arrive at our destination”.
Many things are far beyond our current abilities, such as interstellar space travel. We have no clear idea of how humanity will travel to the stars, but the subject is neither “vaguely magical”, nor is it true that the sentence “humans will visit the stars” does not refer to anything.
I feel that it is an unfair characterization of the people who investigate AI risk to say that they claim it will happen by magic, and that they stop the investigation there. You could argue that their investigation is poor, but it is clear that they have worked a lot to investigate the processes that could lead to Unfriendly AI.
We have no clear idea if or how humanity will travel to the stars. I feel that discussions of things like interstellar starship engines are at the moment “vaguely magical”, since no known technology suffices and it’s not a “merely engineering” problem. Do you think it’s useful to work on the safety of interstellar engines? They could blow up and destroy a whole potential colony…
You bring up a good point: whether it is useful to worry about UFAI.
To recap, my original query was about the claim that p(UFAI before 2116) is less than 1% due to UFAI being “vaguely magical”. I am interested in figuring out what that means—is it a fair representation of the concept to say that p(Interstellar before 2116) is less than 1% because interstellar travel is “vaguely magical”?
What would be the relationship between “Requiring Advanced Technology” and “Vaguely Magical”? Clarke’s third law is a straightforward link, but “vaguely magical” has previously been used to indicate poor definitions, poor abstractions and sentences that do not refer to anything.
I am not sure the OP had much meaning behind his “vaguely magical” expression, but given that we are discussing it anyway :-) I would probably reinterpret it in terms of Knightian uncertainty. It’s not only the case that we don’t know, we don’t know what we don’t know and how much we don’t know.
This interpretation makes a lot of sense. The term can describe events that carry a lot of Knightian uncertainty, which a “Black Swan” like UFAI certainly does.
Could you elaborate on why you consider p(UFAI before 2116) < 0.01? I am genuinely interested.