There are no specific plans—at the end of each session we discuss briefly what we should read for next time. I expect it will remain a mostly non-technical reading group.
Do you think Leo Szilard would have had more success through overt means (political campaigning to end the human race) or by surreptitiously adding kilotons of cobalt to a device intended for use in a nuclear test? I think both strategies would be unsuccessful (p<0.001, conditional on Szilard wishing to kill all humans).
I fully accept the following proposition: IF many humans currently have the capability to kill all humans THEN worrying about long-term AI Safety is probably a bad priority. I strongly deny the antecedent.
I guess the two most plausible candidates would be Trump and Putin, and I believe they are exceedingly likely to leave survivors (p=0.9999).
The word ‘sufficiently’ makes your claim a tautology. A ‘sufficiently’ capable human is capable of anything, by definition.
Your claim that Leo Szilard probably could have wiped out the human race seems very far from the historical consensus.
Good idea. I will do so.
AI Safety reading group
No, a Superintelligence is by definition capable of working out what a human wishes.
However, a Superintelligence designed to e.g. calculate digits of pi would not care about what a human wishes. It simply cares about calculating digits of pi.
In a couple of days, we are hosting a seminar in Århus (Denmark) on AI Risk.
I have taken the survey.
Congratulations!
My wife is also pregnant right now, and I strongly felt that I should include my unborn child in the count.
This interpretation makes a lot of sense. The term can describe events that have a lot of Knightian Uncertainty, which a “Black Swan” like UFAI certainly has.
You bring up a good point, whether it is useful to worry about UFAI.
To recap, my original query was about the claim that p(UFAI before 2116) is less than 1% due to UFAI being “vaguely magical”. I am interested in figuring out what that means—is it a fair representation of the concept to say that p(interstellar travel before 2116) is less than 1% because interstellar travel is “vaguely magical”?
What would be the relationship between “Requiring Advanced Technology” and “Vaguely Magical”? Clarke’s third law is a straightforward link, but “vaguely magical” has previously been used to indicate poor definitions, poor abstractions and sentences that do not refer to anything.
Many things are far beyond our current abilities, such as interstellar space travel. We have no clear idea of how humanity will travel to the stars, but the subject is neither “vaguely magical”, nor is it true that the sentence “humans will visit the stars” does not refer to anything.
I feel that it is an unfair characterization of the people who investigate AI risk to say that they claim it will happen by magic, and that they stop the investigation there. You could argue that their investigation is poor, but it is clear that they have worked a lot to investigate the processes that could lead to Unfriendly AI.
Like Unfriendly AI, algae blooms are events that behave very differently from events we normally encounter.
I fear that the analogies have lost a crucial element. OrphanWilde considered Unfriendly AI “vaguely magical” in the post here. The algae bloom analogy also has very vague definitions, but the change in population size of an algae bloom is something I would call “strongly non-magical”.
I realize that you introduced the analogies to help make my argument precise.
Wow. It looks like light from James’ spaceship can indeed reach us, even if light from us cannot reach the spaceship.
English is not my first language. I think I would put the emphasis on “reaches”, but I am unsure what would be implied by putting the emphasis on “super”. I apologize for my failure to write clearly.
I now see the analogy with human reproduction. Could we stretch the analogy to claim 3, and call some increases in human numbers “super”?
The lowest estimate of the historical number of humans I have seen is from https://en.wikipedia.org/wiki/Population_bottleneck , which claims the population may have dropped to around 2,000 humans for 100,000 years. Human numbers will probably reach a (mostly cultural) limit of 10,000,000,000. I feel that this development in human numbers deserves to be called “super”.
The analogy could perhaps even be stretched to claim 4 - some places at some times could be characterized by “runaway population growth”.
Intelligence, Artificial Intelligence and Recursive Self-improvement are likely poorly defined. But since we can point to concrete examples of all three, this is a problem in the map, not the territory. These things exist, and different versions of them will exist in the future.
Superintelligences do not exist, and it is an open question if they ever will. Bostrom defines superintelligences as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” While this definition has a lot of fuzzy edges, it is conceivable that we could one day point to a specific intellect, and confidently say that it is superintelligent. I feel that this too is a problem in the map, not the territory.
I was wrong to assume that you meant superintelligence when you wrote godhood, and I hope that you will forgive me for sticking with “superintelligence” for now.
I meant claim number 3 to be a sharper version of your claim: The AI will meet constraints, impediments and roadblocks, but these are overcome, and the AI reaches superintelligence.
Could you explain the analogy with human reproduction?
Thank you. It is moderately clear to me from the link that James’ thought-experiment is possible.
Do you know of a more authoritative description of the thought-experiment, preferably with numbers? It would be nice to have an equation where you give the speed of James’ spaceship and the distance to it, and calculate if the required speed to catch it is above the speed of light.
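For what it is worth, here is a minimal worked version, under the assumption that James’ thought-experiment is the standard constant-proper-acceleration scenario (a ship accelerating at a fixed proper acceleration a forever); the 1g figure below is my own illustration, not taken from the original description. The ship’s position, and that of a light pulse sent after it from a distance d behind its starting point, are

\[
x_{\text{ship}}(t) = \frac{c^2}{a}\left(\sqrt{1+\left(\frac{at}{c}\right)^2}-1\right),
\qquad
x_{\text{light}}(t) = ct - d .
\]

Since \(x_{\text{ship}}(t)\) always stays above \(ct - c^2/a\), the pulse catches the ship only if

\[
d < \frac{c^2}{a} \approx \frac{(3.0\times10^{8}\ \text{m/s})^2}{9.8\ \text{m/s}^2} \approx 9.2\times10^{15}\ \text{m} \approx 0.97\ \text{light-years}
\quad (\text{for } a = 1g).
\]

So under this assumption, light sent from more than about a light-year behind a ship that accelerates at 1g forever never arrives, while light the ship sends backwards always reaches us, since we are not accelerating away from it. That matches the asymmetry described above, and the obstacle turns out to be the unending acceleration rather than any speed exceeding c.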
Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly. Let us use the standard AI risk metric.
I feel that your sentence does refer to something: A hypothetical scenario. (“Godhood” should be replaced with “Superintelligence”).
Is it correct that the sentence can be divided into these 4 claims?
1. An AI self-improves its intelligence
2. The self-improvement becomes recursive
3. An AI reaches superintelligence through 1 and 2
4. This can happen in a process that can be called “runaway”
Do you mean that one of the probabilities is extremely small? (E.g., p(4 | 1 and 2 and 3) = 0.02.) Or do you mean that the statement is not well-formed? (E.g., intelligence is poorly defined by the AI Risk theory.)
I think I agree with all your assertions :).
(Please forgive me for a nitpick: the negation would be “Many humans have the ability to kill all humans AND AI Safety is a good priority”. NOT (A IMPLIES B) is equivalent to A AND NOT B.)
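For completeness, here is a one-line derivation of that equivalence, using the standard identity A IMPLIES B ≡ NOT A OR B (nothing here is specific to the AI example; it is just propositional logic):

\[
\neg(A \rightarrow B) \;\equiv\; \neg(\neg A \lor B) \;\equiv\; \neg\neg A \land \neg B \;\equiv\; A \land \neg B ,
\]

by De Morgan and double negation. With A = “many humans currently have the capability to kill all humans” and B = “AI Safety is a bad priority”, A ∧ ¬B is exactly the statement quoted above.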