The Yudkowsky Ambition Scale
From Hacker News.
1. We’re going to build the next Facebook!
2. We’re going to found the next Apple!
3. Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
4. Our product is the next nuclear weapon. You wouldn’t want that in the wrong hands, would you?
5. This is going to be the equivalent of the invention of electricity if it works out.
6. We’re going to make an IQ-enhancing drug and produce basic change in the human condition.
7. We’re going to build serious Drexler-class molecular nanotechnology.
8. We’re going to upload a human brain into a computer.
9. We’re going to build a recursively self-improving Artificial Intelligence.
10. We think we’ve figured out how to hack into the computer our universe is running on.
This made me laugh, but from the look of it, I’d say there is little work to do to make it serious. Personally, I’d try to shorten it so it is punchier and more memorable.
I can’t find the comment of Eliezer’s that inspired this, but:
The “If-you-found-out-that-God-existed scale of ambition”.
1) “Well obviously if I found out God exists I’d become religious, go to church on Sundays etc.”
2) “Actually, most religious people don’t seem to really believe what their religion says. If I found out that God existed I’d have to become a fundamentalist, preaching to save as many people from hell as I could.”
3) “Just because God exists, doesn’t mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn’t give up until I’d thought about the problem and tried all possible courses of action.”
4) “God is massively powerful. Sure I’d kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God’s power and use it to do good.”
6) “Good. I already planned to become God if possible. Now I have an existence proof.”
7) “That’s strange, I don’t remember creating that god… It must have grown from my high school science experiment when I wasn’t looking.”
But if you succeed in pulling everyone from hell, what would give their existences meaning and purpose? I mean, you just can’t thwart god’s sovereign will for his creatures without consequences. God created them for damnation as their telos from the very beginning, just as he created others to receive totally undeserved salvation.
I would rather have no purpose (originating in myself or in someone else) than have the outside-given purpose of suffering. If they cared about anything when they got out of hell, that would be their purpose though.
But I would expect them all to be insane from centuries of torture.
That was a bit of misplaced sarcasm, I assume.
I tried to imagine what a Calvinist would say.
In other words, you want to convert god into a Krell Machine that works properly?
That’s Eliezer’s life mission: preventing a UFAI and instead having an FAI.
Will, maybe quoting Nick Tarleton
Quoting Michael Vassar and myself; I think we thought of it independently.
https://twitter.com/nicktarleton/status/115615378188668928
That’s me quoting Michael Vassar
Great concept.
Also, a great example of how to singlehandedly reframe a discussion—a skill that may be a rare advantage of LWers in the social-influence sphere.
Just one suggestion: come up with a new goal to put at the top of the list, and shift the rest down. That way, “how to hack into the computer our universe is running on” would be “up to 11” on the list.
The new #1 item could be something like “We’re going to make yet another novelty t-shirt store!”
Since it’s basically a log scale in terms of outcomes, the T-shirt store might be a 0.
-10 would be “I will make a generic post on LW.”
It would be a fun exercise to flesh out the negative side of the scale.
-11: My knee had a slight itch. I reached out my hand and scratched the knee in question. The itch was relieved and I was able to continue with my activities.
-15: I will specify a single item on the negative side of the scale.
-20: I will critique a potential addition to the list without adding a suggestion of my own.
-21:
That’s not a very interesting item; it’s too similar to the -15 one.
-20 − 2j: I will object to being called “miss” (“Thank you, miss”), without offering an alternative form of address, or thinking about what the proper one would be, and then after a lot of back-and-forth, agree that “miss” was appropriate in that context.
j = sqrt(-1) -- this is kinda orthogonal to ambition, but same counterproductiveness
-25: It briefly occurs to me to think about a generic post on LW.
Nah.
11: We think we’ve figured out how to hack into the computer ALL the universes are running on.
12: Create your own universe tree.
13: The entire level 4 Tegmark multiverse.
14: A newly discovered level 5 Tegmarkian multiverse.
15: Discover the ordinal hierarchy of Tegmark universes, discover a method of constructing the set of all ordinals without contradiction, create a level-n Tegmark universe for all n.
99+ percent of people alive don’t intend to reach even number 1. They consider it megalomania of a sort.
Nevertheless, we must do 9, regardless of almost everybody’s opinion. Man got to do what man got to do.
To be fair, if 1% of people think they can found a company that defines the way more than 10% of humans relate to each other for several years, 99.9999% of them are vastly overconfident.
Nice! I’m thinking my idea of a self-adjusting currency that uses a peer-to-peer proof-of-work algorithm which solves useful NP problems as a side effect and incorporates automated credit ratings based on debt repayment and contract fulfillment rates is probably in the 3 range. But if I hook it up to a protein folding game that teaches advanced biochemistry to akrasiatic gamers as a side effect it could be boosted up to the 6 range.
If you ignore the credit rating system, and replace its hash algorithm with a variable-length (expanding) one, that’s basically what Bitcoin is. (Inversion of variable-length collision-resistant hash functions is NP-hard. I had to ask on that one.)
[EDIT: That question has been dead for a while, but now that I posted a link, it got another answer which basically repeats the first answer and needlessly retreads why I had to rephrase the question in a way such that the hash inversion problem has a variable size so that asymptotic difficulty becomes meaningful, thus being non-responsive to the question as now phrased. I hope it wasn’t someone from here that clicked that link.]
They’ve made a lot of progress getting games to derive protein-folding results, but I think there’s a lot of room for improvement there (better fidelity to the laws of the protein folding environment so players can develop an “intuition” of what shapes work, semiotics that are more suggestive of the dynamics of their subsystems, etc).
I trust you’ve looked into Ripple? It strikes me as fairly interesting, though the implementation is, at present, uninspiring.
I’ve been musing about the same sort of proof-of-work algorithm, but I haven’t come up with a good actual system yet—there’s no obvious way to decentralizedly get a guaranteed-hard new useful problem.
Interesting! I was actually inspired by some of your IRC comments.
I am thinking the problems would be produced by peers and assigned to one another using a provably random assignment scheme. When assigned a problem, each peer has the option to ignore it or attempt to solve it. If they choose to ignore it, they are assigned another one. Each time this happens, the network treats it as evidence that the problem is a hard one. If someone solves a problem scored as hard, they get a better chance of winning the block. (This would be accomplished by appending the solution as a nonce in a bitcoin-like arrangement and setting the minimum difficulty based on the hardness ranking.)
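A minimal sketch of one way that assignment-and-hardness-scoring idea could look, assuming a toy in-memory model. Every name here (the `Problem` class, the `1 + ignores` weighting, the epoch seed) is an illustrative assumption, not part of the comment’s actual proposal or of any real protocol:

```python
# Toy sketch only: "ignores" are counted as evidence of hardness, and the
# hardness score feeds the solver's chance of winning the block.
import hashlib
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Problem:
    payload: bytes                     # encoding of the useful NP problem instance
    ignores: int = 0                   # how many assigned peers chose to skip it
    solved_by: Optional[str] = None


def assign(problem: Problem, peers: List[str], seed: bytes) -> str:
    """Provably random assignment: hash the problem together with a shared
    seed so anyone can verify which peer was supposed to receive it."""
    digest = hashlib.sha256(seed + problem.payload).digest()
    return peers[int.from_bytes(digest, "big") % len(peers)]


def record_ignore(problem: Problem) -> None:
    # Each ignore is treated by the network as evidence that the problem is hard.
    problem.ignores += 1


def block_weight(problem: Problem) -> int:
    """Stand-in for 'better chance of winning the block': the solver's weight
    (or the minimum nonce difficulty they face) scales with the hardness score."""
    return 1 + problem.ignores


if __name__ == "__main__":
    peers = [f"peer{i}" for i in range(5)]
    p = Problem(payload=b"some useful NP problem instance")
    print(assign(p, peers, seed=b"epoch-7"))   # verifiable assignment
    for _ in range(3):
        record_ignore(p)                       # three peers skipped it
    print(block_weight(p))                     # solver of p would get weight 4
```

Note that nothing in this toy guarantees the problems are genuinely new or genuinely hard, which is exactly the difficulty raised a few comments up.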
Hm. It never occurred to me that provable randomness might be useful… As stated, I don’t think your scheme works because of Sybil attacks (roughly sketched in code after the list):
I come up with some easy NP problem or one already solved offline
I pass it around my 10,000 fake IRC nodes who all sign it,
and present the solution to the network
$$$
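Continuing the toy sketch above (same assumed names, reusing its `Problem`, `record_ignore`, and `block_weight`; nothing here is from a real system), the attack amounts to inflating the hardness score with fake identities:

```python
# Sybil attack on the toy scheme: reuse a problem already solved offline,
# have 10,000 fake nodes register "ignores" so it scores as very hard,
# then submit the precomputed solution and collect an outsized reward.
evil = Problem(payload=b"instance the attacker already solved offline")
for _ in range(10_000):
    record_ignore(evil)        # each fake identity pretends it skipped the problem
evil.solved_by = "attacker"
print(block_weight(evil))      # 10001 -- the hardness score is pure Sybil noise
```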
It’s interesting that 2 isn’t particularly easier than 9, assuming 9 is possible. The scale is in the effect, and though there are differences in difficulty, they’re not the point.
2 has been done many times in human history (for some reasonable definition of what companies count as “previous Apples”). 9 has never been done. Why do you think 9 is no harder than 2, assuming it is possible?
9 has been done many times in human history too, for some reasonable definition of “create a better artificial optimizer.”
Anyhow, to answer your question, I’m just guessing, based on calling “difficulty” something like marginal resources per rate of success. If you gave me 50 million dollars and said “make 2 happen,” versus if you gave me 50 million dollars and said “make 9 happen,” basically. Sure, someone is more likely to do 2 in the next few years than 9, ceteris paribus. But a lot more resources are on 2 (though there’s a bit of a problem with this metric since 9 scales worse with resources than 2).
That’s why 9 specifies “recursively self-improving”, not “build a better optimizer”, or even a recursively improving optimizer. The computer counts as recursively improving, imho; it just needs some help, so it’s not self-improving.
Presumably, if anyone ever solves 9, so did their mom.
Which is not in fact intended as a “your mom” joke, but I don’t see any way around it being read that way.
If self-improving intelligence is somewhere on the hierarchy of “better optimizers,” you just have to make better optimizers, and eventually you can make a self-improving optimizer. Easy peasy :P Note that this used the assumption that it’s possible, and requires you to be charitable about interpreting “hierarchy of optimizers.”
When I posted about the possibility of raising the sanity waterline enough to improve the comments at YouTube, it actually felt wildly ambitious.
Where would achieving that much fit on the list?
I think, given how many millions of minds it would have to affect and how much sanity increase it would require, it sounds a lot like 6 in practice. (Unless the approach is “Build a company big enough to buy Google, and then limit comments to people who are sane”, in which case, 2.)
Or you could build a Youtube competitor that draws most users away from Youtube, which is between 0.5 and 1.
You’ll need at least two levels below 1 to make it really useful.
I’m going to watch TV
I’m going to have a career
I’m going to start a successful company
I’m going to build the next Facebook …
Any past examples of level 6 and up?
Level 6 seems like it could include both language and writing. For stuff beyond that, I think you have to look at accomplishments by non-human entities. Bacteria would seem to count for level 7, humans for 8 and possibly 9 (TBD).
Nice. A possible extension would be to have other less impressive achievements measured as decimals (We’re going to incrementally improve distribution efficiency in this sector) and negative numbers for bad things…
I wonder where “We’re going to modify the process of science so that it recursively self-improves for the purpose of maximizing its benefit to humanity” would be? Would it be less or more ambitious than SI’s goal (even though it should accomplish SI’s goal by working towards FAI)?
I would put it lower than 9, because a general AI is science as software, which means it is already contained in 9.
This scale needs to go to about 100 at this rate.
10 (hacking the physics of the universe), 11 (hacking the source of the computational power running the universe, if applicable), or 12 (gaining access to literally infinite computing power i.e. becoming a god) seem to be the highest you can go. How would you propose getting past 12?
Duh. You’d have to go beyond computing! Disprove the Church-Turing thesis by building an information processor more powerful than a Turing machine.
Easy, create (and destroy for fun) your own universes and meta-universes, complete with their own demiurges who think that they are gods.
I think I’m losing track of what ‘ambition’ is supposed to mean at this level.
Both can be simulated with infinite computing power.
Probably not with countably infinite, though.
True. And I guess picking out anything interesting in a created universe is an extra problem, though one you should be capable of solving at level 9 :P
Ok, 13 or 14… Okay, I can sort of see how you might get to 100, given a few billion years to think of ideas.
Well, there are hypercomputations of various sorts, reaching and preventing bad things in specific/all other universes, changing math itself, etc.
Will Newsome is somewhere between Eliezer and a recursively self-improving AI.