Minor quibble: It’s a bit misleading to call B “experience curves”, since it is also about capital accumulation and shifts in labor allocation. Without any additional experience/learning, if demand for candy doubles, we could simply build a second candy factory that does the same thing as the first one, and hire the same number of workers for it.
I just want to register a prediction: I think something like Meta's Coconut will in the long run in fact perform much better than natural-language CoT. Perhaps not in this time frame, though.
I suspect you’re misinterpreting EY’s comment.
Here was the context:
“I think controlling Earth’s destiny is only modestly harder than understanding a sentence in English—in the same sense that I think Einstein was only modestly smarter than George W. Bush. EY makes a similar point. You sound to me like someone saying, sixty years ago: ‘Maybe some day a computer will be able to play a legal game of chess—but simultaneously defeating multiple grandmasters, that strains credibility, I’m afraid.’ But it only took a few decades to get from point A to point B. I doubt that going from ‘understanding English’ to ‘controlling the Earth’ will take that long.”
It seems clear to me EY was more saying something like “ASI will arrive soon after natural language understanding”, rather than it having anything to do with alignment specifically.
“It’s fine to say that this is a falsified prediction”
I wouldn’t even say it’s falsified. The context was: “it only took a few decades to get from [chess computer can make legal chess moves] to [chess computer beats human grandmaster]. I doubt that going from “understanding English” to “controlling the Earth” will take that long.”
So insofar as we believe ASI is coming in less than a few decades, I’d say EY’s prediction is still on track to turn out correct.
NEW EDIT: After reading three giant history books on the subject, I take back my previous edit. My original claims were correct.
Could you edit this comment to add which three books you’re referring to?
Extinction Risks from AI: Invisible to Science?
One of the more interesting dynamics of the past eight-or-so years has been watching a bunch of the people who [taught me my values] and [served as my early role models] and [were presented to me as paragons of cultural virtue] going off the deep end.
I’m curious who these people are.
We should expect regression towards the mean only if the tasks were selected for having high “improvement from small to Gopher-7”. Were they?
The reasoning was given in the comment prior to it, that we want fast progress in order to get to immortality sooner.
“But yeah, I wish this hadn’t happened.”
Who else is gonna write the article? My sense is that no one (including me) is starkly stating publicly the seriousness of the situation.

“Yudkowsky is obnoxious, arrogant, and most importantly, disliked, so the more he intertwines himself with the idea of AI x-risk in the public imagination, the less likely it is that the public will take those ideas seriously”
I’m worried about people making character attacks on Yudkowsky (or other alignment researchers) like this. The people who think they can probably solve alignment by just going full speed ahead and winging it are the arrogant ones; Yudkowsky’s arrogant-sounding comments about how we need to be very careful and slow are negligible in comparison. I’m guessing you agree with this (not sure), and we should be able to criticise him for his communication style, but I am a little worried about people publicly undermining Yudkowsky’s reputation in that context. This seems like not what we would do if we were trying to coordinate well.
“We finally managed to solve the problem of deceptive alignment while being capabilities competitive”
??????
“But I don’t think you even need Eliezer-levels-of-P(doom) to think the situation warrants that sort of treatment.”
Agreed. If a new state develops nuclear weapons, this isn’t even close to creating a 10% x-risk, yet the idea of airstrikes on nuclear enrichment facilities, even though it is very controversial, has long been very much an option on the table.
“if I thought the chance of doom was 1% I’d say ‘full speed ahead!’”
This is not a reasonable view, neither under longtermism nor under mainstream common-sense ethics. It is the view of someone willing to take unacceptable risks for the whole of humanity.
Also, there is a big difference between “Calling for violence”, and “calling for the establishment of an international treaty, which is to be enforced by violence if necessary”. I don’t understand why so many people are muddling this distinction.
You are muddling the meaning of “pre-emptive war”, or even “war”. I’m not trying to diminish the gravity of Yudkowsky’s proposal, but a missile strike on a specific compound known to contain WMD-developing technology is not a “pre-emptive war”, or even a “war”; this simply seems like an incorrect use of the terms.
“For instance, personally I think the reason so few people take AI alignment seriously is that we haven’t actually seen anything all that scary yet.”
And if this “actually scary” thing happens, people will know that Yudkowsky wrote the article beforehand, and they will know who the people are that mocked it.
I agree. Though is it just the limited context window that causes the effect? I may be mistaken, but from memory it seems like they emerge sooner than you would expect if this were the only reason (given the size of GPT-3’s context window).
“Therefore, the waluigi eigen-simulacra are attractor states of the LLM”
It seems to me like this informal argument is a bit suspect. In fact, I think this argument would not apply to Solomonoff induction.
Suppose we have two programs that each define a distribution over bitstrings. Suppose p1 assigns uniform probability to each bitstring, while p2 assigns 100% probability to the string of all zeroes (equivalently, p1 samples each bit i.i.d. Bernoulli(0.5) from {0,1}, while p2 samples 0 with probability 100%).
Suppose we use a perfect Bayesian reasoner to sample bitstrings, but we do it in precisely the same way LLMs do it according to the simulator model. That is, given a bitstring, we first compute a posterior over programs, i.e. a “superposition” over programs, which we use to sample the next bit; then we recompute the posterior, and so on.
Then I think the probability of sampling 00000000… is just 50%. I.e., I think the distribution over bitstrings you end up with is the same as if you had first sampled the program and then stuck with it.
I think there’s a messy calculation here which could be simplified (which I won’t do):
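Here is my attempt at a reconstruction (an assumption on my part: the prior puts 1/2 on each of p1 and p2). After observing n zeroes, the posterior is

P(p_1 \mid 0^n) \propto \tfrac{1}{2} \cdot 2^{-n}, \qquad P(p_2 \mid 0^n) \propto \tfrac{1}{2} \cdot 1,

so the predictive probability of the next bit being zero is

P(0 \mid 0^n) = \frac{\tfrac{1}{2} \cdot 2^{-n} \cdot \tfrac{1}{2} + \tfrac{1}{2} \cdot 1}{\tfrac{1}{2} \cdot 2^{-n} + \tfrac{1}{2}} = \frac{1 + 2^{-(n+1)}}{1 + 2^{-n}},

and the probability that the first N sampled bits are all zero telescopes:

\prod_{n=0}^{N-1} \frac{1 + 2^{-(n+1)}}{1 + 2^{-n}} = \frac{1 + 2^{-N}}{2}.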
The limit of this, as the string length goes to infinity, is 0.5.
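For what it’s worth, here’s a quick numerical sanity check (my own sketch, not part of the original argument; it assumes the 50/50 prior over p1 and p2 from the setup above):

```python
import random

# Simulate the "recompute the posterior after every bit" sampler described above.
# Prior: 0.5 on p1 (each bit uniform on {0,1}) and 0.5 on p2 (always emits 0).

def posterior_resample_string(n_bits: int) -> list[int]:
    w1, w2 = 0.5, 0.5  # posterior weights on p1 and p2
    bits = []
    for _ in range(n_bits):
        p_zero = w1 * 0.5 + w2 * 1.0          # posterior-predictive P(next bit = 0)
        bit = 0 if random.random() < p_zero else 1
        bits.append(bit)
        like1 = 0.5                            # p1 assigns 1/2 to either bit
        like2 = 1.0 if bit == 0 else 0.0       # p2 only ever emits 0
        z = w1 * like1 + w2 * like2
        w1, w2 = w1 * like1 / z, w2 * like2 / z  # Bayesian update on the sampled bit
    return bits

n_trials, n_bits = 200_000, 10
frac_all_zero = sum(
    all(b == 0 for b in posterior_resample_string(n_bits)) for _ in range(n_trials)
) / n_trials

# "Sample the program once and stick with it" gives 0.5 * 2**-n_bits + 0.5.
print(frac_all_zero, 0.5 + 0.5 * 2 ** -n_bits)
```

Up to sampling noise, both numbers come out around 0.5 + 0.5·2^{-10} ≈ 0.5005, matching the telescoping product.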
I don’t wanna try to generalize this, but based on this example it seems like, if an LLM were an actual Bayesian, waluigis would not be attractors. The informal argument is wrong because it doesn’t take into account the fact that over time you sample increasingly many non-waluigi samples, pushing down the probability of the waluigi.
Then again, the presence of a context window completely breaks the above calculation in a way that preserves the point. Maybe the context window is what makes waluigis into attractors? (Seems unlikely actually, given that the context windows are fairly big.)
Linking to my post about Dutch TV: https://www.lesswrong.com/posts/TMXEDZy2FNr5neP4L/datapoint-median-10-ai-x-risk-mentioned-on-dutch-public-tv
I’m trying to figure out to what extent the character/ground layer distinction is different from the simulacrum/simulator distinction. At some points in your comment you seem to say they are mutually inconsistent, but at other points you seem to say they are just different ways of looking at the same thing.
“The key difference is that in the three-layer model, the ground layer is still part of the model’s “mind” or cognitive architecture, while in simulator theory, the simulator is a bit more analogous to physics—it’s not a mind at all, but rather the rules that minds (and other things) operate under.”
I think this clarifies the difference for me, because as I was reading your post I was thinking: if you think of it as a simulacrum/simulator distinction, I’m not sure the surface layer and the character layer can be “in conflict” with the ground layer, because both of them are running “on top of” the ground layer, like a Windows virtual machine on a Linux PC, or like a computer simulation running inside physics. Physical phenomena can never be “in conflict” with social phenomena.
But it seems you maybe think that the character layer is actually embedded in the basic cognitive architecture. This would be a claim distinct from simulator theory, and inconsistent with it. But I am unsure this is true, because we know that the ground layer was (1) trained first (so that it’s easier for character training to work by just adjusting some parameters/prior of the ground layer), and (2) trained for much longer than the character layer (admittedly I’m not up to date on how they’re trained; maybe this is no longer true for Claude?), so it seems hard for the model to end up with a character layer that is separately embedded in the basic architecture.
Taking a more neuroscience-style (rather than psychology) analogy: it seems more likely to me that character training essentially adjusts the prior of the ground layer, but the character is still fully running on top of the ground layer, and the ground layer could still switch to any other character (it just doesn’t, because the prior is adjusted so heavily by character training). E.g. the character is not some separate subnetwork inside the model, but remains a simulated entity running on top of the model.
Do you disagree with this?