I think more than this, when you look at the labs you will often see the breakthru work was done by a small handful of people or a small team, whose direction was not popular before their success. If just those people had decided to retire to the tropics, and everyone else had stayed, I think that would have made a huge difference to the trajectory. (What does it look like if Alec Radford had decided to not pursue GPT? Maybe the idea was ‘obvious’ and someone else gets it a month later, but I don’t think so.)
I do think there’s a Virtue of Silence problem here.
Like—I was a ML expert who, roughly ten years ago, decided to not advance capabilities and instead work on safety-related things, and when the returns to that seemed too dismal stopped doing that also. How much did my ‘unilateral stopping’ change things? It’s really hard to estimate the counterfactual of how much I would have actually shifted progress; on the capabilities front I had several ‘good ideas’ years early but maybe my execution would’ve sucked, or I would’ve been focused on my bad ideas instead. (Or maybe me being at the OpenAI lunch table and asking people good questions would have sped the company up by 2%, or w/e, independent of my direct work.)
How many people are there like me? Also not obvious, but probably not that many. (I would guess most of them ended up in the MIRI orbit and I know them, but maybe there are lurkers—one of my friends in SF works for generic tech companies but is highly suspicious of working for AI companies, for reasons roughly downstream of MIRI, and there might easily be hundreds of people in that boat. But maybe the AI companies would only actually have wanted to hire ten of them, and the others objecting to AI work didn’t actually matter.)
I only like the first one more than the current cover, and I think then not by all that much. I do think this is the sort of thing that’s relatively easy to focus group / get data on, and the right strategy is probably something that appeals to airport book buyers instead of LessWrongers.
I read an advance copy of the book; I liked it a lot. I think it’s worth reading even if you’re well familiar with the overall argument.
I think there’s often been a problem, in discussing something for ~20 years, that the material is all ‘out there somewhere’ but unless you’ve been reading thru all of it, it’s hard to have it in one spot. I think this book is good at presenting a unified story, and at not getting bogged down in handling so many objections that it stops reading smoothly or quickly. (Hopefully, the linked online discussions will manage to cover the remaining space in a more appropriately non-sequential fashion.)
Blue Prince came out a week ago; it’s a puzzle game where a young boy gets a mysterious inheritance from his granduncle the baron: a giant manor house which rearranges itself every day, which he can keep if he manages to find the hidden 46th room.
The basic structure—slowly growing a mansion thru the placement of tiles—is simple enough and will be roughly familiar to anyone who’s played Betrayal at House on the Hill in the last twenty years. It’s atmospheric and interesting; I heard someone suggesting it might be this generation’s Myst.

But this generation, as you might have noticed, loves randomness and procedural generation. In Myst, you wander from place to place, noticing clues; nearly all of the action happens in your head and your growing understanding of the world. If you know the solution to the final puzzle, you can speedrun Myst in less than a minute. Blue Prince is very nearly a roguelike instead of a roguelite, with accumulated clues driving most of your progression instead of in-game unlocks. But it’s a world you build out with a game, giving you stochastic access to the puzzlebox.
This also means a lot of it ends up feeling like padding or filler. Many years ago I noticed that some games are really books or movies wrapped in a game for some reason, and that I should check whether or not I actually like the book or movie enough to play the game. (Or, with games like Final Fantasy XVI, whether I was happier just watching the cutscenes on YouTube because that would let me watch them at 2x speed.) Eliezer had a tweet a while back:
My least favorite thing about some video games, many of which I think I might otherwise have been able to enjoy, is walking-dominated gameplay. Where you spend most of your real clock seconds just walking between game locations.
Blue Prince has walking-dominated gameplay. It has pointless animations which are neat the first time but aggravating the fifth. It ends up with a pace more like a board game’s, where rather than racing from decision to decision you leisurely walk between them.
This is good in many ways—it gives you time to notice details, it gives you time to think. It wants to stop you from getting lost in resource management and tile placement and keep you lost in the puzzles. But often you end up with a lead on one of the puzzles—”I need Room X to activate Room Y to figure out something”—but don’t actually draw one of the rooms you need, or finally get both of the rooms but are missing the resources to actually use both of them.
And so you call it a day and try again. It’s like Outer Wilds in that way—you can spend as many days as you like exploring and clue-hunting—but Outer Wilds is the same every time, and if you want to chase down a particular clue you can, if you know what you’re doing. But Blue Prince will ask you for twenty minutes, and maybe deliver the clue; maybe not. Or you might learn that you needed to take more detailed notes on a particular thing, and now you have to go back to a room that doesn’t exist today—exploring again until you find it, and then exploring again until you find the room that you were in originally.
So when I found the 46th room about 11 hours in—like many puzzle games, the first ‘end’ is more like a halfway point (or less)—I felt satisfied enough. There’s more to do—more history to read, more puzzles to solve, more trophies to add to the trophy room—but the fruit are so high on the tree, and the randomly placed branches make it a bothersome climb.
The grass that can be touched is not the true grass.
What convinced me this made sense?
One of EA’s most popular and profitable games is The Sims, which famously benefits from Sim irrationality. In The Sims 5, there will be bold and exciting new ways for your Sims to behave, and they’ll be able to use our memetic virality model to have controversies and factional alignment. (Generating scissor statements is ethical so long as you’re doing it in Simlish.)
EA is investing in the hypothesis that bad writing drives underperformance. Having ratfic writers and philosophers look at Mass Effect 3 could have turned it from a disappointing series-ender (did you play Andromeda?) into a resounding triumph; Dragon Age: Veilguard, despite being positively reviewed in general, was panned for its weak writing and became embroiled in culture war controversy. We’ve thought a lot about how misbehaving gods would act, in a way that I think would have made for a more compelling story and user experience.
I didn’t expect we could do anything relating to EA’s flagship sports games (FIFA, NHL, Madden, etc.), but what astonished me was the potential to do the reverse. I don’t know if we’ll be able to get Gwern 2025 out in time, but look forward to Gwern 2026. They were practically salivating at the idea of being able to take a normally annual product, tied to sports schedules that won’t be adjusted by advancing AI progress, and adapt it to a domain which, as part of an overall hyperbolic growth curve, will generate enough new content for a new release in ~half the time of the previous one.
The short version is they’re more used to adversarial thinking and security mindset, and don’t have a culture of “fake it until you make it” or “move fast and break things”.
I don’t think it’s obvious that it goes that way, but I think it’s not obvious that it goes the other way.
This project is extremely neglected, since normal people don’t seriously consider whether orcas might be that smart.
Ok, but what matters is not what normal people are doing, but what specialists are doing. Why not try to do this as part of Project CETI?
It looks like you only have pieces with 2 connections and 6 connections, which works for maximal density. But I think you need some slack space to create pieces without the six axial lines. I think you should include the tiles with 4 connections also (and maybe even the 0-connection tile!) and the other 2-connection tiles; it increases the number by quite a bit but I think will let you make complete knots.
I haven’t thought deeply about this specific case, but I think you should consider this like any other ablation study—like, what happens if you replace the SAE with a linear probe?
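To make the ablation concrete, here is a minimal sketch of the comparison I have in mind: fit the same probe once on raw activations and once on SAE-style features, and see whether the metric you care about actually moves. Everything here is a hypothetical stand-in (the `acts`/`labels` arrays, the fake `sae_encode` feature map, the accuracy metric), not any particular codebase’s API.

```python
# Sketch of the ablation: linear probe on raw activations vs. the same probe
# on SAE-style features. All data and the "SAE" encoder are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for cached model activations and the concept labels of interest.
acts = rng.normal(size=(2000, 512))       # (n_examples, d_model)
labels = rng.integers(0, 2, size=2000)    # binary concept label

X_train, X_test, y_train, y_test = train_test_split(
    acts, labels, test_size=0.2, random_state=0
)

# Condition 1: linear probe directly on the raw activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("linear probe acc:", probe.score(X_test, y_test))

# Condition 2: the same probe on "SAE" features; here sae_encode is a fixed
# random ReLU feature map standing in for the trained SAE encoder.
W = rng.normal(size=(512, 2048))
def sae_encode(x):
    return np.maximum(x @ W, 0.0)

sae_probe = LogisticRegression(max_iter=1000).fit(sae_encode(X_train), y_train)
print("SAE-feature probe acc:", sae_probe.score(sae_encode(X_test), y_test))
```

If the plain probe already recovers whatever you were attributing to the SAE, that’s some evidence the SAE isn’t doing the interesting work.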
And then a lot of the post seems to make really quite bad arguments against forecasting AI timelines and other technologies, doing so with… I really don’t know, a rejection of bayesianism? A random invocation of an asymmetric burden of proof?
I think the position Ben (the author) has on timelines is really not that different from Eliezer’s; consider pieces like this one, which is not just about the perils of biological anchors.
I think the piece spends less time than I would like on what to do in a position of uncertainty—like, if the core problem is that we are approaching a cliff of uncertain distance, how should we proceed?—but I think it’s not particularly asymmetric.
[And—there’s something I like about realism in plans? If people are putting heroic efforts into a plan that Will Not Work, I am on the side of the person on the sidelines trying to save them their effort, or direct them towards a plan that has a chance of working. If the core uncertainty is whether or not we can get human intelligence advancement in 25 years—I’m on your side of thinking it’s plausible—then it seems worth diverting what attention we can from other things towards making that happen, and being loud about doing that.]
Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes.
I think this is my biggest disagreement with the piece. I think this is the belief I most wish 10-years-ago-us didn’t have, so that we would try something else, which might have worked better than what we got.
Or—in shopping the message around to Silicon Valley types, thinking more about the ways that Silicon Valley is the child of the US military-industrial complex, and will overestimate their ability to control what they create (or lack of desire to!). Like, I think many more ‘smart nerds’ than military-types believe that human replacement is good.
The article seems to assume that the primary motivation for wanting to slow down AI is to buy time for institutional progress. Which seems incorrect as an interpretation of the motivation. Most people that I hear talk about buying time are talking about buying time for technical progress in alignment.
I think you need both? That is—I think you need both technical progress in alignment, and agreements and surveillance and enforcement such that people don’t accidentally (or deliberately) create rogue AIs that cause lots of problems.
I think historically many people imagined “we’ll make a generally intelligent system and ask it to figure out a way to defend the Earth” in a way that I think seems less plausible to me now. It seems more like we need to have systems in place already playing defense, which ramp up faster than the systems playing offense.
My understanding is that the Lightcone Offices and Lighthaven have 1) overlapping but distinct audiences, with Lightcone Offices being more ‘EA’ in a way that seemed bad, and 2) distinct use cases, where Lighthaven is more of a conference venue with a bit of coworking whereas Lightcone Offices was basically just coworking.
By contrast, today’s AIs are really nice and ethical. They’re humble, open-minded, cooperative, kind. Yes, they care about some things that could give them instrumental reasons to seek power (eg being helpful, human welfare), but their values are great
They also aren’t facing the same incentive landscape humans are. You talk later about evolution to be selfish; not only is the story for humans far more complicated (why do humans often offer an even split in the ultimatum game?), but also humans talk a nicer game than they act (see construal level theory, or social-desirability bias). Once you start looking at AI agents who have the same affordances and incentives that humans have, I think you’ll see a lot of the same behaviors.
(There are structural differences here between humans and AIs. As an analogy, consider the difference between large corporations and individual human actors. Giant corporate chain restaurants often have better customer service than individual proprietors because they have more reputation on the line, and so are willing to pay more to not have things blow up on them. One might imagine that AIs trained by large corporations will similarly face larger reputational costs for misbehavior and so behave better than individual humans would. I think the overall picture is unclear and nuanced and doesn’t clearly point to AI superiority.)
though there’s a big question mark over how much we’ll unintentionally reward selfish superhuman AI behaviour during training
Is it a big question mark? It currently seems quite unlikely to me that we will have oversight systems able to actually detect and punish superhuman selfishness on the part of the AI.
I think it’s hard to evaluate the counterfactual where I made a blog earlier, but I think I always found the built-in audience of LessWrong significantly motivating, and never made my own blog in part because I could just post everything here. (There’s some stuff that ends up on my Tumblr or w/e instead of LW, even after ShortForm, but almost all of the nonfiction ended up here.)
Consider the reaction my comment from three months ago got.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect, because Catholicism has a longer tradition
As an additional comment, few organizations have splintered more publicly than Catholicism; it seems sort of surreal to me to not check whether or not you ended up on the right side of the splintering. [This is probably more about theological questions than it is about leadership, but as you say, the leadership is relevant!]
Note that Anthropic, for the early years, did have a plan to not ship SOTA tech like every other lab, and changed their minds. (Maybe they needed the revenue to get the investment to keep up; maybe they needed the data for training; maybe they thought the first mover effects would be large and getting lots of enterprise clients or w/e was a critical step in some of their mid-game plans.) But I think many plans here fail once considered in enough detail.