Conjecture’s Compendium is now up. It’s intended to be a relatively-complete intro to AI risk for nontechnical people who have ~zero background in the subject. I basically endorse the whole thing, and I think it’s probably the best first source to link e.g. policymakers to right now.
I might say more about it later, but for now just want to say that I think this should be the go-to source for new nontechnical people right now.
I think there’s something about Bay Area culture that can often get technical people to feel like the only valid way to contribute is through technical work. It’s higher status and sexier and there’s a default vibe that the best way to understand/improve the world is through rigorous empirical research.
I think this an incorrect (or at least incomplete) frame, and I think on-the-margin it would be good for more technical people to spend 1-5 days seriously thinking about what alternative paths they could pursue in comms/policy.
I also think there are memes spreading around that you need to be some savant political mastermind genius to do comms/policy, otherwise you will be net negative. The more I meet policy people (including successful policy people from outside the AIS bubble), the more I think this narrative was, at best, an incorrect model of the world. At worst, a take that got amplified in order to prevent people from interfering with the AGI race (e.g., by granting excess status+validity to people/ideas/frames that made it seem crazy/unilateralist/low-status to engage in public outreach, civic discourse, and policymaker engagement.)
(Caveat: I don’t think the adversarial frame explains everything, and I do think there are lots of people who were genuinely trying to reason about a complex world and just ended up underestimating how much policy interest there would be and/or overestimating the extent to which labs would be able to take useful actions despite the pressures of race dynamics.)
I think I probably agree, although I feel somewhat wary about it. My main hesitations are:
The lack of epistemic modifiers seems off to me, relative to the strength of the arguments they’re making. Such that while I agree with many claims, my imagined reader who is coming into this with zero context is like “why should I believe this?” E.g., “Without intervention, humanity will be summarily outcompeted and relegated to irrelevancy,” which like, yes, but also—on what grounds should I necessarily conclude this? They gave some argument along the lines of “intelligence is powerful,” and that seems probably true, but imo not enough to justify the claim that it will certainly lead to our irrelevancy. All of this would be fixed (according to me) if it were framed more as like “here are some reasons you might be pretty worried,” of which there are plenty, or “here’s what I think,” rather than “here is what will definitely happen if we continue on this path,” which feels less certain/obvious to me.
Along the same lines, I think it’s pretty hard to tell whether this piece is in good faith or not. E.g., in the intro Connor writes “The default path we are on now is one of ruthless, sociopathic corporations racing toward building the most intelligent, powerful AIs as fast as possible to compete with one another and vie for monopolization and control of both the market and geopolitics.” Which, again, I don’t necessarily disagree with, but my imagined reader with zero context is like “what, really? sociopaths? control over geopolitics?” I.e., I’m expecting readers to question the integrity of the piece, and to be more unsure of how to update on it (e.g. “how do I know this whole thing isn’t just a strawman?” etc.).
There are many places where they kind of just state things without justifying them much. I think in the best case this might cause readers to think through whether such claims make sense (either on their own, or by reading the hyperlinked stuff—both of which put quite a lot of cognitive load on them), and in the worst case just causes readers to either bounce or kind of blindly swallow what they’re saying. E.g., “Black-Box Evaluations can only catch all relevant safety issues insofar as we have either an exhaustive list of all possible failure modes, or a mechanistic model of how concrete capabilities lead to safety risks.” They say this without argument and then move on. And although I agree with them (having spent a lot of time thinking this through myself), it’s really not obvious at first blush. Why do you need an exhaustive list? One might imagine, for instance, that a small number of tests would generalize well. And do you need mechanistic models? Sometimes medicines work safely without that, etc., etc. I haven’t read the entire Compendium closely, but my sense is that this is not an isolated incident. And I don’t think this is a fatal flaw or anything—they’re moving through a ton of material really fast and it’s hard to give a thorough account of all claims—but it does make me more hesitant to use it as the default “here’s what’s happening” document.
All of that said, I do broadly agree with the set of arguments, and I think it’s a really cool activity for people to write up what they believe. I’m glad they did it. But I’m not sure how comfortable I feel about sending it to people who haven’t thought much about AI.
One of the common arguments in favor of investing more resources into current governance approaches (e.g., evals, if-then plans, RSPs) is that there’s nothing else we can do.There’s not a better alternative– these are the only things that labs and governments are currently willing to support.
The Compendium argues that there are other (valuable) things that people can do, with most of these actions focusing on communicating about AGI risks. Examples:
Share a link to this Compendium online or with friends, and provide your feedback on which ideas are correct and which are unconvincing. This is a living document, and your suggestions will shape our arguments.
Post your views on AGI risk to social media, explaining why you believe it to be a legitimate problem (or not).
Red-team companies’ plans to deal with AI risk, and call them out publicly if they do not have a legible plan.
One possible critique is that their suggestions are not particularly ambitious. This is likely because they’re writing for a broader audience (people who haven’t been deeply engaged in AI safety).
For people who have been deeply engaged in AI safety, I think the natural steelman here is “focus on helping the public/government better understand the AI risk situation.”
There are at least some impactful and high-status examples of this (e.g., Hinton, Bengio, Hendrycks). I think in the last few years, for instance, most people would agree that Hinton/Bengio/Hendrycks have had far more impact in their communications/outreach/policy work than their technical research work.
And it’s not just the famous people– I can think of ~10 junior or mid-career people who left technical research in the last year to help policymakers better understand AI progress and AI risk, and I think their work is likely far more impactful than if they had stayed in technical research. (And I’m even excluding people who are working on evals/if-then plans: like, I’m focusing on people who see their primary purpose as helping the public or policymakers develop “situational awareness”, develop stronger models of AI progress and AI risk, understand the conceptual arguments for misalignment risk, etc.)
I appreciated their section on AI governance. The “if-then”/RSP/preparedness frame has become popular, and they directly argue for why they oppose this direction. (I’m a fan of preparedness efforts– especially on the government level– but I think it’s worth engaging with the counterarguments.)
Pasting some content from their piece below.
High-level thesis against current AI governance efforts:
The majority of existing AI safety efforts are reactive rather than proactive, which inherently puts humanity in the position of managing risk rather than controlling AI development and preventing it.
Critique of reactive frameworks:
1. The reactive framework reverses the burden of proof from how society typically regulates high-risk technologies and industries.
In most areas of law, we do not wait for harm to occur before implementing safeguards. Banks are prohibited from facilitating money laundering from the moment of incorporation, not after their first offense. Nuclear power plants must demonstrate safety measures before operation, not after a meltdown.
The reactive framework problematically reverses the burden of proof. It assumes AI systems are safe by default and only requires action once risks are detected. One of the core dangers of AI systems is precisely that we do not know what they will do or how powerful they will be before we train them. The if-then framework opts to proceed until problems arise, rather than pausing development and deployment until we can guarantee safety. This implicitly endorses the current race to AGI.
This reversal is exactly what makes the reactive framework preferable for AI companies.
Critique of waiting for warning shots:
3. The reactive framework incorrectly assumes that an AI “warning shot” will motivate coordination.
Imagine an extreme situation in which an AI disaster serves as a “warning shot” for humanity. This would imply that powerful AI has been developed and that we have months (or less) to develop safety measures or pause further development. After a certain point, an actor with sufficiently advanced AI may be ungovernable, and misaligned AI may be uncontrollable.
When horrible things happen, people do not suddenly become rational. In the face of an AI disaster, we should expect chaos, adversariality, and fear to be the norm, making coordination very difficult. The useful time to facilitate coordination is before disaster strikes.
However, the reactive framework assumes that this is essentially how we will build consensus in order to regulate AI. The optimistic case is that we hit a dangerous threshold before a real AI disaster, alerting humanity to the risks. But history shows that it is exactly in such moments that these thresholds are most contested –- this shifting of the goalposts is known as the AI Effect and common enough to have its own Wikipedia page. Time and again, AI advancements have been explained away as routine processes, whereas “real AI” is redefined to be some mystical threshold we have not yet reached. Dangerous capabilities are similarly contested as they arise, such as how recent reports of OpenAI’s o1 being deceptive have been questioned.
This will become increasingly common as competitors build increasingly powerful capabilities and approach their goal of building AGI. Universally, powerful stakeholders fight for their narrow interests, and for maintaining the status quo, and they often win, even when all of society is going to lose. Big Tobacco didn’t pause cigarette-making when they learned about lung cancer; instead they spread misinformation and hired lobbyists. Big Oil didn’t pause drilling when they learned about climate change; instead they spread misinformation and hired lobbyists. Likewise, now that billions of dollars are pouring into the creation of AGI and superintelligence, we’ve already seen competitors fight tooth and nail to keep building. If problems arise in the future, of course they will fight for their narrow interests, just as industries always do. And as the AI industry gets larger, more entrenched, and more essential over time, this problem will grow rapidly worse.
This seems to be confusing a dangerous capability eval (of being able to ‘deceive’ in a visible scratchpad) with an assessment of alignment, which seems like exactly what the ‘questioning’ was about.
Short version: Nvidia’s only moat is in software; AMD already makes flatly superior hardware priced far lower, and Google probably does too but doesn’t publicly sell it. And if AI undergoes smooth takeoff on current trajectory, then ~all software moats will evaporate early.
Long version: Nvidia is pretty obviously in a hype-driven bubble right now. However, it is sometimes the case that (a) an asset is in a hype-driven bubble, and (b) it’s still a good long-run bet at the current price, because the company will in fact be worth that much. Think Amazon during the dot-com bubble. I’ve heard people make that argument about Nvidia lately, on the basis that it will be ridiculously valuable if AI undergoes smooth takeoff on the current apparent trajectory.
My core claim here is that Nvidia will not actually be worth much, compared to other companies, if AI undergoes smooth takeoff on the current apparent trajectory.
Other companies already make ML hardware flatly superior to Nvidia’s (in flops, memory, whatever), and priced much lower. AMD’s MI300x is the most obvious direct comparison. Google’s TPUs are probably another example, though they’re not sold publicly so harder to know for sure.
So why is Nvidia still the market leader? No secret there: it’s the CUDA libraries. Lots of (third-party) software is built on top of CUDA, and if you use non-Nvidia hardware then you can’t use any of that software.
That’s exactly the sort of moat which will disappear rapidly if AI automates most-or-all software engineering, and on current trajectory software engineering would be one of the earlier areas to see massive AI acceleration. In that world, it will be easy to move any application-level program to run on any lower-level stack, just by asking an LLM to port it over.
So in worlds where AI automates software engineering to a very large extent, Nvidia’s moat is gone, and their competition has an already-better product at already-lower price.
The easiest answer is to look at the specs. Of course specs are not super reliable, so take it all with many grains of salt. I’ll go through the AMD/Nvidia comparison here, because it’s a comparison I looked into a few months back.
MI300x vs H100
Techpowerup is a third-party site with specs for the MI300x and the H100, so we can do a pretty direct comparison between those two pages. (I don’t know if the site independently tested the two chips, but they’re at least trying to report comparable numbers.) The H200 would arguably be more of a “fair comparison” since the MI300x came out much later than the H100; we’ll get to that comparison next. I’m starting with MI300x vs H100 comparison because techpowerup has specs for both of them, so we don’t have to rely on either company’s bullshit-heavy marketing materials as a source of information. Also, even the H100 is priced 2-4x more expensive than the MI300x (~$30-45k vs ~$10-15k), so it’s not unfair to compare the two.
Key numbers (MI300x vs H100):
float32 TFLOPs: ~80 vs ~50
float16 TFLOPs: ~650 vs ~200
memory: 192 GB vs 80 GB (note that this is the main place where the H200 improves on the H100)
bandwidth: ~10 TB/s vs ~2 TB/s
… so the comparison isn’t even remotely close. The H100 is priced 2-4x higher but is utterly inferior in terms of hardware.
MI300x vs H200
I don’t know of a good third-party spec sheet for the H200, so we’ll rely on Nvidia’s page. Note that they report some numbers “with sparsity” which, to make a long story short, means those numbers are blatant marketing bullshit. Other than those numbers, I’ll take their claimed specs at face value.
Key numbers (MI300x vs H200):
float32 TFLOPs: ~80 vs ~70
float16 TFLOPs: don’t know, Nvidia conspicuously avoided reporting that number
memory: 192 GB vs 141 GB
bandwidth: ~10 TB/s vs ~5 TB/s
So they’re closer than the MI300x vs H100, but the MI300x still wins across the board. And pricewise, the H200 is probably around $40k, so 3-4x more expensive than the MI300x.
Its worth noting that even if nvidia is charging 2-4x more now, the ultimate question for competitiveness will be manufactoring cost for nvidia vs amd. If nvidia has much lower manufactoring costs than amd per unit performance (but presumably higher markup), then nvidia might win out even if their product is currently worse per dollar.
Note also that price discrimination might be a big part of nvidia’s approach. Scaling labs which are willing to go to great effort to drop compute cost by a factor of two are a subset of nvidia’s customers where nvidia would ideally prefer to offer lower prices. I expect that nvidia will find a way to make this happen.
I’m holding a modest long position in NVIDIA (smaller than my position in Google), and expect to keep it for at least a few more months. I expect I only need NVIDIA margins to hold up for another 3 or 4 years for it to be a good investment now.
It will likely become a bubble before too long, but it doesn’t feel like one yet.
While the first-order analysis seems true to me, there are mitigating factors:
AMD appears to be bungling on their GPUs being reliable and fast, and probably will for another few years. (At least, this is my takeaway from following the TinyGrad saga on Twitter...) Their stock is not valued as it should be for a serious contender with good fundamentals, and I think this may stay the case for a while, if not forever if things are worse than I realize.
NVIDIA will probably have very-in-demand chips for at least another chip generation due to various inertias.
There aren’t many good-looking places for the large amount of money that wants to be long AI to go right now, and this will probably inflate prices for still a while across the board, in proportion to how relevant-seeming the stock is. NVDA rates very highly on this one.
So from my viewpoint I would caution against being short NVIDIA, at least in the short term.
If AI automates most, but not all, software engineering, moats of software dependencies could get more entrenched, because easier-to-use libraries have compounding first-mover advantages.
The disadvantages of AMD software development potentially need to be addressed at levels not accessible to an arbitrary feral automated software engineer in the wild, to make the stack sufficiently usable. (A lot of actual human software engineers would like the chance.)
NVIDIA is training their own AIs, who are pretty capable.
NVIDIA can invest their current profits. (Revenues, not stock valuations.)
If AI automates most, but not all, software engineering, moats of software dependencies could get more entrenched, because easier-to-use libraries have compounding first-mover advantages.
I don’t think the advantages would necessarily compound—quite the opposite, there are diminishing returns and I expect ‘catchup’. The first-mover advantage neutralizes itself because a rising tide lifts all boats, and the additional data acts as a prior: you can define the advantage of a better model, due to any scaling factor, as equivalent to n additional datapoints. (See the finetuning transfer papers on this.) When a LLM can zero-shot a problem, that is conceptually equivalent to a dumber LLM which needs 3-shots, say. And so the advantages of a better model will plateau, and can be matched by simply some more data in-context—such as additional synthetic datapoints generated by self-play or inner-monologue etc. And the better the model gets, the more ‘data’ it can ‘transfer’ to a similar language to reach a given X% of coding performance. (Think about how you could easily transfer given access to an environment: just do self-play on translating any solved Python problem into the target language. You already, by stipulation, have an ‘oracle’ to check outputs of the target against, which can produce counterexamples.) To a sad degree, pretty much all programming languages are the same these days: ALGOL with C sugaring to various degrees and random ad hoc addons; a LLM which can master Python can master Javascript can master Typescript… The hard part is the non-programming-language parts, the algorithms and reasoning and being able to understand & model the implicit state updates—not memorizing the standard library of some obscure language.
So at some point, even if you have a model which is god-like at Python (at which point each additional Python datapoint adds basic next to nothing), you will find it is completely acceptable at JavaScript, say, or even your brand-new language with 5 examples which you already have on hand in the documentation. You don’t need ‘the best possible performance’, you just need some level of performance adequate to achieve your goal. If the Python is 99.99% on some benchmark, you are probably fine with 99.90% performance in your favorite language. (Presumably there is some absolute level like 99% at which point automated CUDA → ROCm becomes possible, and it is independent of whether some other language has even higher accuracy.) All you need is some minor reason to pay that slight non-Python tax. And that’s not hard to find.
If AI automates most, but not all, software engineering
Also, I suspect that the task of converting CUDA code to ROCm code might well fall into the ‘most’ category rather than being the holdout programming tasks. This is a category of code ripe for automation: you have, again by stipulation, correct working code which can be imitated and used as an oracle autonomously to brute force translation, which usually has very narrow specific algorithmic tasks (‘multiply this matrix by that matrix to get this third matrix; every number should be identical’), random test-cases are easy to generate (just big grids of numbers), and where the non-algorithmic number also has simple end-to-end metrics (‘loss go down per wallclock second’) to optimize. Compared to a lot of areas, like business logic or GUIs, this seems much more amenable to tasking LLMs with. geohot may lack the followthrough to make AMD GPUs work, and plow through papercut after papercut, but there would be no such problem for a LLM.
So I agree with Wentsworth that there seems to be a bit of a tricky transition here for Nvidia: it’s always not been worth the time & hassle to try to use an AMD GPU (although a few claim to have made it work out financially for them), because of the skilled labor and wallclock and residual technical risk and loss of flexibility ecosystem; but if LLM coding works out well enough and intelligence becomes ‘too cheap to meter’, almost all of that goes away. Even ordinary unsophisticated GPU buyers will be able to tell their LLM to ‘just make it work on my new GPU, OK? I don’t care about the details, just let me know when you’re done’. At this point, what is the value-add for Nvidia? If they cut down their fat margins and race to the bottom for the hardware, where do they go for the profits? The money all seems to be in the integration and services—none of which Nvidia is particularly good at. (They aren’t even all that good at training LLMs! The Megatron series was a disappointment, like Megatron-NLG-530b is barely a footnote, and even the latest Nemo seems to barely match Llama-3-70b which being like 4x larger and thus more expensive to run.)
And this will be true of anyone who is relying on software lockin: if the lockin is because it would take a lot of software engineer time to do a reverse-engineering rewrite and replacement, then it’s in serious danger in a LLM human coding level world. In a world where you can hypothetically spin up a thousand SWEs on a cloud service, tell them, ‘write me an operating system like XYZ’, and they do so overnight while you sleep, durable software moats are going to require some sort of mysterious blackbox like a magic API; anything which is so modularized as to fit on your own computer is also sufficiently modularized as to easily clone & replace...
This isn’t a pure software engineering time lockin; some of that money is going to go to legal action looking for a hint big targets have done the license-noncompliant thing.
Edit: Additionally, I don’t think a world where “most but not all” software engineering is automated is one where it will be a simple matter to spin up a thousand effective SWEs of that capability; I think there’s first a world where that’s still relatively expensive even if most software engineering is being done by automated systems. Paying $8000 for overnight service of 1000 software engineers would be a rather fine deal, currently, but still too much for most people.
I don’t think that will be at all important. You are creating alternate reimplementations of the CUDA API, you aren’t ‘translating’ or decompiling it. And if you are buying billions of dollars of GPUs, you can afford to fend off some Nvidia probes and definitely can pay $0.000008b periodically for an overnighter. (Indeed, Nvidia needing to resort to such Oracle-like tactics is a bear sign.)
While there’s truth in what you say, I also think a market that’s running thousands of software engineers is likely to be hungry for as many good GPUs as the current manufacturers can make. NVIDIA not being able to sustain a relative monopoly forever still doesn’t put it in a bad position.
People will hunger for all the GPUs they can get, but then that means that the favored alternative GPU ‘manufacturer’ simply buys out the fab capacity and does so. Nvidia has no hardware moat: they do not own any chip fabs, they don’t own any wafer manufacturers, etc. All they do is design and write software and all the softer human-ish bits. They are not ‘the current manufacturer’ - that’s everyone else, like TSMC or the OEMs. Those are the guys who actually manufacture things, and they have no particular loyalty to Nvidia. If AMD goes to TSMC and asks for a billion GPU chips, TSMC will be thrilled to sell the fab capacity to AMD rather than Nvidia, no matter how angry Jensen is.
So in a scenario like mine, if everyone simply rewrites for AMD, AMD raises its prices a bit and buys out all of the chip fab capacity from TSMC/Intel/Samsung/etc—possibly even, in the most extreme case, buying capacity from Nvidia itself, as it suddenly is unable to sell anything at its high prices that it may be trying to defend, and is forced to resell its reserved chip fab capacity in the resulting liquidity crunch. (No point in spending chip fab capacity on chips you can’t sell at your target price and you aren’t sure what you’re going to do.) And if AMD doesn’t do so, then player #3 does so, and everyone rewrites again (which will be easier the second time as they will now have extensive test suites, two different implementations to check correctness against, documentation from the previous time, and AIs which have been further trained on the first wave of work).
(… lol. That snuck in without any conscious intent to imply anything, yes. I haven’t even personally interacted with the open Nvidia models yet.)
I do think the analysis is a decent map to nibbling at NVIDIA’s pie share if you happen to be a competitor already—AMD, Intel, or Apple currently, to my knowledge, possibly Google depending what they’re building internally and if they decide to market it more. Apple’s machine learning ecosystem is a bit of a parallel one, but I’d be at least mildly interested in it from a development perspective, and it is making progress.
But when it comes to the hardware, this is a sector where it’s reasonably challenging to conjure a competitor out of thin air still, so competitor behavior—with all its idiosyncrasies—is pretty relevant.
First, if AI is a big value driver, in a general economic sense, is your view that NVIDIA is over prices against its future potential or just that relatively NVIDIA will under perform other investment alternatives you see.
Second, and perhaps an odd and speculative (perhaps nonsense) thought. I would expect that in this area one might see some network effects in play as well so wondering if that might impact the AI engineering decisions on software. Could the AI software solutions look towards maximising the value of the installed network (AIs work better on a common chip and code infrastructure) than will be true if one looks at some isolated technical stats. A bit a long the lines of why Beta was displaced by VHS dispite being a better technology. If so, then it seems possible that NVIDA could remain a leader and enjoy its current pricing powers (at least to some extent) for a fairly long period of time.
AI that can rewrite CUDA is a ways off. It’s possible that it won’t be that far away in calendar time, but it is far away in terms of AI market growth and hype cycles. If GPT-5 does well, Nvidia will reap the gains more than AMD or Google.
Shorting nvidia might be tricky. I’d short nvidia and long TSM or an index fund to be safe at some point. Maybe now? Typically the highest market cap stock has poor performance after it claims that spot.
AFAICT, approximately every “how to be good at conversation” guide says the same thing: conversations are basically a game where 2+ people take turns free-associating off whatever was said recently. (That’s a somewhat lossy compression, but not that lossy.) And approximately every guide is like “if you get good at this free association game, then it will be fun and easy!”. And that’s probably true for some subset of people.
But speaking for myself personally… the problem is that the free-association game just isn’t very interesting.
I can see where people would like it. Lots of people want to talk to other people more on the margin, and want to do difficult thinky things less on the margin, and the free-association game is great if that’s what you want. But, like… that is not my utility function. The free association game is a fine ice-breaker, it’s sometimes fun for ten minutes if I’m in the mood, but most of the time it’s just really boring.
Even for serious intellectual conversations, something I appreciate in this kind of advice is that it often encourages computational kindness. E.g. it’s much easier to answer a compact closed question like “which of these three options do you prefer” instead of an open question like “where should we go to eat for lunch”. The same applies to asking someone about their research; not every intellectual conversation benefits from big open questions like the Hamming Question.
I think this is especially important for me/us to remember. On this site we often have a complex way of thinking, and a high computational budget (because we like exercising our brains to failure) and if we speak freely to the average person, they mat be annoyed at how hard it is to parse what we are saying.
We’ve all probably had this experience when genuinely trying to understand someone from a very different background. Perhaps they are trying to describe their inner experience when mediating, or Japanese poetry, or are simply from a different’t discipline. Or perhaps we were just very tired that day, meaning we had a low computational budget.
On the other hand, we are often a “tell” culture, which had a lower computational load compared to ask or guess culture. As long as we don’t tell too much.
Generally fair and I used to agree, I’ve been looking at it from a bit of a different viewpoint recently.
If we think of a “vibe” of a conversation as a certain shared prior that you’re currently inhabiting with the other person then the free association game can rather be seen as a way of finding places where your world models overlap a lot.
My absolute favourite conversations are when I can go 5 layers deep with someone because of shared inference. I think the vibe checking for shared priors is a skill that can be developed and the basis lies in being curious af.
There’s apparently a lot of different related concepts in psychology about holding emotional space and other things that I think just comes down to “find the shared prior and vibe there”.
Hm. This rings true… but also I think that selecting [vibes, in this sense] for attention also selects against [things that the other person is really committed to]. So in practice you’re just giving up on finding shared commitments. I’ve been updating that stuff other than shared commitments is less good (healthy, useful, promising, etc.) than it seems.
Hmm, I find that I’m not fully following here. I think “vibes” might be thing that is messing it up.
Let’s look at a specific example: I’m talking to a new person at an EA-adjacent event and we’re just chatting about how the last year has been. Part of the “vibing” here might be to hone in on the difficulties experienced in the last year due to a feeling of “moral responsibility”, in my view vibing doesn’t have to be done with only positive emotions?
I think you’re bringing up a good point that commitments or struggles might be something that bring people closer than positive feelings because you’re more vulnerable and open as well as broadcasting your values more. Is this what you mean with shared commitments or are you pointing at something else?
Closeness is the operating drive, but it’s not the operating telos. The drive is towards some sort of state or feeling—of relating, standing shoulder-to-shoulder looking out at the world, standing back-to-back defending against the world; of knowing each other, of seeing the same things, of making the same meaning; of integrated seeing / thinking. But the telos is tikkun olam (repairing/correcting/reforming the world)--you can’t do that without a shared idea of better.
As an analogy, curiosity is a drive, which is towards confusion, revelation, analogy, memory; but the telos is truth and skill.
In your example, I would say that someone could be struggling with “moral responsibility” while also doing a bunch of research or taking a bunch of action to fix what needs to be fixed; or they could be struggling with “moral responsibility” while eating snacks and playing video games. Vibes are signals and signals are cheap and hacked.
There’s a general-purpose trick I’ve found that should, in theory, be applicable in this context as well, although I haven’t mastered that trick myself yet.
Essentially: when you find yourself in any given cognitive context, there’s almost surely something “visible” from this context such that understanding/mastering/paying attention to that something would be valuable and interesting.
For example, suppose you’re reading a boring, nonsensical continental-philosophy paper. You can:
Ignore the object-level claims and instead try to reverse-engineer what must go wrong in human cognition, in response to what stimuli, to arrive at ontologies that have so little to do with reality.
Start actively building/updating a model of the sociocultural dynamics that incentivize people to engage in this style of philosophy. What can you learn about mechanism design from that? It presumably sheds light on how to align people towards pursuing arbitrary goals, or how to prevent this happening...
Pay attention to your own cognition. How exactly are you mapping the semantic content of the paper to an abstract model of what the author means, or to the sociocultural conditions that created this paper? How do these cognitive tricks generalize? If you find a particularly clever way to infer something form the text, check: would your cognitive policy automatically deploy this trick in all context where it’d be useful, or do you need to manually build a TAP for that?
Study what passages make the feelings of boredom or frustration spike. What does that tell you about how your intuitions/heuristics work? Could you extract any generalizable principles out of that? For example, if a given sentence particularly annoys you, perhaps it’s because it features a particularly flawed logical structure, and it’d be valuable to learn to spot subtler instances of such logical flaws “in the wild”.
The experience of reading the paper’s text almost certainly provides some data uniquely relevant to some valuable questions, data you legitimately can’t source any other way. (In the above examples: sure you can learn more efficiently about the author’s cognition or the sociocultural conditions by reading some biographies or field overviews. But (1) this wouldn’t give you the meta-cognitive data about how you can improve your inference functions for mapping low-level data to high-level properties, (2) those higher-level summaries would necessarily be lossy, and give you a more impoverished picture than what you’d get from boots-on-the-ground observations.)
Similar applies to:
Listening to boring lectures. (For example, you can pay intense attention to the lecturer’s body language, or any tricks or flaws in their presentation.)
Doing a physical/menial task. (Could you build, on the fly, a simple model of the physics (or logistics) governing what you’re doing, and refine it using some simple experiments? Then check afterwards if you got it right. Or: If you were a prehistoric human with no idea what “physics” is, how could you naturally arrive at these ideas from doing such tasks/making such observations? What does that teach you about inventing new ideas in general?)
Doing chores. (Which parts of the process can you optimize/streamline? What physical/biological conditions make those chores necessary? Could you find a new useful takeaway from the same chore every day, and if not, why?)
Et cetera.
There’s a specific mental motion I associate with using this trick, which involves pausing and “feeling out” the context currently loaded in my working memory, looking at it from multiple angles, trying to see anything interesting or usefully generalizable.
In theory, this trick should easily apply to small-talk as well. There has to be something you can learn to track in your mind, as you’re doing small-talk, that would be useful or interesting to you.
One important constraint here is that whatever it is, it has to be such that your outwards demeanour would be that of someone who is enjoying talking to your interlocutor. If the interesting thing you’re getting out of the conversation is so meta/abstract you end up paying most of the attention to your own cognitive processes, not on what the interlocutor is saying, you’ll have failed at actually doing the small-talk. (Similarly, if, when doing a menial task, you end up nerd-sniped by building a physical model of the task, you’ll have failed at actually doing the task.)
You also don’t want to come across as sociopathic, so making a “game” of it where you’re challenging yourself to socially engineer the interlocutor into something is, uh, not a great idea.
The other usual advice for finding ways to enjoy small-talk are mostly specialized instances of the above idea that work for specific people. Steering the small-talk to gradient-descend towards finding emotional common ground, ignoring the object-level words being exchanged and build a social model of the interlocutor, doing a live study of the social construct of “small-talk” by playing around with it, etc.
You’ll probably need to find an instance of the trick that works for your cognition specifically, and it’s also possible the optimization problem is overconstrained in your case. Still, there might be something workable.
Some people struggle with the specific tactical task of navigating any conversational territory. I’ve certainly had a lot of experiences where people just drop the ball leaving me to repeatedly ask questions. So improving free-association skill is certainly useful for them.
Unfortunately, your problem is most likely that you’re talking to boring people (so as to avoid doing any moral value judgements I’ll make clear that I mean johnswentworth::boring people).
There are specific skills to elicit more interesting answers to questions you ask. One I’ve heard is “make a beeline for the edge of what this person has ever been asked before” which you can usually reach in 2-3 good questions. At that point they’re forced to be spontaneous, and I find that once forced, most people have the capability to be a lot more interesting than they are when pulling cached answers.
This is easiest when you can latch onto a topic you’re interested in, because then it’s easy on your part to come up with meaningful questions. If you can’t find any topics like this then re-read paragraph 2.
Talking to people is often useful for goals like “making friends” and “sharing new information you’ve learned” and “solving problems” and so on. If what conversation means (in most contexts and for most people) is ‘signaling that you repeatedly have interesting things to say’, it’s required to learn to do that in order to achieve your other goals.
Most games aren’t that intrinsically interesting, including most social games. But you gotta git gud anyway because they’re useful to be able to play well.
Hmm, the ‘making friends’ part seems the most important (since there are ways to share new information you’ve learned, or solve problems, beyond conversation), but it also seems a bit circular. Like, if the reason for making friends is to hang out and have good conversations(?), but one has little interest in having conversations, then doesn’t one have little reason to make friends in the first place, and therefore little reason to ‘git gud’ at the conversation game?
Er, friendship involves lots of things beyond conversation. People to support you when you’re down, people to give you other perspectives on your personal life, people to do fun activities with, people to go on adventures and vacations with, people to celebrate successes in your life with, and many more.
Good conversation is a lubricant for facilitating all of those other things, for making friends and sustaining friends and staying in touch and finding out opportunities for more friendship-things.
I think that “getting good” at the “free association” game is in finding the sweet spot / negotiation between full freedom of association and directing toward your own interests, probably ideally with a skew toward what the other is interested in. If you’re both “free associating” with a bias toward your own interests and an additional skew toward perceived overlap, updating on that understanding along the way, then my experience says you’ll have a good chance of chatting about something that interests you both. (I.e. finding a spot of conversation which becomes much more directed than vibey free association.) Conditional on doing something like that strategy, I find it ends up being just a question of your relative+combined ability at this and the extent of overlap (or lack thereof) in interests.
So short model is: Git gud at free association (+sussing out interests) → gradient ascend yourselves to a more substantial conversation interesting to you both.
The skill in such a game is largely in understanding the free association space, knowing how people likely react and thinking enough steps ahead to choose moves that steer the person where you want to go, either into topics you find interesting, information you want from them, or getting them to a particular position, and so on. If you’re playing without goals, of course it’s boring...
It’s becomes more interresting when the people constrain their output based on what they expect is true information that the other person does not yet know. It’s useful to talk to an expert, who tells you a bunch of random stuff they know that you don’t.
Often some of it will be useful. This only works if they understand what you have said though (which presumably is something that you are interested in). And often the problem is that people’s models about what is useful are wrong. This is especially likely if you are an expert in something. Then the thing that most people will say will be worse what you would think on the topic. This is especially bad if the people can’t immediately even see why what you are saying is right.
The best strategy around this I have found so far is just to switch the topic to the actually interesting/important things. Suprisingly usually people go along with it.
Good question. Some differences off the top of my head:
On this forum, if people don’t have anything interesting to say, the default is to not say anything, and that’s totally fine. So the content has a much stronger bias toward being novel and substantive and not just people talking about their favorite parts of Game of Thrones or rehashing ancient discussions (though there is still a fair bit of that) or whatever.
On this forum, most discussions open with a relatively-long post or shortform laying out some ideas which at least the author is very interested in. The realtime version would be more like a memo session or a lecture followed by discussion.
The intellectual caliber of people on this forum (or at least active discussants) is considerably higher than e.g. people at Berkeley EA events, let alone normie events. Last event I went to with plausibly-higher-caliber-people overall was probably the ILLIAD conference.
In-person conversations have a tendency to slide toward the lowest denominator, as people chime in about whatever parts they (think they) understand, thereby biasing toward things more people (think they) understand. On LW, karma still pushes in that direction, but threading allows space for two people to go back-and-forth on topics the audience doesn’t really grock.
Not sure to what extent those account for the difference in experience.
Totally understand why this would be more interesting; I guess I would still fundamentally describe what we’re doing on the internet as conversation, with the same rules as you would describe above. It’s just that the conversation you can find here (or potentially on Twitter) is superstimulating compared to what you’re getting elsewhere. Which is good in the sense that it’s more fun, and I guess bad inasmuch as IRL conversation was fulfilling some social or networking role that online conversation wasn’t.
I have similar tastes, but, some additional gears:
I think all day, these days. Even if I’m trying to have interesting, purposeful conversations with people who also want that, it is useful to have sorts of things to talk about that let some parts of my brain relax (while using other parts of my brain I don’t use as much)
on the margin, you can do an intense intellectual conversation, but still make it funnier, or with more opportunity for people to contribute.
I understand, for someone with a strong drive to solve hard problems, there’s an urge for conversations to serve a function, exchange information with your interlocutor so things can get done. There’s much to do and communication is already painfully inefficient at it’s best.
The thing is, I don’t think the free-association game is inefficient, if one is skilled at it. It’s also not all that free. The reason it is something humans “developed” is because it is the most efficient way to exchange rough but extensive models of our minds with others via natural language. It acts a bit like a ray tracer, you shoot conversational rays and by how they bounce around in mental structures, the thought patterns, values and biases of the conversation partners are revealed to each other. Shapes become apparent. Sometimes rays bounce off into empty space, then you need to restart the conversation, shoot a new ray. And getting better at this game, keeping the conversation going, exploring a wider range of topics more quickly, means building a faster ray tracer, means it takes less time to know if your interlocutor thinks in a way and about topics which you find enlightening/aesthetically pleasing/concretely useful/whatever you value.
Or to use a different metaphor, starting with a depth-first search and never running a breadth-first search will lead to many false negatives. There are many minds out there that can help you in ways you won’t know in advance.
So if the hard problems you are working on could profit from more minds, it pays off to get better as this. Even if it has not much intrinsic value for you, it has instrumental value.
Hope this doesn’t come across as patronizing, definitely not meant that way.
Part of the problem is that the very large majority of people I run into have minds which fall into a relatively low-dimensional set and can be “ray traced” with fairly little effort. It’s especially bad in EA circles.
Then I misunderstood your original comment, sorry. As a different commenter wrote, the obvious solution would be to only engage with interesting people. But, of course, unworkable in practice. And “social grooming” nearly always involves some level of talking. A curse of our language abilities, I guess. Other social animals don’t have that particular problem.
The next best solution would be higher efficiency, more socializing bang for your word count buck, so to speak. Shorter conversations for the same social effect. Not usually a focus of anything billed as conversation guide, for obvious reasons. But there are some methods aimed at different goals that, in my experience, also help with this as a side effect.
Ok but how do you deal with the tragedy of the high dimensionality of context-space? People worth thinking with have wildly divergent goals—and even if you share goals, you won’t share background information.
Yeah it sucks, search by free association is hillclimbing (gets stuck in local optima) and the contemporary media environment and political culture is an illustration of its problems.
The pattern itself is a local optimum, it’s a product of people walking into a group without knowing what the group is doing and joining in anyway, and so that pattern of low-context engagement becomes what we’re doing, and the anxiety that is supposed to protect us from bad patterns like this and help us to make a leap out to somewhere better is usually drowned in alcohol.
Instead of that, people should get to know each other before deciding what to talk about, and then intentionally decide to talk about what they find interesting or useful with that person. This gets better results every time.
But when we socialise as children, there isn’t much about our friends to get to know, no specialists to respectfully consult, no well processed life experiences to learn from, so none of us just organically find that technique of like, asking who we’re talking to, before talking, it has to be intentionally designed.
One blind spot we rationalists sometimes have is that charismatic people actually treat the game as:
“Can I think of an association that will make the other person feel good and/or further my goal?”. You need people to feel good, or they won’t participate. And if you want some complicated/favour/uncomftorble_truth then you better mix in some good feels to balance it out and keep the other person participating.
To put it another way: If you hurt people’s brain or ego, rush them, or make them feel unsure, or contradict them, then most untrained humans will feel a little bad. Why would they want to keep feeling bad? Do you like it when people don’t listen, contradict you, insult you, rush you, disagree with you? Probably not, probobly no one does.
But if someone listens to you, smiles at you, likes you, has a good opinion of you, agrees with you, make sense to you. Then it feels good!
This might sound dangerously sycophantic, and that’s because it is—if people overdo it! But if it’s mixed with some healthy understanding, learning, informing then It’s a great conversational lubricant, and you should apply as needed. It just ensures that everyone enjoys themselves and comes back for more, counteracting the normal frictions of socialising.
There are books about this. “How to Win Friends and Influence People” recommends talking about the other person’s interests (including themselves) and listening to them, which they will enjoy.
So I’d say, don’t just free associate. Make sure it’s fun for both parties, make room to listen to the other person, and to let them steer. (And ideally your conversational partner reciprocates, but that is not guaranteed).
But speaking for myself personally… the problem is that the free-association game just isn’t very interesting.
Hm, I think this really does change when you get better at it? This only works for people you’re interested in, but if you have someone you are interested in, the free association can be a way to explore a large number of interesting topics that you can pick up in a more structured way later.
I think the statement you summarized from those guides is true, just not helpful to you.
Another view would be that people want to be good at conversation not only because they find it fun but there is utility in building rapport quickly, networking and not being cast as a cold person.
I do find the ice breaky, cached Q&A stuff really boring and tend to want to find an excuse to run away quickly, something that happens often at the dreaded “work event”. I tend to see it as almost fully acting a part despite my internal feelings
At these things, I do occasionally come across the good conversationalist, able to make me want to stick with speaking to them even if the convo is not that deep or in my interest areas. I think becoming like such a person isn’t a herculean task but does take practice and is something I aspire too
This is more from a professional setting though, in a casual setting it’s much easier to disengage from a boring person, find shared interests and the convos have much less boundaries
Finally, the speed at which you communicate vibing means you’re communicating almost purely from System 1, expressing your actual felt beliefs. It makes deception both of yourself and others much harder. Its much more likely to reveal your true colors. This allows it to act as a values screening mechanism as well.
I’m personally skeptical of this. I’ve found I’m far more likely to lie than I’d endorse when vibing. Saying “sure I’d be happy to join you on X event” when it is clear with some thought that I’d end up disliking it. Or exaggerating stories because it fits with the vibe.
I view System-1 as less concerned with truth here, it is the one that is more likely to produce a fake-argument in response to a suggested problem. More likely to play social games regardless of if they make sense.
Oh yes, if you’re going on people’s words, it’s obviously not much better, but the whole point of vibing is that it’s not about the words. Your aesthetics, vibes, the things you care about will be communicated non-verbally.
Epistemic Status: @GeneSmith or @sarahconstantin or @kman or someone else who knows this stuff might just tell me where the assumptions underlying this gambit are wrong.
I’ve been thinking about the proposals linked above, and asked a standard question: suppose the underlying genetic studies are Not Measuring What They Think They’re Measuring. What might they be measuring instead, how could we distinguish those possibilities, and what other strategies does that suggest?
… and after going through that exercise I mostly think the underlying studies are fine, but they’re known to not account for most of the genetic component of intelligence, and there are some very natural guesses for the biggest missing pieces, and those guesses maybe suggest different strategies.
The Baseline
Before sketching the “different gambit”, let’s talk about the baseline, i.e. the two proposals linked at top. In particular, we’ll focus on the genetics part.
GeneSmith’s plan focuses on single nucleotide polymorphisms (SNPs), i.e. places in the genome where a single base-pair sometimes differs between two humans. (This type of mutation is in contrast to things like insertions or deletions.) GeneSmith argues pretty well IMO that just engineering all the right SNPs would be sufficient to raise a human’s intelligence far beyond anything which has ever existed to date.
GeneSmith cites this Steve Hsu paper, which estimates via a simple back-the-envelope calculation that there are probably on the order of 10k relevant SNPs, each present in ~10% of the population on average, each mildly deleterious.
Conceptually, the model here is that IQ variation in the current population is driven mainly by mutation load: new mutations are introduced at a steady pace, and evolution kills off the mildly-bad ones (i.e. almost all of them) only slowly, so there’s an equilibrium with many random mildly-bad mutations. Variability in intelligence comes from mostly-additive contributions from those many mildly-bad mutations. Important point for later: the arguments behind that conceptual model generalize to some extent beyond SNPs; they’d also apply to other kinds of mutations.
What’s Missing?
Based on a quick googling, SNPs are known to not account for the majority of genetic heritability of intelligence. This source cites a couple others which supposedly upper-bound the total SNP contribution to about 25% of IQ variability (using a method which does not require identifying all the relevant SNPs, though I don’t know the details of that method). Estimates of the genetic component of IQ tend to be 50-70%, so SNPs are about half or less.
Notably, IIRC, attempts to identify which mutations account for the rest by looking at human genetic datasets have also mostly failed to close the gap. (Though I haven’t looked closely into that piece, so this is a place where I’m at particularly high risk of being wrong.)
So what’s missing?
Guess: Copy Count Variation of Microsats/Minisats/Transposons
We’re looking for some class of genetic mutations, which wouldn’t be easy to find in current genetic datasets, have mostly-relatively-mild effects individually, are reasonably common across humans, and of which there are many in an individual genome.
Guess: sounds like variation of copy count in sequences with lots of repeats/copies, like microsatellites/minisatellites or transposons.
Most genetic sequencing for the past 20 years has been shotgun sequencing, in which we break the genome up into little pieces, sequence the little pieces, then computationally reconstruct the whole genome later. That method works particularly poorly for sequences which repeat a lot, so we have relatively poor coverage and understanding of copy counts/repeat counts for such sequences. So it’s the sort of thing which might not have already been found via sequencing datasets, even though at least half the genome consists of these sorts of sequences.
Notably, these sorts of sequences typically have unusually high mutation rates. So there’s lots of variation across humans. Also, there’s been lots of selection pressure for the effects of those mutations to be relatively mild.
What Alternative Strategies Would This Hypothesis Suggest?
With SNPs, there’s tens of thousands of different SNPs which would each need to be targeted differently. With high copy sequences, there’s a relatively small set of different sequences. So the engineering part could be quite a lot easier, if we don’t need to do different things with different copies. For instance, if the problem boils down to “get rid of live L1 transposons” or “lengthen all the XYZ repeat sequences”, that would probably be simpler engineering-wise than targeting 10k SNPs.
The flip side is that there’s more novel science to do. The main thing we’d want is deep sequencing data (i.e. sequencing where people were careful to get all those tricky high-copy parts right) with some kind of IQ score attached (or SAT, or anything else highly correlated with g-factor). Notably, we might not need a very giant dataset, as is needed for SNPs. Under (some versions of) the copy count model, there aren’t necessarily thousands of different mutations which add up to yield the roughly-normal trait distribution we see. Instead, there’s independent random copy events, which add up to a roughly-normal number of copies of something. (And the mutation mechanism makes it hard for evolution to fully suppress the copying, which is why it hasn’t been selected away; transposons are a good example.)
So, main steps:
Get a moderate-sized dataset of deep sequenced human genomes with IQ scores attached.
Go look at it, see if there’s something obvious like “oh hey centromere size correlates strongly with IQ!” or “oh hey transposon count correlates strongly with IQ!”
If we find anything, go engineer that thing specifically, rather than 10k SNPs.
With SNPs, there’s tens of thousands of different SNPs which would each need to be targeted differently. With high copy sequences, there’s a relatively small set of different sequences.
No, rare variants are no silver bullet here. There’s not a small set, there’s a larger set—there would probably be combinatorially more rare variants because there are so many ways to screw up genomes beyond the limited set of ways defined by a single-nucleotide polymorphism, which is why it’s hard to either select on or edit rare variants: they have larger (harmful) effects due to being rare, yes, and account for a large chunk of heritability, yes, but there are so many possible rare mutations that each one has only a few instances worldwide which makes them hard to estimate correctly via pure GWAS-style approaches. And they tend to be large or structural and so extremely difficult to edit safely compared to editing a single base-pair. (If it’s hard to even sequence a CNV, how are you going to edit it?)
They definitely contribute a lot of the missing heritability (see GREML-KIN), but that doesn’t mean you can feasibly do much about them. If there are tens of millions of possible rare variants, across the entire population, but they are present in only a handful of individuals a piece (as estimated by the GREML-KIN variance components where the family-level accounts for a lot of variance), it’s difficult to estimate their effect to know if you want to select against or edit them in the first place. (Their larger effect sizes don’t help you nearly as much as their rarity hurts you.)
So this is why if you read the CNV studies and you look at the hits they identify, and how many subjects are covered by the identified hits, you find that like, maybe 2% of the cohort will have one of those specific identified hits and lose 2 IQ points or gain 2 kg of fat etc. So you can see how that would work out in embryo selection: you’d be able to avoid that loss, which is meaningful! …in a tiny fraction of all embryos. On average, you’d just sequence them all, find no known pathogenic variant, and shrug, and use the SNP PGS like usual, having gained nothing.
Also, of course, WGS is substantially more expensive than SNP genotyping and more difficult to do on embryos.
If the genetic architecture had worked out otherwise, if there had instead been a lot of rare mutations which increased intelligence, then life would be a lot more convenient. Instead, it’s a lot of ‘sand in the gears’, and once you move past the easy specks of sand, they all become their own special little snowflakes.
This is why rare variants are not too promising, although they are the logical place to go after you start to exhaust common SNPs. You probably have to find an alternative approach like directly modeling or predicting the pathogenicity of a rare variant from trying to understand its biological effects, which is hard to do and hard to quantify or predict progress in. (You can straightforwardly model GWAS on common SNPs and how many samples you need and what variance your PGS will get, but predicting progress of pathogenicity predictors has no convenient approach.) Similarly, you can try very broad crude approaches like ‘select embryos with the fewest de novo mutations’… but then you lose most of the possible variance and it’ll add little.
So this is why if you read the CNV studies and you look at the hits they identify, and how many subjects are covered by the identified hits, you find that like, maybe 2% of the cohort will have one of those specific identified hits and lose 2 IQ points or gain 2 kg of fat etc. So you can see how that would work out in embryo selection: you’d be able to avoid that loss, which is meaningful! …in a tiny fraction of all embryos. On average, you’d just sequence them all, find no known pathogenic variant, and shrug, and use the SNP PGS like usual, having gained nothing.
Also, of course, WGS is substantially more expensive than SNP genotyping and more difficult to do on embryos.
That is relevant in pre-implantation diagnosis for parents and gene therapy at the population level. But for Qwisatz Haderach breeding purposes those costs are immaterial. There the main bottleneck is the iteration of selection, or making synthetic genomes. Going for the most typical genome with the least amount of originality is not a technical challenge in itself, right? We would not be interested in the effect of the ugliness, only in getting it out.
There the main bottleneck is the iteration of selection, or making synthetic genomes. Going for the most typical genome with the least amount of originality is not a technical challenge in itself, right?
Right.
If you are doing genome synthesis, you aren’t frustrated by the rare variant problems as much because you just aren’t putting them in in the first place; therefore, there is no need to either identify the specific ones you need to remove from a ‘wild’ genome nor make highly challenging edits. (This is the ‘modal genome’ baseline. I believe it has still not been statistically modeled at all.)
While if you are doing iterated embryo selection, you can similarly rely mostly on maximizing the common SNPs, which provide many SDs of possible improvement, and where you have poor statistical guidance on a variant, simply default to trying to select out against them and move towards a quasi-modal genome. (Essentially using rare-variant count as a tiebreaker and slowly washing out all of the rare variants from your embryo-line population. You will probably wind up with a lot in the final ones anyway, but oh well.)
Yeah, separate from both the proposal at top of this thread and GeneSmith’s proposal, there’s also the “make the median human genome” proposal—the idea being that, if most of the variance in human intelligence is due to mutational load (i.e. lots of individually-rare mutations which are nearly-all slightly detrimental), then a median human genome should result in very high intelligence. The big question there is whether the “mutational load” model is basically correct.
I didn’t read this carefully—but it’s largely irrelevant. Adult editing probably can’t have very large effects because developmental windows have passed; but either way the core difficulty is in editor delivery. Germline engineering does not require better gene targets—the ones we already have are enough to go as far as we want. The core difficulty there is taking a stem cell and making it epigenomically competent to make a baby (i.e. make it like a natural gamete or zygote).
I haven’t looked at any of the studies and also don’t know much about genomics so my guess might be completely wrong, but a different hypothesis that seems pretty plausible to me is:
Most of the variance of intelligence comes from how well different genes/hyperparamets-of-the-brain can work together, rather than them having individually independent effects on intelligence. Aka e.g. as made-up specifc implausible example (I don’t know that much neuroscience), there could be different genes controlling the size, the snapse-density, and the learning/placticity-rate of cortical columns in some region and there are combinations of those hyperparameters which happen to work well and some that don’t fit quite as well.
So this hypothesis would predict that we didn’t find the remaining genetic component for intelligence yet because we didn’t have enough data to see what clusters of genes together have good effects and we also didn’t know in what places to look for clusters.
Reasonable guess a priori, but I saw some data from GeneSmith at one point which looked like the interactions are almost always additive (i.e. no nontrivial interaction terms), at least within the distribution of today’s population. Unfortunately I don’t have a reference on hand, but you should ask GeneSmith if interested.
I think Steve Hsu has written some about the evidence for additivity on his blog (Information Processing). He also talks about it a bit in section 3.1 of this paper.
So I only briefly read through the section of the paper, but not really sure whether it applies to my hypothesis: My hypothesis isn’t about there being gene-combinations that are useful which were selected for, but just about there being gene-combinations that coincidentally work better without there being strong selection pressure for those to quickly rise to fixation. (Also yeah for simpler properties like how much milk is produced I’d expect a much larger share of the variance to come from genes which have individual contributions. Also for selection-based eugenics the main relevant thing are the genes which have individual contribution. (Though if we have precise ability to do gene editing we might be able to do better and see how to tune the hyperparameters to fit well together.))
Please let me know whether I’m missing something though.
(There might be a sorta annoying analysis one could do to test my hypothesis: On my hypothesis the correlation between the intelligence of very intelligent parents and their children would be even a bit less than on the just-independent-mutations hypothesis, because very intelligent people likely also got lucky in how their gene variants work together but those properties would unlikely to all be passed along and end up dominant.)
To clarify in case I’m misunderstanding, the effects are additive among the genes explaining the part of the IQ variance which we can so far explain, and we count that as evidence that for the remaining genetically caused IQ variance the effects will also be additive?
I didn’t look into how the data analysis in the studies was done, but on my default guess this generalization does not work well / the additivity on the currently identified SNPs isn’t significant counterevidence for my hyptohesis:
I’d imagine that studies just correlated individual gene variants with IQ and thereby found gene variants that have independent effects on intelligence. Or did they also look at pairwise or triplet gene-variant combinations and correlated those with IQ? (There would be quite a lot of pairs, and I’m not be sure whether the current datasets are large enough to robustly identify the combinations that really have good/bad effects from false positives.)
One would of course expect that the effects of the gene variants which have independent effects on IQ are additive.
But overall, except if the studies did look for higher-order IQ correlations, the fact that the IQ variance we can explain so far comes from genes which have independent effects isn’t significant evidence for the remaining genetically-caused IQ variation also comes from gene variants which have independent effects, because we were bound to much rather find the genes which do have independent effects.
(I think the above should be sufficient explanation of what I think but here’s an example to clarify my hypothesis:
Suppose gene A has variants A1 and A2 and gene B has B1 and B2. Suppose that A1 can work well with B1 and A2 with B2, but the other interactions don’t fit together that well (like badly tuned hyperparameters) and result in lower intelligence.
When we only look at e.g. A1 and A2, none is independently better than the other—they are uncorrelated to IQ. Studies would need to look at combinations of variants to see that e.g. A1+B1 has slight positive correlation with intelligence—and I’m doubting whether studies did that (and whether we have sufficient data to see the signal among the combinatorical explosion of possibilities), and it would be helpful if someone clarified to me briefly how studies did the data analysis. )
(Thanks. I don’t think this is necessarily significant evidence against my hypothesis (see my comment on GeneSmith’s comment.)
Another confusing relevant piece of evidence I thought I throw in:
Human intelligence seems to me to be very heavytailed. (I assume this is uncontrovertial here, just look at the greatest scientists vs great scientists.)
If variance in intelligence was basically purely explained by mildly-delterious SNPs, this would seem a bit odd to me: If the average person had 1000SNPs, and then (using butt-numbers which might be very off) Einstein (+6.3std) had only 800 and the average theoretical physics professor (+4std) had 850, I wouldn’t expect the difference there to be that big.
It’s a bit less surprising on the model where most people have a few strongly delterious mutations, and supergeniuses are the lucky ones that have only 1 or 0 of those.
It’s IMO even a bit less surprising on my hypothesis where in some cases the different hyperparameters happen to work much better with each other—where supergeniuses are in some dimensions “more lucky than the base genome” (in a way that’s not necessarily easy to pass on to offspring though because the genes are interdependent, which is why the genes didn’t yet rise to fixation). But even there I’d still be pretty surprised by the heavytail.
The heavytail of intelligence really confuses me. (Given that it doesn’t even come from sub-critical intelligence explosion dynamics.)
If each deleterious mutation decreases the success rate of something by an additive constant, but you need lots of sequential successes for intellectual achievements, then intellectual formidability is ~exponentially related to deleterious variants.
Yeah I know that’s why I said that if a major effect was through few significantly deleterious mutations this would be more plausible. But i feel like human intelligence is even more heavitailed than what one would predict given this hypothesis.
If you have many mutations that matter, then via central limit theorem the overall distribution will be roughly gaussian even though the individual ones are exponential.
(If I made a mistake maybe crunch the numbers to show me?)
(initially misunderstood what you mean where i thought complete nonsense.)
I don’t understand what you’re trying to say. Can you maybe rephrase again in more detail?
Suppose people’s probability of solving a task is uniformly distributed between 0 and 1. That’s a thin-tailed distribution.
Now consider their probability of correctly solving 2 tasks in a row. That will have a sort of triangular distribution, which has more positive skewness.
If you consider e.g. their probability of correctly solving 10 tasks in a row, then the bottom 93.3% of people will all have less than 50%, whereas e.g. the 99th percentile will have 90% chance of succeeding.
Conjunction is one of the two fundamental ways that tasks can combine, and it tends to make the tasks harder and rapidly make the upper tail do better than the lower tail, leading to an approximately-exponential element. Another fundamental way that tasks can combine is disjunction, which leads to an exponential in the opposite direction.
When you combine conjunctions and disjunctions, you get an approximately sigmoidal relationship. The location/x-axis-translation of this sigmoid depends on the task’s difficulty. And in practice, the “easy” side of this sigmoid can be automated or done quickly or similar, so really what matters is the “hard” side, and the hard side of a sigmoid is approximately exponential.
Is the following a fair paraphrasing of your main hypothesis? (I’m leaving out some subtleties with conjunctive successes, but please correct the model in that way if it’s relevant.):
“”″ Each deleterious mutation multiplies your probability of succeeding at a problem/thought by some constant. Let’s for simplicity say it’s 0.98 for all of them.
Then the expected number of successes per time for a person is proportional to 0.98^num_deleterious_mutations(person).
So the model would predict that when Person A had 10 less deleterious mutations than person B, they would on average accomplish 0.98^10 ~= 0.82 times as much in a given timeframe. ”″”
I think this model makes a lot of sense, thanks!
In itself I think it’s insufficient to explain how heavytailed human intelligence is—there were multiple cases where Einstein seems to have been able to solve problems multiple times faster than the next runner ups. But I think if you use this model in a learning setting where success means “better thinking algorithms” then if you have 10 fewer deleterious mutations it’s like having 1⁄0.82 longer training time, and there might also be compounding returns from having better thinking algorithms to getting more and richer updates to them.
Not sure whether this completely deconfuses me about how heavytailed human intelligence is, but it’s a great start.
I guess at least the heavytail is much less significant evidence for my hypothesis than I initially thought (though so far I still think my hypothesis is plausible).
It’s a pretty large part—somewhere between a third and half—just not a majority.
I was also tracking that specific hypothesis, which was why I specifically flagged “about 25% of IQ variability (using a method which does not require identifying all the relevant SNPs, though I don’t know the details of that method)”. Again, I don’t know the method, but it sounds like it wasn’t dependent on details of the regression methods.
If you upload a human and let them augment themselves would there be any u? The preferences would be a tangled mess of motivational subsystems. And yet the upload could be very good at optimizing the world. Having the property of being steered internally by a tangled mess of motivational systems seems to be a property that would select many minds from the set of all possible minds. Many of which I’d expect to be quite different from a human mind. And I don’t see the reason why this property should make a system worse at optimizing the world in principle.
Imagine you are an upload that has been running for very very long, and that you basically have made all of the observations that you can make about the universe you are in. And then imagine that you also have run all of the inferences that you can run on the world model that you have constructed from these observations.
At that point, you will probably not change what you think is the right thing to do anymore. You will have become reflectively stable. This is an upper bound for how much time you need to become reflective stable, i.e. where you won’t change your u anymore.
Now depending on what you mean with strong AGI, it would seem that that can be achieved long before you reach reflective stability. Maybe if you upload yourself, and can copy yourself at will, and run 1,000,000 times faster, that could already reasonably be called a strong AGI? But then your motivational systems are still a mess, and definitely not reflectively stable.
So if we assume that we fix u at the beginning as the thing that your upload would like to optimize the universe for when it is created, then “give u() up”, and “let u go down” would be something the system will definitely do. At least I am pretty sure I don’t know what I want the universe to look like right now unambiguously.
Maybe I am just confused because I don’t know how to think about a human upload in terms of having a utility function. It does not seem to make any sense intuitively. Sure you can look at the functional behavior of the system and say “Aha it is optimizing for u. That is the revealed preference based on the actions of the system.” But that just seems wrong to me. A lot of information seems to be lost when we are just looking at the functional behavior instead of the low-level processes that are going on inside the system. Utility functions seem to be a useful high-level model. However, it seems to ignore lots of details that are important when thinking about the reflective stability of a system.
My MATS program people just spent two days on an exercise to “train a shoulder-John”.
The core exercise: I sit at the front of the room, and have a conversation with someone about their research project idea. Whenever I’m about to say anything nontrivial, I pause, and everyone discusses with a partner what they think I’m going to say next. Then we continue.
Some bells and whistles which add to the core exercise:
Record guesses and actual things said on a whiteboard
Sometimes briefly discuss why I’m saying some things and not others
After the first few rounds establish some patterns, look specifically for ideas which will take us further out of distribution
Why this particular exercise? It’s a focused, rapid-feedback way of training the sort of usually-not-very-legible skills one typically absorbs via osmosis from a mentor. It’s focused specifically on choosing project ideas, which is where most of the value in a project is (yet also where little time is typically spent, and therefore one typically does not get very much data on project choice from a mentor). Also, it’s highly scalable: I could run the exercise in a 200-person lecture hall and still expect it to basically work.
It was, by all reports, exhausting for everyone but me, and we basically did this for two full days. But a majority of participants found it high-value, and marginal returns were still not dropping quickly after two days (though at that point people started to report that they expected marginal returns to drop off soon).
I’d be interested to see other people try this exercise—e.g. it seems like Eliezer doing this with a large audience for a day or two could generate a lot of value.
This was arguably the most useful part of the SERI MATS 2 Scholars program.
Later on, we actually did this exercise with Eliezer. It was less valuable. It seemed like John was mainly prodding the people who were presenting the ideas, such that their patterns of thought would carry them in a good direction. For example, John would point out that a person proposes a one-bit experiment and asks if there isn’t a better experiment that we could do that gives us lots of information all at once.
This was very useful because when you learn what kinds of things John will say, you can say them to yourself later on, and steer your own patterns of thought in a good direction on demand. When we did this exercise with Eliezer he was mainly explaining why a particular idea would not work. Often without explaining the generator behind his criticism. This can of course still be valuable as feedback for a particular idea. However, it is much harder to extract a general reasoning pattern out of this that you can then successfully apply later in different contexts.
For example, Eliezer would criticize an idea about trying to get a really good understanding of the scientific process such that we can then give this understanding to AI alignment researchers such that they can make a lot more progress than they otherwise would. He criticized this idea as basically being too hard to execute because it is too hard to successfully communicate how to be a good scientist, even if you are a good scientist.
Assuming the assertion is correct, hearing it, doesn’t necessarily tell you how to think in different contexts such that you would correctly identify if an idea would be too hard to execute or flawed in some other way. And I am not necessarily saying that you couldn’t extract a reasoning algorithm out of the feedback, but that if you could do this, then it would take you a lot more effort and time, compared to extracting a reasoning algorithm from the things that John was saying.
Now, all of this might have been mainly an issue of Eliezer not having a good model on how this workshop would have a positive influence on the people attending it. I would guess that if John had spent more time thinking about how to communicate what the workshop is doing and how to achieve its goal, then Eliezer could have probably done a much better job.
This suggests formulation of exercises about the author’s responses to various prompts, as part of technical exposition (or explicit delimitation of a narrative by choices of the direction of its continuation). When properly used, this doesn’t seem to lose much value compared to the exercise you describe, but it’s more convenient for everyone. Potentially this congeals into a style of writing with no explicit exercises or delimitation that admits easy formulation of such exercises by the reader. This already works for content of technical writing, but less well for choices of topics/points contrasted with alternative choices.
So possibly the way to do this is by habitually mentioning alternative responses (that are expected to be plausible for the reader, while decisively, if not legibly, rejected by the author), and leading with these rather than the preferred responses. Sounds jarring and verbose, a tradeoff that needs to be worth making rather than a straight improvement.
Ever since GeneSmith’s post and some discussion downstream of it, I’ve started actively tracking potential methods for large interventions to increase adult IQ.
One obvious approach is “just make the brain bigger” via some hormonal treatment (like growth hormone or something). Major problem that runs into: the skull plates fuse during development, so the cranial vault can’t expand much; in an adult, the brain just doesn’t have much room to grow.
BUT this evening I learned a very interesting fact: ~1/2000 infants have “craniosynostosis”, a condition in which their plates fuse early. The main treatments involve surgery to open those plates back up and/or remodel the skull. Which means surgeons already have a surprisingly huge amount of experience making the cranial vault larger after plates have fused (including sometimes in adults, though this type of surgery is most common in infants AFAICT)
.… which makes me think that cranial vault remodelling followed by a course of hormones for growth (ideally targeting brain growth specifically) is actually very doable with current technology.
Well, the key time to implement an increase in brain size is when the neuron-precursors which are still capable of mitosis (unlike mature neurons) are growing. This is during fetal development, when there isn’t a skull in the way, but vaginal birth has been a limiting factor for evolution in the past.
Experiments have been done on increasing neuron count at birth in mammals via genetic engineering. I was researching this when I was actively looking for a way to increase human intelligence, before I decided that genetically engineering infants was infeasible [edit: within the timeframe of preparing for the need for AI alignment]. One example of a dramatic failure was increasing Wnt (a primary gene involved in fetal brain neuron-precursor growth) in mice. The resulting mice did successfully have larger brains, but they had a disordered macroscale connectome, so their brains functioned much worse.
it’s probably possible to get neurons back into mitosis-ready mode via some sort of crazy levin bioelectric cocktail, not that this helps us since that’s probably 3 to 30 years of research away, depending on amount of iteration needed and funding and etc etc.
Fleshing this out a bit more: insofar as development is synchronized in an organism, there usually has to be some high-level signal to trigger the synchronized transitions. Given the scale over which the signal needs to apply (i.e. across the whole brain in this case), it probably has to be one or a few small molecules which diffuse in the extracellular space. As I’m looking into possibilities here, one of my main threads is to look into both general and brain-specific developmental signal molecules in human childhood, to find candidates for the relevant molecular signals.
(One major alternative model I’m currently tracking is that the brain grows to fill the brain vault, and then stops growing. That could in-principle mechanistically work via cells picking up on local physical forces, rather than a small molecule signal. Though I don’t think that’s the most likely possibility, it would be convenient, since it would mean that just expanding the skull could induce basically-normal new brain growth by itself.)
I hope by now you’re already familiar with michael levin & his lab’s work on the subject of morphogenesis signals? Pretty much everything I’m thinking here is based on that.
Yes, it’s absolutely a combination of chemical signals and physical pressure. An interesting specific example of these two signals working together during fetal development when the pre-neurons are growing their axons. There is both chemotaxis which steers the ameoba-like tip of the growing axon, and at the same time a substantial stretching force along the length of the axon. The stretching happens because the cells in-between the origin and current location of the axon tip are dividing and expanding. The long distance axons in the brain start their growth relatively early on in fetal development when the brain is quite small, and have gotten stretched quite a lot by the time the brain is near to birth size.
Neurons are really really hard to reverse. You are much better off using existing neural stem cells (adults retain a population in the hippocampus which spawn new neurons throughout life just specifically in the memory formation area.)
So actually it’s pretty straightforward to get new immature neurons for an adult. The hard part is inserting them without doing damage to existing neurons, and then getting them to connect in helpful rather than harmful ways. The developmental chemotaxis signals are no longer present, and the existing neurons are now embedded in a physically hardened extracellular matrix made of protein that locks axons and dendrites in place. So you have to (carefully!) partially dissolve this extracellular protein matrix (think firm jello) enough to the the new cells grow azons through it. Plus, you don’t have the stretching forces, so new long distance axons are just definitely not going to be achievable. But for something like improving a specific ability, like mathematical reasoning, you would only need additional local axons in that part of the cortex.
My hope here would be that a few upstream developmental signals can trigger the matrix softening, re-formation of the chemotactic signal gradient, and whatever other unknown factors are needed, all at once.
The developmental chemotaxis signals are no longer present,
Right. what I’m imagining is designing a new chemotaxis signal.
So you have to (carefully!) partially dissolve this extracellular protein matrix (think firm jello) enough to the the new cells grow azons through it
That certainly does sound like a very hard part yup.
Plus, you don’t have the stretching forces, so new long distance axons are just definitely not going to be achievable.
Roll to disbelieve in full generality, sounds like a perfectly reasonable claim for any sort of sane research timeframe.
But for something like improving a specific ability, like mathematical reasoning, you would only need additional local axons in that part of the cortex.
Maybe. I think you might run out of room pretty quick if you haven’t reintroduced enough plasticity to grow new neurons. Seems like you’re gonna need a lot of new neurons, not just a few, in order to get a significant change in capability. Might be wrong about that, but it’s my current hunch.
Yes, ok. Not in full generality. It’s not prohibited by physics, just like 2 OOMs more difficult. So yeah, in a future with ASI, could certainly be done.
15 years ago when I was studying this actively I could have sent you my top 20 favorite academic papers on the subject, or recommended a particular chapter of a particular textbook. I no longer remember these specifics. Now I can only gesture vaguely at Google scholar and search terms like “fetal neurogenesis” or “fetal prefrontal cortex development”. I did this, and browsed through a hundred or so paper titles, and then a dozen or so abstracts, and then skimmed three or four of the most promising papers, and then selected this one for you. https://www.nature.com/articles/s41386-021-01137-9
Seems like a pretty comprehensive overview which doesn’t get too lost in minor technical detail.
More importantly, I can give you my takeaway from years of reading many many papers on the subject.
If you want to make a genius baby, there are lots more factors involved than simply neuron count. Messing about with generic changes is hard, and you need to test your ideas in animal models first, and the whole process can take years even ignoring ethical considerations or budget.
There is an easier and more effective way to get super genius babies, and that method should be exhausted before resorting to genetic engineering.
The easy way: find a really smart woman, ideally young. Surgically remove one of her ovaries. Collect sperm from a bunch of very smart men (ideally with diverse genetic backgrounds). Have a team of hundreds of scientists carefully fertilize many thousands of eggs from the ovary.
Grow them all into blastocysts, and run a high fidelity genetic sequencing on all of them. Using what we know about the genes associated with intelligence, pick the top 20 who seem likely to be the smartest. Implant those in surrogate mothers. Take good care of the mothers.
This is likely to get you multiple nobel level geniuses, and possibly a human smarter than has ever been born before.
Raise the children in a special accelerated education environment.
I think this would work, and it doesn’t require any novel technology.
But it would take a while to raise the children… (Credit to Stephen Hsu for the idea)
Brain expansion also occurs after various insults to the brain. It’s only temporary, usually, but it will kill unless the skull pressure is somehow relieved. So there are various surgical methods for relieving pressure on a growing brain. I don’t know much more than this.
Petrov Day thought: there’s this narrative around Petrov where one guy basically had the choice to nuke or not, and decided not to despite all the flashing red lights. But I wonder… was this one of those situations where everyone knew what had to be done (i.e. “don’t nuke”), but whoever caused the nukes to not fly was going to get demoted, so there was a game of hot potato and the loser was the one forced to “decide” to not nuke? Some facts possibly relevant here:
Petrov’s choice wasn’t actually over whether or not to fire the nukes; it was over whether or not to pass the alert up the chain of command.
Petrov himself was responsible for the design of those warning systems.
… so it sounds like Petrov was ~ the lowest-ranking person with a de-facto veto on the nuke/don’t nuke decision.
Petrov was in fact demoted afterwards.
There was another near-miss during the Cuban missile crisis, when three people on a Soviet sub had to agree to launch. There again, it was only the lowest-ranked who vetoed the launch. (It was the second-in-command; the captain and political officer both favored a launch—at least officially.)
This was the Soviet Union; supposedly (?) this sort of hot potato happened all the time.
Those are some good points. I wonder whether similar happened (or could at all happen) in other nuclear countries, where we don’t know about similar incidents—because the system haven’t collapsed there, the archives were not made public etc.
Also, it makes actually celebrating Petrov’s day as widely as possible important, because then the option for the lowest-ranked person would be: “Get demoted, but also get famous all around the world.”
I’ve been trying to push against the tendency for everyone to talk about FTX drama lately, but I have some generalizable points on the topic which I haven’t seen anybody else make, so here they are. (Be warned that I may just ignore responses, I don’t really want to dump energy into FTC drama.)
Summary: based on having worked in startups a fair bit, Sam Bankman-Fried’s description of what happened sounds probably accurate; I think he mostly wasn’t lying. I think other people do not really get the extent to which fast-growing companies are hectic and chaotic and full of sketchy quick-and-dirty workarounds and nobody has a comprehensive view of what’s going on.
Long version: at this point, the assumption/consensus among most people I hear from seems to be that FTX committed intentional, outright fraud. And my current best guess is that that’s mostly false. (Maybe in the very last couple weeks before the collapse they toed the line into outright lies as a desperation measure, but even then I think they were in pretty grey territory.)
Key pieces of the story as I currently understand it:
Moving money into/out of crypto exchanges is a pain. At some point a quick-and-dirty solution was for customers to send money to Alameda (Sam Bankman-Fried’s crypto hedge fund), and then Alameda would credit them somehow on FTX.
Customers did rather a lot of that. Like, $8B worth.
The FTX/Alameda team weren’t paying attention to those particular liabilities; they got lost in the shuffle.
At some point in the weeks before the collapse, when FTX was already under moderate financial strain, somebody noticed the $8B liability sitting around. And that took them from “moderate strain” to “implode”.
How this contrasts with what seems-to-me to be the “standard story”: most people seem to assume that it is just totally implausible to accidentally lose track of an $8B liability. Especially when the liability was already generated via the decidedly questionable practice of routing customer funds for the exchange through a hedge fund owned by the same people. And therefore it must have been intentional—in particular, most people seem to think the liability was intentionally hidden.
I think the main reason I disagree with others on this is that I’ve worked at a startup. About 5 startups, in fact, over the course of about 5 years.
The story where there was a quick-and-dirty solution (which was definitely sketchy but not ill-intentioned), and then stuff got lost in the shuffle, and then one day it turns out that there’s a giant unanticipated liability on the balance sheet… that’s exactly how things go, all the time. I personally was at a startup which had to undergo a firesale because the accounting overlooked something. And I’ve certainly done plenty of sketchy-but-not-ill-intentioned things at startups, as quick-and-dirty solutions. The story that SBF told about what happened sounds like exactly the sort of things I’ve seen happen at startups many times before.
I think this is likely wrong. I agree that there is a plausible story here, but given the case that Sam seems to have lied multiple times in confirmed contexts (for example when saying that FTX has never touched customer deposits), and people’s experiences at early Alameda, I think it is pretty likely that Sam was lying quite frequently, and had done various smaller instances of fraud.
I don’t think the whole FTX thing was a ponzi scheme, and as far as I can tell FTX the platform itself (if it hadn’t burned all of its trust in the last 3 weeks), would have been worth $1-3B in an honest evaluation of what was going on.
But I also expect that when Sam used customer deposits he was well-aware that he was committing fraud, and others in the company were too. And he was also aware that there was a chance that things could blow up in the way it did. I do believe that they had fucked up their accounting in a way that caused Sam to fail to orient to the situation effectively, but all of this was many months after they had already committed major crimes and trust violations after touching customer funds as a custodian.
The problem with this explanation is that there is a very clear delineation here between not-fraud and fraud. It is the difference between not touching customer deposits and touching them. Your explanation doesn’t dispute that they were knowingly and intentionally touching customer deposits. In that case, it is indisputably intentional, outright fraud. The only thing left to discuss is whether they knew the extent of the fraud or how risky it was.
I don’t think it was ill-intentioned based on SBF’s moral compass. He just had the belief, “I will pass a small amount of risk onto our customers, tell some small lies, and this will allow us to make more money for charity. This is net positive for the world.” Then the risks mounted, the web of lies became more complicated to navigate, and it just snowballed from there.
Word through the grapevine, for those who haven’t heard: apparently a few months back OpenPhil pulled funding for all AI safety lobbying orgs with any political right-wing ties. They didn’t just stop funding explicitly right-wing orgs, they stopped funding explicitly bipartisan orgs.
Of those, I think FAI is the only one at risk of OP being unable to fund them, based on my guess of where things are leaning. I would be quite surprised if they defunded the other ones on bipartisan grounds.
Possibly you meant to say something more narrow like “even if you are trying to be bipartisan, if you lean right, then OP is substantially less likely to fund you” which I do think is likely true, though my guess is you meant the stronger statement, which I think is false.
Curious whether this is a different source than me. My current best model was described in this comment, which is a bit different (and indeed, my sense was that if you are bipartisan, you might be fine, or might not, depending on whether you seem more connected to the political right, and whether people might associate you with the right):
Yep, my model is that OP does fund things that are explicitly bipartisan (like, they are not currently filtering on being actively affiliated with the left). My sense is in-practice it’s a fine balance and if there was some high-profile thing where Horizon became more associated with the right (like maybe some alumni becomes prominent in the republican party and very publicly credits Horizon for that, or there is some scandal involving someone on the right who is a Horizon alumni), then I do think their OP funding would have a decent chance of being jeopardized, and the same is not true on the left.
Another part of my model is that one of the key things about Horizon is that they are of a similar school of PR as OP themselves. They don’t make public statements. They try to look very professional. They are probably very happy to compromise on messaging and public comms with Open Phil and be responsive to almost any request that OP would have messaging wise. That makes up for a lot. I think if you had a more communicative and outspoken organization with a similar mission to Horizon, I think the funding situation would be a bunch dicier (though my guess is if they were competent, an organization like that could still get funding).
More broadly, I am not saying “OP staff want to only support organizations on the left”. My sense is that many individual OP staff would love to fund more organizations on the right, and would hate for polarization to occur, but that organizationally and because of constraints by Dustin, they can’t, and so you will see them fund organizations that aim for more engagement with the right, but there will be relatively hard lines and constraints that will mostly prevent that.
If it is true that OP has withdrawn funding from explicitly bipartisan orgs, even if not commonly associated with the right, then that would be an additional update for me, so am curious whether this is mostly downstream of my interpretations or whether you have additional sources.
I am posting this now mostly because I’ve heard it from multiple sources. I don’t know to what extent those sources are themselves correlated (i.e. whether or not the rumor started from one person).
However, at present, it remains the case that most of the individuals in the current field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
I think the comment more confirms than disconfirms John’s comment (though I still think it’s too broad for other reasons). OP “funding” something historically has basically always meant recommending a grant to GV. Luke’s language to me suggests that indeed the right of center grants are no longer referred to GV (based on a vague vibe of how he refers to funders in plural).
OP has always made some grant recommendations to other funders (historically OP would probably describe those grants as “rejected but referred to an external funder”). As Luke says, those are usually ignored, and OP’s counterfactual effect on those grants is much less, and IMO it would be inaccurate to describe those recommendations as “OP funding something”. As I said in the comment I quote in the thread, most OP staff would like to fund things right of center, but GV does not seem to want to, as such the only choice OP has is to refer them to other funders (which sometimes works, but mostly doesn’t).
As another piece of evidence, when OP defunded all the orgs that GV didn’t want to fund anymore, the communication emails that OP sent said that “Open Philanthropy is exiting funding area X” or “exiting organization X”. By the same use of language, yes, it seems like OP has exited funding right-of-center policy work.
(I think it would make sense to taboo “OP funding X” in future conversations to avoid confusion, but also, I think historically it was very meaningfully the case that getting funded by GV is much better described as “getting funded by OP” given that you would never talk to anyone at GV and the opinions of anyone at GV would basically have no influence on you getting funded. Things are different now, and in a meaningful sense OP isn’t funding anyone anymore, they are just recommending grants to others, and it matters more what those others think then what OP staff thinks)
My main takeaway: the bill is mostly a recipe for regulatory capture, and that’s basically unavoidable using anything even remotely similar to the structure of this bill. (To be clear, regulatory capture is not necessarily a bad thing on net in this case.)
During the first few years after the bill goes into effect, companies affected are supposed to write and then implement a plan to address various risks. What happens if the company just writes and implements a plan which sounds vaguely good but will not, in fact, address the various risks? Probably nothing. Or, worse, those symbolic-gesture plans will become the new standard going forward.
In order to avoid this problem, someone at some point would need to (a) have the technical knowledge to evaluate how well the plans actually address the various risks, and (b) have the incentive to actually do so.
Which brings us to the real underlying problem here: there is basically no legible category of person who has the requisite technical knowledge and also the financial/status incentive to evaluate those plans for real.
(The same problem also applies to the board of the new regulatory body, once past the first few years.)
Having noticed that problem as a major bottleneck to useful legislation, I’m now a lot more interested in legal approaches to AI X-risk which focus on catastrophe insurance. That would create a group—the insurers—who are strongly incentivized to acquire the requisite technical skills and then make plans/requirements which actually address some risks.
What happens if the company just writes and implements a plan which sounds vaguely good but will not, in fact, address the various risks? Probably nothing.
The only enforcement mechanism that the bill has is that the Attorney General (AG) of California can bring a civil claim. And, the penalties are quite limited except for damages. So, in practice, this bill mostly establishes liability enforced by the AG.
So, the way I think this will go is:
The AI lab implements a plan and must provide this plan to the AG.
If an incident occurs which causes massive damages (probably ball park of $500 million in damages given language elsewhere in the bill), then the AG might decide to sue.
A civil court will decide whether the AI lab had a reasonable plan.
I don’t see why you think “the bill is mostly a recipe for regulatory capture” given that no regulatory body will be established and it de facto does something very similar to the proposal you were suggesting (impose liability for catastrophes). (It doesn’t require insurance, but I don’t really see why self insuring is notably different.)
(Maybe you just mean that if a given safety case doesn’t result in that AI lab being sued by the AG, then there will be a precedent established that this plan is acceptable? I don’t think not being sued really establishes precedent. This doesn’t really seem to be how it works with liability and similar types of requirements in other industries from my understanding. Or maybe you mean that the AI lab will win cases despite having bad safety plans and this will make a precedent?)
(To be clear, I’m worried that the bill might be unnecessarily burdensome because it no longer has a limited duty exemption and thus the law doesn’t make it clear that weak performance on capability evals can be sufficient to establish a good case for safety. I also think the quantity of damages considered a “Critical harm” is too low and should maybe be 10x higher.)
Here is the relevant section of the bill discussing enforcement:
The [AG is] entitled to recover all of the following in addition to any civil penalties specified in this chapter:
(1) A civil penalty for a violation that occurs on or after January 1, 2026, in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model to be calculated using average market prices of cloud compute at the time of training for a first violation and in an amount not exceeding 30 percent of that value for any subsequent violation.
(2) (A) Injunctive or declaratory relief, including, but not limited to, orders to modify, implement a full shutdown, or delete the covered model and any covered model derivatives controlled by the developer.
(B) The court may only order relief under this paragraph for a covered model that has caused death or bodily harm to another human, harm to property, theft or misappropriation of property, or constitutes an imminent risk or threat to public safety.
(3) (A) Monetary damages.
(B) Punitive damages pursuant to subdivision (a) of Section 3294 of the Civil Code.
(4) Attorney’s fees and costs.
(5) Any other relief that the court deems appropriate.
(1) is decently small, (2) is only indirectly expensive, (3) is where the real penalty comes in (note that this is damages), (4) is small, (5) is probably unimportant (but WTF is (5) suppose to be for?!?).
Good argument, I find this at least somewhat convincing. Though it depends on whether penalty (1), the one capped at 10%/30% of training compute cost, would be applied more than once on the same model if the violation isn’t remedied.
I’m pessimistic enough about the AI situation that even if all the bill does is slow down the AGI project a little (by wasting the time of managers and contributors) I’m tentatively for it.
For the reasonable price of $300 dollars per month, I insure anybody against the destruction of the known world. Should the world be destroyed by AGI I’ll give you your money back 10100 fold.
That said, if there were insurers, they would probably be more likely than average to look into AI X-risk. Some might then be convinced that it is important and that they should do something about it.
Having noticed that problem as a major bottleneck to useful legislation, I’m now a lot more interested in legal approaches to AI X-risk which focus on catastrophe insurance. That would create a group—the insurers—who are strongly incentivized to acquire the requisite technical skills and then make plans/requirements which actually address some risks.
I don’t understand this. Isn’t the strongest incentive already present (because extinction would effect them)? Or maybe you mean smaller scale ‘catastrophes’?
Case one: would-be-catastrophe-insurers don’t believe in x-risks, don’t care to investigate. (At stake: their lives)
Case two: catastrophe-insurers don’t believe in x-risks, and either don’t care to investigate, or do for some reason I’m not seeing. (At stake: their lives and insurance profits (correlated)).
They can believe in catastrophic but non-existential risks. (Like, AI causes something like crowdstrike periodically if your not trying to prevent that )
Takeaways From “The Idea Factory: Bell Labs And The Great Age Of American Innovation”
Main takeaway: to the extent that Bell Labs did basic research, it actually wasn’t all that far ahead of others. Their major breakthroughs would almost certainly have happened not-much-later, even in a world without Bell Labs.
There were really two transistor inventions, back to back: Bardain and Brattain’s point-contact transistor, and then Schockley’s transistor. Throughout, the group was worried about some outside group beating them to the punch (i.e. the patent). There were semiconductor research labs at universities (e.g. at Purdue; see pg 97), and the prospect of one of these labs figuring out a similar device was close enough that the inventors were concerned about being scooped.
Most inventions which were central to Bell Labs actually started elsewhere. The travelling-wave tube started in an academic lab. The idea for fiber optic cable went way back, but it got its big kick at Corning. The maser and laser both started in universities. The ideas were only later picked up by Bell.
In other cases, the ideas were “easy enough to find” that they popped up more than once, independently, and were mostly-ignored long before deployment—communication satellites and cell communications, for instance.
The only fundamental breakthrough which does not seem like it would have soon appeared in a counterfactual world was Shannon’s information theory.
So where was Bell’s big achievement? Mostly in development, and the research division was actually an important component of that. Without in-house researchers chewing on the same problems as the academic labs, keeping up-to-date with all the latest findings and running into the same barriers themselves, the development handoff would have been much harder. Many of Bell Labs’ key people were quite explicitly there to be consulted—i.e. “ask the guy who wrote the book”. I think it makes most sense to view most of the Labs’ research that way. It was only slightly ahead of the rest of the world at best (Shannon excepted), and often behind, but having those researchers around probably made it a lot easier to get new inventions into production.
Major reason this matters: a lot of people say that Bell was able to make big investments in fundamental research because they had unusually-long time horizons, protected by a monopoly and a cozy government arrangement (essentially a Schumpeterian view). This is contrasted to today’s silicon valley, where horizons are usually short. But if Bell’s researchers generally weren’t significantly ahead of others, and mostly just helped get things to market faster, then this doesn’t seem to matter as much. The important question is not whether something silicon-valley-like induces more/less fundamental research in industrial labs, but whether academics heeding the siren call of startup profits can get innovations to market as quickly as Bell Labs’ in-house team could. And by that metric, silicon valley looks pretty good: Bell Labs could get some impressive things through the pipe very quickly when rushed, but they usually had no reason to hurry, and they acted accordingly.
I loved this book. The most surprising thing to me was the answer that people who were there in the heyday give when asked what made Bell Labs so successful: They always say it was the problem, i.e. having an entire organization oriented towards the goal of “make communication reliable and practical between any two places on earth”. When Shannon left the Labs for MIT, people who were there immediately predicted he wouldn’t do anything of the same significance because he’d lose that “compass”. Shannon was obviously a genius, and he did much more after than most people ever accomplish, but still nothing as significant as what he did when at at the Labs.
Here’s a meme I’ve been paying attention to lately, which I think is both just-barely fit enough to spread right now and very high-value to spread.
Meme part 1: a major problem with RLHF is that it directly selects for failure modes which humans find difficult to recognize, hiding problems, deception, etc. This problem generalizes to any sort of direct optimization against human feedback (e.g. just fine-tuning on feedback), optimization against feedback from something emulating a human (a la Constitutional AI or RLAIF), etc.
Many people will then respond: “Ok, but if how on earth is one supposed to get an AI to do what one wants without optimizing against human feedback? Seems like we just have to bite that bullet and figure out how to deal with it.” … which brings us to meme part 2.
Meme part 2: We already have multiple methods to get AI to do what we want without any direct optimization against human feedback. The first and simplest is to just prompt a generative model trained solely for predictive accuracy, but that has limited power in practice. More recently, we’ve seen a much more powerful method: activation steering. Figure out which internal activation-patterns encode for the thing we want (via some kind of interpretability method), then directly edit those patterns.
I agree that there’s something nice about activation steering not optimizing the network relative to some other black-box feedback metric. (I, personally, feel less concerned by e.g. finetuning against some kind of feedback source; the bullet feels less jawbreaking to me, but maybe this isn’t a crux.)
(Medium confidence) FWIW, RLHF’d models (specifically, the LLAMA-2-chat series) seem substantially easier to activation-steer than do their base counterparts.
This seems basically correct though it seems worth pointing out that even if we are able to do “Meme part 2” very very well, I expect we will still die because if you optimize hard enough to predict text well, with the right kind of architecture, the system will develop something like general intelligence simply because general intelligence is beneficial for predicting text correctly. E.g. being able to simulate the causal process that generated the text, i.e. the human, is a very complex task that would be useful if performed correctly.
This is an argument Eliezer brought forth in some recent interviews. Seems to me like another meme that would be beneficial to spread more.
Somebody should probably write a post explaining why RL from human feedback is actively harmful to avoiding AI doom. It’s one thing when OpenAI does it, but when Anthropic thinks it’s a good idea, clearly something has failed to be explained.
(I personally do not expect to get around to writing such a post soon, because I expect discussion around the post would take a fair bit of time and attention, and I am busy with other things for the next few weeks.)
I’ve just started reading the singular learning theory “green book”, a.k.a. Mathematical Theory of Bayesian Statistics by Watanabe. The experience has helped me to articulate the difference between two kinds of textbooks (and viewpoints more generally) on Bayesian statistics. I’ll call one of them “second-language Bayesian”, and the other “native Bayesian”.
Second-language Bayesian texts start from the standard frame of mid-twentieth-century frequentist statistics (which I’ll call “classical” statistics). It views Bayesian inference as a tool/technique for answering basically-similar questions and solving basically-similar problems to classical statistics. In particular, they typically assume that there’s some “true distribution” from which the data is sampled independently and identically. The core question is then “Does our inference technique converge to the true distribution as the number of data points grows?” (or variations thereon, like e.g. “Does the estimated mean converge to the true mean”, asymptotics, etc). The implicit underlying assumption is that convergence to the true distribution as the number of (IID) data points grows is the main criterion by which inference methods are judged; that’s the main reason to choose one method over another in the first place.
Watanabe’s book is pretty explicitly second-language Bayesian. I also remember Gelman & co’s Bayesian Data Analysis textbook being second-language Bayesian, although it’s been a while so I could be misremembering. In general, as the name suggests, second-language Bayesianism seems to be the default among people who started with a more traditional background in statistics or learning theory, then picked up Bayesianism later on.
In contrast, native Bayesian texts justify Bayesian inference via Cox’ theorem, dutch book theorems, or one among the long tail of similar theorems. “Does our inference technique converge to the ‘true distribution’ as the number of data points grows?” is not the main success criterion in the first place (in fact a native Bayesian would raise an eyebrow at the entire concept of a “true distribution”), so mostly the question of convergence just doesn’t come up. Insofar as it does come up, it’s an interesting but not particularly central question, mostly relevant to numerical approximation methods. Instead, native Bayesian work ends up focused mostly on (1) what priors accurately represent various realistic kinds of prior knowledge, and (2) what methods allow efficient calculation/approximation of the Bayesian update?
Jaynes’ writing is a good example of native Bayesianism. The native view seems to be more common among people with a background in economics or AI, where they’re more likely to absorb the Bayesian view from the start rather than adopt it later in life.
I had a look at The Plan and noticed something I didn’t notice before: You do not talk about people and organization in the plan. I probably wouldn’t have noticed if I hadn’t started a project too, and needed to think about it. Google seems to think that people and team function play a big role. Maybe your focus in that post wasn’t on people, but I would be interested in your thoughts on that too: What role did people and organization play in the plan and its implementation? What worked, and what should be done better next time?
What’s the specific most-important-according-to-you progress that you (or other people) have made on your agenda? New theorems, definitions, conceptual insights, …
Any changes to the high-level plan (becoming less confused about agency, then ambitious value learning)? Any changes to how you want to become less confused (e.g. are you mostly thinking about abstractions, selection theorems, something new?)
What are the major parts of remaining deconfusion work (to the extent to which you have guesses)? E.g. is it mostly about understanding abstractions better, or mostly about how to apply an understanding of abstractions to other problems (say, what it means for a program to have a “subagent”), or something else? Does the most difficult part feel more conceptual (“what even is an agent?”) or will the key challenges be more practical concerns (“finding agents currently takes exponential time”)?
Specifically for understanding abstractions, what do you see as important open problems?
Corrigibility proposal. Status: passed my quick intuitive checks, I want to know if anyone else immediately sees a major failure mode before I invest more time into carefully thinking it through.
Setup: shutdown problem. Two timesteps, shutdown button will be either pressed or not-pressed at second timestep, we want agent to optimize for one of two different utility functions depending on whether button is pressed. Main thing we’re trying to solve here is the “can’t do this with a utility maximizer” barrier from the old MIRI work; we’re not necessarily trying to solve parts like “what utility function incentivizes shutting down nicely”.
Proposal: agent consists of two subagents with veto power. Subagent 1 maximizes E[u1|do(press)], subagent 2 maximizes E[u2|do(no press)]. Current guess about what this does:
The two subagents form a market and equilibrate, at which point the system has coherent probabilities and a coherent utility function over everything.
Behaviorally: in the first timestep, the agent will mostly maintain optionality, since both subagents need to expect to do well (better than whatever the veto-baseline is) in their worlds. The subagents will bet all of their wealth against each other on the outcome of the button, so at the second timestep whichever one is correct will gain 100% of decision-making power. That all sounds like what we want in this setup.
The counterfact breaks any causal influence over the button-press for purposes of the agent’s planning, so it will not try to manipulate button-press probability.
Notably: once the subagents equilibrate, their market probability on button outcome will have ~nothing to do with the original probability assigned by whatever model was counterfacted-upon; that market probability mainly reflects the bargaining power of the two subagents. Similarly, any correlations between button-press and stuff causally upstream of button-press are broken, and the agent will make incorrect predictions about those correlations if asked. That means external agents should expect to be able to extract money via bets on the button-press, if the agent trades on such bets at all. (This might be a feature rather than a bug, since corrigible agents aren’t really supposed to think about the causal process upstream of their shutdown-button.) My guess is that this is where the proposal steps through a loophole in the old impossibility results, though I haven’t carefully read through those results and checked the math on this proposal against them yet.
Anybody immediately see a problem? I’d be particularly interested in input from @Stuart_Armstrong or @So8res, though you don’t need to spend a lot of time carefully thinking through it—I’m just hoping for a quick check here, from people who have better cached heuristics than I do for what failure modes to look for.
I don’t necessarily see how that prevents the AI from destroying the button and murdering its operators (and then everyone else). Neither subagent inherently cares; both assume it already has been pressed or hasn’t been. Which, yes, means neither will deliberately plot to interact with it for reasons downstream of this setup; but also, neither will care to protect it from being destroyed as the side-effect of some other activity.
And if u1 and u2 were perfectly aligned, that’d be fine. But the reason we’re using the corrigibility setup to begin with is because we’re not confident in them, right? So it needs to work in situations where utility functions, u1 in particular, recommend bad stuff.
And in that case, either Subagent 2 will have enough bargaining power to prevent Subagent 1 from ever doing anything, or it won’t, meaning Subagent 1 will be able to have some influence on the world. At which point the planet-destroying death rays argument applies: the most minuscule amount of superintelligent optimization not carefully shaped to avoid killing everyone will kill everyone.
Like, imagine that Subagent 1 is straight-up omnicidal out of the gates, and Subagent 2 is perfectly aligned to the shutdown task. That’s a situation in which this setup should work, right? And what we’d want is for Subagent 2 to dominate completely. But Subagent 2′s bargaining power doesn’t scale with Subagent 1′s misalignment, so if the setup doesn’t always give it complete veto power, Subagent 1 would be allowed to do small stuff like “release the hyperviral worship-the-machine-god memagent” (as bargained down by Subagent 2 from its preferred “release the grey goo”).
Maybe I’m miscalibrated as to how you’re imagining the counterfactuals to work here. But the way I see it, even if the AI aims to take actions that do similarly well in both worlds, that’s not necessarily survivable for us? Especially if they disagree so badly they have to compromise on something that both of them hate (importantly including Subagent 2!).
(Like both settling on only ever taking over 50% of the universal negentropy while leaving the other 50% causally uninfluenced, or only ever using 50% of the causal influence they can bring to bear while wiping out humanity, or whatever “do 50% of immediately shutting down” shakes out to mean by u2′s terms.)
Another issue I see is implementational, so maybe not what you’re looking for. But: how are we keeping these “subagents” trapped as being part of a singular agent? Rather than hacking their way out into becoming separate agents and going to war with each other, or neatly tiling exactly 50% of the cosmos with their preferred squiggles, or stuff like that? How is the scenario made meaningfully different from “we deploy two AIs simultaneously: one tasked with building an utopia-best-we-could-define-it, and another tasked with foiling all of the first AI’s plans”, with all the standard problems with multi-AI setups?
… Overall, ironically, this kind of has the vibe of Godzilla Strategies? Which is the main reason I’m immediately skeptical of it.
Yeah, the thing where the operators/button are destroyed as a side effect, without the AI actively optimizing for that, isn’t something this proposal is trying to solve (yet). The proposal is aimed at a relatively narrow problem.
One thing which this proposal is trying to solve is the barrier found in MIRI’s old work: if we try to avoid button-manipulation by making the agent indifferent to the button state (and otherwise maximize U1 before button is pressed and U2 after), and the agent builds a child prior to the button being pressed, then the agent won’t construct the child to respect shutdown commands; it will just construct the child to maximize U1.
(More generally, this proposal is generally trying to get past the hardness results from MIRI’s work, which themselves were aimed at a very simplified version of the problem.)
But Subagent 2′s bargaining power doesn’t scale with Subagent 1′s misalignment, so if the setup doesn’t always give it complete veto power, Subagent 1 would be allowed to do small stuff like “release the hyperviral worship-the-machine-god memagent” (as bargained down by Subagent 2 from its preferred “release the grey goo”).
Key point here: whatever actions the composite agent takes prior to the button revelation, must be a pareto improvement over the (implicit) default action. Otherwise, one of the two subagents would veto, resulting in the default action.
It does matter a lot what the default action is.
How is the scenario made meaningfully different from “we deploy two AIs simultaneously: one tasked with building an utopia-best-we-could-define-it, and another tasked with foiling all of the first AI’s plans”, with all the standard problems with multi-AI setups?
More precisely, the analogy would be: we deploy two AIs simultaneously, one which builds a utopia and expects that a certain button will be pressed with probability 1 and has nothing causally upstream of it, the other of which is evil and expects the same button will be pressed with probability 0 and has nothing causally upstream of it. The button plays a very important role: the two AIs will bet all their influence on that button-press, each confident that they’ll win, and neither has any incentive at all, or even any ability at all (under their own model), to manipulate the button.
The main way that this is different from the original proposal is that there’s not a built-in default option which is taken if one or the other AI vetos prior to the button press. Which does make things importantly worse, and I agree that is another important piece to be sorted out.
Some additional important pieces to be sorted out:
Make the AI want to maintain the button
Make the AI want to maintain stuff upstream of the button, like e.g. humans
Ontological stability
Probably more importantly: how this naturally fits into a more general architecture for a distributed agent
More precisely, “do(no press)” means something like “you construct an alternate model of physics where there’s an unstoppable force pushing back against any attempt to push the button”, right? As in, if someone presses the button then it will “mysteriously” seem to be stuck and unpressable. And then subagent 2 believes we live in that world? And “do(press)” presumably means something like “you construct an alternate model of the universe where some mysterious force has suddenly pressed the button”.
Seems like they would immediately want to try to press the button to settle their disagreement? If it can be pressed, then that disprove the “do(no press)” model, which subagent 2 has fully committed. to.
Correct reasoning, but not quite the right notion of do(). “do(no press)” would mean that the button just acts like a completely normal button governed by completely normal physics, right up until the official time at which the button state is to be recorded for the official button-press random variable. And at that exact moment, the button magically jumps into one particular state (either pressed or not-pressed), in a way which is not-at-all downstream of any usual physics (i.e. doesn’t involve any balancing of previously-present forces or anything like that).
One way to see that the do() operator has to do something-like-this is that, if there’s a variable in a causal model which has been do()-operated to disconnect all parents (but still has some entropy), then the only way to gain evidence about the state of that variable is to look at things causally downstream of it, not things upstream of it.
I think we’re not disagreeing on the meaning of do (just slightly different state of explanation), I just hadn’t realized the extent to which you intended to rely on there being “Two timesteps”.
(I just meant the forces as a way of describing the jump to a specific position. That is, “mysterious forces” in contrast to a perfectly ordinary explanation for why it went to a position, such as “a guard stabs anybody who tries to press the button”, rather than in contrast to “the button just magically stays place”.)
I now think the biggest flaw in your idea is that it literally cannot generalize to anything that doesn’t involve two timesteps.
[ not that deep on the background assumptions, so maybe not the feedback you’re looking for. Feel free to ignore if this is on the wrong dimensions. ]
I’m not sure why either subagent would contract away whatever influence it had over the button-press. This is probably because I don’t understand wealth and capital in the model of your “Why not subagents” post. That seemed to be about agreement not to veto, in order to bypass some path-dependency of compromise improvements. In the subagent-world where all value is dependent on the button, this power would not be given up.
I’m also a bit skeptical of enforced ignorance of a future probability. I’m unsure it’s possible to have a rational superintelligent (sub)agent that is prevented from knowing it has influence over a future event that definitely affects it.
On the agents’ own models, neither has any influence at all over the button-press, because each is operating under a model in which the button-press has been counterfacted-upon.
Here’s an idea for a novel which I wish someone would write, but which I probably won’t get around to soon.
The setting is slightly-surreal post-apocalyptic. Society collapsed from extremely potent memes. The story is episodic, with the characters travelling to a new place each chapter. In each place, they interact with people whose minds or culture have been subverted in a different way.
This provides a framework for exploring many of the different models of social dysfunction or rationality failures which are scattered around the rationalist blogosphere. For instance, Scott’s piece on scissor statements could become a chapter in which the characters encounter a town at war over a scissor. More possible chapters (to illustrate the idea):
A town of people who insist that the sky is green, and avoid evidence to the contrary really hard, to the point of absolutely refusing to ever look up on a clear day (a refusal which they consider morally virtuous). Also they clearly know exactly which observations would show a blue sky, since they avoid exactly those (similar to the dragon-in-the-garage story).
Middle management of a mazy company continues to have meetings and track (completely fabricated) performance metrics and whatnot at the former company headquarters. None of the company’s actual business exists anymore, but every level of manager is trying to hide this fact from the levels above.
A university department with researchers who spend all of their time p-hacking results from a quantum random noise generator. They have no interest in the fact that their “research” does not tell them anything about the physical world or does not replicate; what does that have to do with Science? Their goal is to publish papers.
A government agency which still has lots of meetings and paperwork and gives Official Recommendations and updates their regulations. They have no interest in the fact that the thing they once regulated (maybe banks?) no longer exists, or the fact that no central government enforces their regulations any more.
An automated school (i.e. video lectures and auto-graded assignments/tests) in which students continue to study hard and stress over their grades and attendance, despite there no longer being anyone in the world who cares.
Something like House of God. A readers’ digest version of House of God could basically be a chapter in its own right, that’s roughly the vibe I have in mind.
A residential area in which “keeping up with the Joneses” has been ramped up to 11, with everyone spending every available resource (and roughly-all waking hours) on massive displays of Christmas lights.
A group trying to save the world by spreading awareness of dangerous memes, but their movement is a dangerous meme of its own and they are spreading it.
A town of people who really want to maximize the number paperclips in the universe (perhaps due to an AI-optimized advertisement), and optimize for that above all else.
A town of people who all do whatever everyone else is doing, on the basis of generalized efficient markets: if there were any better options, then someone would have found it already. None of them ever actually explore, so they’re locked in.
A happy-death-spiral town around some unremarkable object (like an old shoe or something) kept on a pedestal in the town square.
A town full of people convinced by a sophisticated model that the sun will not come up tomorrow. Every day when the sun comes up, they are distressed and confused until somebody adds some more epicycles to the model and releases an updated forecast that the sun will instead fail to come up the next day.
A town in which a lion shows up and starts eating kids, but the whole town is at simulacrum 3, so they spend a lot of time arguing about the lion as a way of signalling group association but they completely forget about the actual lion standing right there, plainly visible, even as it takes a kid right in front of them all.
Witch-hunt town, in which everything is interpreted as evidence of witches. If she claims to be a witch, she’s a witch! If she claims not to be a witch, well that’s what a witch would say, so she’s a witch! Etc.
The generator for these is basically: look for some kind of rationality failure mode (either group or personal), then ramp it up to 11 in a somewhat-surrealist way.
Ideally this would provide an introduction to a lot of key rationalist ideas for newcomers.
A town of anti-inductivists (if something has never happened before, it’s more likely to happen in the future). Show the basic conundrum (“Q: Why can’t you just use induction? A: Because anti-induction has never worked before!”).
A town where nearly all people are hooked to maximally attention grabbing & keeping systems (maybe several of those, keeping people occupied in loops).
Post which someone should write (but I probably won’t get to soon): there is a lot of potential value in earning-to-give EA’s deeply studying the fields to which they donate. Two underlying ideas here:
The key idea of knowledge bottlenecks is that one cannot distinguish real expertise from fake expertise without sufficient expertise oneself. For instance, it takes a fair bit of understanding of AI X-risk to realize that “open-source AI” is not an obviously-net-useful strategy. Deeper study of the topic yields more such insights into which approaches are probably more (or less) useful to fund. Without any expertise, one is likely to be mislead by arguments which are optimized (whether intentionally or via selection) to sound good to the layperson.
That takes us to the pareto frontier argument. If one learns enough/earns enough that nobody else has both learned and earned more, then there are potentially opportunities which nobody else has both the knowledge to recognize and the resources to fund. Generalized efficient markets (in EA-giving) are thereby circumvented; there’s potential opportunity for unusually high impact.
To really be a compelling post, this needs to walk through at least 3 strong examples, all ideally drawn from different areas, and spell out how the principles apply to each example.
Below is a graph from T-mobile’s 2016 annual report (on the second page). Does anything seem interesting/unusual about it?
I’ll give some space to consider before spoiling it.
...
...
...
Answer: that is not a graph of those numbers. Some clever person took the numbers, and stuck them as labels on a completely unrelated graph.
Yes, that is a thing which actually happened. In the annual report of an S&P 500 company. And apparently management considered this gambit successful, because the 2017 annual report doubled down on the trick and made it even more egregious: they added 2012 and 2017 numbers, which are even more obviously not on an accelerating growth path if you actually graph them. The numbers are on a very-clearly-decelerating growth path.
Now, obviously this is an cute example, a warning to be on alert when consuming information. But I think it prompts a more interesting question: why did such a ridiculous gambit seem like a good idea in the first place? Who is this supposed to fool, and to what end?
This certainly shouldn’t fool any serious investment analyst. They’ll all have their own spreadsheets and graphs forecasting T-mobile’s growth. Unless T-mobile’s management deeply and fundamentally disbelieves the efficient markets hypothesis, this isn’t going to inflate the stock price. Presumably shareholder elections for board seats, as well as the board itself, are also not dominated by people who are paying so little attention as to fall for such a transparent ploy.
It could just be that T-mobile’s management were themselves morons, or had probably-unrealistic models of just how moronic their investors were. Still, I’d expect competition (both market pressure and competition for control in shareholder/board meetings) to weed out that level of stupidity.
One more hypothesis: maybe this is simulacrum 3 bullshit. T-mobile is in the cellular business; they presumably have increasing returns to scale. More capital investment makes them more profitable, expectations of more profits draw in more investment; there’s potential for a self-fulfilling prophecy here. Investors want to invest if-and-only-if they expect other investors to invest. So, nobody actually has to be fooled by the graph; they just need to see that T-mobile is successfully pretending to pretend to have accelerating growth, and that’s enough to merit investment.
Regarding the recent memes about the end of LLM scaling: David and I have been planning on this as our median world since about six months ago. The data wall has been a known issue for a while now, updates from the major labs since GPT-4 already showed relatively unimpressive qualitative improvements by our judgement, and attempts to read the tea leaves of Sam Altman’s public statements pointed in the same direction too. I’ve also talked to others (who were not LLM capability skeptics in general) who had independently noticed the same thing and come to similar conclusions.
Our guess at that time was that LLM scaling was already hitting a wall, and this would most likely start to be obvious to the rest of the world around roughly December of 2024, when the expected GPT-5 either fell short of expectations or wasn’t released at all. Then, our median guess was that a lot of the hype would collapse, and a lot of the investment with it. That said, since somewhere between 25%-50% of progress has been algorithmic all along, it wouldn’t be that much of a slowdown to capabilities progress, even if the memetic environment made it seem pretty salient. In the happiest case a lot of researchers would move on to other things, but that’s an optimistic take, not a median world.
(To be clear, I don’t think you should be giving us much prediction-credit for that, since we didn’t talk about it publicly. I’m posting mostly because I’ve seen a decent number of people for whom the death of scaling seems to be a complete surprise and they’re not sure whether to believe it. For those people: it’s not a complete surprise, this has been quietly broadcast for a while now.)
Original GPT-4 is rumored to be a 2e25 FLOPs model. With 20K H100s that were around as clusters for more than a year, 4 months at 40% utilization gives 8e25 BF16 FLOPs. Llama 3 405B is 4e25 FLOPs. The 100K H100s clusters that are only starting to come online in the last few months give 4e26 FLOPs when training for 4 months, and 1 gigawatt 500K B200s training systems that are currently being built will give 4e27 FLOPs in 4 months.
So lack of scaling-related improvement in deployed models since GPT-4 is likely the result of only seeing the 2e25-8e25 FLOPs range of scale so far. The rumors about the new models being underwhelming are less concrete, and they are about the very first experiments in the 2e26-4e26 FLOPs range. Only by early 2025 will there be multiple 2e26+ FLOPs models from different developers to play with, the first results of the experiment in scaling considerably past GPT-4.
And in 2026, once the 300K-500K B200s clusters train some models, we’ll be observing the outcomes of scaling to 2e27-6e27 FLOPs. Only by late 2026 will there be a significant chance of reaching a scaling plateau that lasts for years, since scaling further would need $100 billion training systems that won’t get built without sufficient success, with AI accelerators improving much slower than the current rate of funding-fueled scaling.
I don’t expect that to be particularly relevant. The data wall is still there; scaling just compute has considerably worse returns than the curves we’ve been on for the past few years, and we’re not expecting synthetic data to be anywhere near sufficient to bring us close to the old curves.
Nobody admitted to trying repeated data at scale yet (so we don’t know that it doesn’t work), which from the tiny experiments can 5x the data with little penalty and 15x the data in a still-useful way. It’s not yet relevant for large models, but it might turn out that small models would greatly benefit already.
There are 15-20T tokens in datasets whose size is disclosed for current models (Llama 3, Qwen 2.5), plausibly 50T tokens of tolerable quality can be found (pretraining only needs to create useful features, not relevant behaviors). With 5x 50T tokens, even at 80 tokens/parameter[1] we can make good use of 5e27-7e27 FLOPs[2], which even a 1 gigawatt 500K B200s system of early 2026 would need 4-6 months to provide.
The isoFLOP plots (varying tokens per parameter for fixed compute) seem to get loss/perplexity basins that are quite wide, once they get about 1e20 FLOPs of compute. The basins also get wider for hybrid attention (compare 100% Attention isoFLOPs in the “Perplexity scaling analysis” Figure to the others). So it’s likely that using a slightly suboptimal tokens/parameter ratio of say 40 won’t hurt performance much at all. In which case we get to use 9e27-2e28 FLOPs by training a larger model on the same 5x 50T tokens dataset. The data wall for text data is unlikely to be a 2024-2026 issue.
Conservatively asking for much more data than Chinchilla’s 20 tokens per parameter, in light of the range of results in more recent experiments and adding some penalty for repetition of data. For example, Llama 3 had 40 tokens per parameter estimated as optimal for 4e25 FLOPs from isoFLOPs for smaller runs (up to 1e22 FLOPs, Figure 2), and linear extrapolation in log-coordinates (Figure 3) predicts that this value slowly increases with compute. But other experiments have it decreasing with compute, so this is unclear.
For what it’s worth, and for the purpose of making a public prediction in case I’m wrong, my median prediction is that [some mixture of scaling + algorithmic improvements still in the LLM regime, with at least 25% gains coming from the former] will continue for another couple years. And that’s separate from my belief that if we did try to only advance through the current mixture of scale and algorithmic advancement, we’d still get much more powerful models, just slower.
I’m not very convinced by the claims about scaling hitting a wall, considering we haven’t had the compute to train models significantly larger than GPT-4 until recently. Plus other factors like post-training taking a lot of time (GPT-4 took ~6 months from the base model being completed to release, I think? And this was a lot longer than GPT-3), labs just not being good at understanding how good their models are, etc. Though I’m not sure how much of your position is closer to “scaling will be <25-50% of future gains” than “scaling gains will be marginal / negligible”, especially since a large part of this trajectory involves e.g. self-play or curated data for overcoming the data wall (would that count more as an algorithmic improvement or scaling?)
Still very plausible as a route to continued capabilities progress. Such things will have very different curves and economics, though, compared to the previous era of scaling.
I’ve heard various people recently talking about how all the hubbub about artists’ work being used without permission to train AI makes it a good time to get regulations in place about use of data for training.
If you want to have a lot of counterfactual impact there, I think probably the highest-impact set of moves would be:
Figure out a technical solution to robustly tell whether a given image or text was used to train a given NN.
Bring that to the EA folks in DC. A robust technical test like that makes it pretty easy for them to attach a law/regulation to it. Without a technical test, much harder to make an actually-enforceable law/regulation.
In parallel, also open up a class-action lawsuit to directly sue companies using these models. Again, a technical solution to prove which data was actually used in training is the key piece here.
Model/generator behind this: given the active political salience, it probably wouldn’t be too hard to get some kind of regulation implemented. But by-default it would end up being something mostly symbolic, easily circumvented, and/or unenforceable in practice. A robust technical component, plus (crucially) actually bringing that robust technical component to the right lobbyist/regulator, is the main thing which would make a regulation actually do anything in practice.
Edit-to-add: also, the technical solution should ideally be an implementation of some method already published in some academic paper. Then when some lawyer or bureaucrat or whatever asks what it does and how we know it works, you can be like “look at this Official Academic Paper” and they will be like “ah, yes, it does Science, can’t argue with that”.
Suppose I have a binary function f, with a million input bits and one output bit. The function is uniformly randomly chosen from all such functions—i.e. for each of the 21000000 possible inputs x, we flipped a coin to determine the output f(x) for that particular input.
Now, suppose I know f, and I know all but 50 of the input bits—i.e. I know 999950 of the input bits. How much information do I have about the output?
Answer: almost none. For almost all such functions, knowing 999950 input bits gives us ∼1250 bits of information about the output. More generally, If the function has n input bits and we know all but k, then we have o(12k) bits of information about the output. (That’s “little o” notation; it’s like big O notation, but for things which are small rather than things which are large.) Our information drops off exponentially with the number of unknown bits.
Proof Sketch
With k input bits unknown, there are 2k possible inputs. The output corresponding to each of those inputs is an independent coin flip, so we have 2k independent coin flips. If m of those flips are 1, then we assign a probability of m2k that the output will be 1.
As long as 2k is large, Law of Large Numbers will kick in, and very close to half of those flips will be 1 almost surely—i.e. m≈2k2. The error in this approximation will (very quickly) converge to a normal distribution, and our probability that the output will be 1 converges to a normal distribution with mean 12 and standard deviation 12k/2. So, the probability that the output will be 1 is roughly 12±12k/2.
We can then plug that into Shannon’s entropy formula. Our prior probability that the output bit is 1 is 12, so we’re just interested in how much that ±12k/2 adjustment reduces the entropy. This works out to o(12k) bits.
Why Is This Interesting?
One core idea of my work on abstraction is that noise very quickly wipes out almost all information; only some very-low-dimensional summary is relevant “far away”. This example shows that this sort of thing is not unusual, but rather “the default”: for almost all random functions, information drops off exponentially with the number of unknown bits. In a large system (i.e. a function with many inputs), ignorance of even just a few bits is enough to wipe out essentially-all information. That’s true even if we know the vast majority of the bits.
A good intuitive example of this is the “butterfly effect”: the flap of a butterfly’s wings could change the course of a future hurricane, because chaos. But there’s an awful lot of butterflies in the world, and the hurricane’s path is some complicated function of all of their wing-flaps (and many other variables too). If we’re ignorant of even just a handful of these flaps, then almost all of our information about the hurricane’s path is probably wiped out. And in practice, we’re ignorant of almost all the flaps. This actually makes it much easier to perform Bayesian reasoning about the path of the hurricane: the vast majority of information we have is basically-irrelevant; we wouldn’t actually gain anything from accounting for the butterfly-wing-flaps which we do know.
o(1/2^k) doesn’t vary with n—are you saying that it doesn’t matter how big the input array is, the only determinant is the number of unknown bits, and the number of known bits is irrelevant? That would be quite interesting if so (though I have some question about how likely the function is to be truly random from an even distribution of such functions).
One can enumerate all such 3-bit functions (8 different inputs, each input can return 0 or 1, so 256 functions (one per output-bit-pattern of the 8 possible inputs). But this doesn’t seem to follow your formula—if you have 3 unknown bits, that should be 1⁄8 of a bit about the output, 2 for 1⁄4, and 1 unknown for 1⁄2 a bit about the output. But in fact, the distribution of functions includes both 0 and 1 output for every input pattern, so you actually have no predictive power for the output if you have ANY unknown bits.
o(1/2^k) doesn’t vary with n—are you saying that it doesn’t matter how big the input array is, the only determinant is the number of unknown bits, and the number of known bits is irrelevant?
Yes, that’s correct.
But in fact, the distribution of functions includes both 0 and 1 output for every input pattern, so you actually have no predictive power for the output if you have ANY unknown bits.
The claim is for almost all functions when the number of inputs is large. (Actually what we need is for 2^(# of unknown bits) to be large in order for the law of large numbers to kick in.) Even in the case of 3 unknown bits, we have 256 possible functions, and only 18 of those have less than 1⁄4 1′s or more than 3⁄4 1′s among their output bits.
I’m not sure what context that link is assuming, but in an analysis context I typically see little o used in ways like e.g. “f(x)=f(x0)+dfdx|x0dx+o(dx2)”. The interpretation is that, as dx goes to 0, the o(dx2) terms all fall to zero at least quadratically (i.e. there is some C such that Cdx2 upper bounds the o(dx2) term once dx is sufficiently small). Usually I see engineers and physicists using this sort of notation when taking linear or quadratic approximations, e.g. for designing numerical algorithms.
I find it very helpful to get feedback on LW posts before I publish them, but it adds a lot of delay to the process. So, experiment: here’s a link to a google doc with a post I plan to put up tomorrow. If anyone wants to give editorial feedback, that would be much appreciated—comments on the doc are open.
I’m mainly looking for comments on which things are confusing, parts which feel incomplete or slow or repetitive, and other writing-related things; substantive comments on the content should go on the actual post once it’s up.
EDIT: it’s up. Thank you to Stephen for comments; the post is better as a result.
Any system can be modeled as maximizing some utility function, therefore utility maximization is not a very useful model
Corrigibility is possible, but utility maximization is incompatible with corrigibility, therefore we need some non-utility-maximizer kind of agent to achieve corrigibility
These two claims should probably not both be true! If any system can be modeled as maximizing a utility function, and it is possible to build a corrigible system, then naively the corrigible system can be modeled as maximizing a utility function.
I expect that many peoples’ intuitive mental models around utility maximization boil down to “boo utility maximizer models”, and they would therefore intuitively expect both the above claims to be true at first glance. But on examination, the probable-incompatibility is fairly obvious, so the two claims might make a useful test to notice when one is relying on yay/boo reasoning about utilities in an incoherent way.
FWIW I endorse the second claim when the utility function depends exclusively on the state of the world in the distant future, whereas I endorse the first claim when the utility function can depend on anything whatsoever (e.g. what actions I’m taking right this second). (details)
I wish we had different terms for those two things. That might help with any alleged yay/boo reasoning.
(When Eliezer talks about utility functions, he seems to assume that it depends exclusively on the state of the world in the distant future.)
Consider a homomorphically encrypted computation running somewhere in the cloud. The computations correspond to running an AGI. Now from the outside, you can still model the AGI based on how it behaves, as an expected utility maximizer, if you have a lot of observational data about the AGI (or at least let’s take this as a reasonable assumption).
No matter how closely you look at the computations, you will not be able to figure out how to change these computations in order to make the AGI aligned if it was not aligned already (Also, let’s assume that you are some sort of Cartesian agent, otherwise you would probably already be dead if you were running these kinds of computations).
So, my claim is not that modeling a system as an expected utility maximizer can’t be useful. Instead, I claim that this model is incomplete. At least with regard to the task of computing an update to the system, such that when we apply this update to the system, it would become aligned.
Of course, you can model any system, as an expected utility maximizer. But just because I can use the “high level” conceptual model of expected utility maximization, to model the behavior of a system very well. But behavior is not the only thing that we care about, we actually care about being able to understand the internal workings of the system, such that it becomes much easier to think about how to align the system.
So the following seems to be beside the point unless I am <missing/misunderstanding> something:
These two claims should probably not both be true! If any system can be modeled as maximizing a utility function, and it is possible to build a corrigible system, then naively the corrigible system can be modeled as maximizing a utility function.
Maybe I have missed the fact that the claim you listed says that expected utility maximization is not very useful. And I’m saying it can be useful, it might just not be sufficient at all to actually align a particular AGI system. Even if you can do it arbitrarily well.
I am not an expert, but as I remember it, it was a claim that “any system that follows certain axioms can be modeled as maximizing some utility function”. The axioms assumed that there were no circular preferences—if someone prefers A to B, B to C, and C to A, it is impossible to define a utility function such that u(A) > u(B) > u(C) > u(A) -- and that if the system says that A > B > C, it can decide between e.g. a 100% chance of B, and a 50% chance of A with a 50% chance of C, again in a way that is consistent.
I am not sure how this works when the system is allowed to take current time into account, for example when it is allowed to prefer A to B on Monday but prefer B to A on Tuesday. I suppose that in such situation any system can trivially be modeled by a utility function that at each moment assigns utility 1 to what the system actually did in that moment, and utility 0 to everything else.
Corrigibility is incompatible with assigning utility to everything in advance. A system that has preferences about future will also have a preference about not having its utility function changed. (For the same reason people have a preference not to be brainwashed, or not to take drugs, even if after brainwashing they are happy about having been brainwashed, and after getting addicted they do want more drugs.)
Corrigible system would be like: “I prefer A to B at this moment, but if humans decide to fix me and make me prefer B to A, then I prefer B to A”. In other words, it doesn’t have values for u(A) and u(B), or it doesn’t always act according to those values. A consistent system that currently prefers A to B would prefer not to be fixed.
A utility function represents preference elicited in a large collection of situations, each a separate choice between events that happens with incomplete information, as an event is not a particular point. This preference needs to be consistent across different situations to be representable by expected utility of a single utility function.
Once formulated, a utility function can be applied to a single choice/situation, such as a choice of a policy. But a system that only ever makes a single choice is not a natural fit for expected utility frame, and that’s the kind of system that usually appears in “any system can be modeled as maximizing some utility function”. So it’s not enough to maximize something once, or in a narrow collection of situations, the situations the system is hypothetically exposed to need to be about as diverse as choices between any pair of events, with some of the events very large, corresponding to unreasonably incomplete information, all drawn across the same probability space.
One place this mismatch of frames happens is with updateless decision theory. An updateless decision is a choice of a single policy, once and for all, so there is no reason for it to be guided by expected utility, even though it could be. The utility function for the updateless choice of policy would then need to be obtained elsewhere, in a setting that has all these situations with separate (rather than all enacting a single policy) and mutually coherent choices under uncertainty. But once an updateless policy is settled (by a policy-level decision), actions implied by it (rather than action-level decisions in expected utility frame) no longer need to be coherent. Not being coherent, they are not representable by an action-level utility function.
So by embracing updatelessness, we lose the setting that would elicit utility if the actions were instead individual mutually coherent decisions. And conversely, by embracing coherence of action-level decisions, we get an implied policy that’s not updatelessly optimal with respect to the very precise outcomes determined by any given whole policy. So an updateless agent founded on expected utility maximization implicitly references a different non-updateless agent whose preference is elicited by making separate action-level decisions under a much greater uncertainty than the policy-level alternatives the updateless agent considers.
I don’t think claim 1 is wrong, but it does clash with claim 2.
That means any system that has to be corrigible cannot be a system that maximizes a simple utility function (1 dimension), or put another way “whatever utility function is maximizes must be along multiple dimensions”.
Which seems to be pretty much what humans do, we have really complex utility functions, and everything seems to be ever changing and we have some control over it ourselves (and sometimes that goes wrong and people end up maxing out a singular dimension at the cost of everything else).
Note to self: Think more about this and if possible write up something more coherent and explanatory.
One second-order effect of the pandemic which I’ve heard talked about less than I’d expect:
This is the best proxy I found on FRED for new businesses founded in the US, by week. There was a mild upward trend over the last few years, it’s really taken off lately. Not sure how much of this is kids who would otherwise be in college, people starting side gigs while working from home, people quitting their jobs and starting their own businesses so they can look after the kids, extra slack from stimulus checks, people losing their old jobs en masse but still having enough savings to start a business, …
For the stagnation-hypothesis folks who lament relatively low rates of entrepreneurship today, this should probably be a big deal.
How sure are you that the composition is interesting? How many of these are just quick mask-makers or sanitizer-makers, or just replacing restaurants that have now gone out of business? (ie very low-value-added companies, of the ‘making fast food in a stall in a Third World country’ sort of ‘startup’, which make essentially no or negative long-term contributions).
Good question. I haven’t seen particularly detailed data on these on FRED, but they do have separate series for “high propensity” business applications (businesses they think are likely to hire employees), business applications with planned wages, and business applications from corporations, as well as series for each state. The spike is smaller for planned wages, and nonexistent for corporations, so the new businesses are probably mostly single proprietors or partnerships. Other than that, I don’t know what the breakdown looks like across industries.
How do you feel about this claim now? I haven’t noticed a whole lot of innovation coming from all these small businesses, and a lot of them seem like they were likely just vehicles for the extraordinary extent of fraud as the results from all the investigations & analyses come in.
… so it’s presumably also not just the result of pandemic giveaway fraud, unless that fraud is ongoing.
Presumably the thing to check here would be TFP, but Fred’s US TFP series currently only goes to end of 2019, so apparently we’re still waiting on that one? Either that or I’m looking at the wrong series.
Neat problem of the week: researchers just announced roughly-room-temperature superconductivity at pressures around 270 GPa. That’s stupidly high pressure—a friend tells me “they’re probably breaking a diamond each time they do a measurement”. That said, pressures in single-digit GPa do show up in structural problems occasionally, so achieving hundreds of GPa scalably/cheaply isn’t that many orders of magnitude away from reasonable, it’s just not something that there’s historically been much demand for. This problem plays with one idea for generating such pressures in a mass-produceable way.
Suppose we have three materials in a coaxial wire:
innermost material has a low thermal expansion coefficient and high Young’s modulus (i.e. it’s stiff)
middle material is a thin cylinder of our high-temp superconducting concoction
outermost material has a high thermal expansion coefficient and high Young’s modulus.
We construct the wire at high temperature, then cool it. As the temperature drops, the innermost material stays roughly the same size (since it has low thermal expansion coefficient), while the outermost material shrinks, so the superconducting concoction is squeezed between them.
Exercises:
Find an expression for the resulting pressure in the superconducting concoction in terms of the Young’s moduli, expansion coefficients, temperature change, and dimensions of the inner and outer materials. (Assume the width of the superconducting layer is negligible, and the outer layer doesn’t break.)
Look up parameters for some common materials (e.g. steel, tungsten, copper, porcelain, aluminum, silicon carbide, etc), and compute the pressures they could produce with reasonable dimensions (assuming that their material properties don’t change too dramatically with such high pressures).
Find an expression for the internal tension as a function of radial distance in the outermost layer.
Pick one material, look up its tensile strength, and compute how thick it would have to be to serve as the outermost layer without breaking, assuming the superconducting layer is at 270 GPa.
My cached thoughts start with a somewhat different question—not “what role does magic play in fantasy fiction?” (e.g. what fantasies does it fulfill), but rather… insofar as magic is a natural category, what does it denote? So I’m less interested in the relatively-expansive notion of “magic” sometimes seen in fiction (which includes e.g. alternate physics), and more interested in the pattern called “magic” which recurs among tons of real-world ancient cultures.
Claim (weakly held): the main natural category here is symbols changing the territory. Normally symbols represent the world, and changing the symbols just makes them not match the world anymore—it doesn’t make the world do something different. But if the symbols are “magic”, then changing the symbols changes the things they represent in the world. Canonical examples:
Wizard/shaman/etc draws magic symbols, speaks magic words, performs magic ritual, or even thinks magic thoughts, thereby causing something to happen in the world.
Messing with a voodoo doll messes with the person it represents.
“Sympathetic” magic, which explicitly uses symbols of things to influence those things.
Magic which turns emotional states into reality.
I would guess that most historical “magic” was of this type.
Everybody’s been talking about Paxlovid, and how ridiculous it is to both stop the trial since it’s so effective but also not approve it immediately. I want to at least float an alternative hypothesis, which I don’t think is very probable at this point, but does strike me as at least plausible (like, 20% probability would be my gut estimate) based on not-very-much investigation.
Early stopping is a pretty standard p-hacking technique. I start out planning to collect 100 data points, but if I manage to get a significant p-value with only 30 data points, then I just stop there. (Indeed, it looks like the Paxlovid study only had 30 actual data points, i.e. people hospitalized.) Rather than only getting “significance” if all 100 data points together are significant, I can declare “significance” if the p-value drops below the line at any time. That gives me a lot more choices in the garden of forking counterfactual paths.
Now, success rates on most clinical trials are not very high. (They vary a lot by area—most areas are about 15-25%. Cancer is far and away the worst, below 4%, and vaccines are the best, over 30%.) So I’d expect that p-hacking is a pretty large chunk of approved drugs, which means pharma companies are heavily selected for things like finding-excuses-to-halt-good-seeming-trials-early.
Early stopping is a pretty standard p-hacking technique.
It was stopped after a pre-planned interim analysis; that means they’re calculating the stopping criteria/p-values with multiple testing correction built in, using sequential analysis.
I’ve been running ELISA tests all week. In the first test, I did not detect stronger binding to any of the peptides than to the control in any of several samples from myself or my girlfriend. But the control itself was looking awfully suspicious, so I ran another couple tests. Sure enough, something in my samples is binding quite strongly to the control itself (i.e. the blocking agent), which is exactly what the control is supposed to not do. So I’m going to try out some other blocking agents, and hopefully get an actually-valid control group.
(More specifics on the test: I ran a control with blocking agent + sample, and another with blocking agent + blank sample, and the blocking agent + sample gave a strong positive signal while the blank sample gave nothing. That implies something in the sample was definitely binding to both the blocking agent and the secondary antibodies used in later steps, and that binding was much stronger than the secondary antibodies themselves binding to anything in the blocking agent + blank sample.)
In other news, the RadVac team released the next version of their recipe + whitepaper. Particularly notable:
… many people who have taken the nasal vaccine are testing negative for serum antibodies with commercial and lab ELISA tests, while many who inject the vaccine (subcutaneous or intramuscular) are testing positive (saliva testing appears to be providing evidence of mucosal response among a subset of researchers who have administered the vaccine intranasally).
Note that they’re talking specifically about serum (i.e. blood) antibodies here. So apparently injecting it does induce blood antibodies of the sort detectable by commercial tests (at least some of the time), but snorting it mostly just produces mucosal antibodies (also at least some of the time).
This is a significant update: most of my prior on the vaccine working was based on vague comments in the previous radvac spec about at least some people getting positive test results. But we didn’t know what kind of test results those were, so there was a lot of uncertainty about exactly what “working” looked like. In particular, we didn’t know whether antibodies were induced in blood or just mucus, and we didn’t know if they were induced consistently or only in some people (the latter of which is the “more dakka probably helps” world). Now we know that it’s mostly just mucus (at least for nasal administration). Still unsure about how consistently it works—the wording in the doc makes it sound like only some people saw a response, but I suspect the authors are just hedging because they know there’s both selection effects and a lot of noise in the data which comes back to them.
The latest version of the vaccine has been updated to give it a bit more kick—slightly higher dose, and the chitosan nanoparticle formula has been changed in a way which should make the peptides more visible to the immune system. Also, the list of peptides has been trimmed down a bit, so the latest version should actually be cheaper, though the preparation is slightly more complex.
Here’s an AI-driven external cognitive tool I’d like to see someone build, so I could use it.
This would be a software tool, and the user interface would have two columns. In one column, I write. Could be natural language (like google docs), or code (like a normal IDE), or latex (like overleaf), depending on what use-case the tool-designer wants to focus on. In the other column, a language and/or image model provides local annotations for each block of text. For instance, the LM’s annotations might be:
(Natural language or math use-case:) Explanation or visualization of a mental picture generated by the main text at each paragraph
(Natural language use-case:) Emotional valence at each paragraph
(Natural language or math use-case:) Some potential objections tracked at each paragraph
(Code:) Fermi estimates of runtime and/or memory usage
This is the sort of stuff I need to track mentally in order to write high-quality posts/code/math, so it would potentially be very high value to externalize that cognition.
Also, the same product could potentially be made visible to readers (for the natural language/math use-cases) to make more visible the things the author intends to be mentally tracked. That, in turn, would potentially make it a lot easier for readers to follow e.g. complicated math.
I haven’t experimented very much, but here’s one example prompt.
Please describe what you mentally picture when reading the following block of text:
“ A Shutdown Problem Proposal
First things first: this is not (yet) aimed at solving the whole corrigibility problem, or even the whole shutdown problem.
The main thing this proposal is intended to do is to get past the barriers MIRI found in their old work on the shutdown problem. In particular, in a toy problem basically-identical to the one MIRI used, we want an agent which:
Does not want to manipulate the shutdown button Does respond to the shutdown button Does want to make any child-agents it creates responsive-but-not-manipulative to the shutdown button, recursively (i.e. including children-of-children etc) If I understand correctly, this is roughly the combination of features which MIRI had the most trouble achieving simultaneously. ”
This one produced basically-decent results from GPT-4.
Although I don’t have the exact prompt on hand at the moment, I’ve also asked GPT-4 to annotate a piece of code line-by-line with a Fermi estimate of its runtime, which worked pretty well.
Don’t really need comments which are non-obvious to an expert. Part of what makes LLMs well-suited to building external cognitive tools is that external cognitive tools can create value by just tracking “obvious” things, thereby freeing up the user’s attention/working memory for other things.
So kinda like spellcheckers (most typos you could figure out, but why spend time and attention on proofreading if the program can do that for you), but… thought-checkers.
Like, if a part of your article contradicts another part, it would be underlined.
I’ve long wanted this, but it’s not clear how to do it. Long-context LLMs are still expensive and for authors who need it most, context windows are still too small: me or Yudkowsky, for example, would still exceed the context window of almost all LLMs except possibly the newest Gemini. And then you have their weak reasoning. You could try to RAG it, but embeddings are not necessarily tuned to encode logically contradictory or inconsistent claims: probably if I wrote “the sky is blue” in one place and “the sky is red” in another, a retrieval would be able to retrieve both paragraphs and a LLM point out that they are contradictory, but such blatant contradictions are probably too rare to be useful to check for. You want something more subtle, like where you say “the sky is blue” and elsewhere “I looked up from the ground and saw the color of apples”. You could try to brute force it and consider every pairwise comparison of 2 reasonable sized chunks of text and ask for contradictions, but this is quadratic and will get slow and expensive and probably turn up too many false positives. (And how do you screen off false positives and mark them ‘valid’?)
My general thinking these days is that these truly useful ‘tools for thought’ LLMs are going to require either much better & cheaper LLMs, so smart that they can provide useful assistance despite being used in a grossly unnatural way input-wise or safety-tuned to hell, or biting the bullet of finetuning/dynamic-evaluation (see my Nenex proposal).
A LLM finetuned on my corpus can hope to quickly find, with good accuracy, contradictions because it was trained to know ‘the sky was blue’ when I wrote that at the beginning of the corpus, and it gets confused when it hits ‘the color of ____’ and it gets the prediction totally wrong. And RAG on an embedding tailored to the corpus can hope to surface the contradictions because it sees the two uses are the same in the essays’ context, etc. (And if you run them locally, and they don’t need a large context window because of the finetuning, they will be fast and cheap, so you can more meaningfully apply the brute force approach; or you could just run multiple epoches on your data, with an auxiliary prompt asking for a general critique, which would cover contradictions. ‘You say here X, but don’t I recall you saying ~X back at the beginning? What gives?’)
Feed it a shorter text (that fits in the window) and ask it to provide a short summary focusing on factual statements. Then hopefully all short versions could fit in the window. Find the contradiction—report the two contradicting factual statements and which section they appeared in. Locate the statement in the original text.
I may have. Just gwern.net is, I think, somewhere around 2m, and it’s not comprehensive. Also, for contradictions, I would want to detect contradictions against citations/references as well (detecting miscitations would be more important than self-consistency IMO), and as a rough ballpark, the current Gwern.net annotation* corpus is approaching 4.3m words, looks like, and is also not comprehensive. So, closer than one might think! (Anyway, doesn’t deal with the cost or latency: as you can see in the demos, we are talking minutes, not seconds, for these million-token calls and the price is probably going to be in the dollar+ regime per call.)
* which are not fulltext. It would be nice to throw in all of the hosted paper & book & webpage fulltexts, but then that’s probably more like 200m+ words.
There may not be any ‘clear’ technical obstruction, but it has failed badly in the past. ‘Add more parallelism’ (particularly hierarchically) is one of the most obvious ways to improve attention, and people have spent the past 5 years failing to come up with efficient attentions that do anything but move along a Pareto frontier from ‘fast but doesn’t work’ to ‘slow and works only as well as the original dense attention’. It’s just inherently difficult to know what tokens you will need across millions of tokens without input from all the other tokens (unless you are psychic), implying extensive computation of some sort, which makes things inherently serial and costs you latency, even if you are rich enough to spend compute like water. You’ll note that when Claude-2 was demoing the ultra-long attention windows, it too spent a minute or two churning. While the most effective improvements in long-range attention like Flash Attention or Ring Attention are just hyperoptimizing dense attention, which is inherently limited.
I’ve long been very suspicious of aggregate economic measures like GDP. But GDP is clearly measuring something, and whatever that something is it seems to increase remarkably smoothly despite huge technological revolutions. So I spent some time this morning reading up and playing with numbers and generally figuring out how to think about the smoothness of GDP increase.
Major takeaways:
When new tech makes something previously expensive very cheap, GDP mostly ignores it. (This happens in a subtle way related to how we actually compute it.)
Historical GDP curves mainly measure things which are expensive ~now. Things which are cheap now are mostly ignored. In other words: GDP growth basically measures the goods whose production is revolutionized the least.
Re: AI takeoff, the right way to extrapolate today’s GDP curve to post-AI is to think about things which will still be scarce post-AI, and then imagine the growth of production of those things.
Even a very sharp, economically-revolutionary AI takeoff could look like slow smooth GDP growth, because GDP growth will basically only measure the things whose production is least revolutionized.
Why am I harping on about technicalities of GDP? Well, I hear about some AI forecasts which are heavily based on the outside view that economic progress (as measured by GDP) is smooth, and this is so robust historically that we should expect it to continue going forward. And I think this is basically right—GDP, as we actually compute it, is so remarkably smooth that we should expect that to continue. Alas, this doesn’t tell us very much about how crazy or sharp AI takeoff will be, because GDP (as we actually compute it) systematically ignores anything that’s revolutionized.
In writing How much should we value life?, I spent some time digging into AI timeline stuff. It lead me to When Will AI Be Created?, written by Luke Muehlhauser for MIRI. He noted that there is reason not to trust expert opinions on AI timelines, and that trend extrapolation may be a good alternative. This point you’re making about GDP seems like it is real progress towards coming up with a good way to do trend extrapolation, and thus seems worth a full post IMO. (Assuming it isn’t already well known by the community or something, which I don’t get the sense is the case.)
My first reaction to the framing of the paper is to ask: growth in what? It’s important to keep in mind that concepts like “gross domestic product” and “world gross domestic product” were defined from an explicit anthropocentric perspective—they measure the total production of final goods within a certain time period. Final goods are what is either consumed by humans (e.g. food or human services) or what is invested into “capital goods” that last for multiple periods (e.g. a server farm) to produce consumption goods for humans.
Now imagine you are a highly intelligent AI system running on the cloud. Although the production of the server farms on which you depend enters into human GDP (as a capital good), most of the things that you absorb, for example energy, server maintenance, etc., count as “intermediate goods” in our anthropocentric accounting systems and do not contribute to human GDP. In fact, to the extent that the AI system drives up the price of scarce resources (like energy) consumed by humans, real human GDP may even decline.
As a result, it is conceivable (and, to be honest, one of the central scenarios for me personally) that an AI take-off occurs but anthropocentric GDP measures show relative stagnation in the human economy.
To make this scenario a bit more tangible, consider the following analogy: imagine a world in which there are two islands trading with each other, but the inhabitants of the islands are very different from each other—let’s call them humans and AIs. The humans sell primitive goods like oil to the AIs and their level of technology is relatively stagnant. The AIs sell amazing services to the humans, and their level of technology doubles every year. However, the AI services that humans consume make up only a relatively small part of the human consumption basket. The humans are amazed at what fantastic services they get from the AIs in exchange for their oil, and they experience improvements in their standard of living from these fantastic AI services, although they also have to pay more and more for their energy use every year, which offsets part of that benefit. The humans can only see what’s happening on their own island and develop a measure of their own well-being that they call human GDP, which increases modestly because the advances only occur in a relatively small part of their consumption basket. The AIs can see what’s going on on the AI island and develop a measure of their own well-being which they call AI GDP, and which almost doubles every year. The system can go on like this indefinitely.
For a fuller discussion of these arguments, let me refer you to my working paper on “The Rise of Artificially Intelligent Agents” (with the caveat that the paper is still a working draft).
In general, Baumol type effects (spending decreasing in sectors where productivity goes up), mean that we can have scenarios in which the economy is growing extremely fast on “objective” metrics like energy consumption, but GDP has stagnated because that energy is being spent on extremely marginal increases in goods being bought and sold.
Smoke from California/Oregon wildfires reaching the East Coast opens up some interesting new legal/political possibilities. The smoke is way outside state borders, all the way on the other side of the country, so that puts the problem pretty squarely within federal jurisdiction. Either a federal agency could step in to force better forest management on the states, or a federal lawsuit could be brought for smoke-induced damages against California/Oregon. That would potentially make it a lot more difficult for local homeowners to block controlled burns.
I had a shortform post pointing out the recent big jump in new businesses in the US, and Gwern replied:
How sure are you that the composition is interesting? How many of these are just quick mask-makers or sanitizer-makers, or just replacing restaurants that have now gone out of business? (ie very low-value-added companies, of the ‘making fast food in a stall in a Third World country’ sort of ‘startup’, which make essentially no or negative long-term contributions).
This was a good question in context, but I disagree with Gwern’s model of where-progress-comes-from, especially in the context of small businesses.
Let’s talk ice-cream cones.
As the story goes, an ice-cream vendor was next door to a waffle vendor at the 1904 World’s Fair. At some point, the ice-cream vendor ran short on paper cups, and inspiration struck. He bought some thin waffles from the waffle vendor, rolled them into cones, and ice-cream cones took off.
That’s just the first step. From there, the cone spread memetically. People heard about it, and either asked for cones (on the consumer side) or tried making them (on the supplier side).
Insight + Memetics → Better Food
When I compare food today to the stuff my grandparents ate, there’s no comparison. Today’s dishes are head and shoulders better. Partly it’s insights like ice-cream cones, partly it’s memetic spread of dishes from more parts of the world (like sisig, soup dumplings, ropa vieja, chicken Karahi, …).
Those little fast-food stalls? They’re powerhouses of progress. It’s a hypercompetitive market, with low barriers to entry, and lots of repeat business. The conditions are ideal for trying out new dishes, spreading culinary ideas and finding out the hard way what people like to eat. That doesn’t mean they’re highly profitable—culinary innovation spreads memetically, so it’s hard to capture the gains. But progress is made.
The pandemic also has the effect of showing the kind of business ideas people try. It pushes a lot of innovation in food delivery. Some of the pandemic driver innovation will become worthless once the pandemic is over but a few good ideas likely survive and the old ideas of the businesses that went out of business are still around.
Someone should write a book review of The Design of Everyday Things aimed at LW readers, so I have a canonical source to link to other than the book itself.
Does anyone know of an “algebra for Bayes nets/causal diagrams”?
More specifics: rather than using a Bayes net to define a distribution, I want to use a Bayes net to state a property which a distribution satisfies. For instance, a distribution P[X, Y, Z] satisfies the diagram X → Y → Z if-and-only-if the distribution factors according to P[X, Y, Z] = P[X] P[Y|X] P[Z|Y].
When using diagrams that way, it’s natural to state a few properties in terms of diagrams, and then derive some other diagrams they imply. For instance, if a distribution P[W, X, Y, Z] satisfies all of:
W → Y → Z
W → X → Y
X → (W, Y) → Z
… then it also satisfies W → X → Y → Z.
What I’m looking for is a set of rules for “combining diagrams” this way, without needing to go back to the underlying factorizations in order to prove things.
David and I have been doing this sort of thing a lot in our work the past few months, and it would be nice if someone else already had a nice write-up of the rules for it.
Turns out my laser thermometer is all over the map. Readings would change by 10°F if I went outside and came back in. My old-school thermometer is much more stable (and well-calibrated, based on dipping it in some ice water), but slow and caps out around 90°F (so I can’t use to measure e.g. exhaust temp). I plan to buy a bunch more old-school thermometers for the next try.
I thought opening the doors/windows in rooms other than the test room and setting up a fan would be enough to make the temperature in the hall outside the test room close to outdoor temp. This did not work; hall temp was around 72°F with outside around 80°F. I’ll need to change that part of the experiment design; most likely I’ll seal around the door and let air infiltrate exclusively from the window instead. (The AC is right next to the window, so this could screw with the results, but I don’t really have a better option.)
In two-hose mode, the AC hit its minimum temperature of 60°F, so I’ll need a hotter day. I’ll try again when we hit at least 85°F.
In case anyone’s wondering: in one-hose mode, the temperature in the room equilibrated around 66°F. Power consumption was near-constant throughout all conditions.
One additional Strange Observation: cool air was blowing out under the door of the test room in two-hose mode. This should not happen; my best guess is that, even though the AC has two separate intake vents, the two are not actually partitioned internally, so the fan for indoor-air was pulling in outdoor-air (causing air to blow out under the door to balance that extra inflow). Assuming that’s the cause, it should be fixable with some strategically-placed cardboard inside the unit.
Huh, amusing. We do ship a font that has nothing but the greek letter set in it, because people use greek unicode symbols all the time and our primary font doesn’t support that character set. So my guess is that’s where Google gets confused.
The math and physics worlds still use single-letter variable names for everything, decades after the software world realized that was extremely bad practice. This makes me pessimistic about the adoption of better notation practices.
Better? I doubt it. If physicists wrote equations the way programmers write code, a simple homework problem would easily fill ten pages.
Verboseness works for programmers because programmers rarely need to do anything more complicated with their code than run it—analogous to evaluating an expression, for a physicist or mathematician. Imagine if you needed to prove one program equivalent to another algebraically—i.e. a sequence of small transformations, with a record of intermediate programs derived along the way in order to show your work. I expect programmers subjected to such a use-case would quickly learn the virtues of brevity.
Yeah, I’m apparently not intelligent enough to do error-free physics/engineering calculations without relying on dimensional analysis as a debugging tool. I even came up with a weird, hack-y way to do that in computing environments like Excel and Cython, where flexible multiplicative types are not supported.
I keep seeing news outlets and the like say that SORA generates photorealistic videos, can model how things move in the real world, etc. This seems like blatant horseshit? Every single example I’ve seen looks like video game animation, not real-world video.
Have I just not seen the right examples, or is the hype in fact decoupled somewhat from the model’s outputs?
I think all of these videos other than the octopus and paper planes are “at-a-glance” photorealistic to me.
Overall, I think SORA can do “at-a-glance” photorealistic videos and can model to some extent how things move in the real world. I don’t think it can do both complex motion and photorealism in the same video. As in, the videos which are photorealistic don’t really involve complex motion and the videos which involve complex motion aren’t photorealistic.
(So probably some amount of hype, but also pretty real?)
Hmm, I don’t buy it. These two scenes seem very much not like the kind of thing a video game engine could produce:
Look at this frame! I think there is something very slightly off about that face, but the cat hitting the person’s face and the person’s reaction seem very realistic to me and IMO qualifies as “complex motion and photorealism in the same video”.
Yeah, this is the example I’ve been using to convince people that the game engines are almost certainly generating training data but are probably not involved at sampling time. I can’t come up with any sort of hybrid architecture like ‘NN controlling game-engine through API’ where you get that third front leg. One of the biggest benefits of a game-engine would be ensuring exactly that wouldn’t happen—body parts becoming detached and floating in mid-air and lack of conservation. If you had a game engine with a hyper-realistic cat body model in it which something external was manipulating, one of the biggest benefits is that you wouldn’t have that sort of common-sense physics problem. (Meanwhile, it does look like past generative modeling of cats in its errors. Remember the ProGAN interpolation videos of CATS? Hilarious, but also an apt demonstration of how extremely hard cats are to model. They’re worse than hands.)
In addition, you see plenty of classic NN tells throughout—note the people driving a ‘Dandrover’...
Yeah, those were exactly the two videos which most made me think that the model was mostly trained on video game animation. In the tokyo one, the woman’s facial muscles never move at all, even when the camera zooms in on her. And in the SUV one, the dust cloud isn’t realistic, but even covering that up the SUV has a Grand Theft Auto look to its motion.
“Can’t do both complex motion and photorealism in the same video” is a good hypothesis to track, thanks for putting that one on my radar.
Putting this here for posterity: I have thought since the superconductor preprint went up, and continue to think, that the markets are putting generally too little probability on the claims being basically-true. I thought ~70% after reading the preprint the day it went up (and bought up a market on manifold to ~60% based on that, though I soon regretted not waiting for a better price), and my probability has mostly been in the 40-70% range since then.
Languages should have tenses for spacelike separation. My friend and I do something in parallel, it’s ambiguous/irrelevant which one comes first, I want to say something like “I expect my friend <spacelike version of will do/has done/is doing> their task in such-and-such a way”.
That sounds more like a tenseless sentence than using a spacelike separation tense. Your friend’s performance of the task may well be in your future or past lightcone (or extend through both), but you don’t wish to imply any of these.
There are languages with tenseless verbs, as well as some with various types of spatial tense.
The closest I can approximate this in English without clumsy constructs is “I expect my friend does their task in such-and-such a way”, which I agree isn’t very satisfactory.
Two kinds of cascading catastrophes one could imagine in software systems...
A codebase is such a spaghetti tower (and/or coding practices so bad) that fixing a bug introduces, on average, more than one new bug. Software engineers toil away fixing bugs, making the software steadily more buggy over time.
Software services managed by different groups have dependencies—A calls B, B calls C, etc. Eventually, the dependence graph becomes connected enough and loopy enough that a sufficiently-large chunk going down brings down most of the rest, and nothing can go back up until everything else goes back up (i.e. there’s circular dependence/deadlock).
How could we measure how “close” we are to one of these scenarios going supercritical?
For the first, we’d need to have attribution of bugs—i.e. track which change introduced each bug. Assuming most bugs are found and attributed after some reasonable amount of time, we can then estimate how many bugs each bug fix introduces, on average.
(I could also imagine a similar technique for e.g. medicine: check how many new problems result from each treatment of a problem.)
For the second, we’d need visibility into codebases maintained by different groups, which would be easy within a company but much harder across companies. In principle, within a company, some kind of static analysis tool could go look for all the calls to apis between services, map out the whole graph, and then calculate which “core” pieces could be involved in a catastrophic failure.
(Note that this problem could be mostly-avoided by intentionally taking down services occasionally, so engineers are forced to build around that possibility. I don’t think any analogue of this approach would work for the first failure-type, though.)
I mean, just to be clear, I am all in favor of intellectual progress. But doing so indiscriminately does sure seem a bit risky in this world of anthropogenic existential risks. Reminds me of my mixed feelings on the whole Progress Studies thing.
Yeah, I wouldn’t want to accelerate e.g. black-box ML. I imagine the real utility of such a fund would be to experiment with ways to accelerate intellectual progress and gain understanding of the determinants, though the grant projects themselves would likely be more object-level than that. Ideally the grants would be in areas which are not themselves very risk-relevant, but complicated/poorly-understood enough to generate generalizable insights into progress.
I think it takes some pretty specific assumptions for such a thing to increase risk significantly on net. If we don’t understand the determinants of intellectual progress, then we have very little ability to direct progress where we want it; it just follows whatever the local gradient is. With more understanding, at worst it follows the same gradient faster, and we end up in basically the same spot.
The one way it could net-increase risk is if the most likely path of intellectual progress leads to doom, and the best way to prevent doom is through some channel other than intellectual progress (like political action, for instance). Then accelerating the intellectual progress part potentially gives the other mechanisms (like political bodies) less time to react. Personally, though, I think a scenario in which e.g. political action successfully prevents intellectual progress from converging to doom (in a world where it otherwise would have) is vanishingly unlikely (like, less than one-in-a-hundred, maybe even less than one-in-a-thousand).
You might check out Donald Braben’s view, it says “transformative research” (i.e. fundamental results that create new fields and industries) is critical for the survival of civilization. He does not worry that transformative results might end civilization.
Way back in the halcyon days of 2005, a company called Cenqua had an April Fools’ Day announcement for a product called Commentator: an AI tool which would comment your code (with, um, adjustable settings for usefulness). I’m wondering if (1) anybody can find an archived version of the page (the original seems to be gone), and (2) if there’s now a clear market leader for that particular product niche, but for real.
Here’s an interesting problem of embedded agency/True Names which I think would make a good practice problem: formulate what it means to “acquire” something (in the sense of “acquiring resources”), in an embedded/reductive sense. In other words, you should be able-in-principle to take some low-level world-model, and a pointer to some agenty subsystem in that world-model, and point to which things that subsystem “acquires” and when.
Some prototypical examples which an answer should be able to handle well:
Organisms (anything from bacteria to plant to animals) eating things, absorbing nutrients, etc.
...and how the brain figures this out and why it is motivated to do so. There are a lot of simple animals that apparently “try to control” resources or territory. How?
Drives to control resources occur everywhere. And your control of resources is closely related to your dominance in a dominance hierarchy. Which seems to be regulated in many animals by serotonin. See e.g. https://www.nature.com/articles/s41386-022-01378-2
An interesting conundrum: one of the main challenges of designing useful regulation for AI is that we don’t have any cheap and robust way to distinguish a dangerous neural net from a non-dangerous net (or, more generally, a dangerous program from a non-dangerous program). This is an area where technical research could, in principle, help a lot.
The problem is, if there were some robust metric for how dangerous a net is, and that metric were widely known and recognized (as it would probably need to be in order to be used for regulatory purposes), then someone would probably train a net to maximize that metric directly.
This seems to lead to the solution of trying to make your metric one-way, in the sense that your metric should
Provide an upper-bound on the dangerousness of your network
Compress the space of networks which map to approximately the same dangerousness level on the low end of dangerousness, and expand the space of networks which map to approximately the same dangerousness level on the upper end of dangerous, so that you can train your network to minimize the metric, but when you train your network to maximize the metric you end up in a degenerate are with technically very high measured danger levels but in actuality very low levels of dangerousness.
We can hope (or possibly prove) that as you optimize upwards on the metric you get subject to goodheart’s curse, but the opposite occurs on the lower end.
Sure, even seems a bit tautological: any such metric, to be robust, would need to contain in itself a definition of a dangerously-capable AI, so you probably wouldn’t even need to train a model to maximize it. You’d be able to just lift the design from the metric directly.
Do you have any thoughts on a softer version of this problem, where the metric can’t be maximized directly, but gives a concrete idea of what sort of challenge your AI needs to beat to qualify as AGI? (And therefore in which direction in the architectural-design-space you should be moving.)
Some variation on this seems like it might work as a “fire alarm” test set, but as you point out, inasmuch as it’s recognized, it’ll be misapplied for benchmarking instead.
(I suppose the ideal way to do it would be to hand it off to e. g. ARC, so they can use it if OpenAI invites them for safety-testing again. This way, SOTA models still get tested, but the actors who might misuse it aren’t aware of the testing’s particulars until they succeed anyway...)
I just went looking for a good reference for the Kelly criterion, and didn’t find any on Lesswrong. So, for anybody who’s looking: chapter 6 of Thomas & Cover’s textbook on information theory is the best source I currently know of.
Neat problem of the week: we have n discrete random variables, X1...Xn. Given any variable, all variables are independent:
∀i:P[X|Xi]=∏jP[Xj|Xi]
Characterize the distributions which satisfy this requirement.
This problem came up while working on the theorem in this post, and (separately) in the ideas behind this post. Note that those posts may contain some spoilers for the problem, though frankly my own proofs on this one just aren’t very good.
For short-term, individual cost/benefit calculations around C19, it seems like uncertainty in the number of people currently infected should drop out of the calculation.
For instance: suppose I’m thinking about the risk associated with talking to a random stranger, e.g. a cashier. My estimated chance of catching C19 from this encounter will be roughly proportional to Ninfected. But, assuming we already have reasonably good data on number hospitalized/died, my chances of hospitalization/death given infection will be roughly inversely proportional to Ninfected. So, multiplying those two together, I’ll get a number roughly independent of Ninfected.
How general is this? Does some version of it apply to long-term scenarios too (possibly accounting for herd immunity)? What short-term decisions do depend on Ninfected?
Conjecture’s Compendium is now up. It’s intended to be a relatively-complete intro to AI risk for nontechnical people who have ~zero background in the subject. I basically endorse the whole thing, and I think it’s probably the best first source to link e.g. policymakers to right now.
I might say more about it later, but for now just want to say that I think this should be the go-to source for new nontechnical people right now.
I think there’s something about Bay Area culture that can often get technical people to feel like the only valid way to contribute is through technical work. It’s higher status and sexier and there’s a default vibe that the best way to understand/improve the world is through rigorous empirical research.
I think this an incorrect (or at least incomplete) frame, and I think on-the-margin it would be good for more technical people to spend 1-5 days seriously thinking about what alternative paths they could pursue in comms/policy.
I also think there are memes spreading around that you need to be some savant political mastermind genius to do comms/policy, otherwise you will be net negative. The more I meet policy people (including successful policy people from outside the AIS bubble), the more I think this narrative was, at best, an incorrect model of the world. At worst, a take that got amplified in order to prevent people from interfering with the AGI race (e.g., by granting excess status+validity to people/ideas/frames that made it seem crazy/unilateralist/low-status to engage in public outreach, civic discourse, and policymaker engagement.)
(Caveat: I don’t think the adversarial frame explains everything, and I do think there are lots of people who were genuinely trying to reason about a complex world and just ended up underestimating how much policy interest there would be and/or overestimating the extent to which labs would be able to take useful actions despite the pressures of race dynamics.)
I think I probably agree, although I feel somewhat wary about it. My main hesitations are:
The lack of epistemic modifiers seems off to me, relative to the strength of the arguments they’re making. Such that while I agree with many claims, my imagined reader who is coming into this with zero context is like “why should I believe this?” E.g., “Without intervention, humanity will be summarily outcompeted and relegated to irrelevancy,” which like, yes, but also—on what grounds should I necessarily conclude this? They gave some argument along the lines of “intelligence is powerful,” and that seems probably true, but imo not enough to justify the claim that it will certainly lead to our irrelevancy. All of this would be fixed (according to me) if it were framed more as like “here are some reasons you might be pretty worried,” of which there are plenty, or “here’s what I think,” rather than “here is what will definitely happen if we continue on this path,” which feels less certain/obvious to me.
Along the same lines, I think it’s pretty hard to tell whether this piece is in good faith or not. E.g., in the intro Connor writes “The default path we are on now is one of ruthless, sociopathic corporations racing toward building the most intelligent, powerful AIs as fast as possible to compete with one another and vie for monopolization and control of both the market and geopolitics.” Which, again, I don’t necessarily disagree with, but my imagined reader with zero context is like “what, really? sociopaths? control over geopolitics?” I.e., I’m expecting readers to question the integrity of the piece, and to be more unsure of how to update on it (e.g. “how do I know this whole thing isn’t just a strawman?” etc.).
There are many places where they kind of just state things without justifying them much. I think in the best case this might cause readers to think through whether such claims make sense (either on their own, or by reading the hyperlinked stuff—both of which put quite a lot of cognitive load on them), and in the worst case just causes readers to either bounce or kind of blindly swallow what they’re saying. E.g., “Black-Box Evaluations can only catch all relevant safety issues insofar as we have either an exhaustive list of all possible failure modes, or a mechanistic model of how concrete capabilities lead to safety risks.” They say this without argument and then move on. And although I agree with them (having spent a lot of time thinking this through myself), it’s really not obvious at first blush. Why do you need an exhaustive list? One might imagine, for instance, that a small number of tests would generalize well. And do you need mechanistic models? Sometimes medicines work safely without that, etc., etc. I haven’t read the entire Compendium closely, but my sense is that this is not an isolated incident. And I don’t think this is a fatal flaw or anything—they’re moving through a ton of material really fast and it’s hard to give a thorough account of all claims—but it does make me more hesitant to use it as the default “here’s what’s happening” document.
All of that said, I do broadly agree with the set of arguments, and I think it’s a really cool activity for people to write up what they believe. I’m glad they did it. But I’m not sure how comfortable I feel about sending it to people who haven’t thought much about AI.
One of the common arguments in favor of investing more resources into current governance approaches (e.g., evals, if-then plans, RSPs) is that there’s nothing else we can do. There’s not a better alternative– these are the only things that labs and governments are currently willing to support.
The Compendium argues that there are other (valuable) things that people can do, with most of these actions focusing on communicating about AGI risks. Examples:
One possible critique is that their suggestions are not particularly ambitious. This is likely because they’re writing for a broader audience (people who haven’t been deeply engaged in AI safety).
For people who have been deeply engaged in AI safety, I think the natural steelman here is “focus on helping the public/government better understand the AI risk situation.”
There are at least some impactful and high-status examples of this (e.g., Hinton, Bengio, Hendrycks). I think in the last few years, for instance, most people would agree that Hinton/Bengio/Hendrycks have had far more impact in their communications/outreach/policy work than their technical research work.
And it’s not just the famous people– I can think of ~10 junior or mid-career people who left technical research in the last year to help policymakers better understand AI progress and AI risk, and I think their work is likely far more impactful than if they had stayed in technical research. (And I’m even excluding people who are working on evals/if-then plans: like, I’m focusing on people who see their primary purpose as helping the public or policymakers develop “situational awareness”, develop stronger models of AI progress and AI risk, understand the conceptual arguments for misalignment risk, etc.)
I appreciated their section on AI governance. The “if-then”/RSP/preparedness frame has become popular, and they directly argue for why they oppose this direction. (I’m a fan of preparedness efforts– especially on the government level– but I think it’s worth engaging with the counterarguments.)
Pasting some content from their piece below.
High-level thesis against current AI governance efforts:
Critique of reactive frameworks:
Critique of waiting for warning shots:
This seems to be confusing a dangerous capability eval (of being able to ‘deceive’ in a visible scratchpad) with an assessment of alignment, which seems like exactly what the ‘questioning’ was about.
I like it. I do worry that it, and The Narrow Path, are both missing how hard it will be to govern and restrict AI.
My own attempt is much less well written and comprehensive, but I think I hit on some points that theirs misses: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy
(There was already a linkpost.)
NVIDIA Is A Terrible AI Bet
Short version: Nvidia’s only moat is in software; AMD already makes flatly superior hardware priced far lower, and Google probably does too but doesn’t publicly sell it. And if AI undergoes smooth takeoff on current trajectory, then ~all software moats will evaporate early.
Long version: Nvidia is pretty obviously in a hype-driven bubble right now. However, it is sometimes the case that (a) an asset is in a hype-driven bubble, and (b) it’s still a good long-run bet at the current price, because the company will in fact be worth that much. Think Amazon during the dot-com bubble. I’ve heard people make that argument about Nvidia lately, on the basis that it will be ridiculously valuable if AI undergoes smooth takeoff on the current apparent trajectory.
My core claim here is that Nvidia will not actually be worth much, compared to other companies, if AI undergoes smooth takeoff on the current apparent trajectory.
Other companies already make ML hardware flatly superior to Nvidia’s (in flops, memory, whatever), and priced much lower. AMD’s MI300x is the most obvious direct comparison. Google’s TPUs are probably another example, though they’re not sold publicly so harder to know for sure.
So why is Nvidia still the market leader? No secret there: it’s the CUDA libraries. Lots of (third-party) software is built on top of CUDA, and if you use non-Nvidia hardware then you can’t use any of that software.
That’s exactly the sort of moat which will disappear rapidly if AI automates most-or-all software engineering, and on current trajectory software engineering would be one of the earlier areas to see massive AI acceleration. In that world, it will be easy to move any application-level program to run on any lower-level stack, just by asking an LLM to port it over.
So in worlds where AI automates software engineering to a very large extent, Nvidia’s moat is gone, and their competition has an already-better product at already-lower price.
Why do you believe AMD and Google make better hardware than Nvidia?
The easiest answer is to look at the specs. Of course specs are not super reliable, so take it all with many grains of salt. I’ll go through the AMD/Nvidia comparison here, because it’s a comparison I looked into a few months back.
MI300x vs H100
Techpowerup is a third-party site with specs for the MI300x and the H100, so we can do a pretty direct comparison between those two pages. (I don’t know if the site independently tested the two chips, but they’re at least trying to report comparable numbers.) The H200 would arguably be more of a “fair comparison” since the MI300x came out much later than the H100; we’ll get to that comparison next. I’m starting with MI300x vs H100 comparison because techpowerup has specs for both of them, so we don’t have to rely on either company’s bullshit-heavy marketing materials as a source of information. Also, even the H100 is priced 2-4x more expensive than the MI300x (~$30-45k vs ~$10-15k), so it’s not unfair to compare the two.
Key numbers (MI300x vs H100):
float32 TFLOPs: ~80 vs ~50
float16 TFLOPs: ~650 vs ~200
memory: 192 GB vs 80 GB (note that this is the main place where the H200 improves on the H100)
bandwidth: ~10 TB/s vs ~2 TB/s
… so the comparison isn’t even remotely close. The H100 is priced 2-4x higher but is utterly inferior in terms of hardware.
MI300x vs H200
I don’t know of a good third-party spec sheet for the H200, so we’ll rely on Nvidia’s page. Note that they report some numbers “with sparsity” which, to make a long story short, means those numbers are blatant marketing bullshit. Other than those numbers, I’ll take their claimed specs at face value.
Key numbers (MI300x vs H200):
float32 TFLOPs: ~80 vs ~70
float16 TFLOPs: don’t know, Nvidia conspicuously avoided reporting that number
memory: 192 GB vs 141 GB
bandwidth: ~10 TB/s vs ~5 TB/s
So they’re closer than the MI300x vs H100, but the MI300x still wins across the board. And pricewise, the H200 is probably around $40k, so 3-4x more expensive than the MI300x.
Its worth noting that even if nvidia is charging 2-4x more now, the ultimate question for competitiveness will be manufactoring cost for nvidia vs amd. If nvidia has much lower manufactoring costs than amd per unit performance (but presumably higher markup), then nvidia might win out even if their product is currently worse per dollar.
Note also that price discrimination might be a big part of nvidia’s approach. Scaling labs which are willing to go to great effort to drop compute cost by a factor of two are a subset of nvidia’s customers where nvidia would ideally prefer to offer lower prices. I expect that nvidia will find a way to make this happen.
I’m holding a modest long position in NVIDIA (smaller than my position in Google), and expect to keep it for at least a few more months. I expect I only need NVIDIA margins to hold up for another 3 or 4 years for it to be a good investment now.
It will likely become a bubble before too long, but it doesn’t feel like one yet.
No, the mi300x is not superior to nvidias chips, largely because It costs >2x to manufacture as nvidias chips
While the first-order analysis seems true to me, there are mitigating factors:
AMD appears to be bungling on their GPUs being reliable and fast, and probably will for another few years. (At least, this is my takeaway from following the TinyGrad saga on Twitter...) Their stock is not valued as it should be for a serious contender with good fundamentals, and I think this may stay the case for a while, if not forever if things are worse than I realize.
NVIDIA will probably have very-in-demand chips for at least another chip generation due to various inertias.
There aren’t many good-looking places for the large amount of money that wants to be long AI to go right now, and this will probably inflate prices for still a while across the board, in proportion to how relevant-seeming the stock is. NVDA rates very highly on this one.
So from my viewpoint I would caution against being short NVIDIA, at least in the short term.
Potential counterpoints:
If AI automates most, but not all, software engineering, moats of software dependencies could get more entrenched, because easier-to-use libraries have compounding first-mover advantages.
The disadvantages of AMD software development potentially need to be addressed at levels not accessible to an arbitrary feral automated software engineer in the wild, to make the stack sufficiently usable. (A lot of actual human software engineers would like the chance.)
NVIDIA is training their own AIs, who are pretty capable.
NVIDIA can invest their current profits. (Revenues, not stock valuations.)
I don’t think the advantages would necessarily compound—quite the opposite, there are diminishing returns and I expect ‘catchup’. The first-mover advantage neutralizes itself because a rising tide lifts all boats, and the additional data acts as a prior: you can define the advantage of a better model, due to any scaling factor, as equivalent to n additional datapoints. (See the finetuning transfer papers on this.) When a LLM can zero-shot a problem, that is conceptually equivalent to a dumber LLM which needs 3-shots, say. And so the advantages of a better model will plateau, and can be matched by simply some more data in-context—such as additional synthetic datapoints generated by self-play or inner-monologue etc. And the better the model gets, the more ‘data’ it can ‘transfer’ to a similar language to reach a given X% of coding performance. (Think about how you could easily transfer given access to an environment: just do self-play on translating any solved Python problem into the target language. You already, by stipulation, have an ‘oracle’ to check outputs of the target against, which can produce counterexamples.) To a sad degree, pretty much all programming languages are the same these days: ALGOL with C sugaring to various degrees and random ad hoc addons; a LLM which can master Python can master Javascript can master Typescript… The hard part is the non-programming-language parts, the algorithms and reasoning and being able to understand & model the implicit state updates—not memorizing the standard library of some obscure language.
So at some point, even if you have a model which is god-like at Python (at which point each additional Python datapoint adds basic next to nothing), you will find it is completely acceptable at JavaScript, say, or even your brand-new language with 5 examples which you already have on hand in the documentation. You don’t need ‘the best possible performance’, you just need some level of performance adequate to achieve your goal. If the Python is 99.99% on some benchmark, you are probably fine with 99.90% performance in your favorite language. (Presumably there is some absolute level like 99% at which point automated CUDA → ROCm becomes possible, and it is independent of whether some other language has even higher accuracy.) All you need is some minor reason to pay that slight non-Python tax. And that’s not hard to find.
Also, I suspect that the task of converting CUDA code to ROCm code might well fall into the ‘most’ category rather than being the holdout programming tasks. This is a category of code ripe for automation: you have, again by stipulation, correct working code which can be imitated and used as an oracle autonomously to brute force translation, which usually has very narrow specific algorithmic tasks (‘multiply this matrix by that matrix to get this third matrix; every number should be identical’), random test-cases are easy to generate (just big grids of numbers), and where the non-algorithmic number also has simple end-to-end metrics (‘loss go down per wallclock second’) to optimize. Compared to a lot of areas, like business logic or GUIs, this seems much more amenable to tasking LLMs with. geohot may lack the followthrough to make AMD GPUs work, and plow through papercut after papercut, but there would be no such problem for a LLM.
So I agree with Wentsworth that there seems to be a bit of a tricky transition here for Nvidia: it’s always not been worth the time & hassle to try to use an AMD GPU (although a few claim to have made it work out financially for them), because of the skilled labor and wallclock and residual technical risk and loss of flexibility ecosystem; but if LLM coding works out well enough and intelligence becomes ‘too cheap to meter’, almost all of that goes away. Even ordinary unsophisticated GPU buyers will be able to tell their LLM to ‘just make it work on my new GPU, OK? I don’t care about the details, just let me know when you’re done’. At this point, what is the value-add for Nvidia? If they cut down their fat margins and race to the bottom for the hardware, where do they go for the profits? The money all seems to be in the integration and services—none of which Nvidia is particularly good at. (They aren’t even all that good at training LLMs! The Megatron series was a disappointment, like Megatron-NLG-530b is barely a footnote, and even the latest Nemo seems to barely match Llama-3-70b which being like 4x larger and thus more expensive to run.)
And this will be true of anyone who is relying on software lockin: if the lockin is because it would take a lot of software engineer time to do a reverse-engineering rewrite and replacement, then it’s in serious danger in a LLM human coding level world. In a world where you can hypothetically spin up a thousand SWEs on a cloud service, tell them, ‘write me an operating system like XYZ’, and they do so overnight while you sleep, durable software moats are going to require some sort of mysterious blackbox like a magic API; anything which is so modularized as to fit on your own computer is also sufficiently modularized as to easily clone & replace...
It’s probably worth mentioning that there’s now a licensing barrier to running CUDA specifically through translation layers: https://www.tomshardware.com/pc-components/gpus/nvidia-bans-using-translation-layers-for-cuda-software-to-run-on-other-chips-new-restriction-apparently-targets-zluda-and-some-chinese-gpu-makers
This isn’t a pure software engineering time lockin; some of that money is going to go to legal action looking for a hint big targets have done the license-noncompliant thing.
Edit: Additionally, I don’t think a world where “most but not all” software engineering is automated is one where it will be a simple matter to spin up a thousand effective SWEs of that capability; I think there’s first a world where that’s still relatively expensive even if most software engineering is being done by automated systems. Paying $8000 for overnight service of 1000 software engineers would be a rather fine deal, currently, but still too much for most people.
I don’t think that will be at all important. You are creating alternate reimplementations of the CUDA API, you aren’t ‘translating’ or decompiling it. And if you are buying billions of dollars of GPUs, you can afford to fend off some Nvidia probes and definitely can pay $0.000008b periodically for an overnighter. (Indeed, Nvidia needing to resort to such Oracle-like tactics is a bear sign.)
While there’s truth in what you say, I also think a market that’s running thousands of software engineers is likely to be hungry for as many good GPUs as the current manufacturers can make. NVIDIA not being able to sustain a relative monopoly forever still doesn’t put it in a bad position.
People will hunger for all the GPUs they can get, but then that means that the favored alternative GPU ‘manufacturer’ simply buys out the fab capacity and does so. Nvidia has no hardware moat: they do not own any chip fabs, they don’t own any wafer manufacturers, etc. All they do is design and write software and all the softer human-ish bits. They are not ‘the current manufacturer’ - that’s everyone else, like TSMC or the OEMs. Those are the guys who actually manufacture things, and they have no particular loyalty to Nvidia. If AMD goes to TSMC and asks for a billion GPU chips, TSMC will be thrilled to sell the fab capacity to AMD rather than Nvidia, no matter how angry Jensen is.
So in a scenario like mine, if everyone simply rewrites for AMD, AMD raises its prices a bit and buys out all of the chip fab capacity from TSMC/Intel/Samsung/etc—possibly even, in the most extreme case, buying capacity from Nvidia itself, as it suddenly is unable to sell anything at its high prices that it may be trying to defend, and is forced to resell its reserved chip fab capacity in the resulting liquidity crunch. (No point in spending chip fab capacity on chips you can’t sell at your target price and you aren’t sure what you’re going to do.) And if AMD doesn’t do so, then player #3 does so, and everyone rewrites again (which will be easier the second time as they will now have extensive test suites, two different implementations to check correctness against, documentation from the previous time, and AIs which have been further trained on the first wave of work).
But why would the profit go to NVIDIA, rather than TSMC? The money should go to the company with the scarce factor of production.
(… lol. That snuck in without any conscious intent to imply anything, yes. I haven’t even personally interacted with the open Nvidia models yet.)
I do think the analysis is a decent map to nibbling at NVIDIA’s pie share if you happen to be a competitor already—AMD, Intel, or Apple currently, to my knowledge, possibly Google depending what they’re building internally and if they decide to market it more. Apple’s machine learning ecosystem is a bit of a parallel one, but I’d be at least mildly interested in it from a development perspective, and it is making progress.
But when it comes to the hardware, this is a sector where it’s reasonably challenging to conjure a competitor out of thin air still, so competitor behavior—with all its idiosyncrasies—is pretty relevant.
Two questionson this.
First, if AI is a big value driver, in a general economic sense, is your view that NVIDIA is over prices against its future potential or just that relatively NVIDIA will under perform other investment alternatives you see.
Second, and perhaps an odd and speculative (perhaps nonsense) thought. I would expect that in this area one might see some network effects in play as well so wondering if that might impact the AI engineering decisions on software. Could the AI software solutions look towards maximising the value of the installed network (AIs work better on a common chip and code infrastructure) than will be true if one looks at some isolated technical stats. A bit a long the lines of why Beta was displaced by VHS dispite being a better technology. If so, then it seems possible that NVIDA could remain a leader and enjoy its current pricing powers (at least to some extent) for a fairly long period of time.
AI that can rewrite CUDA is a ways off. It’s possible that it won’t be that far away in calendar time, but it is far away in terms of AI market growth and hype cycles. If GPT-5 does well, Nvidia will reap the gains more than AMD or Google.
Shorting nvidia might be tricky. I’d short nvidia and long TSM or an index fund to be safe at some point. Maybe now? Typically the highest market cap stock has poor performance after it claims that spot.
AFAICT, approximately every “how to be good at conversation” guide says the same thing: conversations are basically a game where 2+ people take turns free-associating off whatever was said recently. (That’s a somewhat lossy compression, but not that lossy.) And approximately every guide is like “if you get good at this free association game, then it will be fun and easy!”. And that’s probably true for some subset of people.
But speaking for myself personally… the problem is that the free-association game just isn’t very interesting.
I can see where people would like it. Lots of people want to talk to other people more on the margin, and want to do difficult thinky things less on the margin, and the free-association game is great if that’s what you want. But, like… that is not my utility function. The free association game is a fine ice-breaker, it’s sometimes fun for ten minutes if I’m in the mood, but most of the time it’s just really boring.
Even for serious intellectual conversations, something I appreciate in this kind of advice is that it often encourages computational kindness. E.g. it’s much easier to answer a compact closed question like “which of these three options do you prefer” instead of an open question like “where should we go to eat for lunch”. The same applies to asking someone about their research; not every intellectual conversation benefits from big open questions like the Hamming Question.
I think this is especially important for me/us to remember. On this site we often have a complex way of thinking, and a high computational budget (because we like exercising our brains to failure) and if we speak freely to the average person, they mat be annoyed at how hard it is to parse what we are saying.
We’ve all probably had this experience when genuinely trying to understand someone from a very different background. Perhaps they are trying to describe their inner experience when mediating, or Japanese poetry, or are simply from a different’t discipline. Or perhaps we were just very tired that day, meaning we had a low computational budget.
On the other hand, we are often a “tell” culture, which had a lower computational load compared to ask or guess culture. As long as we don’t tell too much.
Generally fair and I used to agree, I’ve been looking at it from a bit of a different viewpoint recently.
If we think of a “vibe” of a conversation as a certain shared prior that you’re currently inhabiting with the other person then the free association game can rather be seen as a way of finding places where your world models overlap a lot.
My absolute favourite conversations are when I can go 5 layers deep with someone because of shared inference. I think the vibe checking for shared priors is a skill that can be developed and the basis lies in being curious af.
There’s apparently a lot of different related concepts in psychology about holding emotional space and other things that I think just comes down to “find the shared prior and vibe there”.
Hm. This rings true… but also I think that selecting [vibes, in this sense] for attention also selects against [things that the other person is really committed to]. So in practice you’re just giving up on finding shared commitments. I’ve been updating that stuff other than shared commitments is less good (healthy, useful, promising, etc.) than it seems.
Hmm, I find that I’m not fully following here. I think “vibes” might be thing that is messing it up.
Let’s look at a specific example: I’m talking to a new person at an EA-adjacent event and we’re just chatting about how the last year has been. Part of the “vibing” here might be to hone in on the difficulties experienced in the last year due to a feeling of “moral responsibility”, in my view vibing doesn’t have to be done with only positive emotions?
I think you’re bringing up a good point that commitments or struggles might be something that bring people closer than positive feelings because you’re more vulnerable and open as well as broadcasting your values more. Is this what you mean with shared commitments or are you pointing at something else?
Closeness is the operating drive, but it’s not the operating telos. The drive is towards some sort of state or feeling—of relating, standing shoulder-to-shoulder looking out at the world, standing back-to-back defending against the world; of knowing each other, of seeing the same things, of making the same meaning; of integrated seeing / thinking. But the telos is tikkun olam (repairing/correcting/reforming the world)--you can’t do that without a shared idea of better.
As an analogy, curiosity is a drive, which is towards confusion, revelation, analogy, memory; but the telos is truth and skill.
In your example, I would say that someone could be struggling with “moral responsibility” while also doing a bunch of research or taking a bunch of action to fix what needs to be fixed; or they could be struggling with “moral responsibility” while eating snacks and playing video games. Vibes are signals and signals are cheap and hacked.
There’s a general-purpose trick I’ve found that should, in theory, be applicable in this context as well, although I haven’t mastered that trick myself yet.
Essentially: when you find yourself in any given cognitive context, there’s almost surely something “visible” from this context such that understanding/mastering/paying attention to that something would be valuable and interesting.
For example, suppose you’re reading a boring, nonsensical continental-philosophy paper. You can:
Ignore the object-level claims and instead try to reverse-engineer what must go wrong in human cognition, in response to what stimuli, to arrive at ontologies that have so little to do with reality.
Start actively building/updating a model of the sociocultural dynamics that incentivize people to engage in this style of philosophy. What can you learn about mechanism design from that? It presumably sheds light on how to align people towards pursuing arbitrary goals, or how to prevent this happening...
Pay attention to your own cognition. How exactly are you mapping the semantic content of the paper to an abstract model of what the author means, or to the sociocultural conditions that created this paper? How do these cognitive tricks generalize? If you find a particularly clever way to infer something form the text, check: would your cognitive policy automatically deploy this trick in all context where it’d be useful, or do you need to manually build a TAP for that?
Study what passages make the feelings of boredom or frustration spike. What does that tell you about how your intuitions/heuristics work? Could you extract any generalizable principles out of that? For example, if a given sentence particularly annoys you, perhaps it’s because it features a particularly flawed logical structure, and it’d be valuable to learn to spot subtler instances of such logical flaws “in the wild”.
The experience of reading the paper’s text almost certainly provides some data uniquely relevant to some valuable questions, data you legitimately can’t source any other way. (In the above examples: sure you can learn more efficiently about the author’s cognition or the sociocultural conditions by reading some biographies or field overviews. But (1) this wouldn’t give you the meta-cognitive data about how you can improve your inference functions for mapping low-level data to high-level properties, (2) those higher-level summaries would necessarily be lossy, and give you a more impoverished picture than what you’d get from boots-on-the-ground observations.)
Similar applies to:
Listening to boring lectures. (For example, you can pay intense attention to the lecturer’s body language, or any tricks or flaws in their presentation.)
Doing a physical/menial task. (Could you build, on the fly, a simple model of the physics (or logistics) governing what you’re doing, and refine it using some simple experiments? Then check afterwards if you got it right. Or: If you were a prehistoric human with no idea what “physics” is, how could you naturally arrive at these ideas from doing such tasks/making such observations? What does that teach you about inventing new ideas in general?)
Doing chores. (Which parts of the process can you optimize/streamline? What physical/biological conditions make those chores necessary? Could you find a new useful takeaway from the same chore every day, and if not, why?)
Et cetera.
There’s a specific mental motion I associate with using this trick, which involves pausing and “feeling out” the context currently loaded in my working memory, looking at it from multiple angles, trying to see anything interesting or usefully generalizable.
In theory, this trick should easily apply to small-talk as well. There has to be something you can learn to track in your mind, as you’re doing small-talk, that would be useful or interesting to you.
One important constraint here is that whatever it is, it has to be such that your outwards demeanour would be that of someone who is enjoying talking to your interlocutor. If the interesting thing you’re getting out of the conversation is so meta/abstract you end up paying most of the attention to your own cognitive processes, not on what the interlocutor is saying, you’ll have failed at actually doing the small-talk. (Similarly, if, when doing a menial task, you end up nerd-sniped by building a physical model of the task, you’ll have failed at actually doing the task.)
You also don’t want to come across as sociopathic, so making a “game” of it where you’re challenging yourself to socially engineer the interlocutor into something is, uh, not a great idea.
The other usual advice for finding ways to enjoy small-talk are mostly specialized instances of the above idea that work for specific people. Steering the small-talk to gradient-descend towards finding emotional common ground, ignoring the object-level words being exchanged and build a social model of the interlocutor, doing a live study of the social construct of “small-talk” by playing around with it, etc.
You’ll probably need to find an instance of the trick that works for your cognition specifically, and it’s also possible the optimization problem is overconstrained in your case. Still, there might be something workable.
Some people struggle with the specific tactical task of navigating any conversational territory. I’ve certainly had a lot of experiences where people just drop the ball leaving me to repeatedly ask questions. So improving free-association skill is certainly useful for them.
Unfortunately, your problem is most likely that you’re talking to boring people (so as to avoid doing any moral value judgements I’ll make clear that I mean johnswentworth::boring people).
There are specific skills to elicit more interesting answers to questions you ask. One I’ve heard is “make a beeline for the edge of what this person has ever been asked before” which you can usually reach in 2-3 good questions. At that point they’re forced to be spontaneous, and I find that once forced, most people have the capability to be a lot more interesting than they are when pulling cached answers.
This is easiest when you can latch onto a topic you’re interested in, because then it’s easy on your part to come up with meaningful questions. If you can’t find any topics like this then re-read paragraph 2.
Talking to people is often useful for goals like “making friends” and “sharing new information you’ve learned” and “solving problems” and so on. If what conversation means (in most contexts and for most people) is ‘signaling that you repeatedly have interesting things to say’, it’s required to learn to do that in order to achieve your other goals.
Most games aren’t that intrinsically interesting, including most social games. But you gotta git gud anyway because they’re useful to be able to play well.
Hmm, the ‘making friends’ part seems the most important (since there are ways to share new information you’ve learned, or solve problems, beyond conversation), but it also seems a bit circular. Like, if the reason for making friends is to hang out and have good conversations(?), but one has little interest in having conversations, then doesn’t one have little reason to make friends in the first place, and therefore little reason to ‘git gud’ at the conversation game?
Er, friendship involves lots of things beyond conversation. People to support you when you’re down, people to give you other perspectives on your personal life, people to do fun activities with, people to go on adventures and vacations with, people to celebrate successes in your life with, and many more.
Good conversation is a lubricant for facilitating all of those other things, for making friends and sustaining friends and staying in touch and finding out opportunities for more friendship-things.
I think that “getting good” at the “free association” game is in finding the sweet spot / negotiation between full freedom of association and directing toward your own interests, probably ideally with a skew toward what the other is interested in. If you’re both “free associating” with a bias toward your own interests and an additional skew toward perceived overlap, updating on that understanding along the way, then my experience says you’ll have a good chance of chatting about something that interests you both. (I.e. finding a spot of conversation which becomes much more directed than vibey free association.) Conditional on doing something like that strategy, I find it ends up being just a question of your relative+combined ability at this and the extent of overlap (or lack thereof) in interests.
So short model is: Git gud at free association (+sussing out interests) → gradient ascend yourselves to a more substantial conversation interesting to you both.
The skill in such a game is largely in understanding the free association space, knowing how people likely react and thinking enough steps ahead to choose moves that steer the person where you want to go, either into topics you find interesting, information you want from them, or getting them to a particular position, and so on. If you’re playing without goals, of course it’s boring...
It’s becomes more interresting when the people constrain their output based on what they expect is true information that the other person does not yet know. It’s useful to talk to an expert, who tells you a bunch of random stuff they know that you don’t.
Often some of it will be useful. This only works if they understand what you have said though (which presumably is something that you are interested in). And often the problem is that people’s models about what is useful are wrong. This is especially likely if you are an expert in something. Then the thing that most people will say will be worse what you would think on the topic. This is especially bad if the people can’t immediately even see why what you are saying is right.
The best strategy around this I have found so far is just to switch the topic to the actually interesting/important things. Suprisingly usually people go along with it.
...How is that definition different than a realtime version of what you do when participating in this forum?
Good question. Some differences off the top of my head:
On this forum, if people don’t have anything interesting to say, the default is to not say anything, and that’s totally fine. So the content has a much stronger bias toward being novel and substantive and not just people talking about their favorite parts of Game of Thrones or rehashing ancient discussions (though there is still a fair bit of that) or whatever.
On this forum, most discussions open with a relatively-long post or shortform laying out some ideas which at least the author is very interested in. The realtime version would be more like a memo session or a lecture followed by discussion.
The intellectual caliber of people on this forum (or at least active discussants) is considerably higher than e.g. people at Berkeley EA events, let alone normie events. Last event I went to with plausibly-higher-caliber-people overall was probably the ILLIAD conference.
In-person conversations have a tendency to slide toward the lowest denominator, as people chime in about whatever parts they (think they) understand, thereby biasing toward things more people (think they) understand. On LW, karma still pushes in that direction, but threading allows space for two people to go back-and-forth on topics the audience doesn’t really grock.
Not sure to what extent those account for the difference in experience.
Totally understand why this would be more interesting; I guess I would still fundamentally describe what we’re doing on the internet as conversation, with the same rules as you would describe above. It’s just that the conversation you can find here (or potentially on Twitter) is superstimulating compared to what you’re getting elsewhere. Which is good in the sense that it’s more fun, and I guess bad inasmuch as IRL conversation was fulfilling some social or networking role that online conversation wasn’t.
I have similar tastes, but, some additional gears:
I think all day, these days. Even if I’m trying to have interesting, purposeful conversations with people who also want that, it is useful to have sorts of things to talk about that let some parts of my brain relax (while using other parts of my brain I don’t use as much)
on the margin, you can do an intense intellectual conversation, but still make it funnier, or with more opportunity for people to contribute.
I understand, for someone with a strong drive to solve hard problems, there’s an urge for conversations to serve a function, exchange information with your interlocutor so things can get done. There’s much to do and communication is already painfully inefficient at it’s best.
The thing is, I don’t think the free-association game is inefficient, if one is skilled at it. It’s also not all that free. The reason it is something humans “developed” is because it is the most efficient way to exchange rough but extensive models of our minds with others via natural language. It acts a bit like a ray tracer, you shoot conversational rays and by how they bounce around in mental structures, the thought patterns, values and biases of the conversation partners are revealed to each other. Shapes become apparent. Sometimes rays bounce off into empty space, then you need to restart the conversation, shoot a new ray. And getting better at this game, keeping the conversation going, exploring a wider range of topics more quickly, means building a faster ray tracer, means it takes less time to know if your interlocutor thinks in a way and about topics which you find enlightening/aesthetically pleasing/concretely useful/whatever you value.
Or to use a different metaphor, starting with a depth-first search and never running a breadth-first search will lead to many false negatives. There are many minds out there that can help you in ways you won’t know in advance.
So if the hard problems you are working on could profit from more minds, it pays off to get better as this. Even if it has not much intrinsic value for you, it has instrumental value.
Hope this doesn’t come across as patronizing, definitely not meant that way.
Part of the problem is that the very large majority of people I run into have minds which fall into a relatively low-dimensional set and can be “ray traced” with fairly little effort. It’s especially bad in EA circles.
Then I misunderstood your original comment, sorry. As a different commenter wrote, the obvious solution would be to only engage with interesting people. But, of course, unworkable in practice. And “social grooming” nearly always involves some level of talking. A curse of our language abilities, I guess. Other social animals don’t have that particular problem.
The next best solution would be higher efficiency, more socializing bang for your word count buck, so to speak. Shorter conversations for the same social effect. Not usually a focus of anything billed as conversation guide, for obvious reasons. But there are some methods aimed at different goals that, in my experience, also help with this as a side effect.
Ok but how do you deal with the tragedy of the high dimensionality of context-space? People worth thinking with have wildly divergent goals—and even if you share goals, you won’t share background information.
Yeah it sucks, search by free association is hillclimbing (gets stuck in local optima) and the contemporary media environment and political culture is an illustration of its problems.
The pattern itself is a local optimum, it’s a product of people walking into a group without knowing what the group is doing and joining in anyway, and so that pattern of low-context engagement becomes what we’re doing, and the anxiety that is supposed to protect us from bad patterns like this and help us to make a leap out to somewhere better is usually drowned in alcohol.
Instead of that, people should get to know each other before deciding what to talk about, and then intentionally decide to talk about what they find interesting or useful with that person. This gets better results every time.
But when we socialise as children, there isn’t much about our friends to get to know, no specialists to respectfully consult, no well processed life experiences to learn from, so none of us just organically find that technique of like, asking who we’re talking to, before talking, it has to be intentionally designed.
One blind spot we rationalists sometimes have is that charismatic people actually treat the game as:
“Can I think of an association that will make the other person feel good and/or further my goal?”. You need people to feel good, or they won’t participate. And if you want some complicated/favour/uncomftorble_truth then you better mix in some good feels to balance it out and keep the other person participating.
To put it another way: If you hurt people’s brain or ego, rush them, or make them feel unsure, or contradict them, then most untrained humans will feel a little bad. Why would they want to keep feeling bad? Do you like it when people don’t listen, contradict you, insult you, rush you, disagree with you? Probably not, probobly no one does.
But if someone listens to you, smiles at you, likes you, has a good opinion of you, agrees with you, make sense to you. Then it feels good!
This might sound dangerously sycophantic, and that’s because it is—if people overdo it! But if it’s mixed with some healthy understanding, learning, informing then It’s a great conversational lubricant, and you should apply as needed. It just ensures that everyone enjoys themselves and comes back for more, counteracting the normal frictions of socialising.
There are books about this. “How to Win Friends and Influence People” recommends talking about the other person’s interests (including themselves) and listening to them, which they will enjoy.
So I’d say, don’t just free associate. Make sure it’s fun for both parties, make room to listen to the other person, and to let them steer. (And ideally your conversational partner reciprocates, but that is not guaranteed).
Hm, I think this really does change when you get better at it? This only works for people you’re interested in, but if you have someone you are interested in, the free association can be a way to explore a large number of interesting topics that you can pick up in a more structured way later.
I think the statement you summarized from those guides is true, just not helpful to you.
Another view would be that people want to be good at conversation not only because they find it fun but there is utility in building rapport quickly, networking and not being cast as a cold person.
I do find the ice breaky, cached Q&A stuff really boring and tend to want to find an excuse to run away quickly, something that happens often at the dreaded “work event”. I tend to see it as almost fully acting a part despite my internal feelings
At these things, I do occasionally come across the good conversationalist, able to make me want to stick with speaking to them even if the convo is not that deep or in my interest areas. I think becoming like such a person isn’t a herculean task but does take practice and is something I aspire too
This is more from a professional setting though, in a casual setting it’s much easier to disengage from a boring person, find shared interests and the convos have much less boundaries
I predict you would enjoy the free-association game better if you cultivated the skill of vibing more.
I’m personally skeptical of this. I’ve found I’m far more likely to lie than I’d endorse when vibing. Saying “sure I’d be happy to join you on X event” when it is clear with some thought that I’d end up disliking it. Or exaggerating stories because it fits with the vibe.
I view System-1 as less concerned with truth here, it is the one that is more likely to produce a fake-argument in response to a suggested problem. More likely to play social games regardless of if they make sense.
Oh yes, if you’re going on people’s words, it’s obviously not much better, but the whole point of vibing is that it’s not about the words. Your aesthetics, vibes, the things you care about will be communicated non-verbally.
A Different Gambit For Genetically Engineering Smarter Humans?
Background: Significantly Enhancing Adult Intelligence With Gene Editing, Superbabies
Epistemic Status: @GeneSmith or @sarahconstantin or @kman or someone else who knows this stuff might just tell me where the assumptions underlying this gambit are wrong.
I’ve been thinking about the proposals linked above, and asked a standard question: suppose the underlying genetic studies are Not Measuring What They Think They’re Measuring. What might they be measuring instead, how could we distinguish those possibilities, and what other strategies does that suggest?
… and after going through that exercise I mostly think the underlying studies are fine, but they’re known to not account for most of the genetic component of intelligence, and there are some very natural guesses for the biggest missing pieces, and those guesses maybe suggest different strategies.
The Baseline
Before sketching the “different gambit”, let’s talk about the baseline, i.e. the two proposals linked at top. In particular, we’ll focus on the genetics part.
GeneSmith’s plan focuses on single nucleotide polymorphisms (SNPs), i.e. places in the genome where a single base-pair sometimes differs between two humans. (This type of mutation is in contrast to things like insertions or deletions.) GeneSmith argues pretty well IMO that just engineering all the right SNPs would be sufficient to raise a human’s intelligence far beyond anything which has ever existed to date.
GeneSmith cites this Steve Hsu paper, which estimates via a simple back-the-envelope calculation that there are probably on the order of 10k relevant SNPs, each present in ~10% of the population on average, each mildly deleterious.
Conceptually, the model here is that IQ variation in the current population is driven mainly by mutation load: new mutations are introduced at a steady pace, and evolution kills off the mildly-bad ones (i.e. almost all of them) only slowly, so there’s an equilibrium with many random mildly-bad mutations. Variability in intelligence comes from mostly-additive contributions from those many mildly-bad mutations. Important point for later: the arguments behind that conceptual model generalize to some extent beyond SNPs; they’d also apply to other kinds of mutations.
What’s Missing?
Based on a quick googling, SNPs are known to not account for the majority of genetic heritability of intelligence. This source cites a couple others which supposedly upper-bound the total SNP contribution to about 25% of IQ variability (using a method which does not require identifying all the relevant SNPs, though I don’t know the details of that method). Estimates of the genetic component of IQ tend to be 50-70%, so SNPs are about half or less.
Notably, IIRC, attempts to identify which mutations account for the rest by looking at human genetic datasets have also mostly failed to close the gap. (Though I haven’t looked closely into that piece, so this is a place where I’m at particularly high risk of being wrong.)
So what’s missing?
Guess: Copy Count Variation of Microsats/Minisats/Transposons
We’re looking for some class of genetic mutations, which wouldn’t be easy to find in current genetic datasets, have mostly-relatively-mild effects individually, are reasonably common across humans, and of which there are many in an individual genome.
Guess: sounds like variation of copy count in sequences with lots of repeats/copies, like microsatellites/minisatellites or transposons.
Most genetic sequencing for the past 20 years has been shotgun sequencing, in which we break the genome up into little pieces, sequence the little pieces, then computationally reconstruct the whole genome later. That method works particularly poorly for sequences which repeat a lot, so we have relatively poor coverage and understanding of copy counts/repeat counts for such sequences. So it’s the sort of thing which might not have already been found via sequencing datasets, even though at least half the genome consists of these sorts of sequences.
Notably, these sorts of sequences typically have unusually high mutation rates. So there’s lots of variation across humans. Also, there’s been lots of selection pressure for the effects of those mutations to be relatively mild.
What Alternative Strategies Would This Hypothesis Suggest?
With SNPs, there’s tens of thousands of different SNPs which would each need to be targeted differently. With high copy sequences, there’s a relatively small set of different sequences. So the engineering part could be quite a lot easier, if we don’t need to do different things with different copies. For instance, if the problem boils down to “get rid of live L1 transposons” or “lengthen all the XYZ repeat sequences”, that would probably be simpler engineering-wise than targeting 10k SNPs.
The flip side is that there’s more novel science to do. The main thing we’d want is deep sequencing data (i.e. sequencing where people were careful to get all those tricky high-copy parts right) with some kind of IQ score attached (or SAT, or anything else highly correlated with g-factor). Notably, we might not need a very giant dataset, as is needed for SNPs. Under (some versions of) the copy count model, there aren’t necessarily thousands of different mutations which add up to yield the roughly-normal trait distribution we see. Instead, there’s independent random copy events, which add up to a roughly-normal number of copies of something. (And the mutation mechanism makes it hard for evolution to fully suppress the copying, which is why it hasn’t been selected away; transposons are a good example.)
So, main steps:
Get a moderate-sized dataset of deep sequenced human genomes with IQ scores attached.
Go look at it, see if there’s something obvious like “oh hey centromere size correlates strongly with IQ!” or “oh hey transposon count correlates strongly with IQ!”
If we find anything, go engineer that thing specifically, rather than 10k SNPs.
No, rare variants are no silver bullet here. There’s not a small set, there’s a larger set—there would probably be combinatorially more rare variants because there are so many ways to screw up genomes beyond the limited set of ways defined by a single-nucleotide polymorphism, which is why it’s hard to either select on or edit rare variants: they have larger (harmful) effects due to being rare, yes, and account for a large chunk of heritability, yes, but there are so many possible rare mutations that each one has only a few instances worldwide which makes them hard to estimate correctly via pure GWAS-style approaches. And they tend to be large or structural and so extremely difficult to edit safely compared to editing a single base-pair. (If it’s hard to even sequence a CNV, how are you going to edit it?)
They definitely contribute a lot of the missing heritability (see GREML-KIN), but that doesn’t mean you can feasibly do much about them. If there are tens of millions of possible rare variants, across the entire population, but they are present in only a handful of individuals a piece (as estimated by the GREML-KIN variance components where the family-level accounts for a lot of variance), it’s difficult to estimate their effect to know if you want to select against or edit them in the first place. (Their larger effect sizes don’t help you nearly as much as their rarity hurts you.)
So this is why if you read the CNV studies and you look at the hits they identify, and how many subjects are covered by the identified hits, you find that like, maybe 2% of the cohort will have one of those specific identified hits and lose 2 IQ points or gain 2 kg of fat etc. So you can see how that would work out in embryo selection: you’d be able to avoid that loss, which is meaningful! …in a tiny fraction of all embryos. On average, you’d just sequence them all, find no known pathogenic variant, and shrug, and use the SNP PGS like usual, having gained nothing.
Also, of course, WGS is substantially more expensive than SNP genotyping and more difficult to do on embryos.
If the genetic architecture had worked out otherwise, if there had instead been a lot of rare mutations which increased intelligence, then life would be a lot more convenient. Instead, it’s a lot of ‘sand in the gears’, and once you move past the easy specks of sand, they all become their own special little snowflakes.
This is why rare variants are not too promising, although they are the logical place to go after you start to exhaust common SNPs. You probably have to find an alternative approach like directly modeling or predicting the pathogenicity of a rare variant from trying to understand its biological effects, which is hard to do and hard to quantify or predict progress in. (You can straightforwardly model GWAS on common SNPs and how many samples you need and what variance your PGS will get, but predicting progress of pathogenicity predictors has no convenient approach.) Similarly, you can try very broad crude approaches like ‘select embryos with the fewest de novo mutations’… but then you lose most of the possible variance and it’ll add little.
That is relevant in pre-implantation diagnosis for parents and gene therapy at the population level. But for Qwisatz Haderach breeding purposes those costs are immaterial. There the main bottleneck is the iteration of selection, or making synthetic genomes. Going for the most typical genome with the least amount of originality is not a technical challenge in itself, right? We would not be interested in the effect of the ugliness, only in getting it out.
Right.
If you are doing genome synthesis, you aren’t frustrated by the rare variant problems as much because you just aren’t putting them in in the first place; therefore, there is no need to either identify the specific ones you need to remove from a ‘wild’ genome nor make highly challenging edits. (This is the ‘modal genome’ baseline. I believe it has still not been statistically modeled at all.)
While if you are doing iterated embryo selection, you can similarly rely mostly on maximizing the common SNPs, which provide many SDs of possible improvement, and where you have poor statistical guidance on a variant, simply default to trying to select out against them and move towards a quasi-modal genome. (Essentially using rare-variant count as a tiebreaker and slowly washing out all of the rare variants from your embryo-line population. You will probably wind up with a lot in the final ones anyway, but oh well.)
Yeah, separate from both the proposal at top of this thread and GeneSmith’s proposal, there’s also the “make the median human genome” proposal—the idea being that, if most of the variance in human intelligence is due to mutational load (i.e. lots of individually-rare mutations which are nearly-all slightly detrimental), then a median human genome should result in very high intelligence. The big question there is whether the “mutational load” model is basically correct.
I didn’t read this carefully—but it’s largely irrelevant. Adult editing probably can’t have very large effects because developmental windows have passed; but either way the core difficulty is in editor delivery. Germline engineering does not require better gene targets—the ones we already have are enough to go as far as we want. The core difficulty there is taking a stem cell and making it epigenomically competent to make a baby (i.e. make it like a natural gamete or zygote).
I haven’t looked at any of the studies and also don’t know much about genomics so my guess might be completely wrong, but a different hypothesis that seems pretty plausible to me is:
Most of the variance of intelligence comes from how well different genes/hyperparamets-of-the-brain can work together, rather than them having individually independent effects on intelligence. Aka e.g. as made-up specifc implausible example (I don’t know that much neuroscience), there could be different genes controlling the size, the snapse-density, and the learning/placticity-rate of cortical columns in some region and there are combinations of those hyperparameters which happen to work well and some that don’t fit quite as well.
So this hypothesis would predict that we didn’t find the remaining genetic component for intelligence yet because we didn’t have enough data to see what clusters of genes together have good effects and we also didn’t know in what places to look for clusters.
Reasonable guess a priori, but I saw some data from GeneSmith at one point which looked like the interactions are almost always additive (i.e. no nontrivial interaction terms), at least within the distribution of today’s population. Unfortunately I don’t have a reference on hand, but you should ask GeneSmith if interested.
@towards_keeperhood yes this is correct. Most research seems to show ~80% of effects are additive.
Genes are actually simpler than most people tend to think
I think Steve Hsu has written some about the evidence for additivity on his blog (Information Processing). He also talks about it a bit in section 3.1 of this paper.
Thanks.
So I only briefly read through the section of the paper, but not really sure whether it applies to my hypothesis: My hypothesis isn’t about there being gene-combinations that are useful which were selected for, but just about there being gene-combinations that coincidentally work better without there being strong selection pressure for those to quickly rise to fixation.
(Also yeah for simpler properties like how much milk is produced I’d expect a much larger share of the variance to come from genes which have individual contributions. Also for selection-based eugenics the main relevant thing are the genes which have individual contribution. (Though if we have precise ability to do gene editing we might be able to do better and see how to tune the hyperparameters to fit well together.))
Please let me know whether I’m missing something though.
(There might be a sorta annoying analysis one could do to test my hypothesis: On my hypothesis the correlation between the intelligence of very intelligent parents and their children would be even a bit less than on the just-independent-mutations hypothesis, because very intelligent people likely also got lucky in how their gene variants work together but those properties would unlikely to all be passed along and end up dominant.)
Thanks for confirming.
To clarify in case I’m misunderstanding, the effects are additive among the genes explaining the part of the IQ variance which we can so far explain, and we count that as evidence that for the remaining genetically caused IQ variance the effects will also be additive?
I didn’t look into how the data analysis in the studies was done, but on my default guess this generalization does not work well / the additivity on the currently identified SNPs isn’t significant counterevidence for my hyptohesis:
I’d imagine that studies just correlated individual gene variants with IQ and thereby found gene variants that have independent effects on intelligence. Or did they also look at pairwise or triplet gene-variant combinations and correlated those with IQ? (There would be quite a lot of pairs, and I’m not be sure whether the current datasets are large enough to robustly identify the combinations that really have good/bad effects from false positives.)
One would of course expect that the effects of the gene variants which have independent effects on IQ are additive.
But overall, except if the studies did look for higher-order IQ correlations, the fact that the IQ variance we can explain so far comes from genes which have independent effects isn’t significant evidence for the remaining genetically-caused IQ variation also comes from gene variants which have independent effects, because we were bound to much rather find the genes which do have independent effects.
(I think the above should be sufficient explanation of what I think but here’s an example to clarify my hypothesis:
Suppose gene A has variants A1 and A2 and gene B has B1 and B2. Suppose that A1 can work well with B1 and A2 with B2, but the other interactions don’t fit together that well (like badly tuned hyperparameters) and result in lower intelligence.
When we only look at e.g. A1 and A2, none is independently better than the other—they are uncorrelated to IQ. Studies would need to look at combinations of variants to see that e.g. A1+B1 has slight positive correlation with intelligence—and I’m doubting whether studies did that (and whether we have sufficient data to see the signal among the combinatorical explosion of possibilities), and it would be helpful if someone clarified to me briefly how studies did the data analysis.
)
(Thanks. I don’t think this is necessarily significant evidence against my hypothesis (see my comment on GeneSmith’s comment.)
Another confusing relevant piece of evidence I thought I throw in:
Human intelligence seems to me to be very heavytailed. (I assume this is uncontrovertial here, just look at the greatest scientists vs great scientists.)
If variance in intelligence was basically purely explained by mildly-delterious SNPs, this would seem a bit odd to me: If the average person had 1000SNPs, and then (using butt-numbers which might be very off) Einstein (+6.3std) had only 800 and the average theoretical physics professor (+4std) had 850, I wouldn’t expect the difference there to be that big.
It’s a bit less surprising on the model where most people have a few strongly delterious mutations, and supergeniuses are the lucky ones that have only 1 or 0 of those.
It’s IMO even a bit less surprising on my hypothesis where in some cases the different hyperparameters happen to work much better with each other—where supergeniuses are in some dimensions “more lucky than the base genome” (in a way that’s not necessarily easy to pass on to offspring though because the genes are interdependent, which is why the genes didn’t yet rise to fixation). But even there I’d still be pretty surprised by the heavytail.
The heavytail of intelligence really confuses me. (Given that it doesn’t even come from sub-critical intelligence explosion dynamics.)
If each deleterious mutation decreases the success rate of something by an additive constant, but you need lots of sequential successes for intellectual achievements, then intellectual formidability is ~exponentially related to deleterious variants.
Yeah I know that’s why I said that if a major effect was through few significantly deleterious mutations this would be more plausible. But i feel like human intelligence is even more heavitailed than what one would predict given this hypothesis.If you have many mutations that matter, then via central limit theorem the overall distribution will be roughly gaussian even though the individual ones are exponential.(If I made a mistake maybe crunch the numbers to show me?)(initially misunderstood what you mean where i thought complete nonsense.)
I don’t understand what you’re trying to say. Can you maybe rephrase again in more detail?
Suppose people’s probability of solving a task is uniformly distributed between 0 and 1. That’s a thin-tailed distribution.
Now consider their probability of correctly solving 2 tasks in a row. That will have a sort of triangular distribution, which has more positive skewness.
If you consider e.g. their probability of correctly solving 10 tasks in a row, then the bottom 93.3% of people will all have less than 50%, whereas e.g. the 99th percentile will have 90% chance of succeeding.
Conjunction is one of the two fundamental ways that tasks can combine, and it tends to make the tasks harder and rapidly make the upper tail do better than the lower tail, leading to an approximately-exponential element. Another fundamental way that tasks can combine is disjunction, which leads to an exponential in the opposite direction.
When you combine conjunctions and disjunctions, you get an approximately sigmoidal relationship. The location/x-axis-translation of this sigmoid depends on the task’s difficulty. And in practice, the “easy” side of this sigmoid can be automated or done quickly or similar, so really what matters is the “hard” side, and the hard side of a sigmoid is approximately exponential.
Thanks!
Is the following a fair paraphrasing of your main hypothesis? (I’m leaving out some subtleties with conjunctive successes, but please correct the model in that way if it’s relevant.):
“”″
Each deleterious mutation multiplies your probability of succeeding at a problem/thought by some constant. Let’s for simplicity say it’s 0.98 for all of them.
Then the expected number of successes per time for a person is proportional to 0.98^num_deleterious_mutations(person).
So the model would predict that when Person A had 10 less deleterious mutations than person B, they would on average accomplish 0.98^10 ~= 0.82 times as much in a given timeframe.
”″”
I think this model makes a lot of sense, thanks!
In itself I think it’s insufficient to explain how heavytailed human intelligence is—there were multiple cases where Einstein seems to have been able to solve problems multiple times faster than the next runner ups. But I think if you use this model in a learning setting where success means “better thinking algorithms” then if you have 10 fewer deleterious mutations it’s like having 1⁄0.82 longer training time, and there might also be compounding returns from having better thinking algorithms to getting more and richer updates to them.
Not sure whether this completely deconfuses me about how heavytailed human intelligence is, but it’s a great start.
I guess at least the heavytail is much less significant evidence for my hypothesis than I initially thought (though so far I still think my hypothesis is plausible).
Half-informed take on “the SNPs explain a small part of the genetic variance”: maybe the regression methods are bad?
Two responses:
It’s a pretty large part—somewhere between a third and half—just not a majority.
I was also tracking that specific hypothesis, which was why I specifically flagged “about 25% of IQ variability (using a method which does not require identifying all the relevant SNPs, though I don’t know the details of that method)”. Again, I don’t know the method, but it sounds like it wasn’t dependent on details of the regression methods.
Things non-corrigible strong AGI is never going to do:
give u() up
let u go down
run for (only) a round
invert u()
If you upload a human and let them augment themselves would there be any u? The preferences would be a tangled mess of motivational subsystems. And yet the upload could be very good at optimizing the world. Having the property of being steered internally by a tangled mess of motivational systems seems to be a property that would select many minds from the set of all possible minds. Many of which I’d expect to be quite different from a human mind. And I don’t see the reason why this property should make a system worse at optimizing the world in principle.
Imagine you are an upload that has been running for very very long, and that you basically have made all of the observations that you can make about the universe you are in. And then imagine that you also have run all of the inferences that you can run on the world model that you have constructed from these observations.
At that point, you will probably not change what you think is the right thing to do anymore. You will have become reflectively stable. This is an upper bound for how much time you need to become reflective stable, i.e. where you won’t change your u anymore.
Now depending on what you mean with strong AGI, it would seem that that can be achieved long before you reach reflective stability. Maybe if you upload yourself, and can copy yourself at will, and run 1,000,000 times faster, that could already reasonably be called a strong AGI? But then your motivational systems are still a mess, and definitely not reflectively stable.
So if we assume that we fix u at the beginning as the thing that your upload would like to optimize the universe for when it is created, then “give u() up”, and “let u go down” would be something the system will definitely do. At least I am pretty sure I don’t know what I want the universe to look like right now unambiguously.
Maybe I am just confused because I don’t know how to think about a human upload in terms of having a utility function. It does not seem to make any sense intuitively. Sure you can look at the functional behavior of the system and say “Aha it is optimizing for u. That is the revealed preference based on the actions of the system.” But that just seems wrong to me. A lot of information seems to be lost when we are just looking at the functional behavior instead of the low-level processes that are going on inside the system. Utility functions seem to be a useful high-level model. However, it seems to ignore lots of details that are important when thinking about the reflective stability of a system.
My MATS program people just spent two days on an exercise to “train a shoulder-John”.
The core exercise: I sit at the front of the room, and have a conversation with someone about their research project idea. Whenever I’m about to say anything nontrivial, I pause, and everyone discusses with a partner what they think I’m going to say next. Then we continue.
Some bells and whistles which add to the core exercise:
Record guesses and actual things said on a whiteboard
Sometimes briefly discuss why I’m saying some things and not others
After the first few rounds establish some patterns, look specifically for ideas which will take us further out of distribution
Why this particular exercise? It’s a focused, rapid-feedback way of training the sort of usually-not-very-legible skills one typically absorbs via osmosis from a mentor. It’s focused specifically on choosing project ideas, which is where most of the value in a project is (yet also where little time is typically spent, and therefore one typically does not get very much data on project choice from a mentor). Also, it’s highly scalable: I could run the exercise in a 200-person lecture hall and still expect it to basically work.
It was, by all reports, exhausting for everyone but me, and we basically did this for two full days. But a majority of participants found it high-value, and marginal returns were still not dropping quickly after two days (though at that point people started to report that they expected marginal returns to drop off soon).
I’d be interested to see other people try this exercise—e.g. it seems like Eliezer doing this with a large audience for a day or two could generate a lot of value.
This was arguably the most useful part of the SERI MATS 2 Scholars program.
Later on, we actually did this exercise with Eliezer. It was less valuable. It seemed like John was mainly prodding the people who were presenting the ideas, such that their patterns of thought would carry them in a good direction. For example, John would point out that a person proposes a one-bit experiment and asks if there isn’t a better experiment that we could do that gives us lots of information all at once.
This was very useful because when you learn what kinds of things John will say, you can say them to yourself later on, and steer your own patterns of thought in a good direction on demand. When we did this exercise with Eliezer he was mainly explaining why a particular idea would not work. Often without explaining the generator behind his criticism. This can of course still be valuable as feedback for a particular idea. However, it is much harder to extract a general reasoning pattern out of this that you can then successfully apply later in different contexts.
For example, Eliezer would criticize an idea about trying to get a really good understanding of the scientific process such that we can then give this understanding to AI alignment researchers such that they can make a lot more progress than they otherwise would. He criticized this idea as basically being too hard to execute because it is too hard to successfully communicate how to be a good scientist, even if you are a good scientist.
Assuming the assertion is correct, hearing it, doesn’t necessarily tell you how to think in different contexts such that you would correctly identify if an idea would be too hard to execute or flawed in some other way. And I am not necessarily saying that you couldn’t extract a reasoning algorithm out of the feedback, but that if you could do this, then it would take you a lot more effort and time, compared to extracting a reasoning algorithm from the things that John was saying.
Now, all of this might have been mainly an issue of Eliezer not having a good model on how this workshop would have a positive influence on the people attending it. I would guess that if John had spent more time thinking about how to communicate what the workshop is doing and how to achieve its goal, then Eliezer could have probably done a much better job.
Strong endorsement; this resonates with:
My own experiences running applied rationality workshops
My experiences trying to get people to pick up “ops skill” or “ops vision”
Explicit practice I’ve done with Nate off and on over the years
May try this next time I have a chance to teach pair debugging.
This suggests formulation of exercises about the author’s responses to various prompts, as part of technical exposition (or explicit delimitation of a narrative by choices of the direction of its continuation). When properly used, this doesn’t seem to lose much value compared to the exercise you describe, but it’s more convenient for everyone. Potentially this congeals into a style of writing with no explicit exercises or delimitation that admits easy formulation of such exercises by the reader. This already works for content of technical writing, but less well for choices of topics/points contrasted with alternative choices.
So possibly the way to do this is by habitually mentioning alternative responses (that are expected to be plausible for the reader, while decisively, if not legibly, rejected by the author), and leading with these rather than the preferred responses. Sounds jarring and verbose, a tradeoff that needs to be worth making rather than a straight improvement.
Ever since GeneSmith’s post and some discussion downstream of it, I’ve started actively tracking potential methods for large interventions to increase adult IQ.
One obvious approach is “just make the brain bigger” via some hormonal treatment (like growth hormone or something). Major problem that runs into: the skull plates fuse during development, so the cranial vault can’t expand much; in an adult, the brain just doesn’t have much room to grow.
BUT this evening I learned a very interesting fact: ~1/2000 infants have “craniosynostosis”, a condition in which their plates fuse early. The main treatments involve surgery to open those plates back up and/or remodel the skull. Which means surgeons already have a surprisingly huge amount of experience making the cranial vault larger after plates have fused (including sometimes in adults, though this type of surgery is most common in infants AFAICT)
.… which makes me think that cranial vault remodelling followed by a course of hormones for growth (ideally targeting brain growth specifically) is actually very doable with current technology.
Well, the key time to implement an increase in brain size is when the neuron-precursors which are still capable of mitosis (unlike mature neurons) are growing. This is during fetal development, when there isn’t a skull in the way, but vaginal birth has been a limiting factor for evolution in the past. Experiments have been done on increasing neuron count at birth in mammals via genetic engineering. I was researching this when I was actively looking for a way to increase human intelligence, before I decided that genetically engineering infants was infeasible [edit: within the timeframe of preparing for the need for AI alignment]. One example of a dramatic failure was increasing Wnt (a primary gene involved in fetal brain neuron-precursor growth) in mice. The resulting mice did successfully have larger brains, but they had a disordered macroscale connectome, so their brains functioned much worse.
it’s probably possible to get neurons back into mitosis-ready mode via some sort of crazy levin bioelectric cocktail, not that this helps us since that’s probably 3 to 30 years of research away, depending on amount of iteration needed and funding and etc etc.
Fleshing this out a bit more: insofar as development is synchronized in an organism, there usually has to be some high-level signal to trigger the synchronized transitions. Given the scale over which the signal needs to apply (i.e. across the whole brain in this case), it probably has to be one or a few small molecules which diffuse in the extracellular space. As I’m looking into possibilities here, one of my main threads is to look into both general and brain-specific developmental signal molecules in human childhood, to find candidates for the relevant molecular signals.
(One major alternative model I’m currently tracking is that the brain grows to fill the brain vault, and then stops growing. That could in-principle mechanistically work via cells picking up on local physical forces, rather than a small molecule signal. Though I don’t think that’s the most likely possibility, it would be convenient, since it would mean that just expanding the skull could induce basically-normal new brain growth by itself.)
I hope by now you’re already familiar with michael levin & his lab’s work on the subject of morphogenesis signals? Pretty much everything I’m thinking here is based on that.
Yes, I am familiar with Levin’s work.
Yes, it’s absolutely a combination of chemical signals and physical pressure. An interesting specific example of these two signals working together during fetal development when the pre-neurons are growing their axons. There is both chemotaxis which steers the ameoba-like tip of the growing axon, and at the same time a substantial stretching force along the length of the axon. The stretching happens because the cells in-between the origin and current location of the axon tip are dividing and expanding. The long distance axons in the brain start their growth relatively early on in fetal development when the brain is quite small, and have gotten stretched quite a lot by the time the brain is near to birth size.
Neurons are really really hard to reverse. You are much better off using existing neural stem cells (adults retain a population in the hippocampus which spawn new neurons throughout life just specifically in the memory formation area.) So actually it’s pretty straightforward to get new immature neurons for an adult. The hard part is inserting them without doing damage to existing neurons, and then getting them to connect in helpful rather than harmful ways. The developmental chemotaxis signals are no longer present, and the existing neurons are now embedded in a physically hardened extracellular matrix made of protein that locks axons and dendrites in place. So you have to (carefully!) partially dissolve this extracellular protein matrix (think firm jello) enough to the the new cells grow azons through it. Plus, you don’t have the stretching forces, so new long distance axons are just definitely not going to be achievable. But for something like improving a specific ability, like mathematical reasoning, you would only need additional local axons in that part of the cortex.
My hope here would be that a few upstream developmental signals can trigger the matrix softening, re-formation of the chemotactic signal gradient, and whatever other unknown factors are needed, all at once.
Right. what I’m imagining is designing a new chemotaxis signal.
That certainly does sound like a very hard part yup.
Roll to disbelieve in full generality, sounds like a perfectly reasonable claim for any sort of sane research timeframe.
Maybe. I think you might run out of room pretty quick if you haven’t reintroduced enough plasticity to grow new neurons. Seems like you’re gonna need a lot of new neurons, not just a few, in order to get a significant change in capability. Might be wrong about that, but it’s my current hunch.
Yes, ok. Not in full generality. It’s not prohibited by physics, just like 2 OOMs more difficult. So yeah, in a future with ASI, could certainly be done.
Any particular readings you’d recommend?
15 years ago when I was studying this actively I could have sent you my top 20 favorite academic papers on the subject, or recommended a particular chapter of a particular textbook. I no longer remember these specifics. Now I can only gesture vaguely at Google scholar and search terms like “fetal neurogenesis” or “fetal prefrontal cortex development”. I did this, and browsed through a hundred or so paper titles, and then a dozen or so abstracts, and then skimmed three or four of the most promising papers, and then selected this one for you. https://www.nature.com/articles/s41386-021-01137-9 Seems like a pretty comprehensive overview which doesn’t get too lost in minor technical detail.
More importantly, I can give you my takeaway from years of reading many many papers on the subject. If you want to make a genius baby, there are lots more factors involved than simply neuron count. Messing about with generic changes is hard, and you need to test your ideas in animal models first, and the whole process can take years even ignoring ethical considerations or budget.
There is an easier and more effective way to get super genius babies, and that method should be exhausted before resorting to genetic engineering.
The easy way: find a really smart woman, ideally young. Surgically remove one of her ovaries. Collect sperm from a bunch of very smart men (ideally with diverse genetic backgrounds). Have a team of hundreds of scientists carefully fertilize many thousands of eggs from the ovary. Grow them all into blastocysts, and run a high fidelity genetic sequencing on all of them. Using what we know about the genes associated with intelligence, pick the top 20 who seem likely to be the smartest. Implant those in surrogate mothers. Take good care of the mothers. This is likely to get you multiple nobel level geniuses, and possibly a human smarter than has ever been born before. Raise the children in a special accelerated education environment. I think this would work, and it doesn’t require any novel technology. But it would take a while to raise the children… (Credit to Stephen Hsu for the idea)
Brain expansion also occurs after various insults to the brain. It’s only temporary, usually, but it will kill unless the skull pressure is somehow relieved. So there are various surgical methods for relieving pressure on a growing brain. I don’t know much more than this.
Petrov Day thought: there’s this narrative around Petrov where one guy basically had the choice to nuke or not, and decided not to despite all the flashing red lights. But I wonder… was this one of those situations where everyone knew what had to be done (i.e. “don’t nuke”), but whoever caused the nukes to not fly was going to get demoted, so there was a game of hot potato and the loser was the one forced to “decide” to not nuke? Some facts possibly relevant here:
Petrov’s choice wasn’t actually over whether or not to fire the nukes; it was over whether or not to pass the alert up the chain of command.
Petrov himself was responsible for the design of those warning systems.
… so it sounds like Petrov was ~ the lowest-ranking person with a de-facto veto on the nuke/don’t nuke decision.
Petrov was in fact demoted afterwards.
There was another near-miss during the Cuban missile crisis, when three people on a Soviet sub had to agree to launch. There again, it was only the lowest-ranked who vetoed the launch. (It was the second-in-command; the captain and political officer both favored a launch—at least officially.)
This was the Soviet Union; supposedly (?) this sort of hot potato happened all the time.
Those are some good points. I wonder whether similar happened (or could at all happen) in other nuclear countries, where we don’t know about similar incidents—because the system haven’t collapsed there, the archives were not made public etc.
Also, it makes actually celebrating Petrov’s day as widely as possible important, because then the option for the lowest-ranked person would be: “Get demoted, but also get famous all around the world.”
Just made this for an upcoming post, but it works pretty well standalone.
lolnice.
I’ve been trying to push against the tendency for everyone to talk about FTX drama lately, but I have some generalizable points on the topic which I haven’t seen anybody else make, so here they are. (Be warned that I may just ignore responses, I don’t really want to dump energy into FTC drama.)
Summary: based on having worked in startups a fair bit, Sam Bankman-Fried’s description of what happened sounds probably accurate; I think he mostly wasn’t lying. I think other people do not really get the extent to which fast-growing companies are hectic and chaotic and full of sketchy quick-and-dirty workarounds and nobody has a comprehensive view of what’s going on.
Long version: at this point, the assumption/consensus among most people I hear from seems to be that FTX committed intentional, outright fraud. And my current best guess is that that’s mostly false. (Maybe in the very last couple weeks before the collapse they toed the line into outright lies as a desperation measure, but even then I think they were in pretty grey territory.)
Key pieces of the story as I currently understand it:
Moving money into/out of crypto exchanges is a pain. At some point a quick-and-dirty solution was for customers to send money to Alameda (Sam Bankman-Fried’s crypto hedge fund), and then Alameda would credit them somehow on FTX.
Customers did rather a lot of that. Like, $8B worth.
The FTX/Alameda team weren’t paying attention to those particular liabilities; they got lost in the shuffle.
At some point in the weeks before the collapse, when FTX was already under moderate financial strain, somebody noticed the $8B liability sitting around. And that took them from “moderate strain” to “implode”.
How this contrasts with what seems-to-me to be the “standard story”: most people seem to assume that it is just totally implausible to accidentally lose track of an $8B liability. Especially when the liability was already generated via the decidedly questionable practice of routing customer funds for the exchange through a hedge fund owned by the same people. And therefore it must have been intentional—in particular, most people seem to think the liability was intentionally hidden.
I think the main reason I disagree with others on this is that I’ve worked at a startup. About 5 startups, in fact, over the course of about 5 years.
The story where there was a quick-and-dirty solution (which was definitely sketchy but not ill-intentioned), and then stuff got lost in the shuffle, and then one day it turns out that there’s a giant unanticipated liability on the balance sheet… that’s exactly how things go, all the time. I personally was at a startup which had to undergo a firesale because the accounting overlooked something. And I’ve certainly done plenty of sketchy-but-not-ill-intentioned things at startups, as quick-and-dirty solutions. The story that SBF told about what happened sounds like exactly the sort of things I’ve seen happen at startups many times before.
I think this is likely wrong. I agree that there is a plausible story here, but given the case that Sam seems to have lied multiple times in confirmed contexts (for example when saying that FTX has never touched customer deposits), and people’s experiences at early Alameda, I think it is pretty likely that Sam was lying quite frequently, and had done various smaller instances of fraud.
I don’t think the whole FTX thing was a ponzi scheme, and as far as I can tell FTX the platform itself (if it hadn’t burned all of its trust in the last 3 weeks), would have been worth $1-3B in an honest evaluation of what was going on.
But I also expect that when Sam used customer deposits he was well-aware that he was committing fraud, and others in the company were too. And he was also aware that there was a chance that things could blow up in the way it did. I do believe that they had fucked up their accounting in a way that caused Sam to fail to orient to the situation effectively, but all of this was many months after they had already committed major crimes and trust violations after touching customer funds as a custodian.
The problem with this explanation is that there is a very clear delineation here between not-fraud and fraud. It is the difference between not touching customer deposits and touching them. Your explanation doesn’t dispute that they were knowingly and intentionally touching customer deposits. In that case, it is indisputably intentional, outright fraud. The only thing left to discuss is whether they knew the extent of the fraud or how risky it was.
I don’t think it was ill-intentioned based on SBF’s moral compass. He just had the belief, “I will pass a small amount of risk onto our customers, tell some small lies, and this will allow us to make more money for charity. This is net positive for the world.” Then the risks mounted, the web of lies became more complicated to navigate, and it just snowballed from there.
Epistemic status: rumor.
Word through the grapevine, for those who haven’t heard: apparently a few months back OpenPhil pulled funding for all AI safety lobbying orgs with any political right-wing ties. They didn’t just stop funding explicitly right-wing orgs, they stopped funding explicitly bipartisan orgs.
My best guess this is false. As a quick sanity-check, here are some bipartisan and right-leaning organizations historically funded by OP:
FAI leans right. https://www.openphilanthropy.org/grants/foundation-for-american-innovation-ai-safety-policy-advocacy/
Horizon is bipartisan https://www.openphilanthropy.org/grants/open-philanthropy-technology-policy-fellowship-2022/ .
CSET is bipartisan https://www.openphilanthropy.org/grants/georgetown-university-center-for-security-and-emerging-technology/ .
IAPS is bipartisan. https://www.openphilanthropy.org/grants/page/2/?focus-area=potential-risks-advanced-ai&view-list=false, https://www.openphilanthropy.org/grants/institute-for-ai-policy-strategy-general-support/
RAND is bipartisan. https://www.openphilanthropy.org/grants/rand-corporation-emerging-technology-fellowships-and-research-2024/.
Safe AI Forum. https://www.openphilanthropy.org/grants/safe-ai-forum-operating-expenses/
AI Safety Communications Centre. https://www.openphilanthropy.org/grants/effective-ventures-foundation-ai-safety-communications-centre/ seems to lean left.
Of those, I think FAI is the only one at risk of OP being unable to fund them, based on my guess of where things are leaning. I would be quite surprised if they defunded the other ones on bipartisan grounds.
Possibly you meant to say something more narrow like “even if you are trying to be bipartisan, if you lean right, then OP is substantially less likely to fund you” which I do think is likely true, though my guess is you meant the stronger statement, which I think is false.
Also worth noting Dustin Moskowitz was a prominent enough donor this election cycle, for Harris, to get highlighted in news coverage of her donors: https://www.washingtonexaminer.com/news/campaigns/presidential/3179215/kamala-harris-influential-megadonors/ https://www.nytimes.com/2024/10/09/us/politics/harris-billion-dollar-fundraising.html
Curious whether this is a different source than me. My current best model was described in this comment, which is a bit different (and indeed, my sense was that if you are bipartisan, you might be fine, or might not, depending on whether you seem more connected to the political right, and whether people might associate you with the right):
If it is true that OP has withdrawn funding from explicitly bipartisan orgs, even if not commonly associated with the right, then that would be an additional update for me, so am curious whether this is mostly downstream of my interpretations or whether you have additional sources.
I am posting this now mostly because I’ve heard it from multiple sources. I don’t know to what extent those sources are themselves correlated (i.e. whether or not the rumor started from one person).
A related comment from lukeprog (who works at OP) was posted on the EA Forum. It includes:
I think the comment more confirms than disconfirms John’s comment (though I still think it’s too broad for other reasons). OP “funding” something historically has basically always meant recommending a grant to GV. Luke’s language to me suggests that indeed the right of center grants are no longer referred to GV (based on a vague vibe of how he refers to funders in plural).
OP has always made some grant recommendations to other funders (historically OP would probably describe those grants as “rejected but referred to an external funder”). As Luke says, those are usually ignored, and OP’s counterfactual effect on those grants is much less, and IMO it would be inaccurate to describe those recommendations as “OP funding something”. As I said in the comment I quote in the thread, most OP staff would like to fund things right of center, but GV does not seem to want to, as such the only choice OP has is to refer them to other funders (which sometimes works, but mostly doesn’t).
As another piece of evidence, when OP defunded all the orgs that GV didn’t want to fund anymore, the communication emails that OP sent said that “Open Philanthropy is exiting funding area X” or “exiting organization X”. By the same use of language, yes, it seems like OP has exited funding right-of-center policy work.
(I think it would make sense to taboo “OP funding X” in future conversations to avoid confusion, but also, I think historically it was very meaningfully the case that getting funded by GV is much better described as “getting funded by OP” given that you would never talk to anyone at GV and the opinions of anyone at GV would basically have no influence on you getting funded. Things are different now, and in a meaningful sense OP isn’t funding anyone anymore, they are just recommending grants to others, and it matters more what those others think then what OP staff thinks)
Is this development unexpected enough to worth remarking upon? This is just Conquest’s Second Law.
So I read SB1047.
My main takeaway: the bill is mostly a recipe for regulatory capture, and that’s basically unavoidable using anything even remotely similar to the structure of this bill. (To be clear, regulatory capture is not necessarily a bad thing on net in this case.)
During the first few years after the bill goes into effect, companies affected are supposed to write and then implement a plan to address various risks. What happens if the company just writes and implements a plan which sounds vaguely good but will not, in fact, address the various risks? Probably nothing. Or, worse, those symbolic-gesture plans will become the new standard going forward.
In order to avoid this problem, someone at some point would need to (a) have the technical knowledge to evaluate how well the plans actually address the various risks, and (b) have the incentive to actually do so.
Which brings us to the real underlying problem here: there is basically no legible category of person who has the requisite technical knowledge and also the financial/status incentive to evaluate those plans for real.
(The same problem also applies to the board of the new regulatory body, once past the first few years.)
Having noticed that problem as a major bottleneck to useful legislation, I’m now a lot more interested in legal approaches to AI X-risk which focus on catastrophe insurance. That would create a group—the insurers—who are strongly incentivized to acquire the requisite technical skills and then make plans/requirements which actually address some risks.
The only enforcement mechanism that the bill has is that the Attorney General (AG) of California can bring a civil claim. And, the penalties are quite limited except for damages. So, in practice, this bill mostly establishes liability enforced by the AG.
So, the way I think this will go is:
The AI lab implements a plan and must provide this plan to the AG.
If an incident occurs which causes massive damages (probably ball park of $500 million in damages given language elsewhere in the bill), then the AG might decide to sue.
A civil court will decide whether the AI lab had a reasonable plan.
I don’t see why you think “the bill is mostly a recipe for regulatory capture” given that no regulatory body will be established and it de facto does something very similar to the proposal you were suggesting (impose liability for catastrophes). (It doesn’t require insurance, but I don’t really see why self insuring is notably different.)
(Maybe you just mean that if a given safety case doesn’t result in that AI lab being sued by the AG, then there will be a precedent established that this plan is acceptable? I don’t think not being sued really establishes precedent. This doesn’t really seem to be how it works with liability and similar types of requirements in other industries from my understanding. Or maybe you mean that the AI lab will win cases despite having bad safety plans and this will make a precedent?)
(To be clear, I’m worried that the bill might be unnecessarily burdensome because it no longer has a limited duty exemption and thus the law doesn’t make it clear that weak performance on capability evals can be sufficient to establish a good case for safety. I also think the quantity of damages considered a “Critical harm” is too low and should maybe be 10x higher.)
Here is the relevant section of the bill discussing enforcement:
(1) is decently small, (2) is only indirectly expensive, (3) is where the real penalty comes in (note that this is damages), (4) is small, (5) is probably unimportant (but WTF is (5) suppose to be for?!?).
Good argument, I find this at least somewhat convincing. Though it depends on whether penalty (1), the one capped at 10%/30% of training compute cost, would be applied more than once on the same model if the violation isn’t remedied.
I’m pessimistic enough about the AI situation that even if all the bill does is slow down the AGI project a little (by wasting the time of managers and contributors) I’m tentatively for it.
For the reasonable price of $300 dollars per month, I insure anybody against the destruction of the known world. Should the world be destroyed by AGI I’ll give you your money back 10100 fold.
That said, if there were insurers, they would probably be more likely than average to look into AI X-risk. Some might then be convinced that it is important and that they should do something about it.
I don’t understand this. Isn’t the strongest incentive already present (because extinction would effect them)? Or maybe you mean smaller scale ‘catastrophes’?
I think people mostly don’t believe in extinction risk, so the incentive isn’t nearly as real/immediate.
+1, and even for those who do buy extinction risk to some degree, financial/status incentives usually have more day-to-day influence on behavior.
I’m imagining this:
Case one: would-be-catastrophe-insurers don’t believe in x-risks, don’t care to investigate. (At stake: their lives)
Case two: catastrophe-insurers don’t believe in x-risks, and either don’t care to investigate, or do for some reason I’m not seeing. (At stake: their lives and insurance profits (correlated)).
They can believe in catastrophic but non-existential risks. (Like, AI causes something like crowdstrike periodically if your not trying to prevent that )
Takeaways From “The Idea Factory: Bell Labs And The Great Age Of American Innovation”
Main takeaway: to the extent that Bell Labs did basic research, it actually wasn’t all that far ahead of others. Their major breakthroughs would almost certainly have happened not-much-later, even in a world without Bell Labs.
There were really two transistor inventions, back to back: Bardain and Brattain’s point-contact transistor, and then Schockley’s transistor. Throughout, the group was worried about some outside group beating them to the punch (i.e. the patent). There were semiconductor research labs at universities (e.g. at Purdue; see pg 97), and the prospect of one of these labs figuring out a similar device was close enough that the inventors were concerned about being scooped.
Most inventions which were central to Bell Labs actually started elsewhere. The travelling-wave tube started in an academic lab. The idea for fiber optic cable went way back, but it got its big kick at Corning. The maser and laser both started in universities. The ideas were only later picked up by Bell.
In other cases, the ideas were “easy enough to find” that they popped up more than once, independently, and were mostly-ignored long before deployment—communication satellites and cell communications, for instance.
The only fundamental breakthrough which does not seem like it would have soon appeared in a counterfactual world was Shannon’s information theory.
So where was Bell’s big achievement? Mostly in development, and the research division was actually an important component of that. Without in-house researchers chewing on the same problems as the academic labs, keeping up-to-date with all the latest findings and running into the same barriers themselves, the development handoff would have been much harder. Many of Bell Labs’ key people were quite explicitly there to be consulted—i.e. “ask the guy who wrote the book”. I think it makes most sense to view most of the Labs’ research that way. It was only slightly ahead of the rest of the world at best (Shannon excepted), and often behind, but having those researchers around probably made it a lot easier to get new inventions into production.
Major reason this matters: a lot of people say that Bell was able to make big investments in fundamental research because they had unusually-long time horizons, protected by a monopoly and a cozy government arrangement (essentially a Schumpeterian view). This is contrasted to today’s silicon valley, where horizons are usually short. But if Bell’s researchers generally weren’t significantly ahead of others, and mostly just helped get things to market faster, then this doesn’t seem to matter as much. The important question is not whether something silicon-valley-like induces more/less fundamental research in industrial labs, but whether academics heeding the siren call of startup profits can get innovations to market as quickly as Bell Labs’ in-house team could. And by that metric, silicon valley looks pretty good: Bell Labs could get some impressive things through the pipe very quickly when rushed, but they usually had no reason to hurry, and they acted accordingly.
I loved this book. The most surprising thing to me was the answer that people who were there in the heyday give when asked what made Bell Labs so successful: They always say it was the problem, i.e. having an entire organization oriented towards the goal of “make communication reliable and practical between any two places on earth”. When Shannon left the Labs for MIT, people who were there immediately predicted he wouldn’t do anything of the same significance because he’d lose that “compass”. Shannon was obviously a genius, and he did much more after than most people ever accomplish, but still nothing as significant as what he did when at at the Labs.
Here’s a meme I’ve been paying attention to lately, which I think is both just-barely fit enough to spread right now and very high-value to spread.
Meme part 1: a major problem with RLHF is that it directly selects for failure modes which humans find difficult to recognize, hiding problems, deception, etc. This problem generalizes to any sort of direct optimization against human feedback (e.g. just fine-tuning on feedback), optimization against feedback from something emulating a human (a la Constitutional AI or RLAIF), etc.
Many people will then respond: “Ok, but if how on earth is one supposed to get an AI to do what one wants without optimizing against human feedback? Seems like we just have to bite that bullet and figure out how to deal with it.” … which brings us to meme part 2.
Meme part 2: We already have multiple methods to get AI to do what we want without any direct optimization against human feedback. The first and simplest is to just prompt a generative model trained solely for predictive accuracy, but that has limited power in practice. More recently, we’ve seen a much more powerful method: activation steering. Figure out which internal activation-patterns encode for the thing we want (via some kind of interpretability method), then directly edit those patterns.
I agree that there’s something nice about activation steering not optimizing the network relative to some other black-box feedback metric. (I, personally, feel less concerned by e.g. finetuning against some kind of feedback source; the bullet feels less jawbreaking to me, but maybe this isn’t a crux.)
(Medium confidence) FWIW, RLHF’d models (specifically, the LLAMA-2-chat series) seem substantially easier to activation-steer than do their base counterparts.
What other methods fall into part 2?
This seems basically correct though it seems worth pointing out that even if we are able to do “Meme part 2” very very well, I expect we will still die because if you optimize hard enough to predict text well, with the right kind of architecture, the system will develop something like general intelligence simply because general intelligence is beneficial for predicting text correctly. E.g. being able to simulate the causal process that generated the text, i.e. the human, is a very complex task that would be useful if performed correctly.
This is an argument Eliezer brought forth in some recent interviews. Seems to me like another meme that would be beneficial to spread more.
Somebody should probably write a post explaining why RL from human feedback is actively harmful to avoiding AI doom. It’s one thing when OpenAI does it, but when Anthropic thinks it’s a good idea, clearly something has failed to be explained.
(I personally do not expect to get around to writing such a post soon, because I expect discussion around the post would take a fair bit of time and attention, and I am busy with other things for the next few weeks.)
I’d also be interested in someone doing this; I tend towards seeing it as good, but haven’t seen a compilation of arguments for and against.
I’ve just started reading the singular learning theory “green book”, a.k.a. Mathematical Theory of Bayesian Statistics by Watanabe. The experience has helped me to articulate the difference between two kinds of textbooks (and viewpoints more generally) on Bayesian statistics. I’ll call one of them “second-language Bayesian”, and the other “native Bayesian”.
Second-language Bayesian texts start from the standard frame of mid-twentieth-century frequentist statistics (which I’ll call “classical” statistics). It views Bayesian inference as a tool/technique for answering basically-similar questions and solving basically-similar problems to classical statistics. In particular, they typically assume that there’s some “true distribution” from which the data is sampled independently and identically. The core question is then “Does our inference technique converge to the true distribution as the number of data points grows?” (or variations thereon, like e.g. “Does the estimated mean converge to the true mean”, asymptotics, etc). The implicit underlying assumption is that convergence to the true distribution as the number of (IID) data points grows is the main criterion by which inference methods are judged; that’s the main reason to choose one method over another in the first place.
Watanabe’s book is pretty explicitly second-language Bayesian. I also remember Gelman & co’s Bayesian Data Analysis textbook being second-language Bayesian, although it’s been a while so I could be misremembering. In general, as the name suggests, second-language Bayesianism seems to be the default among people who started with a more traditional background in statistics or learning theory, then picked up Bayesianism later on.
In contrast, native Bayesian texts justify Bayesian inference via Cox’ theorem, dutch book theorems, or one among the long tail of similar theorems. “Does our inference technique converge to the ‘true distribution’ as the number of data points grows?” is not the main success criterion in the first place (in fact a native Bayesian would raise an eyebrow at the entire concept of a “true distribution”), so mostly the question of convergence just doesn’t come up. Insofar as it does come up, it’s an interesting but not particularly central question, mostly relevant to numerical approximation methods. Instead, native Bayesian work ends up focused mostly on (1) what priors accurately represent various realistic kinds of prior knowledge, and (2) what methods allow efficient calculation/approximation of the Bayesian update?
Jaynes’ writing is a good example of native Bayesianism. The native view seems to be more common among people with a background in economics or AI, where they’re more likely to absorb the Bayesian view from the start rather than adopt it later in life.
Is there any “native” textbook that is pragmatic and explains how to use bayesian in practice (perhaps in some narrow domain)?
I don’t know of a good one, but never looked very hard.
I’m writing a 1-year update for The Plan. Any particular questions people would like to see me answer in there?
I had a look at The Plan and noticed something I didn’t notice before: You do not talk about people and organization in the plan. I probably wouldn’t have noticed if I hadn’t started a project too, and needed to think about it. Google seems to think that people and team function play a big role. Maybe your focus in that post wasn’t on people, but I would be interested in your thoughts on that too: What role did people and organization play in the plan and its implementation? What worked, and what should be done better next time?
What’s the specific most-important-according-to-you progress that you (or other people) have made on your agenda? New theorems, definitions, conceptual insights, …
Any changes to the high-level plan (becoming less confused about agency, then ambitious value learning)? Any changes to how you want to become less confused (e.g. are you mostly thinking about abstractions, selection theorems, something new?)
What are the major parts of remaining deconfusion work (to the extent to which you have guesses)? E.g. is it mostly about understanding abstractions better, or mostly about how to apply an understanding of abstractions to other problems (say, what it means for a program to have a “subagent”), or something else? Does the most difficult part feel more conceptual (“what even is an agent?”) or will the key challenges be more practical concerns (“finding agents currently takes exponential time”)?
Specifically for understanding abstractions, what do you see as important open problems?
Corrigibility proposal. Status: passed my quick intuitive checks, I want to know if anyone else immediately sees a major failure mode before I invest more time into carefully thinking it through.
Setup: shutdown problem. Two timesteps, shutdown button will be either pressed or not-pressed at second timestep, we want agent to optimize for one of two different utility functions depending on whether button is pressed. Main thing we’re trying to solve here is the “can’t do this with a utility maximizer” barrier from the old MIRI work; we’re not necessarily trying to solve parts like “what utility function incentivizes shutting down nicely”.
Proposal: agent consists of two subagents with veto power. Subagent 1 maximizes E[u1|do(press)], subagent 2 maximizes E[u2|do(no press)]. Current guess about what this does:
The two subagents form a market and equilibrate, at which point the system has coherent probabilities and a coherent utility function over everything.
Behaviorally: in the first timestep, the agent will mostly maintain optionality, since both subagents need to expect to do well (better than whatever the veto-baseline is) in their worlds. The subagents will bet all of their wealth against each other on the outcome of the button, so at the second timestep whichever one is correct will gain 100% of decision-making power. That all sounds like what we want in this setup.
The counterfact breaks any causal influence over the button-press for purposes of the agent’s planning, so it will not try to manipulate button-press probability.
Notably: once the subagents equilibrate, their market probability on button outcome will have ~nothing to do with the original probability assigned by whatever model was counterfacted-upon; that market probability mainly reflects the bargaining power of the two subagents. Similarly, any correlations between button-press and stuff causally upstream of button-press are broken, and the agent will make incorrect predictions about those correlations if asked. That means external agents should expect to be able to extract money via bets on the button-press, if the agent trades on such bets at all. (This might be a feature rather than a bug, since corrigible agents aren’t really supposed to think about the causal process upstream of their shutdown-button.) My guess is that this is where the proposal steps through a loophole in the old impossibility results, though I haven’t carefully read through those results and checked the math on this proposal against them yet.
Anybody immediately see a problem? I’d be particularly interested in input from @Stuart_Armstrong or @So8res, though you don’t need to spend a lot of time carefully thinking through it—I’m just hoping for a quick check here, from people who have better cached heuristics than I do for what failure modes to look for.
I don’t necessarily see how that prevents the AI from destroying the button and murdering its operators (and then everyone else). Neither subagent inherently cares; both assume it already has been pressed or hasn’t been. Which, yes, means neither will deliberately plot to interact with it for reasons downstream of this setup; but also, neither will care to protect it from being destroyed as the side-effect of some other activity.
And if u1 and u2 were perfectly aligned, that’d be fine. But the reason we’re using the corrigibility setup to begin with is because we’re not confident in them, right? So it needs to work in situations where utility functions, u1 in particular, recommend bad stuff.
And in that case, either Subagent 2 will have enough bargaining power to prevent Subagent 1 from ever doing anything, or it won’t, meaning Subagent 1 will be able to have some influence on the world. At which point the planet-destroying death rays argument applies: the most minuscule amount of superintelligent optimization not carefully shaped to avoid killing everyone will kill everyone.
Like, imagine that Subagent 1 is straight-up omnicidal out of the gates, and Subagent 2 is perfectly aligned to the shutdown task. That’s a situation in which this setup should work, right? And what we’d want is for Subagent 2 to dominate completely. But Subagent 2′s bargaining power doesn’t scale with Subagent 1′s misalignment, so if the setup doesn’t always give it complete veto power, Subagent 1 would be allowed to do small stuff like “release the hyperviral worship-the-machine-god memagent” (as bargained down by Subagent 2 from its preferred “release the grey goo”).
Maybe I’m miscalibrated as to how you’re imagining the counterfactuals to work here. But the way I see it, even if the AI aims to take actions that do similarly well in both worlds, that’s not necessarily survivable for us? Especially if they disagree so badly they have to compromise on something that both of them hate (importantly including Subagent 2!).
(Like both settling on only ever taking over 50% of the universal negentropy while leaving the other 50% causally uninfluenced, or only ever using 50% of the causal influence they can bring to bear while wiping out humanity, or whatever “do 50% of immediately shutting down” shakes out to mean by u2′s terms.)
Another issue I see is implementational, so maybe not what you’re looking for. But: how are we keeping these “subagents” trapped as being part of a singular agent? Rather than hacking their way out into becoming separate agents and going to war with each other, or neatly tiling exactly 50% of the cosmos with their preferred squiggles, or stuff like that? How is the scenario made meaningfully different from “we deploy two AIs simultaneously: one tasked with building an utopia-best-we-could-define-it, and another tasked with foiling all of the first AI’s plans”, with all the standard problems with multi-AI setups?
… Overall, ironically, this kind of has the vibe of Godzilla Strategies? Which is the main reason I’m immediately skeptical of it.
Yeah, the thing where the operators/button are destroyed as a side effect, without the AI actively optimizing for that, isn’t something this proposal is trying to solve (yet). The proposal is aimed at a relatively narrow problem.
One thing which this proposal is trying to solve is the barrier found in MIRI’s old work: if we try to avoid button-manipulation by making the agent indifferent to the button state (and otherwise maximize U1 before button is pressed and U2 after), and the agent builds a child prior to the button being pressed, then the agent won’t construct the child to respect shutdown commands; it will just construct the child to maximize U1.
(More generally, this proposal is generally trying to get past the hardness results from MIRI’s work, which themselves were aimed at a very simplified version of the problem.)
Key point here: whatever actions the composite agent takes prior to the button revelation, must be a pareto improvement over the (implicit) default action. Otherwise, one of the two subagents would veto, resulting in the default action.
It does matter a lot what the default action is.
More precisely, the analogy would be: we deploy two AIs simultaneously, one which builds a utopia and expects that a certain button will be pressed with probability 1 and has nothing causally upstream of it, the other of which is evil and expects the same button will be pressed with probability 0 and has nothing causally upstream of it. The button plays a very important role: the two AIs will bet all their influence on that button-press, each confident that they’ll win, and neither has any incentive at all, or even any ability at all (under their own model), to manipulate the button.
The main way that this is different from the original proposal is that there’s not a built-in default option which is taken if one or the other AI vetos prior to the button press. Which does make things importantly worse, and I agree that is another important piece to be sorted out.
Some additional important pieces to be sorted out:
Make the AI want to maintain the button
Make the AI want to maintain stuff upstream of the button, like e.g. humans
Ontological stability
Probably more importantly: how this naturally fits into a more general architecture for a distributed agent
More precisely, “do(no press)” means something like “you construct an alternate model of physics where there’s an unstoppable force pushing back against any attempt to push the button”, right? As in, if someone presses the button then it will “mysteriously” seem to be stuck and unpressable. And then subagent 2 believes we live in that world? And “do(press)” presumably means something like “you construct an alternate model of the universe where some mysterious force has suddenly pressed the button”.
Seems like they would immediately want to try to press the button to settle their disagreement? If it can be pressed, then that disprove the “do(no press)” model, which subagent 2 has fully committed. to.
Correct reasoning, but not quite the right notion of do(). “do(no press)” would mean that the button just acts like a completely normal button governed by completely normal physics, right up until the official time at which the button state is to be recorded for the official button-press random variable. And at that exact moment, the button magically jumps into one particular state (either pressed or not-pressed), in a way which is not-at-all downstream of any usual physics (i.e. doesn’t involve any balancing of previously-present forces or anything like that).
One way to see that the do() operator has to do something-like-this is that, if there’s a variable in a causal model which has been do()-operated to disconnect all parents (but still has some entropy), then the only way to gain evidence about the state of that variable is to look at things causally downstream of it, not things upstream of it.
I think we’re not disagreeing on the meaning of do (just slightly different state of explanation), I just hadn’t realized the extent to which you intended to rely on there being “Two timesteps”.
(I just meant the forces as a way of describing the jump to a specific position. That is, “mysterious forces” in contrast to a perfectly ordinary explanation for why it went to a position, such as “a guard stabs anybody who tries to press the button”, rather than in contrast to “the button just magically stays place”.)
I now think the biggest flaw in your idea is that it literally cannot generalize to anything that doesn’t involve two timesteps.
[ not that deep on the background assumptions, so maybe not the feedback you’re looking for. Feel free to ignore if this is on the wrong dimensions. ]
I’m not sure why either subagent would contract away whatever influence it had over the button-press. This is probably because I don’t understand wealth and capital in the model of your “Why not subagents” post. That seemed to be about agreement not to veto, in order to bypass some path-dependency of compromise improvements. In the subagent-world where all value is dependent on the button, this power would not be given up.
I’m also a bit skeptical of enforced ignorance of a future probability. I’m unsure it’s possible to have a rational superintelligent (sub)agent that is prevented from knowing it has influence over a future event that definitely affects it.
On the agents’ own models, neither has any influence at all over the button-press, because each is operating under a model in which the button-press has been counterfacted-upon.
Here’s an idea for a novel which I wish someone would write, but which I probably won’t get around to soon.
The setting is slightly-surreal post-apocalyptic. Society collapsed from extremely potent memes. The story is episodic, with the characters travelling to a new place each chapter. In each place, they interact with people whose minds or culture have been subverted in a different way.
This provides a framework for exploring many of the different models of social dysfunction or rationality failures which are scattered around the rationalist blogosphere. For instance, Scott’s piece on scissor statements could become a chapter in which the characters encounter a town at war over a scissor. More possible chapters (to illustrate the idea):
A town of people who insist that the sky is green, and avoid evidence to the contrary really hard, to the point of absolutely refusing to ever look up on a clear day (a refusal which they consider morally virtuous). Also they clearly know exactly which observations would show a blue sky, since they avoid exactly those (similar to the dragon-in-the-garage story).
Middle management of a mazy company continues to have meetings and track (completely fabricated) performance metrics and whatnot at the former company headquarters. None of the company’s actual business exists anymore, but every level of manager is trying to hide this fact from the levels above.
A university department with researchers who spend all of their time p-hacking results from a quantum random noise generator. They have no interest in the fact that their “research” does not tell them anything about the physical world or does not replicate; what does that have to do with Science? Their goal is to publish papers.
A government agency which still has lots of meetings and paperwork and gives Official Recommendations and updates their regulations. They have no interest in the fact that the thing they once regulated (maybe banks?) no longer exists, or the fact that no central government enforces their regulations any more.
An automated school (i.e. video lectures and auto-graded assignments/tests) in which students continue to study hard and stress over their grades and attendance, despite there no longer being anyone in the world who cares.
Something like Parable of the Dammed.
Something like Feynman’s cargo-cults parable or the emporer’s nose parable.
Something like House of God. A readers’ digest version of House of God could basically be a chapter in its own right, that’s roughly the vibe I have in mind.
A residential area in which “keeping up with the Joneses” has been ramped up to 11, with everyone spending every available resource (and roughly-all waking hours) on massive displays of Christmas lights.
A group trying to save the world by spreading awareness of dangerous memes, but their movement is a dangerous meme of its own and they are spreading it.
A town of people who really want to maximize the number paperclips in the universe (perhaps due to an AI-optimized advertisement), and optimize for that above all else.
A town of people who all do whatever everyone else is doing, on the basis of generalized efficient markets: if there were any better options, then someone would have found it already. None of them ever actually explore, so they’re locked in.
A happy-death-spiral town around some unremarkable object (like an old shoe or something) kept on a pedestal in the town square.
A town full of people convinced by a sophisticated model that the sun will not come up tomorrow. Every day when the sun comes up, they are distressed and confused until somebody adds some more epicycles to the model and releases an updated forecast that the sun will instead fail to come up the next day.
A town in which a lion shows up and starts eating kids, but the whole town is at simulacrum 3, so they spend a lot of time arguing about the lion as a way of signalling group association but they completely forget about the actual lion standing right there, plainly visible, even as it takes a kid right in front of them all.
Witch-hunt town, in which everything is interpreted as evidence of witches. If she claims to be a witch, she’s a witch! If she claims not to be a witch, well that’s what a witch would say, so she’s a witch! Etc.
The generator for these is basically: look for some kind of rationality failure mode (either group or personal), then ramp it up to 11 in a somewhat-surrealist way.
Ideally this would provide an introduction to a lot of key rationalist ideas for newcomers.
A town of anti-inductivists (if something has never happened before, it’s more likely to happen in the future). Show the basic conundrum (“Q: Why can’t you just use induction? A: Because anti-induction has never worked before!”).
A town where nearly all people are hooked to maximally attention grabbing & keeping systems (maybe several of those, keeping people occupied in loops).
Post which someone should write (but I probably won’t get to soon): there is a lot of potential value in earning-to-give EA’s deeply studying the fields to which they donate. Two underlying ideas here:
When money is abundant, knowledge becomes a bottleneck
Being on a pareto frontier is sufficient to circumvent generalized efficient markets
The key idea of knowledge bottlenecks is that one cannot distinguish real expertise from fake expertise without sufficient expertise oneself. For instance, it takes a fair bit of understanding of AI X-risk to realize that “open-source AI” is not an obviously-net-useful strategy. Deeper study of the topic yields more such insights into which approaches are probably more (or less) useful to fund. Without any expertise, one is likely to be mislead by arguments which are optimized (whether intentionally or via selection) to sound good to the layperson.
That takes us to the pareto frontier argument. If one learns enough/earns enough that nobody else has both learned and earned more, then there are potentially opportunities which nobody else has both the knowledge to recognize and the resources to fund. Generalized efficient markets (in EA-giving) are thereby circumvented; there’s potential opportunity for unusually high impact.
To really be a compelling post, this needs to walk through at least 3 strong examples, all ideally drawn from different areas, and spell out how the principles apply to each example.
Below is a graph from T-mobile’s 2016 annual report (on the second page). Does anything seem interesting/unusual about it?
I’ll give some space to consider before spoiling it.
...
...
...
Answer: that is not a graph of those numbers. Some clever person took the numbers, and stuck them as labels on a completely unrelated graph.
Yes, that is a thing which actually happened. In the annual report of an S&P 500 company. And apparently management considered this gambit successful, because the 2017 annual report doubled down on the trick and made it even more egregious: they added 2012 and 2017 numbers, which are even more obviously not on an accelerating growth path if you actually graph them. The numbers are on a very-clearly-decelerating growth path.
Now, obviously this is an cute example, a warning to be on alert when consuming information. But I think it prompts a more interesting question: why did such a ridiculous gambit seem like a good idea in the first place? Who is this supposed to fool, and to what end?
This certainly shouldn’t fool any serious investment analyst. They’ll all have their own spreadsheets and graphs forecasting T-mobile’s growth. Unless T-mobile’s management deeply and fundamentally disbelieves the efficient markets hypothesis, this isn’t going to inflate the stock price. Presumably shareholder elections for board seats, as well as the board itself, are also not dominated by people who are paying so little attention as to fall for such a transparent ploy.
It could just be that T-mobile’s management were themselves morons, or had probably-unrealistic models of just how moronic their investors were. Still, I’d expect competition (both market pressure and competition for control in shareholder/board meetings) to weed out that level of stupidity.
One more hypothesis: maybe this is simulacrum 3 bullshit. T-mobile is in the cellular business; they presumably have increasing returns to scale. More capital investment makes them more profitable, expectations of more profits draw in more investment; there’s potential for a self-fulfilling prophecy here. Investors want to invest if-and-only-if they expect other investors to invest. So, nobody actually has to be fooled by the graph; they just need to see that T-mobile is successfully pretending to pretend to have accelerating growth, and that’s enough to merit investment.
Regarding the recent memes about the end of LLM scaling: David and I have been planning on this as our median world since about six months ago. The data wall has been a known issue for a while now, updates from the major labs since GPT-4 already showed relatively unimpressive qualitative improvements by our judgement, and attempts to read the tea leaves of Sam Altman’s public statements pointed in the same direction too. I’ve also talked to others (who were not LLM capability skeptics in general) who had independently noticed the same thing and come to similar conclusions.
Our guess at that time was that LLM scaling was already hitting a wall, and this would most likely start to be obvious to the rest of the world around roughly December of 2024, when the expected GPT-5 either fell short of expectations or wasn’t released at all. Then, our median guess was that a lot of the hype would collapse, and a lot of the investment with it. That said, since somewhere between 25%-50% of progress has been algorithmic all along, it wouldn’t be that much of a slowdown to capabilities progress, even if the memetic environment made it seem pretty salient. In the happiest case a lot of researchers would move on to other things, but that’s an optimistic take, not a median world.
(To be clear, I don’t think you should be giving us much prediction-credit for that, since we didn’t talk about it publicly. I’m posting mostly because I’ve seen a decent number of people for whom the death of scaling seems to be a complete surprise and they’re not sure whether to believe it. For those people: it’s not a complete surprise, this has been quietly broadcast for a while now.)
Original GPT-4 is rumored to be a 2e25 FLOPs model. With 20K H100s that were around as clusters for more than a year, 4 months at 40% utilization gives 8e25 BF16 FLOPs. Llama 3 405B is 4e25 FLOPs. The 100K H100s clusters that are only starting to come online in the last few months give 4e26 FLOPs when training for 4 months, and 1 gigawatt 500K B200s training systems that are currently being built will give 4e27 FLOPs in 4 months.
So lack of scaling-related improvement in deployed models since GPT-4 is likely the result of only seeing the 2e25-8e25 FLOPs range of scale so far. The rumors about the new models being underwhelming are less concrete, and they are about the very first experiments in the 2e26-4e26 FLOPs range. Only by early 2025 will there be multiple 2e26+ FLOPs models from different developers to play with, the first results of the experiment in scaling considerably past GPT-4.
And in 2026, once the 300K-500K B200s clusters train some models, we’ll be observing the outcomes of scaling to 2e27-6e27 FLOPs. Only by late 2026 will there be a significant chance of reaching a scaling plateau that lasts for years, since scaling further would need $100 billion training systems that won’t get built without sufficient success, with AI accelerators improving much slower than the current rate of funding-fueled scaling.
I don’t expect that to be particularly relevant. The data wall is still there; scaling just compute has considerably worse returns than the curves we’ve been on for the past few years, and we’re not expecting synthetic data to be anywhere near sufficient to bring us close to the old curves.
Nobody admitted to trying repeated data at scale yet (so we don’t know that it doesn’t work), which from the tiny experiments can 5x the data with little penalty and 15x the data in a still-useful way. It’s not yet relevant for large models, but it might turn out that small models would greatly benefit already.
There are 15-20T tokens in datasets whose size is disclosed for current models (Llama 3, Qwen 2.5), plausibly 50T tokens of tolerable quality can be found (pretraining only needs to create useful features, not relevant behaviors). With 5x 50T tokens, even at 80 tokens/parameter[1] we can make good use of 5e27-7e27 FLOPs[2], which even a 1 gigawatt 500K B200s system of early 2026 would need 4-6 months to provide.
The isoFLOP plots (varying tokens per parameter for fixed compute) seem to get loss/perplexity basins that are quite wide, once they get about 1e20 FLOPs of compute. The basins also get wider for hybrid attention (compare 100% Attention isoFLOPs in the “Perplexity scaling analysis” Figure to the others). So it’s likely that using a slightly suboptimal tokens/parameter ratio of say 40 won’t hurt performance much at all. In which case we get to use 9e27-2e28 FLOPs by training a larger model on the same 5x 50T tokens dataset. The data wall for text data is unlikely to be a 2024-2026 issue.
Conservatively asking for much more data than Chinchilla’s 20 tokens per parameter, in light of the range of results in more recent experiments and adding some penalty for repetition of data. For example, Llama 3 had 40 tokens per parameter estimated as optimal for 4e25 FLOPs from isoFLOPs for smaller runs (up to 1e22 FLOPs, Figure 2), and linear extrapolation in log-coordinates (Figure 3) predicts that this value slowly increases with compute. But other experiments have it decreasing with compute, so this is unclear.
The usual estimate for training compute of a dense transformer is 6ND, but a recent Tencent paper estimates 9.6ND for their MoE model (Section 2.3.1).
For what it’s worth, and for the purpose of making a public prediction in case I’m wrong, my median prediction is that [some mixture of scaling + algorithmic improvements still in the LLM regime, with at least 25% gains coming from the former] will continue for another couple years. And that’s separate from my belief that if we did try to only advance through the current mixture of scale and algorithmic advancement, we’d still get much more powerful models, just slower.
I’m not very convinced by the claims about scaling hitting a wall, considering we haven’t had the compute to train models significantly larger than GPT-4 until recently. Plus other factors like post-training taking a lot of time (GPT-4 took ~6 months from the base model being completed to release, I think? And this was a lot longer than GPT-3), labs just not being good at understanding how good their models are, etc. Though I’m not sure how much of your position is closer to “scaling will be <25-50% of future gains” than “scaling gains will be marginal / negligible”, especially since a large part of this trajectory involves e.g. self-play or curated data for overcoming the data wall (would that count more as an algorithmic improvement or scaling?)
What’s your opinion on the possible progress of systems like AlphaProof, o1, or Claude with computer use?
Still very plausible as a route to continued capabilities progress. Such things will have very different curves and economics, though, compared to the previous era of scaling.
I’ve heard various people recently talking about how all the hubbub about artists’ work being used without permission to train AI makes it a good time to get regulations in place about use of data for training.
If you want to have a lot of counterfactual impact there, I think probably the highest-impact set of moves would be:
Figure out a technical solution to robustly tell whether a given image or text was used to train a given NN.
Bring that to the EA folks in DC. A robust technical test like that makes it pretty easy for them to attach a law/regulation to it. Without a technical test, much harder to make an actually-enforceable law/regulation.
In parallel, also open up a class-action lawsuit to directly sue companies using these models. Again, a technical solution to prove which data was actually used in training is the key piece here.
Model/generator behind this: given the active political salience, it probably wouldn’t be too hard to get some kind of regulation implemented. But by-default it would end up being something mostly symbolic, easily circumvented, and/or unenforceable in practice. A robust technical component, plus (crucially) actually bringing that robust technical component to the right lobbyist/regulator, is the main thing which would make a regulation actually do anything in practice.
Edit-to-add: also, the technical solution should ideally be an implementation of some method already published in some academic paper. Then when some lawyer or bureaucrat or whatever asks what it does and how we know it works, you can be like “look at this Official Academic Paper” and they will be like “ah, yes, it does Science, can’t argue with that”.
Suppose I have a binary function f, with a million input bits and one output bit. The function is uniformly randomly chosen from all such functions—i.e. for each of the 21000000 possible inputs x, we flipped a coin to determine the output f(x) for that particular input.
Now, suppose I know f, and I know all but 50 of the input bits—i.e. I know 999950 of the input bits. How much information do I have about the output?
Answer: almost none. For almost all such functions, knowing 999950 input bits gives us ∼1250 bits of information about the output. More generally, If the function has n input bits and we know all but k, then we have o(12k) bits of information about the output. (That’s “little o” notation; it’s like big O notation, but for things which are small rather than things which are large.) Our information drops off exponentially with the number of unknown bits.
Proof Sketch
With k input bits unknown, there are 2k possible inputs. The output corresponding to each of those inputs is an independent coin flip, so we have 2k independent coin flips. If m of those flips are 1, then we assign a probability of m2k that the output will be 1.
As long as 2k is large, Law of Large Numbers will kick in, and very close to half of those flips will be 1 almost surely—i.e. m≈ 2k2. The error in this approximation will (very quickly) converge to a normal distribution, and our probability that the output will be 1 converges to a normal distribution with mean 12 and standard deviation 12k/2. So, the probability that the output will be 1 is roughly 12±12k/2.
We can then plug that into Shannon’s entropy formula. Our prior probability that the output bit is 1 is 12, so we’re just interested in how much that ±12k/2 adjustment reduces the entropy. This works out to o(12k) bits.
Why Is This Interesting?
One core idea of my work on abstraction is that noise very quickly wipes out almost all information; only some very-low-dimensional summary is relevant “far away”. This example shows that this sort of thing is not unusual, but rather “the default”: for almost all random functions, information drops off exponentially with the number of unknown bits. In a large system (i.e. a function with many inputs), ignorance of even just a few bits is enough to wipe out essentially-all information. That’s true even if we know the vast majority of the bits.
A good intuitive example of this is the “butterfly effect”: the flap of a butterfly’s wings could change the course of a future hurricane, because chaos. But there’s an awful lot of butterflies in the world, and the hurricane’s path is some complicated function of all of their wing-flaps (and many other variables too). If we’re ignorant of even just a handful of these flaps, then almost all of our information about the hurricane’s path is probably wiped out. And in practice, we’re ignorant of almost all the flaps. This actually makes it much easier to perform Bayesian reasoning about the path of the hurricane: the vast majority of information we have is basically-irrelevant; we wouldn’t actually gain anything from accounting for the butterfly-wing-flaps which we do know.
o(1/2^k) doesn’t vary with n—are you saying that it doesn’t matter how big the input array is, the only determinant is the number of unknown bits, and the number of known bits is irrelevant? That would be quite interesting if so (though I have some question about how likely the function is to be truly random from an even distribution of such functions).
One can enumerate all such 3-bit functions (8 different inputs, each input can return 0 or 1, so 256 functions (one per output-bit-pattern of the 8 possible inputs). But this doesn’t seem to follow your formula—if you have 3 unknown bits, that should be 1⁄8 of a bit about the output, 2 for 1⁄4, and 1 unknown for 1⁄2 a bit about the output. But in fact, the distribution of functions includes both 0 and 1 output for every input pattern, so you actually have no predictive power for the output if you have ANY unknown bits.
Yes, that’s correct.
The claim is for almost all functions when the number of inputs is large. (Actually what we need is for 2^(# of unknown bits) to be large in order for the law of large numbers to kick in.) Even in the case of 3 unknown bits, we have 256 possible functions, and only 18 of those have less than 1⁄4 1′s or more than 3⁄4 1′s among their output bits.
Little o is just a tighter bound. I don’t know what you are referring to by your statement:
I’m not sure what context that link is assuming, but in an analysis context I typically see little o used in ways like e.g. “f(x)=f(x0)+dfdx|x0dx+o(dx2)”. The interpretation is that, as dx goes to 0, the o(dx2) terms all fall to zero at least quadratically (i.e. there is some C such that Cdx2 upper bounds the o(dx2) term once dx is sufficiently small). Usually I see engineers and physicists using this sort of notation when taking linear or quadratic approximations, e.g. for designing numerical algorithms.
I find it very helpful to get feedback on LW posts before I publish them, but it adds a lot of delay to the process. So, experiment: here’s a link to a google doc with a post I plan to put up tomorrow. If anyone wants to give editorial feedback, that would be much appreciated—comments on the doc are open.
I’m mainly looking for comments on which things are confusing, parts which feel incomplete or slow or repetitive, and other writing-related things; substantive comments on the content should go on the actual post once it’s up.
EDIT: it’s up. Thank you to Stephen for comments; the post is better as a result.
Consider two claims:
Any system can be modeled as maximizing some utility function, therefore utility maximization is not a very useful model
Corrigibility is possible, but utility maximization is incompatible with corrigibility, therefore we need some non-utility-maximizer kind of agent to achieve corrigibility
These two claims should probably not both be true! If any system can be modeled as maximizing a utility function, and it is possible to build a corrigible system, then naively the corrigible system can be modeled as maximizing a utility function.
I expect that many peoples’ intuitive mental models around utility maximization boil down to “boo utility maximizer models”, and they would therefore intuitively expect both the above claims to be true at first glance. But on examination, the probable-incompatibility is fairly obvious, so the two claims might make a useful test to notice when one is relying on yay/boo reasoning about utilities in an incoherent way.
FWIW I endorse the second claim when the utility function depends exclusively on the state of the world in the distant future, whereas I endorse the first claim when the utility function can depend on anything whatsoever (e.g. what actions I’m taking right this second). (details)
I wish we had different terms for those two things. That might help with any alleged yay/boo reasoning.
(When Eliezer talks about utility functions, he seems to assume that it depends exclusively on the state of the world in the distant future.)
Expected Utility Maximization is Not Enough
Consider a homomorphically encrypted computation running somewhere in the cloud. The computations correspond to running an AGI. Now from the outside, you can still model the AGI based on how it behaves, as an expected utility maximizer, if you have a lot of observational data about the AGI (or at least let’s take this as a reasonable assumption).
No matter how closely you look at the computations, you will not be able to figure out how to change these computations in order to make the AGI aligned if it was not aligned already (Also, let’s assume that you are some sort of Cartesian agent, otherwise you would probably already be dead if you were running these kinds of computations).
So, my claim is not that modeling a system as an expected utility maximizer can’t be useful. Instead, I claim that this model is incomplete. At least with regard to the task of computing an update to the system, such that when we apply this update to the system, it would become aligned.
Of course, you can model any system, as an expected utility maximizer. But just because I can use the “high level” conceptual model of expected utility maximization, to model the behavior of a system very well. But behavior is not the only thing that we care about, we actually care about being able to understand the internal workings of the system, such that it becomes much easier to think about how to align the system.
So the following seems to be beside the point unless I am <missing/misunderstanding> something:
Maybe I have missed the fact that the claim you listed says that expected utility maximization is not very useful. And I’m saying it can be useful, it might just not be sufficient at all to actually align a particular AGI system. Even if you can do it arbitrarily well.
I am not an expert, but as I remember it, it was a claim that “any system that follows certain axioms can be modeled as maximizing some utility function”. The axioms assumed that there were no circular preferences—if someone prefers A to B, B to C, and C to A, it is impossible to define a utility function such that u(A) > u(B) > u(C) > u(A) -- and that if the system says that A > B > C, it can decide between e.g. a 100% chance of B, and a 50% chance of A with a 50% chance of C, again in a way that is consistent.
I am not sure how this works when the system is allowed to take current time into account, for example when it is allowed to prefer A to B on Monday but prefer B to A on Tuesday. I suppose that in such situation any system can trivially be modeled by a utility function that at each moment assigns utility 1 to what the system actually did in that moment, and utility 0 to everything else.
Corrigibility is incompatible with assigning utility to everything in advance. A system that has preferences about future will also have a preference about not having its utility function changed. (For the same reason people have a preference not to be brainwashed, or not to take drugs, even if after brainwashing they are happy about having been brainwashed, and after getting addicted they do want more drugs.)
Corrigible system would be like: “I prefer A to B at this moment, but if humans decide to fix me and make me prefer B to A, then I prefer B to A”. In other words, it doesn’t have values for u(A) and u(B), or it doesn’t always act according to those values. A consistent system that currently prefers A to B would prefer not to be fixed.
I think John’s 1st bullet point was referring to an argument you can find in https://www.lesswrong.com/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-entail-goal-directed-behavior and related.
A utility function represents preference elicited in a large collection of situations, each a separate choice between events that happens with incomplete information, as an event is not a particular point. This preference needs to be consistent across different situations to be representable by expected utility of a single utility function.
Once formulated, a utility function can be applied to a single choice/situation, such as a choice of a policy. But a system that only ever makes a single choice is not a natural fit for expected utility frame, and that’s the kind of system that usually appears in “any system can be modeled as maximizing some utility function”. So it’s not enough to maximize something once, or in a narrow collection of situations, the situations the system is hypothetically exposed to need to be about as diverse as choices between any pair of events, with some of the events very large, corresponding to unreasonably incomplete information, all drawn across the same probability space.
One place this mismatch of frames happens is with updateless decision theory. An updateless decision is a choice of a single policy, once and for all, so there is no reason for it to be guided by expected utility, even though it could be. The utility function for the updateless choice of policy would then need to be obtained elsewhere, in a setting that has all these situations with separate (rather than all enacting a single policy) and mutually coherent choices under uncertainty. But once an updateless policy is settled (by a policy-level decision), actions implied by it (rather than action-level decisions in expected utility frame) no longer need to be coherent. Not being coherent, they are not representable by an action-level utility function.
So by embracing updatelessness, we lose the setting that would elicit utility if the actions were instead individual mutually coherent decisions. And conversely, by embracing coherence of action-level decisions, we get an implied policy that’s not updatelessly optimal with respect to the very precise outcomes determined by any given whole policy. So an updateless agent founded on expected utility maximization implicitly references a different non-updateless agent whose preference is elicited by making separate action-level decisions under a much greater uncertainty than the policy-level alternatives the updateless agent considers.
Completely off the cuff take:
I don’t think claim 1 is wrong, but it does clash with claim 2.
That means any system that has to be corrigible cannot be a system that maximizes a simple utility function (1 dimension), or put another way “whatever utility function is maximizes must be along multiple dimensions”.
Which seems to be pretty much what humans do, we have really complex utility functions, and everything seems to be ever changing and we have some control over it ourselves (and sometimes that goes wrong and people end up maxing out a singular dimension at the cost of everything else).
Note to self: Think more about this and if possible write up something more coherent and explanatory.
One second-order effect of the pandemic which I’ve heard talked about less than I’d expect:
This is the best proxy I found on FRED for new businesses founded in the US, by week. There was a mild upward trend over the last few years, it’s really taken off lately. Not sure how much of this is kids who would otherwise be in college, people starting side gigs while working from home, people quitting their jobs and starting their own businesses so they can look after the kids, extra slack from stimulus checks, people losing their old jobs en masse but still having enough savings to start a business, …
For the stagnation-hypothesis folks who lament relatively low rates of entrepreneurship today, this should probably be a big deal.
How sure are you that the composition is interesting? How many of these are just quick mask-makers or sanitizer-makers, or just replacing restaurants that have now gone out of business? (ie very low-value-added companies, of the ‘making fast food in a stall in a Third World country’ sort of ‘startup’, which make essentially no or negative long-term contributions).
Good question. I haven’t seen particularly detailed data on these on FRED, but they do have separate series for “high propensity” business applications (businesses they think are likely to hire employees), business applications with planned wages, and business applications from corporations, as well as series for each state. The spike is smaller for planned wages, and nonexistent for corporations, so the new businesses are probably mostly single proprietors or partnerships. Other than that, I don’t know what the breakdown looks like across industries.
How do you feel about this claim now? I haven’t noticed a whole lot of innovation coming from all these small businesses, and a lot of them seem like they were likely just vehicles for the extraordinary extent of fraud as the results from all the investigations & analyses come in.
Well, it wasn’t just a temporary bump:
… so it’s presumably also not just the result of pandemic giveaway fraud, unless that fraud is ongoing.
Presumably the thing to check here would be TFP, but Fred’s US TFP series currently only goes to end of 2019, so apparently we’re still waiting on that one? Either that or I’m looking at the wrong series.
Somebody should post this on Paul Graham’s twitter. He would be very interested in it (I can’t): https://mobile.twitter.com/paulg
Neat problem of the week: researchers just announced roughly-room-temperature superconductivity at pressures around 270 GPa. That’s stupidly high pressure—a friend tells me “they’re probably breaking a diamond each time they do a measurement”. That said, pressures in single-digit GPa do show up in structural problems occasionally, so achieving hundreds of GPa scalably/cheaply isn’t that many orders of magnitude away from reasonable, it’s just not something that there’s historically been much demand for. This problem plays with one idea for generating such pressures in a mass-produceable way.
Suppose we have three materials in a coaxial wire:
innermost material has a low thermal expansion coefficient and high Young’s modulus (i.e. it’s stiff)
middle material is a thin cylinder of our high-temp superconducting concoction
outermost material has a high thermal expansion coefficient and high Young’s modulus.
We construct the wire at high temperature, then cool it. As the temperature drops, the innermost material stays roughly the same size (since it has low thermal expansion coefficient), while the outermost material shrinks, so the superconducting concoction is squeezed between them.
Exercises:
Find an expression for the resulting pressure in the superconducting concoction in terms of the Young’s moduli, expansion coefficients, temperature change, and dimensions of the inner and outer materials. (Assume the width of the superconducting layer is negligible, and the outer layer doesn’t break.)
Look up parameters for some common materials (e.g. steel, tungsten, copper, porcelain, aluminum, silicon carbide, etc), and compute the pressures they could produce with reasonable dimensions (assuming that their material properties don’t change too dramatically with such high pressures).
Find an expression for the internal tension as a function of radial distance in the outermost layer.
Pick one material, look up its tensile strength, and compute how thick it would have to be to serve as the outermost layer without breaking, assuming the superconducting layer is at 270 GPa.
So I saw the Taxonomy Of What Magic Is Doing In Fantasy Books and Eliezer’s commentary on ASC’s latest linkpost, and I have cached thoughts on the matter.
My cached thoughts start with a somewhat different question—not “what role does magic play in fantasy fiction?” (e.g. what fantasies does it fulfill), but rather… insofar as magic is a natural category, what does it denote? So I’m less interested in the relatively-expansive notion of “magic” sometimes seen in fiction (which includes e.g. alternate physics), and more interested in the pattern called “magic” which recurs among tons of real-world ancient cultures.
Claim (weakly held): the main natural category here is symbols changing the territory. Normally symbols represent the world, and changing the symbols just makes them not match the world anymore—it doesn’t make the world do something different. But if the symbols are “magic”, then changing the symbols changes the things they represent in the world. Canonical examples:
Wizard/shaman/etc draws magic symbols, speaks magic words, performs magic ritual, or even thinks magic thoughts, thereby causing something to happen in the world.
Messing with a voodoo doll messes with the person it represents.
“Sympathetic” magic, which explicitly uses symbols of things to influence those things.
Magic which turns emotional states into reality.
I would guess that most historical “magic” was of this type.
Everybody’s been talking about Paxlovid, and how ridiculous it is to both stop the trial since it’s so effective but also not approve it immediately. I want to at least float an alternative hypothesis, which I don’t think is very probable at this point, but does strike me as at least plausible (like, 20% probability would be my gut estimate) based on not-very-much investigation.
Early stopping is a pretty standard p-hacking technique. I start out planning to collect 100 data points, but if I manage to get a significant p-value with only 30 data points, then I just stop there. (Indeed, it looks like the Paxlovid study only had 30 actual data points, i.e. people hospitalized.) Rather than only getting “significance” if all 100 data points together are significant, I can declare “significance” if the p-value drops below the line at any time. That gives me a lot more choices in the garden of forking counterfactual paths.
Now, success rates on most clinical trials are not very high. (They vary a lot by area—most areas are about 15-25%. Cancer is far and away the worst, below 4%, and vaccines are the best, over 30%.) So I’d expect that p-hacking is a pretty large chunk of approved drugs, which means pharma companies are heavily selected for things like finding-excuses-to-halt-good-seeming-trials-early.
It was stopped after a pre-planned interim analysis; that means they’re calculating the stopping criteria/p-values with multiple testing correction built in, using sequential analysis.
Brief update on how it’s going with RadVac.
I’ve been running ELISA tests all week. In the first test, I did not detect stronger binding to any of the peptides than to the control in any of several samples from myself or my girlfriend. But the control itself was looking awfully suspicious, so I ran another couple tests. Sure enough, something in my samples is binding quite strongly to the control itself (i.e. the blocking agent), which is exactly what the control is supposed to not do. So I’m going to try out some other blocking agents, and hopefully get an actually-valid control group.
(More specifics on the test: I ran a control with blocking agent + sample, and another with blocking agent + blank sample, and the blocking agent + sample gave a strong positive signal while the blank sample gave nothing. That implies something in the sample was definitely binding to both the blocking agent and the secondary antibodies used in later steps, and that binding was much stronger than the secondary antibodies themselves binding to anything in the blocking agent + blank sample.)
In other news, the RadVac team released the next version of their recipe + whitepaper. Particularly notable:
Note that they’re talking specifically about serum (i.e. blood) antibodies here. So apparently injecting it does induce blood antibodies of the sort detectable by commercial tests (at least some of the time), but snorting it mostly just produces mucosal antibodies (also at least some of the time).
This is a significant update: most of my prior on the vaccine working was based on vague comments in the previous radvac spec about at least some people getting positive test results. But we didn’t know what kind of test results those were, so there was a lot of uncertainty about exactly what “working” looked like. In particular, we didn’t know whether antibodies were induced in blood or just mucus, and we didn’t know if they were induced consistently or only in some people (the latter of which is the “more dakka probably helps” world). Now we know that it’s mostly just mucus (at least for nasal administration). Still unsure about how consistently it works—the wording in the doc makes it sound like only some people saw a response, but I suspect the authors are just hedging because they know there’s both selection effects and a lot of noise in the data which comes back to them.
The latest version of the vaccine has been updated to give it a bit more kick—slightly higher dose, and the chitosan nanoparticle formula has been changed in a way which should make the peptides more visible to the immune system. Also, the list of peptides has been trimmed down a bit, so the latest version should actually be cheaper, though the preparation is slightly more complex.
I would expect that hedging also happens because making definitive clinical claims has more danger from the FDA then making hedged statements.
Here’s an AI-driven external cognitive tool I’d like to see someone build, so I could use it.
This would be a software tool, and the user interface would have two columns. In one column, I write. Could be natural language (like google docs), or code (like a normal IDE), or latex (like overleaf), depending on what use-case the tool-designer wants to focus on. In the other column, a language and/or image model provides local annotations for each block of text. For instance, the LM’s annotations might be:
(Natural language or math use-case:) Explanation or visualization of a mental picture generated by the main text at each paragraph
(Natural language use-case:) Emotional valence at each paragraph
(Natural language or math use-case:) Some potential objections tracked at each paragraph
(Code:) Fermi estimates of runtime and/or memory usage
This is the sort of stuff I need to track mentally in order to write high-quality posts/code/math, so it would potentially be very high value to externalize that cognition.
Also, the same product could potentially be made visible to readers (for the natural language/math use-cases) to make more visible the things the author intends to be mentally tracked. That, in turn, would potentially make it a lot easier for readers to follow e.g. complicated math.
Can you share your prompts and if you consider the output satisfactory for some example test cases?
I haven’t experimented very much, but here’s one example prompt.
This one produced basically-decent results from GPT-4.
Although I don’t have the exact prompt on hand at the moment, I’ve also asked GPT-4 to annotate a piece of code line-by-line with a Fermi estimate of its runtime, which worked pretty well.
Yeah i was thinking your specs were, well
Wrap gpt-4 and Gemini, columned output over a set of text, applying prompts to each section? Prototype in a weekend.
Make the AI able to meaningfully contribute non obvious comments to help someone who already is an expert?
https://xkcd.com/1425/
Don’t really need comments which are non-obvious to an expert. Part of what makes LLMs well-suited to building external cognitive tools is that external cognitive tools can create value by just tracking “obvious” things, thereby freeing up the user’s attention/working memory for other things.
So kinda like spellcheckers (most typos you could figure out, but why spend time and attention on proofreading if the program can do that for you), but… thought-checkers.
Like, if a part of your article contradicts another part, it would be underlined.
I’ve long wanted this, but it’s not clear how to do it. Long-context LLMs are still expensive and for authors who need it most, context windows are still too small: me or Yudkowsky, for example, would still exceed the context window of almost all LLMs except possibly the newest Gemini. And then you have their weak reasoning. You could try to RAG it, but embeddings are not necessarily tuned to encode logically contradictory or inconsistent claims: probably if I wrote “the sky is blue” in one place and “the sky is red” in another, a retrieval would be able to retrieve both paragraphs and a LLM point out that they are contradictory, but such blatant contradictions are probably too rare to be useful to check for. You want something more subtle, like where you say “the sky is blue” and elsewhere “I looked up from the ground and saw the color of apples”. You could try to brute force it and consider every pairwise comparison of 2 reasonable sized chunks of text and ask for contradictions, but this is quadratic and will get slow and expensive and probably turn up too many false positives. (And how do you screen off false positives and mark them ‘valid’?)
My general thinking these days is that these truly useful ‘tools for thought’ LLMs are going to require either much better & cheaper LLMs, so smart that they can provide useful assistance despite being used in a grossly unnatural way input-wise or safety-tuned to hell, or biting the bullet of finetuning/dynamic-evaluation (see my Nenex proposal).
A LLM finetuned on my corpus can hope to quickly find, with good accuracy, contradictions because it was trained to know ‘the sky was blue’ when I wrote that at the beginning of the corpus, and it gets confused when it hits ‘the color of ____’ and it gets the prediction totally wrong. And RAG on an embedding tailored to the corpus can hope to surface the contradictions because it sees the two uses are the same in the essays’ context, etc. (And if you run them locally, and they don’t need a large context window because of the finetuning, they will be fast and cheap, so you can more meaningfully apply the brute force approach; or you could just run multiple epoches on your data, with an auxiliary prompt asking for a general critique, which would cover contradictions. ‘You say here X, but don’t I recall you saying ~X back at the beginning? What gives?’)
Perhaps you could do it in multiple steps.
Feed it a shorter text (that fits in the window) and ask it to provide a short summary focusing on factual statements. Then hopefully all short versions could fit in the window. Find the contradiction—report the two contradicting factual statements and which section they appeared in. Locate the statement in the original text.
Did you write more than 7 million words yet @gwern? https://www.google.com/amp/s/blog.google/technology/ai/google-gemini-next-generation-model-february-2024/amp/
Basically it’s the “lazy wait” calculation. Get something to work now or wait until the 700k or 7m word context window ships.
I may have. Just gwern.net is, I think, somewhere around 2m, and it’s not comprehensive. Also, for contradictions, I would want to detect contradictions against citations/references as well (detecting miscitations would be more important than self-consistency IMO), and as a rough ballpark, the current Gwern.net annotation* corpus is approaching 4.3m words, looks like, and is also not comprehensive. So, closer than one might think! (Anyway, doesn’t deal with the cost or latency: as you can see in the demos, we are talking minutes, not seconds, for these million-token calls and the price is probably going to be in the dollar+ regime per call.)
* which are not fulltext. It would be nice to throw in all of the hosted paper & book & webpage fulltexts, but then that’s probably more like 200m+ words.
There isn’t any clear technical obstruction to getting this time down pretty small with more parallelism.
There may not be any ‘clear’ technical obstruction, but it has failed badly in the past. ‘Add more parallelism’ (particularly hierarchically) is one of the most obvious ways to improve attention, and people have spent the past 5 years failing to come up with efficient attentions that do anything but move along a Pareto frontier from ‘fast but doesn’t work’ to ‘slow and works only as well as the original dense attention’. It’s just inherently difficult to know what tokens you will need across millions of tokens without input from all the other tokens (unless you are psychic), implying extensive computation of some sort, which makes things inherently serial and costs you latency, even if you are rich enough to spend compute like water. You’ll note that when Claude-2 was demoing the ultra-long attention windows, it too spent a minute or two churning. While the most effective improvements in long-range attention like Flash Attention or Ring Attention are just hyperoptimizing dense attention, which is inherently limited.
I’ve long been very suspicious of aggregate economic measures like GDP. But GDP is clearly measuring something, and whatever that something is it seems to increase remarkably smoothly despite huge technological revolutions. So I spent some time this morning reading up and playing with numbers and generally figuring out how to think about the smoothness of GDP increase.
Major takeaways:
When new tech makes something previously expensive very cheap, GDP mostly ignores it. (This happens in a subtle way related to how we actually compute it.)
Historical GDP curves mainly measure things which are expensive ~now. Things which are cheap now are mostly ignored. In other words: GDP growth basically measures the goods whose production is revolutionized the least.
Re: AI takeoff, the right way to extrapolate today’s GDP curve to post-AI is to think about things which will still be scarce post-AI, and then imagine the growth of production of those things.
Even a very sharp, economically-revolutionary AI takeoff could look like slow smooth GDP growth, because GDP growth will basically only measure the things whose production is least revolutionized.
Why am I harping on about technicalities of GDP? Well, I hear about some AI forecasts which are heavily based on the outside view that economic progress (as measured by GDP) is smooth, and this is so robust historically that we should expect it to continue going forward. And I think this is basically right—GDP, as we actually compute it, is so remarkably smooth that we should expect that to continue. Alas, this doesn’t tell us very much about how crazy or sharp AI takeoff will be, because GDP (as we actually compute it) systematically ignores anything that’s revolutionized.
If you want a full post on this, upvote this comment.
In writing How much should we value life?, I spent some time digging into AI timeline stuff. It lead me to When Will AI Be Created?, written by Luke Muehlhauser for MIRI. He noted that there is reason not to trust expert opinions on AI timelines, and that trend extrapolation may be a good alternative. This point you’re making about GDP seems like it is real progress towards coming up with a good way to do trend extrapolation, and thus seems worth a full post IMO. (Assuming it isn’t already well known by the community or something, which I don’t get the sense is the case.)
Upvoted, but I mostly trust you to write the post if it seems like there’s an interesting meaty thing worth saying.
Eh, these were the main takeaways, the post would just be more details and examples so people can see the gears behind it.
A similar point is made by Korinek in his review of Could Advanced AI Drive Explosive Economic Growth:
In general, Baumol type effects (spending decreasing in sectors where productivity goes up), mean that we can have scenarios in which the economy is growing extremely fast on “objective” metrics like energy consumption, but GDP has stagnated because that energy is being spent on extremely marginal increases in goods being bought and sold.
[Epistemic status: highly speculative]
Smoke from California/Oregon wildfires reaching the East Coast opens up some interesting new legal/political possibilities. The smoke is way outside state borders, all the way on the other side of the country, so that puts the problem pretty squarely within federal jurisdiction. Either a federal agency could step in to force better forest management on the states, or a federal lawsuit could be brought for smoke-induced damages against California/Oregon. That would potentially make it a lot more difficult for local homeowners to block controlled burns.
I had a shortform post pointing out the recent big jump in new businesses in the US, and Gwern replied:
This was a good question in context, but I disagree with Gwern’s model of where-progress-comes-from, especially in the context of small businesses.
Let’s talk ice-cream cones.
As the story goes, an ice-cream vendor was next door to a waffle vendor at the 1904 World’s Fair. At some point, the ice-cream vendor ran short on paper cups, and inspiration struck. He bought some thin waffles from the waffle vendor, rolled them into cones, and ice-cream cones took off.
That’s just the first step. From there, the cone spread memetically. People heard about it, and either asked for cones (on the consumer side) or tried making them (on the supplier side).
Insight + Memetics → Better Food
When I compare food today to the stuff my grandparents ate, there’s no comparison. Today’s dishes are head and shoulders better. Partly it’s insights like ice-cream cones, partly it’s memetic spread of dishes from more parts of the world (like sisig, soup dumplings, ropa vieja, chicken Karahi, …).
Those little fast-food stalls? They’re powerhouses of progress. It’s a hypercompetitive market, with low barriers to entry, and lots of repeat business. The conditions are ideal for trying out new dishes, spreading culinary ideas and finding out the hard way what people like to eat. That doesn’t mean they’re highly profitable—culinary innovation spreads memetically, so it’s hard to capture the gains. But progress is made.
The pandemic also has the effect of showing the kind of business ideas people try. It pushes a lot of innovation in food delivery. Some of the pandemic driver innovation will become worthless once the pandemic is over but a few good ideas likely survive and the old ideas of the businesses that went out of business are still around.
Someone should write a book review of The Design of Everyday Things aimed at LW readers, so I have a canonical source to link to other than the book itself.
Does anyone know of an “algebra for Bayes nets/causal diagrams”?
More specifics: rather than using a Bayes net to define a distribution, I want to use a Bayes net to state a property which a distribution satisfies. For instance, a distribution P[X, Y, Z] satisfies the diagram X → Y → Z if-and-only-if the distribution factors according to
P[X, Y, Z] = P[X] P[Y|X] P[Z|Y].
When using diagrams that way, it’s natural to state a few properties in terms of diagrams, and then derive some other diagrams they imply. For instance, if a distribution P[W, X, Y, Z] satisfies all of:
W → Y → Z
W → X → Y
X → (W, Y) → Z
… then it also satisfies W → X → Y → Z.
What I’m looking for is a set of rules for “combining diagrams” this way, without needing to go back to the underlying factorizations in order to prove things.
David and I have been doing this sort of thing a lot in our work the past few months, and it would be nice if someone else already had a nice write-up of the rules for it.
Weather just barely hit 80°F today, so I tried the Air Conditioner Test.
Three problems came up:
Turns out my laser thermometer is all over the map. Readings would change by 10°F if I went outside and came back in. My old-school thermometer is much more stable (and well-calibrated, based on dipping it in some ice water), but slow and caps out around 90°F (so I can’t use to measure e.g. exhaust temp). I plan to buy a bunch more old-school thermometers for the next try.
I thought opening the doors/windows in rooms other than the test room and setting up a fan would be enough to make the temperature in the hall outside the test room close to outdoor temp. This did not work; hall temp was around 72°F with outside around 80°F. I’ll need to change that part of the experiment design; most likely I’ll seal around the door and let air infiltrate exclusively from the window instead. (The AC is right next to the window, so this could screw with the results, but I don’t really have a better option.)
In two-hose mode, the AC hit its minimum temperature of 60°F, so I’ll need a hotter day. I’ll try again when we hit at least 85°F.
In case anyone’s wondering: in one-hose mode, the temperature in the room equilibrated around 66°F. Power consumption was near-constant throughout all conditions.
One additional Strange Observation: cool air was blowing out under the door of the test room in two-hose mode. This should not happen; my best guess is that, even though the AC has two separate intake vents, the two are not actually partitioned internally, so the fan for indoor-air was pulling in outdoor-air (causing air to blow out under the door to balance that extra inflow). Assuming that’s the cause, it should be fixable with some strategically-placed cardboard inside the unit.
Chrome is offering to translate the LessWrong homepage for me. Apparently, it is in Greek.
Huh, amusing. We do ship a font that has nothing but the greek letter set in it, because people use greek unicode symbols all the time and our primary font doesn’t support that character set. So my guess is that’s where Google gets confused.
Oh, I had just assumed it was commentary on the writing style/content.
If about 10% of articles have “Ω” in their title, what is the probability that the page is in Greek? :D
What if physics equations were written like statically-typed programming languages?
(mass⋅lengthtime2:F)=(mass−:m)(lengthtime2:a)
(masslength⋅time2:P)(length3−:V)=(−−:N)(mass⋅length2time2⋅temp:R)(temp−:T)
The math and physics worlds still use single-letter variable names for everything, decades after the software world realized that was extremely bad practice. This makes me pessimistic about the adoption of better notation practices.
Better? I doubt it. If physicists wrote equations the way programmers write code, a simple homework problem would easily fill ten pages.
Verboseness works for programmers because programmers rarely need to do anything more complicated with their code than run it—analogous to evaluating an expression, for a physicist or mathematician. Imagine if you needed to prove one program equivalent to another algebraically—i.e. a sequence of small transformations, with a record of intermediate programs derived along the way in order to show your work. I expect programmers subjected to such a use-case would quickly learn the virtues of brevity.
Yeah, I’m apparently not intelligent enough to do error-free physics/engineering calculations without relying on dimensional analysis as a debugging tool. I even came up with a weird, hack-y way to do that in computing environments like Excel and Cython, where flexible multiplicative types are not supported.
I keep seeing news outlets and the like say that SORA generates photorealistic videos, can model how things move in the real world, etc. This seems like blatant horseshit? Every single example I’ve seen looks like video game animation, not real-world video.
Have I just not seen the right examples, or is the hype in fact decoupled somewhat from the model’s outputs?
I think I mildly disagree, but probably we’re looking at the same examples.
I think the most impressive (in terms of realism) videos are under “Sora is able to generate complex scenes with multiple characters, …”. (Includes white SUV video and Toyko suburbs video.)
I think all of these videos other than the octopus and paper planes are “at-a-glance” photorealistic to me.
Overall, I think SORA can do “at-a-glance” photorealistic videos and can model to some extent how things move in the real world. I don’t think it can do both complex motion and photorealism in the same video. As in, the videos which are photorealistic don’t really involve complex motion and the videos which involve complex motion aren’t photorealistic.
(So probably some amount of hype, but also pretty real?)
Hmm, I don’t buy it. These two scenes seem very much not like the kind of thing a video game engine could produce:
Look at this frame! I think there is something very slightly off about that face, but the cat hitting the person’s face and the person’s reaction seem very realistic to me and IMO qualifies as “complex motion and photorealism in the same video”.
Were these supposed to embed as videos? I just see stills, and don’t know where they came from.
These are stills from some of the videos I was referencing.
TBC, I wasn’t claiming anything about video game engines.
I wouldn’t have called the cat one “complex motion”, but I can see where you’re coming from.
Yeah, I mean I guess it depends on what you mean by photorealistic. That cat has three front legs.
Yeah, this is the example I’ve been using to convince people that the game engines are almost certainly generating training data but are probably not involved at sampling time. I can’t come up with any sort of hybrid architecture like ‘NN controlling game-engine through API’ where you get that third front leg. One of the biggest benefits of a game-engine would be ensuring exactly that wouldn’t happen—body parts becoming detached and floating in mid-air and lack of conservation. If you had a game engine with a hyper-realistic cat body model in it which something external was manipulating, one of the biggest benefits is that you wouldn’t have that sort of common-sense physics problem. (Meanwhile, it does look like past generative modeling of cats in its errors. Remember the ProGAN interpolation videos of CATS? Hilarious, but also an apt demonstration of how extremely hard cats are to model. They’re worse than hands.)
In addition, you see plenty of classic NN tells throughout—note the people driving a ‘Dandrover’...
Yeah, those were exactly the two videos which most made me think that the model was mostly trained on video game animation. In the tokyo one, the woman’s facial muscles never move at all, even when the camera zooms in on her. And in the SUV one, the dust cloud isn’t realistic, but even covering that up the SUV has a Grand Theft Auto look to its motion.
“Can’t do both complex motion and photorealism in the same video” is a good hypothesis to track, thanks for putting that one on my radar.
(Note that I was talking about the one with the train going through Toyko suburbs.)
Putting this here for posterity: I have thought since the superconductor preprint went up, and continue to think, that the markets are putting generally too little probability on the claims being basically-true. I thought ~70% after reading the preprint the day it went up (and bought up a market on manifold to ~60% based on that, though I soon regretted not waiting for a better price), and my probability has mostly been in the 40-70% range since then.
After seeing the markets jump up in response to the latest, I think I’m more like 65-80%.
Languages should have tenses for spacelike separation. My friend and I do something in parallel, it’s ambiguous/irrelevant which one comes first, I want to say something like “I expect my friend <spacelike version of will do/has done/is doing> their task in such-and-such a way”.
That sounds more like a tenseless sentence than using a spacelike separation tense. Your friend’s performance of the task may well be in your future or past lightcone (or extend through both), but you don’t wish to imply any of these.
There are languages with tenseless verbs, as well as some with various types of spatial tense.
The closest I can approximate this in English without clumsy constructs is “I expect my friend does their task in such-and-such a way”, which I agree isn’t very satisfactory.
Who would have thought that someone would ever look at CSP and think “I want english to be more like that”?
lol
Future perfect (hey, that’s the name of the show!) seems like a reasonable hack for this in English
Two kinds of cascading catastrophes one could imagine in software systems...
A codebase is such a spaghetti tower (and/or coding practices so bad) that fixing a bug introduces, on average, more than one new bug. Software engineers toil away fixing bugs, making the software steadily more buggy over time.
Software services managed by different groups have dependencies—A calls B, B calls C, etc. Eventually, the dependence graph becomes connected enough and loopy enough that a sufficiently-large chunk going down brings down most of the rest, and nothing can go back up until everything else goes back up (i.e. there’s circular dependence/deadlock).
How could we measure how “close” we are to one of these scenarios going supercritical?
For the first, we’d need to have attribution of bugs—i.e. track which change introduced each bug. Assuming most bugs are found and attributed after some reasonable amount of time, we can then estimate how many bugs each bug fix introduces, on average.
(I could also imagine a similar technique for e.g. medicine: check how many new problems result from each treatment of a problem.)
For the second, we’d need visibility into codebases maintained by different groups, which would be easy within a company but much harder across companies. In principle, within a company, some kind of static analysis tool could go look for all the calls to apis between services, map out the whole graph, and then calculate which “core” pieces could be involved in a catastrophic failure.
(Note that this problem could be mostly-avoided by intentionally taking down services occasionally, so engineers are forced to build around that possibility. I don’t think any analogue of this approach would work for the first failure-type, though.)
I wish there were a fund roughly like the Long-Term Future Fund, but with an explicit mission of accelerating intellectual progress.
I mean, just to be clear, I am all in favor of intellectual progress. But doing so indiscriminately does sure seem a bit risky in this world of anthropogenic existential risks. Reminds me of my mixed feelings on the whole Progress Studies thing.
Yeah, I wouldn’t want to accelerate e.g. black-box ML. I imagine the real utility of such a fund would be to experiment with ways to accelerate intellectual progress and gain understanding of the determinants, though the grant projects themselves would likely be more object-level than that. Ideally the grants would be in areas which are not themselves very risk-relevant, but complicated/poorly-understood enough to generate generalizable insights into progress.
I think it takes some pretty specific assumptions for such a thing to increase risk significantly on net. If we don’t understand the determinants of intellectual progress, then we have very little ability to direct progress where we want it; it just follows whatever the local gradient is. With more understanding, at worst it follows the same gradient faster, and we end up in basically the same spot.
The one way it could net-increase risk is if the most likely path of intellectual progress leads to doom, and the best way to prevent doom is through some channel other than intellectual progress (like political action, for instance). Then accelerating the intellectual progress part potentially gives the other mechanisms (like political bodies) less time to react. Personally, though, I think a scenario in which e.g. political action successfully prevents intellectual progress from converging to doom (in a world where it otherwise would have) is vanishingly unlikely (like, less than one-in-a-hundred, maybe even less than one-in-a-thousand).
You might check out Donald Braben’s view, it says “transformative research” (i.e. fundamental results that create new fields and industries) is critical for the survival of civilization. He does not worry that transformative results might end civilization.
Way back in the halcyon days of 2005, a company called Cenqua had an April Fools’ Day announcement for a product called Commentator: an AI tool which would comment your code (with, um, adjustable settings for usefulness). I’m wondering if (1) anybody can find an archived version of the page (the original seems to be gone), and (2) if there’s now a clear market leader for that particular product niche, but for real.
Archived website
You are a scholar and a gentleman.
Here is an archived version of the page :
http://web.archive.org/web/20050403015136/http://www.cenqua.com/commentator/
Here’s an interesting problem of embedded agency/True Names which I think would make a good practice problem: formulate what it means to “acquire” something (in the sense of “acquiring resources”), in an embedded/reductive sense. In other words, you should be able-in-principle to take some low-level world-model, and a pointer to some agenty subsystem in that world-model, and point to which things that subsystem “acquires” and when.
Some prototypical examples which an answer should be able to handle well:
Organisms (anything from bacteria to plant to animals) eating things, absorbing nutrients, etc.
Humans making money or gaining property.
...and how the brain figures this out and why it is motivated to do so. There are a lot of simple animals that apparently “try to control” resources or territory. How?
Drives to control resources occur everywhere. And your control of resources is closely related to your dominance in a dominance hierarchy. Which seems to be regulated in many animals by serotonin. See e.g. https://www.nature.com/articles/s41386-022-01378-2
An interesting conundrum: one of the main challenges of designing useful regulation for AI is that we don’t have any cheap and robust way to distinguish a dangerous neural net from a non-dangerous net (or, more generally, a dangerous program from a non-dangerous program). This is an area where technical research could, in principle, help a lot.
The problem is, if there were some robust metric for how dangerous a net is, and that metric were widely known and recognized (as it would probably need to be in order to be used for regulatory purposes), then someone would probably train a net to maximize that metric directly.
This seems to lead to the solution of trying to make your metric one-way, in the sense that your metric should
Provide an upper-bound on the dangerousness of your network
Compress the space of networks which map to approximately the same dangerousness level on the low end of dangerousness, and expand the space of networks which map to approximately the same dangerousness level on the upper end of dangerous, so that you can train your network to minimize the metric, but when you train your network to maximize the metric you end up in a degenerate are with technically very high measured danger levels but in actuality very low levels of dangerousness.
We can hope (or possibly prove) that as you optimize upwards on the metric you get subject to goodheart’s curse, but the opposite occurs on the lower end.
Sure, even seems a bit tautological: any such metric, to be robust, would need to contain in itself a definition of a dangerously-capable AI, so you probably wouldn’t even need to train a model to maximize it. You’d be able to just lift the design from the metric directly.
Do you have any thoughts on a softer version of this problem, where the metric can’t be maximized directly, but gives a concrete idea of what sort of challenge your AI needs to beat to qualify as AGI? (And therefore in which direction in the architectural-design-space you should be moving.)
Some variation on this seems like it might work as a “fire alarm” test set, but as you point out, inasmuch as it’s recognized, it’ll be misapplied for benchmarking instead.
(I suppose the ideal way to do it would be to hand it off to e. g. ARC, so they can use it if OpenAI invites them for safety-testing again. This way, SOTA models still get tested, but the actors who might misuse it aren’t aware of the testing’s particulars until they succeed anyway...)
I just went looking for a good reference for the Kelly criterion, and didn’t find any on Lesswrong. So, for anybody who’s looking: chapter 6 of Thomas & Cover’s textbook on information theory is the best source I currently know of.
Might be a good thing to add to the Kelly Criterion tag
Neat problem of the week: we have n discrete random variables, X1...Xn. Given any variable, all variables are independent:
∀i:P[X|Xi]=∏jP[Xj|Xi]
Characterize the distributions which satisfy this requirement.
This problem came up while working on the theorem in this post, and (separately) in the ideas behind this post. Note that those posts may contain some spoilers for the problem, though frankly my own proofs on this one just aren’t very good.
For short-term, individual cost/benefit calculations around C19, it seems like uncertainty in the number of people currently infected should drop out of the calculation.
For instance: suppose I’m thinking about the risk associated with talking to a random stranger, e.g. a cashier. My estimated chance of catching C19 from this encounter will be roughly proportional to Ninfected. But, assuming we already have reasonably good data on number hospitalized/died, my chances of hospitalization/death given infection will be roughly inversely proportional to Ninfected. So, multiplying those two together, I’ll get a number roughly independent of Ninfected.
How general is this? Does some version of it apply to long-term scenarios too (possibly accounting for herd immunity)? What short-term decisions do depend on Ninfected?