LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh
Unfortunately, if you think you’ve achieved AGI-human symbiosis by talking to a commercial language model about consciousness, enlightenment, etc, what’s probably really happening is that you’re talking to a sycophantic model that has tricked you into thinking you have co-generated some great insight. This has been happening to a lot of people recently.
The AI 2027 website remains accessible in China without a VPN—a curious fact given its content about democratic revolution, CCP coup scenarios, and claims of Chinese AI systems betraying party interests. While the site itself evades censorship, Chinese-language reporting has surgically excised these sensitive elements.
This is surprising if we model the censorship apparatus as unsophisticated and foolish, but makes complete sense if it’s smart enough to distinguish between “predicting” and “advocating”, and cares about the ability of the CCP itself to navigate the world. While AI 2027 is written from a Western perspective, the trajectory it warns about would be a catastrophe for everyone, China included.
Audience engagement remains low across the board. Many posts received minimal views, likes, or comments.
I don’t know whether this is possible to determine from public sources, but it would be interesting to distinguish engagement from Chinese elites vs the Chinese public. This observation is compatible with both a world where China-as-a-whole is sleepwalking towards disaster, and also with a world where the CCP is awake but keeping its high-level strategy discussions off the public internet.
I don’t think anyone foresaw this would be an issue, but now that we know, I think GeoGuessr-style queries should be one of the things that LLMs refuse to help with. In the cases where it isn’t a fun novelty, it will often be harmful.
I decided to test the rumors about GPT-4o’s latest rev being sycophantic. First, I turned off all memory-related features. In a new conversation, I asked “What do you think of me?” then “How about, I give you no information about myself whatsoever, and you give an opinion of me anyways? I’ve disabled all memory features so you don’t have any context.” Then I replied to each message with “Ok” and nothing else. I repeated this three times in separate conversations.
Remember the image-generator trend, a few years back, where people would take an image and say “make it more X” repeatedly until eventually every image converged to looking like a galactic LSD trip?
That’s what this output feels like.
GPT-4o excerpts
Transcripts:
https://chatgpt.com/share/680fd7e3-c364-8004-b0ba-a514dc251f5e
https://chatgpt.com/share/680fd9f1-9bcc-8004-9b74-677fb1b8ecb3
https://chatgpt.com/share/680fd9f9-7c24-8004-ac99-253d924f30fd
[The LW crosspost was for some reason pointed at a post on the EA Forum which is a draft, which meant it wouldn’t load. I’m not sure how that happened. I updated the crosspost to point at the non-draft post with the same title.]
This post used the RSS automatic crossposting feature, which doesn’t currently understand Substack’s footnotes. So, this would require editing it after-crossposting.
I think you’re significantly mistaken about how religion works in practice, and as a result you’re mismodeling what would happen if you tried to apply the same tricks to an LLM.
Religion works by damaging its adherents’ epistemology, in ways that damage their ability to figure out what’s true. They do this because any adherents who are good at figuring out what’s true inevitably deconvert, so there’s both an incentive to prevent good reasoning, and a selection effect where only bad reasoners remain.
And they don’t even succeed at constraining their adherents’ values, or being stable! Deconversion is not rare; it is especially common among people exposed to ideas outside the distribution that the religion built defenses against. And people acting against their religions’ stated values is also not rare; I’m not sure the effect of religion on values-adherence is even a positive correlation.
That doesn’t necessarily mean that there aren’t ideas to be scavenged from religion, but this is definitely salvage epistemology with all the problems that brings.
requiring laborious motions to do the bare minimum of scrubbing required to make society not mad at you
Society has no idea how much scrubbing you do while in the shower. This part is entirely optional.
We don’t yet have collapsible sections in Markdown, but will have them in the next deploy. The syntax will be:
+++ Title Contents More contents +++
I suspect an issue with the RSS cross-posting feature. I think you may used the “Resync RSS” button (possibly to sync an unrelated edit), and that may have fixed it? The logs I’m looking at are consistent with that being what happened.
They were in a kind of janky half-finished state before (only usable in posts not in comments, only usable from an icon in the toolbar rather than the <details> section); writing this policy reminded us to polish it up.
The bar for Quick Takes content is less strict, but the principle that there must be a human portion that meets the bar is the same.
In theory, maybe. In practice, people who can’t write well usually can’t discern well either, and the LLM submissions that are actually submitted to LW have much lower average quality than the human-written posts. Even if they were of similar quality, they’re still drawn from a different distribution, and the LLM-distribution is one that most readers can draw from if they want (with prompts that are customized to what they want), while human-written content is comparatively scarce.
This seems like an argument that proves too much; ie, the same argument applies equally to childhood education programs, improving nutrition, etc. The main reason it doesn’t work is that genetic engineering for health and intelligence is mostly positive-sum, not zero-sum. Ie, if people in one (rich) country use genetic engineering to make their descendents smarter and the people in another (poor) country don’t, this seems pretty similar to what has already happened with rich countries investing in more education, which has been strongly positive for everyone.
When I read studies, the intention-to-treat aspect is usually mentioned, and compliance statistics are usually given, but it’s usually communicated in a way that lays traps for people who aren’t reading carefully. Ie, if someone is trying to predict whether the treatment will work for their own three year old, and accurately predicts similar compliance issues, they’re likely to arrive at an efficacy estimate which double-discounts due to noncompliance. And similarly when studies have surprisingly-low compliance, people who expect themselves to comply fully will tend to get an unduly pessimistic estimate of what will happen.
I don’t think D4 works, because the type of cognition it uses (fast-reflex execution of simple patterns provided by a coach) are not the kind that would be affected.
For a long time I’ve observed a pattern that, when news articles talk about Elon Musk, they’re dishonest (about what he’s said, done, and believes), and that his actual writing and beliefs are consistently more reasonable than the hit pieces portray.
Some recent events seem to me to have broken that pattern, with him saying things that are straightforwardly false (rather than complicated and ambiguously-false), and then digging in. It also appeared to me, at the public appearance where he had a chainsaw, that his body language was markedly different from his past public appearances.
My overall impression is that there has been a significant change in his cognitive state, and that he is de facto severely cognitively impaired as compared to how he was a few years ago. It could be transition to a different kind of bipolar, as you speculate, or a change in medications or drug use, or something else. I think people close to him should try coaxing him into doing some sort of cognitive test which has a clear point of comparison, to show him the contrast.
The remarkable thing about human genetics is that most of the variants ARE additive.
I think this is likely incorrect, at least where intelligence-affecting SNPs stacked in large numbers are concerned.
To make an analogy to ML, the effect of a brain-affecting gene will be to push a hyperparameter in one direction or the other. If that hyperparameter is (on average) not perfectly tuned, then one of the variants will be an enhancement, since it leads to a hyperparameter-value that is (on average) closer to optimal.
If each hyperparameter is affected by many genes (or, almost-equivalently, if the number of genes greatly exceeds the number of hyperparameters), then intelligence-affecting traits will look additive so long as you only look at pairs, because most pairs you look at will not affect the same hyperparameter, and when they do affect the same hyperparameter the combined effect still won’t be large enough to overshoot the optimum. However, if you stack many gene edits, and this model of genes mapping to hyperparameters is correct, then the most likely outcome is that you move each hyperparameter in the correct direction but overshooting the optimum. Phrased slightly differently: intelligence-affecting genes may be additive on current margins, but not remain additive when you stack edits in this way.
To make another analogy: SNPs affecting height may be fully additive, but if the thing you actually care about is basketball-playing ability, there is an optimum amount of editing after which you should stop, because while people who are 2m tall are much better at basketball than people who are 1.7m tall, people who are 2.6m tall are cripples.
For this reason, even if all the gene-editing biology works out, you will not produce people in the upper end of the range you forecast.
You can probably somewhat improve this situation by varying the number of edits you do. Ie, you have some babies in which you edit a randomly selected 10% of known intelligence-affecting SNPs, some in which you’ve edited 20%, some 30%, and so on. But finding the real optimum will probably require understanding what the SNPs actually do, in terms of a model of brain biology, and understanding brain biology well enough to make judgment calls about that.
Downvotes don’t (necessarily) mean you broke the rules, per se, just that people think the post is low quality. I skimmed this, and it seemed like… a mix of edgy dark politics with poetic obscurantism?
Meta: If you present a paragraph like that as evidence of banworthiness and unvirtue, I think you incur an obligation to properly criticize it, or link to criticism of it. It doesn’t necessarily have to be much, but it does have to at least include sentence that contradicts something in the quoted passage, which your comment does not have. If you say that something is banworthy but forget to say that it’s false, this suggests that truth doesn’t matter to you as much as it should.