LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh
When I read studies, the intention-to-treat aspect is usually mentioned, and compliance statistics are usually given, but it’s usually communicated in a way that lays traps for people who aren’t reading carefully. Ie, if someone is trying to predict whether the treatment will work for their own three-year-old, and accurately predicts similar compliance issues, they’re likely to arrive at an efficacy estimate which double-discounts due to noncompliance. And similarly when studies have surprisingly-low compliance, people who expect themselves to comply fully will tend to get an unduly pessimistic estimate of what will happen.
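The double-discount trap can be made concrete with a toy calculation (all numbers here are hypothetical, chosen only to illustrate the arithmetic):

```python
# Hypothetical numbers, to illustrate the double-discount trap.
effect_if_taken = 0.30   # true effect among those who actually take the treatment
compliance = 0.50        # fraction of the trial arm that complied

# The intention-to-treat estimate already averages in the noncompliers:
itt_effect = effect_if_taken * compliance          # 0.15

# A careless reader who expects similar compliance for their own child,
# and applies that discount to the ITT number, double-counts it:
double_discounted = itt_effect * compliance        # 0.075

# The right move: divide the dilution back out (a crude per-protocol guess),
# then apply your own expected compliance:
per_protocol_guess = itt_effect / compliance       # back to 0.30
own_estimate = per_protocol_guess * compliance     # 0.15, not 0.075
```

The careless reader ends up at half the defensible estimate, purely from applying the same discount twice.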
I don’t think D4 works, because the type of cognition it uses (fast-reflex execution of simple patterns provided by a coach) is not the kind that would be affected.
For a long time I’ve observed a pattern that, when news articles talk about Elon Musk, they’re dishonest (about what he’s said, done, and believes), and that his actual writing and beliefs are consistently more reasonable than the hit pieces portray.
Some recent events seem to me to have broken that pattern, with him saying things that are straightforwardly false (rather than complicated and ambiguously-false), and then digging in. It also appeared to me, at the public appearance where he had a chainsaw, that his body language was markedly different from his past public appearances.
My overall impression is that there has been a significant change in his cognitive state, and that he is de facto severely cognitively impaired as compared to how he was a few years ago. It could be transition to a different kind of bipolar, as you speculate, or a change in medications or drug use, or something else. I think people close to him should try coaxing him into doing some sort of cognitive test which has a clear point of comparison, to show him the contrast.
The remarkable thing about human genetics is that most of the variants ARE additive.
I think this is likely incorrect, at least where intelligence-affecting SNPs stacked in large numbers are concerned.
To make an analogy to ML, the effect of a brain-affecting gene will be to push a hyperparameter in one direction or the other. If that hyperparameter is (on average) not perfectly tuned, then one of the variants will be an enhancement, since it leads to a hyperparameter-value that is (on average) closer to optimal.
If each hyperparameter is affected by many genes (or, almost-equivalently, if the number of genes greatly exceeds the number of hyperparameters), then intelligence-affecting traits will look additive so long as you only look at pairs, because most pairs you look at will not affect the same hyperparameter, and when they do affect the same hyperparameter the combined effect still won’t be large enough to overshoot the optimum. However, if you stack many gene edits, and this model of genes mapping to hyperparameters is correct, then the most likely outcome is that you move each hyperparameter in the correct direction but overshooting the optimum. Phrased slightly differently: intelligence-affecting genes may be additive on current margins, but not remain additive when you stack edits in this way.
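The overshoot argument can be sketched numerically. Here is a toy simulation (the numbers, the quadratic fitness function, and the fixed step size are all my own assumptions for illustration, not from the comment):

```python
import random

# Toy model: each of H hyperparameters starts slightly below its optimum of 0,
# and each of G genes nudges one randomly chosen hyperparameter by a fixed step.
# Fitness falls off quadratically with distance from the optimum.
random.seed(0)
H, G, step = 10, 1000, 0.01
genes = [random.randrange(H) for _ in range(G)]  # which hyperparameter each gene affects

def fitness(offsets):
    return -sum(x * x for x in offsets)

def stack_edits(n_edits):
    # Start each hyperparameter a little below optimal, then apply the first
    # n_edits variants, each "beneficial" on current margins (pushing upward).
    offsets = [-0.2] * H
    for g in genes[:n_edits]:
        offsets[g] += step
    return fitness(offsets)

# A moderate number of edits helps; stacking all of them overshoots and hurts.
print(stack_edits(0), stack_edits(200), stack_edits(1000))
```

In this sketch every individual edit points in the “correct” direction, and pairs of edits still look additive, yet stacking all of them lands far past the optimum, which is the claimed failure mode.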
To make another analogy: SNPs affecting height may be fully additive, but if the thing you actually care about is basketball-playing ability, there is an optimum amount of editing after which you should stop, because while people who are 2m tall are much better at basketball than people who are 1.7m tall, people who are 2.6m tall are cripples.
For this reason, even if all the gene-editing biology works out, you will not produce people in the upper end of the range you forecast.
You can probably somewhat improve this situation by varying the number of edits you do. Ie, you have some babies in which you edit a randomly selected 10% of known intelligence-affecting SNPs, some in which you’ve edited 20%, some 30%, and so on. But finding the real optimum will probably require understanding what the SNPs actually do, in terms of a model of brain biology, and understanding brain biology well enough to make judgment calls about that.
Arbital has been imported to LessWrong
Downvotes don’t (necessarily) mean you broke the rules, per se, just that people think the post is low quality. I skimmed this, and it seemed like… a mix of edgy dark politics with poetic obscurantism?
Any of the many nonprofits, academic research groups, or alignment teams within AI labs. You don’t have to bet on a specific research group to decide that it’s worth betting on the ecosystem as a whole.
There’s also a sizeable contingent that thinks none of the current work is promising, and that therefore buying a little time is valuable mainly insofar as it opens the possibility of buying a lot of time. Under this perspective, that still bottoms out in technical research progress eventually, even if, in the most pessimistic case, that progress has to route through future researchers who are cognitively enhanced.
The article seems to assume that the primary motivation for wanting to slow down AI is to buy time for institutional progress. Which seems incorrect as an interpretation of the motivation. Most people that I hear talk about buying time are talking about buying time for technical progress in alignment. Technical progress, unlike institution-building, tends to be cumulative at all timescales, which makes it much more strategically relevant.
All of the plans I know of for aligning superintelligence are timeline-sensitive, either because they involve research strategies that haven’t paid off yet, or because they involve using non-superintelligent AI to help with alignment of subsequent AIs. Acceleration specifically in the supply of compute makes all those plans harder. If you buy the argument that misaligned superintelligence is a risk at all, Stargate is a bad thing.
The one silver lining is that this is all legible. The current administration’s stance seems to be that we should build AI quickly in order to outrace China; the previous administration’s stance was to say that the real existential risk is minorities being denied loans. I prefer the “race with China” position because at least there exists a set of factual beliefs that would make it correct, implying it may be possible to course-correct when additional information becomes available.
If bringing such attitudes to conscious awareness and verbalizing them allows you to examine and discard them, have you excised a vulnerability or installed one? Not clear.
Possibly both, but one thing breaks the symmetry: it is on average less bad to be hacked by distant forces than by close ones.
There’s a version of this that’s directional advice: if you get a “bad vibe” from someone, how strongly should this influence your actions towards them? Like all directional advice, whether it’s correct or incorrect depends on your starting point. Too little influence, and you’ll find yourself surrounded by bad characters; too much, and you’ll find yourself in a conformism bubble. The details of what does and doesn’t trigger your “bad vibe” feeling matter a lot; the better calibrated it is, the more you should trust it.
There’s a slightly more nuanced version, which is if you get a “bad vibe” from someone, do you promote it to attention and think explicitly about what it might mean, and how do you relate to those thoughts?
I think for many people, that kind of explicit thinking is somewhat hazardous, because it allows red flags to be explained away in ways that they shouldn’t be. To take a comically exaggerated example that nevertheless literally happened: there was someone who described themself as a Sith Lord and wore robes. If you engaged with that using only subconscious “vibe” reasoning, you would have avoided them. If you engaged with it using verbal reasoning, they might convince you that “Sith” is just a flavorful way of saying anti-authoritarianism, and also that it’s a “religion” and you’re not supposed to “discriminate”. Or, phrased slightly differently: verbal thinking increases the surface area through which you can get hacked.
Recently, a lot of very-low-quality cryptocurrency tokens have been seeing enormous “market caps”. I think a lot of people are getting confused by that, and are resolving the confusion incorrectly. If you see a claim that a coin named $JUNK has a market cap of $10B, there are three possibilities: (1) the claim is entirely false, (2) there are far more fools with more money than expected, or (3) the $10B number is real, but doesn’t mean what you’re meant to think it means.
The first possibility, that the number is simply made up, is pretty easy to cross off; you can check with a third party. Most people settle on the second possibility: that there are surprisingly many fools throwing away their money. The correct answer is option 3: “market cap” is a tricky concept. And, it turns out that fixing the misconception here also resolves several confusions elsewhere.
(This is sort-of vagueblogging a current event, but the same current event has been recurring every week with different names on it for over a year now. So I’m explaining the pattern, and deliberately avoiding mention of any specific memecoin.)
Suppose I autograph a hat, then offer to sell you one-trillionth of that hat for $1. You accept. This hat now has a “market cap” of $1T. Of course, it would be silly (or deceptive) if people then started calling me a trillionaire.
Meme-coins work similarly, but with extra steps. The trick is that while they superficially look like a market of people trading with each other, in reality the coin’s creator is on one side of almost every trade, controls the price, and optimizes the price for generating hype.
Suppose I autograph a hat, call it HatCoin, and start advertising it. Initially there are 1000 HatCoins, and I own all of them. I get 4 people, arriving one at a time, each of whom decides to spend $10 on HatCoin. They might be thinking of it as an investment, or they might be thinking of it as a form of gambling, or they might be using it as a tipping mechanism, because I have entertained them with a livestream about my hat. The two key elements at this stage are (1) I’m the only seller, and (2) the buyers aren’t paying much attention to what fraction of the HatCoin supply they’re getting. As each buyer arrives and spends their $10, I decide how many HatCoins to give them, and that decision sets the “price” and “market cap” of HatCoin. If I give the first buyer 10 coins, the second buyer 5 coins, the third buyer 2 coins, and the fourth buyer 1 coin then the “price per coin” went from $1 to $2 to $5 to $10, and since there are 1000 coins in existence, the “market cap” went from $1k to $2k to $5k to $10k. But only $40 has actually changed hands.
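The HatCoin arithmetic can be checked directly. A short sketch, using the same four buyers from the example:

```python
# The four buyers from the example: (dollars paid, coins received).
sales = [(10, 10), (10, 5), (10, 2), (10, 1)]
total_supply = 1000  # coins in existence; the creator holds the rest

cash_received = 0
for dollars, coins in sales:
    cash_received += dollars
    price = dollars / coins              # "price per coin" set by the creator
    market_cap = price * total_supply    # the headline number
    print(f"price ${price:g}, market cap ${market_cap:g}")

print(f"actual money that changed hands: ${cash_received}")
# The market cap climbs $1k -> $2k -> $5k -> $10k while only $40 moves.
```

The headline number is the last trade’s price multiplied across a supply that was never for sale at that price, which is why it can be 250x the money that actually changed hands.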
At this stage, no one else has started selling yet, so I fully control the price graph. I choose a shape that is optimized for a combination of generating hype (so, big numbers), and convincing people that if they buy they’ll have time left before the bubble bursts (so, not too big).
Now suppose the third buyer, who has 2 coins that are supposedly worth $20, decides to sell them. One of two things happens. Option 1 is that I buy them back for $20 (half of my profit so far), and retain control of the price. Option 2 is that I don’t buy them, in which case the price goes to zero and I exit with $40.
If a news article is written about this, the article will say that I made off with $10k (the “market cap” of the coin at its peak). However, I only have $40. The honest version of the story, the one that says I made off with $40, isn’t newsworthy, so it doesn’t get published or shared.
Epistemic belief updating: Not noticeably different.
Task stickiness: Massively increased, but I believe this is improvement (at baseline my task stickiness is too low so the change is in the right direction).
I don’t think that’s true. Or rather, it’s only true in the specific case of studies that involve calorie restriction. In practice that’s a large (excessive) fraction of studies, but testing variations of the contamination hypothesis does not require it.
(We have a draft policy that we haven’t published yet, which would have rejected the OP’s paste of Claude. Though note that the OP was 9 months ago.)
All three of these are hard, and all three fail catastrophically.
If you could make a human-imitator, the approach people usually talk about is extending this to an emulation of a human under time dilation. Then you take your best alignment researcher(s), simulate them in a box thinking about AI alignment for a long time, and launch a superintelligence with whatever parameters they recommend. (Aka: Paul Boxing)
The whole point of a “test” is that it’s something you do before it matters.
As an analogy: suppose you have a “trustworthy bank teller test”, which you use when hiring for a role at a bank. Suppose someone passes the test, then after they’re hired, they steal everything they can access and flee. If your reaction is that they failed the test, then you have gotten confused about what is and isn’t a test, and what tests are for.
Now imagine you’re hiring for a bank-teller role, and the job ad has been posted in two places: a local community college, and a private forum for genius con artists who are masterful actors. In this case, your test is almost irrelevant: the con-artist applicants will disguise themselves as community-college applicants until it’s too late. You would be better off finding some way to avoid attracting the con artists in the first place.
Connecting the analogy back to AI: if you’re using overpowered training techniques that could have produced superintelligence, and then trying to hobble it back down to an imitator that’s indistinguishable from a particular human, then applying a Turing test is silly, because it doesn’t distinguish between something you’ve successfully hobbled and something which is hiding its strength.
That doesn’t mean that imitating humans can’t be a path to alignment, or that building wrappers on top of human-level systems doesn’t have advantages over building straight-shot superintelligent systems. But making something useful out of either of these strategies is not straightforward, and playing word games on the “Turing test” concept does not meaningfully add to either of them.
that does not mean it will continue to act indistinguishable from a human when you are not looking
Then it failed the Turing Test because you successfully distinguished it from a human.
So, you must believe that it is impossible to make an AI that passes the Turing Test.
I feel like you are being obtuse here. Try again?
Did you skip the paragraph about the test/deploy distinction? If you have something that looks (to you) like it’s indistinguishable from a human, but it arose from the process by which modern AIs are produced, that does not mean it will continue to act indistinguishable from a human when you are not looking. It is much more likely to mean you have produced deceptive alignment, and put it in a situation where it reasons that it should act indistinguishable from a human, for strategic reasons.
This seems like an argument that proves too much; ie, the same argument applies equally to childhood education programs, improving nutrition, etc. The main reason it doesn’t work is that genetic engineering for health and intelligence is mostly positive-sum, not zero-sum. Ie, if people in one (rich) country use genetic engineering to make their descendants smarter and the people in another (poor) country don’t, this seems pretty similar to what has already happened with rich countries investing in more education, which has been strongly positive for everyone.