Engineer at CoinList.co. Donor to LW 2.0.
ESRogs
It doesn’t differentially help capitalize them compared to everything else though, right? (Especially since some of them are private.)
With which model?
Wondering why this post just showed up as new today, since it was originally posted in February of 2023:
Use the most powerful AI tools.
FWIW, Claude 3.5 Sonnet was released today. Appears to outperform GPT-4o on most (but not all) benchmarks.
Does any efficient algorithm satisfy all three of the linearity, respect for proofs, and 0-1 boundedness? Unfortunately, the answer is no (under standard assumptions from complexity theory). However, I argue that 0-1 boundedness isn’t actually that important to satisfy, and that instead we should be aiming to satisfy the first two properties along with some other desiderata.
Have you thought much about the feasibility or desirability of training an ML model to do deductive estimation?
You wouldn’t get perfect conformity to your three criteria of linearity, respect for proofs, and 0-1 boundedness (which, as you say, is apparently impossible anyway), but you could use those to inform your computation of the loss in training. In which case, it seems like you could probably approximately satisfy those properties most of the time.
Then of course you’d have to worry about whether your deductive estimation model itself is deceiving you, but it seems like at least you’ve reduced the problem a bit.
I wouldn’t call this “AI lab watch.” “Lab” has the connotation that these are small projects instead of multibillion dollar corporate behemoths.
Disagree on “lab”. I think it’s the standard and most natural term now. As evidence, see your own usage a few sentences later:
They’ve all committed to this in the WH voluntary commitments and I think the labs are doing things on this front.
Yeah I figured Scott Sumner must have been involved.
Nitpick: Larry Summers not Larry Sumners
If “—quine” was passed, read the script’s own source code using the
__file__
variable and print it out.
Interesting that it included this in the plan, but not in the actual implementation.
(Would have been kind of cheating to do it that way anyway.)
Worth noting 11 months later that @Bernhard was more right than I expected. Tesla did in fact cut prices a bunch (eating into gross margins), and yet didn’t manage to hit 50% growth this year. (The year isn’t over yet, but I think we can go ahead and call it.)
Good summary in this tweet from Gary Black:
$TSLA bulls should reduce their expectations that $TSLA volumes can grow at +50% per year. I am at +37% vol growth in 2023 and +37% growth in 2024. WS is at +37% in 2023 and +22% in 2024.
And apparently @MartinViecha head of $TSLA IR recently advised investors that TSLA “is now in an intermediate low-growth period,” at a recent Deutsche Bank auto conference with institutional investors. 35-40% volume growth still translates to 35-40% EPS growth, which justifies a 60x-70x 2024 P/E ($240-$280 PT) at a normal megacap growth 2024 PEG of 1.7x.And this reply from Martin Viecha:
What I said specifically is that we’re between two major growth waves: the first driven by 3/Y platform since 2017 and the next one that will be driven by the next gen vehicle.
Paul Christiano on Dwarkesh Podcast
let’s build larger language models to tackle problems, test methods, and understand phenomenon that will emerge as we get closer to AGI
Nitpick: you want “phenomena” (plural) here rather than “phenomenon” (singular).
I’m not necessarily putting a lot of stock in my specific explanations but it would be a pretty big surprise to learn that it turns out they’re really the same.
Does it seem to you that the kinds of people who are good at science vs good at philosophy (or the kinds of reasoning processes they use) are especially different?
In your own case, it seems to me like you’re someone who’s good at philosophy, but you’re also good at more “mundane” technical tasks like programming and cryptography. Do you think this is a coincidence?
I would guess that there’s a common factor of intelligence + being a careful thinker. Would you guess that we can mechanize the intelligence part but not the careful thinking part?
Carl Shulman on The Lunar Society (7 hour, two-part podcast)
Happiness has been shown to increase with income up to a certain threshold ($ 200K per year now, roughly speaking), beyond which the effect tends to plateau.
Do you have a citation for this? My understanding is that it’s a logarithmic relationship — there’s no threshold. (See the Income & Happiness section here.)
Why antisocial? I think it’s great!
I would imagine one of the major factors explaining Tesla’s absence is that people are most worried about LLMs at the moment, and Tesla is not a leader in LLMs.
(I agree that people often seem to overlook Tesla as a leader in AI in general.)
I don’t know anything about the ‘evaluation platform developed by Scale AI—at the AI Village at DEFCON 31’.
Looks like it’s this.
Here are some predictions—mostly just based on my intuitions, but informed by the framework above. I predict with >50% credence that by the end of 2025 neural nets will:
To clarify, I think you mean that you predict each of these individually with >50% credence, not that you predict all of them jointly with >50% credence. Is that correct?
Default seems unlikely, unless the market moves very quickly, since anyone pursuing this strategy is likely to be very small compared to the market for the S&P 500.
(Also consider that these pay out in a scenario where the world gets much richer — in contrast to e.g. Michael Burry’s “Big Short” swaps, which paid out in a scenario where the market was way down — so you’re just skimming a little off the huge profits that others are making, rather than trying to get them to pay you at the same time they’re realizing other losses.)