IYKYK
Alex_Altair
Report & retrospective on the Dovetail fellowship
For anyone reading this comment thread in the future, Dalcy wrote an amazing explainer for this paper here.
Come join Dovetail’s agent foundations fellowship talks & discussion
Dovetail’s agent foundations fellowship talks & discussion
Towards building blocks of ontologies
See also the classic LW post, The Best Textbooks on Every Subject.
Indeed, we know about those posts! Lmk if you have a recommendation for a better textbook-level treatment of any of it (modern papers etc). So far the grey book feels pretty standard in terms of pedagogical quality.
Some small corrections/additions to my section (“Altair agent foundations”). I’m currently calling it “Dovetail research”. That’s not publicly written anywhere yet, but if it were listed as that here, it might help people who are searching for it later this year.
Which orthodox alignment problems could it help with?: 9. Humans cannot be first-class parties to a superintelligent value handshake
I wouldn’t put number 9. It’s not intended to “solve” most of these problems, but it is intended to help make progress on understanding the nature of the problems through formalization, so that they can be avoided or postponed, or more effectively solved by other research agendas.
Target case: worst-case
Definitely not worst-case; more like pessimistic-case.
Some names: Alex Altair, Alfred Harwood, Daniel C, Dalcy K
Add “José Pedro Faustino”
Estimated # FTEs: 1-10
I’d call it 2, averaged throughout 2024.
Some outputs in 2024: mostly exposition but it’s early days
“Gain writing skills BEFORE...”
FWIW, I can’t really tell what this website is supposed to be/do by looking at the landing page and menu.
The title reads as ambiguous to me; I can’t tell if you mean “learn to [write well] before” or “learn to write [well before]”.
DM me if you’re interested.
I, too, am quite interested in trialing more people for roles on this spectrum.
Thanks. Is “pass@1” some kind of lingo? (It seems like an ungoogleable term.)
I guess one thing I want to know is like… how exactly does the scoring work? I can imagine something like, they ran the model a zillion times on each question, and if any one of the answers was right, that got counted in the light blue bar. Something that plainly silly probably isn’t what happened, but it could be something similar.
If it actually just submitted one answer to each question and got a quarter of them right, then I think it doesn’t particularly matter to me how much compute it used.
On the livestream, Mark Chen says the 25.2% was achieved “in aggressive test-time settings”. Does that just mean more compute?
I wish they would tell us what the dark vs. light blue means. Specifically, for the FrontierMath benchmark, the dark blue looks like it’s around 8% (rather than the light blue at 25.2%). Which, like, I dunno, maybe this is nitpicking, but 25% on FrontierMath seems like a BIG deal, and I’d like to know how much to be updating my beliefs.
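(For anyone else puzzling over this later: “pass@k” is standard eval lingo. A problem counts as solved under pass@k if at least one of k sampled answers is correct, so pass@1 is the strict “one submitted answer per question” case, and a high-k pass rate is roughly the “any of a zillion attempts counts” scoring I describe above. Below is a minimal sketch of the standard unbiased pass@k estimator from the Codex paper (Chen et al. 2021); whether OpenAI scored FrontierMath this way, and what the dark vs. light blue bars correspond to, is my guess rather than anything they’ve stated.)

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al. 2021): the probability that
    at least one of k answers, drawn without replacement from n sampled
    attempts of which c were correct, solves the problem."""
    if n - c < k:
        return 1.0  # fewer wrong answers than draws, so a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical numbers, purely to show how the two readings differ:
# n = 64 attempts per problem, c = 16 of them correct.
print(pass_at_k(64, 16, 1))   # pass@1  = 0.25 (one submitted answer must be right)
print(pass_at_k(64, 16, 64))  # pass@64 = 1.0  ("any attempt counts" reading)
```

The gap between those two numbers is exactly the kind of thing the dark vs. light blue bars might be distinguishing, but that’s a guess.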
things are almost never greater than the sum of their parts Because Reductionism
Isn’t it more like, the value of the sum of the things is greater than the sum of the value of each of the things? That is, $u\left(\sum_i x_i\right) > \sum_i u(x_i)$ (where $u$ is perhaps a utility function). That seems totally normal and not-at-all at odds with Reductionism.
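As a toy illustration (my own example, not from the post): two complementary parts can each be worthless on their own while the combination is valuable, which gives superadditive value without any tension with reductionism.

```latex
% Toy example (mine, not from the post): complementary parts x and y.
% Each is worthless alone, but the combination is valuable,
% so the value of the whole exceeds the sum of the parts' values.
u(x) = 0, \qquad u(y) = 0, \qquad u(x + y) = 1
\quad\Longrightarrow\quad u(x + y) > u(x) + u(y)
```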
I got a ton of value from ILIAD last year, and strongly recommend it to anyone interested!