I think that distillations of research agendas such as this one are quite valuable, and hereby offer LawrenceC a $3,000 prize for writing it. (I’ll follow up via email.) Thanks, LawrenceC!
Going forward, I plan to keep an eye out for distillations such as this one that seem particularly skilled or insightful to me, and offer them a prize in the $1-10k range, depending on how much I like them.
Insofar as I do this, I’m going to be completely arbitrary about it, and I’m only going to notice attempts haphazardly, so please don’t rely on the assumption that I’ll grant your next piece of writing a prize. I figure that this is better than nothing, but I don’t have the bandwidth at the moment to make guarantees about how many posts I’ll read.
Separately, here are some quick suggestions about what sort of thing would make me like the distillation even more:
Distill out a 1-sentence bullet-point version of each of the 9 hypotheses at the top
Distill out a tweet-length version of the overall idea
Distill out a bullet-point list of security assumptions
These aren’t necessarily what makes a distillation better-according-to-me in and of themselves, but I hypothesize that the work required to add them is likely to make the overall distillation better. (The uniting theme is something like: forcing yourself to really squeeze out the key ideas.)
For clarity as to what I mean by a “security assumption”, here are a few examples (not intended to apply in the case of shard theory, but rather to transmit the idea of a security assumption):
“This plan depends critically on future ability to convince leading orgs to adopt a particular training method.”
“This plan depends critically on the AI being unable to confidently distinguish deployment from training.”
“This plan depends critically on the hypothesis that AI tech will remain compute-intensive right up until the end.”
This is maybe too vague a notion; there’s an art to figuring out what counts and what doesn’t, and various people probably disagree about what should count.
Are there any agendas you would particularly like to see distilled?
Thanks Nate!
I didn’t add a 1-sentence bullet point for each thesis because I thought the table of contents on the left was sufficient, though in retrospect I should’ve written it up anyway for the learning value. Do you still think it’s worth doing after the fact?
Ditto for the tweet-length version, given that I don’t plan on tweeting this.
It would still help me to have a “short version” section at the top :-)
I’ve expanded the TL;DR at the top to include the nine theses. Thanks for the suggestion!