We could definitely look into making the project evolve in this direction. In fact, we’re building a dataset of alignment-related texts and a small part of the dataset includes a scrape of arXiv papers extracted from the Alignment Newsletter. We’re working towards building GPT models fine-tuned on the texts.
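To make that a bit more concrete, here's a rough sketch of the kind of prompt/completion pairs we might write out for fine-tuning. The `title`, `abstract`, and `summary` field names are hypothetical stand-ins for whatever the scrape actually stores, and the JSONL prompt/completion format is just one common convention for GPT fine-tuning data:

```python
import json

# Hypothetical scraped entries; the real dataset uses whatever fields the
# Alignment Newsletter / arXiv scrape actually produces.
papers = [
    {
        "title": "Some alignment paper",
        "abstract": "Abstract text of the paper...",
        "summary": "The Alignment Newsletter summary of the paper...",
    },
]

# Write prompt/completion pairs as JSONL (one JSON object per line),
# a format commonly used for fine-tuning GPT-style models.
with open("alignment_finetune.jsonl", "w") as f:
    for paper in papers:
        example = {
            "prompt": f"Title: {paper['title']}\nAbstract: {paper['abstract']}\n\nSummary:",
            "completion": " " + paper["summary"],
        }
        f.write(json.dumps(example) + "\n")
```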
Ya, I was even planning on trying:
Then feed that input to.
to see if that produces higher-quality summaries.
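If it helps make the idea concrete, here's a rough sketch of what I have in mind, assuming (as the karma discussion below suggests) that the prompt prepends the post's karma as a conditioning signal. GPT-2 via Hugging Face `transformers` is just a stand-in for whatever model actually gets used, and the exact generation arguments may need adjusting:

```python
from transformers import pipeline

# Stand-in generator; the real experiment would presumably use a larger
# fine-tuned GPT model rather than off-the-shelf GPT-2.
generator = pipeline("text-generation", model="gpt2")

def summarize(title: str, abstract: str, karma: int) -> str:
    # Prepend the (assumed) karma conditioning signal to the prompt,
    # then ask the model to continue with a summary.
    prompt = (
        f"Karma: {karma}\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n\n"
        f"Summary:"
    )
    out = generator(prompt, max_new_tokens=100, do_sample=True)[0]["generated_text"]
    # The pipeline returns the prompt plus the continuation; keep only the continuation.
    return out[len(prompt):].strip()

print(summarize("Some alignment paper", "Abstract text...", karma=200))
```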
Well, one “correct” generalization there is to produce much longer summaries, which is not actually what we want.
(My actual prediction is that changing the karma makes very little difference to the summary that comes out.)