I echo the other comments asking for more volume control; it posted so much, so fast, that there wasn’t much opportunity for it to improve via feedback, if indeed such a mechanism was considered.
It’s trained on the whole corpus of LW comments and replies that got sufficiently high karma; naively I wouldn’t expect a day’s worth of new comments to make much of a dent in the training data. But there’s an interesting fact about training to match distributions: most measures of distributional overlap (like the KL divergence) are asymmetric, so how similar the corpus is to the model’s outputs is a different quantity from how similar the model’s outputs are to the corpus. Geoffrey Irving is interested in methods that use supervised learning to do distributional matching in the other direction, and comment karma might be a good signal for it; my guess is that you’re better off comparing outputs it generates on the same prompt head-to-head, picking whichever one is more ‘normal,’ and training a discriminator to mimic that human normality judgment.
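For concreteness, here is a tiny numerical sketch of that asymmetry. The distributions and numbers are made up purely for illustration, with "corpus" standing in for the distribution of real comments and "model" for the bot's output distribution:

```python
# Minimal illustration of the asymmetry: KL(P || Q) != KL(Q || P).
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Hypothetical distributions over four "kinds" of comment: the corpus spreads
# its mass around, while the model has collapsed onto a narrower set of modes.
corpus = [0.50, 0.30, 0.15, 0.05]
model  = [0.70, 0.25, 0.04, 0.01]

print(kl(corpus, model))  # forward KL: roughly what maximum-likelihood training penalizes
print(kl(model, corpus))  # reverse KL: "how corpus-like are the model's outputs?"
```

The two numbers come out different (about 0.17 vs 0.12 nats here), which is the sense in which fitting the corpus and producing corpus-like outputs are distinct objectives.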
Is there a writeup (or open source code) for the training and implementation? It would be interesting to personalize it—train based on each user’s posts/comments (in addition to high-karma comments from others), and give each of us a taste of our own medicine in replies to our comments/posts.
Sure, I’m happy to share the training code. We used our direct database access to export the data we trained on, though, and that export doesn’t currently contain any author information. In principle you could get all of the data via the API.
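A rough sketch of what pulling comments through the public GraphQL endpoint might look like, as an alternative to a database export. The endpoint URL, the view name, and the field names (baseScore, htmlBody) are assumptions about the forum’s schema and may need adjusting, and the karma cutoff is illustrative rather than the one actually used:

```python
import requests

# Assumed query shape; adjust view and field names to the actual schema.
QUERY = """
{
  comments(input: {terms: {view: "recentComments", limit: 50}}) {
    results {
      _id
      postId
      baseScore
      htmlBody
      user { username }
    }
  }
}
"""

resp = requests.post(
    "https://www.lesswrong.com/graphql",  # assumed endpoint
    json={"query": QUERY},
    headers={"User-Agent": "corpus-export-sketch"},
)
resp.raise_for_status()

comments = resp.json()["data"]["comments"]["results"]

# Keep only sufficiently high-karma comments, mirroring the training filter.
KARMA_THRESHOLD = 10  # illustrative cutoff
high_karma = [c for c in comments if (c["baseScore"] or 0) >= KARMA_THRESHOLD]
print(len(high_karma), "comments above threshold")
```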
I thought this was a great gag experiment.