Make an artbreeder that works via RLHF, starting with an MNIST demo.
Make a preference model of a human clicking on images to pick which is better, based on (hidden from the user, but included in the dataset) captions of those images. Maybe implement active learning. Then at the end, show the person what the model thinks are the best images for them from the dataset.
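A minimal sketch of what that preference model could look like, using a Bradley-Terry pairwise model. Everything here is a stand-in I made up for illustration: the "images" are random feature vectors, the hidden caption quality is assumed linear in those features, and the simulated human always clicks the truly better image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each "image" is a feature vector; the hidden caption
# quality is assumed to be a linear function of the features.
dim = 8
true_w = rng.normal(size=dim)
images = rng.normal(size=(200, dim))

def preference_pairs(n_pairs):
    """Simulate a human clicking the better of two random images."""
    pairs = []
    for _ in range(n_pairs):
        i, j = rng.choice(len(images), size=2, replace=False)
        if images[i] @ true_w >= images[j] @ true_w:
            pairs.append((i, j))  # i preferred over j
        else:
            pairs.append((j, i))
    return pairs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bradley-Terry reward model r(x) = w . x, trained with the pairwise
# logistic loss -log sigmoid(r(winner) - r(loser)).
w = np.zeros(dim)
lr = 0.1
for epoch in range(200):
    for win, lose in preference_pairs(64):
        diff = images[win] - images[lose]
        p = sigmoid(w @ diff)
        w += lr * (1.0 - p) * diff  # gradient ascent on log-likelihood

# "Best images for you" at the end: rank the dataset by learned reward.
scores = images @ w
best = np.argsort(scores)[::-1][:5]
```

Active learning could slot in by replacing the random pair sampling with picking the pair whose predicted preference probability is closest to 0.5.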
Make an interface that helps you construct adversarial examples in a pretrained image classifier.
Make an interface that lets you pick out neurons in a neural net image classifier, and shows you images from the dataset that are supposed to tell you the semantics of those neurons.
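The neuron-semantics idea reduces to a small core: record a chosen unit's activation over the dataset and show the top-k images. A sketch of that core, with a random one-hidden-layer "network" standing in for a real pretrained classifier (in practice you would hook an intermediate layer of the real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained classifier: one ReLU hidden layer.
W1 = rng.normal(size=(16, 64))        # 16 hidden "neurons", 64-dim inputs
dataset = rng.normal(size=(500, 64))  # stand-in for flattened images

def top_activating(neuron, k=9):
    """Indices of the k dataset images that most excite one neuron."""
    acts = np.maximum(dataset @ W1.T, 0.0)[:, neuron]  # ReLU activations
    return np.argsort(acts)[::-1][:k]

# The interface would render these k images as a grid for the picked neuron.
grid = top_activating(neuron=3)
```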
Isn't RLHF too complex for people just starting in ML? But I'd be interested in a link to the MNIST demo if you have one.
Preference model: why not, but there is no clear metric, so we cannot easily determine the winner of the hackathon.
Make an interface: this is a cool project idea. But gradient-based methods like the fast gradient sign method (FGSM) generally work very well, and I have no idea what an adversarial GUI interface would look like, so I'm not comfortable with the idea.
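For reference, the FGSM core is only a few lines; an interface could wrap it with an epsilon slider and a side-by-side view of the clean and perturbed image. A sketch on a toy binary logistic "classifier" (the model, weights, and pixel range [0, 1] are all my assumptions, not a real pretrained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "classifier": logistic regression on flat 28x28 inputs.
w = rng.normal(size=784) * 0.1
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps):
    """Fast gradient sign method for the binary logistic model.

    The loss gradient w.r.t. the input is (p - y) * w; FGSM steps by
    eps * sign(grad) and clips back to the valid pixel range [0, 1].
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(size=784)          # stand-in image
x_adv = fgsm(x, y=1.0, eps=0.1)    # perturbation bounded by eps per pixel
```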
Interface to find the image that most activates a given classifier neuron? Cool idea, but I think it's too simple.
Thank you for your help.