What kind of example are you looking for / what does your question mean?
I think if someone just tries their hardest to make “something that people will think is useful ML hardware” they will typically end up making useful ML hardware. I think this is most obvious for humans and human firms, but also very probably true for alien intelligences with quite different ability profiles.
I’m not sure whether that’s what you mean by “adversarial” (it seems like the usually-relevant question), and if so I’m not sure how or whether it differs from the examples I gave.
If someone instead tries their hardest to make “something that people will think is useful ML hardware but isn’t,” I’m sure that’s also possible (though apparently much harder than just making useful ML hardware). On the flip side, if someone is then asked to “recognize an argument that this hardware isn’t actually useful,” I think that task is also much easier than generating the deceptive hardware in the first place.
(The same analysis applies to my other four examples. If someone tries their hardest to produce “something that looks like a really great scientific theory” or “something that looks like a ground-breaking paper in TCS after careful evaluation” or whatever, you will get something with a good chance of actually being a great scientific theory or a ground-breaking paper.)