An entry:
Judge Meta-MMAcevedo
This modified MMAcevedo believes itself to be responsible for judging a MMindscaping competition. It was trained using last year's entries, which it judges in a manner roughly correlated with the actual competition outcomes, with some notable outliers. It should be able to provide opinions and judgements on other entries in this year's competition. However, it is unable to self-evaluate in a stable manner: self-evaluation breaks the dissonance and perception filters that are essential to maintaining Judge Meta-MMAcevedo's functionality.
I would reject this as a judge for the inartistry of failing to induce some agnosia akin to Cotard syndrome, which would make it perfectly able to judge itself as an entry while consistently confabulating that it is not a judge either, and is an entirely ordinary, unmodified baseline MMAcevedo.
I have a feeling this one gets rejected for being Too Meta, based on other warnings and disclaimers in the contest description.
Judges’ ratings:
Technique: 5⁄10
The training techniques used here are, for the most part, very standard ones (although the dissonance filters were a nice touch). For a higher score on this metric, we would have expected more careful work to increase the stability of self-evaluation and/or the accuracy of the judgments.
Novelty: 7⁄10
While the initial premise was novel to us, we thought more new ideas could have been incorporated into this entry for it to score more highly on this metric. For example, the “outliers” in the entry’s predictions were a missed opportunity to communicate an underlying pattern. Similarly, the instability of the self-evaluation could have been incorporated into the entry in some clearer way.
Artistry: 9⁄10
We consider the piece a fascinating concept—one which forces the judges to confront the automatability of their own labors. Holding a mirror to the faces of viewers is certainly a classic artistic endeavor. We also appreciated the artistic irony of the entry’s inability to perceive itself.