I think the paradigm of large-scale data processing is itself uninteresting to me. I want to study problems in machine vision and perception that are not well solved simply by processing large amounts of data — i.e. problems where, if you give me a large amount of data, it is not at all known what I should do with that data to produce a “good” solution. Once you know what you’re supposed to do with the data, I feel like the rest is just engineering, which doesn’t interest me. After three years of reading huge chunks of the vision literature, and contemplating and discussing this with many faculty and researchers, the consensus seems to be that even if such a problem did exist, no one would be interested in publishing results on it or giving you money to study it. This is why I need to trick myself into wanting to study the “uninteresting” engineering / data-processing side.