The machine learning stuff comes with preexisting artificial encoding. We label stuff ourselves.
Generally speaking, that’s not as true as it used to be. In particular, a lot of DeepMind’s work (such as the Atari-playing breakthrough from a while ago) operates on raw video inputs. I haven’t read the paper from the OP to verify it’s the same.
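To give a concrete sense of what "raw video inputs" means in practice, here is a minimal sketch of the kind of frame preprocessing the Atari work used: no hand-crafted labels or features, just grayscale conversion, downsampling, and stacking recent frames so the network can see motion. The exact sizes and the helper names here are illustrative assumptions, not the paper's code.

```python
# Sketch (assumed details) of DQN-style preprocessing for raw RGB frames:
# grayscale, downsample, stack the last few frames into one network input.
import numpy as np

def preprocess(frame_rgb):
    """Convert a raw HxWx3 RGB frame to a small grayscale image in [0, 1]."""
    gray = frame_rgb.astype(np.float32) @ np.array([0.299, 0.587, 0.114],
                                                   dtype=np.float32)
    # Naive 2x downsampling by striding (the original work resizes to 84x84).
    return gray[::2, ::2] / 255.0

def stack_frames(frames):
    """Stack the last 4 preprocessed frames so motion is visible."""
    return np.stack([preprocess(f) for f in frames[-4:]], axis=0)

# Example: four fake 210x160 RGB frames, the native Atari resolution.
frames = [np.random.randint(0, 256, (210, 160, 3), dtype=np.uint8)
          for _ in range(4)]
state = stack_frames(frames)
print(state.shape)  # (4, 105, 80)
```

The point is that everything downstream (the convolutional network, the learned policy) sees only these pixel arrays — there is no preexisting artificial encoding supplied by humans.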
Also, I have the impression that DeepMind takes a “copy the brain” approach fairly seriously, and they think of papers like this as relevant to that. But I am not sure of the details.
They’re working with raw RGB input here too.