The approach EY/MIRI seems to take is to de-anthropomorphize even human intelligence into an optimization engine, a rather machine-like thing. As I mentioned, I am not sure I can buy this, but I am willing to accept it as a working hypothesis. So the starting position is that intelligence is anthropomorphic, MIRI has a model that de-anthropomorphizes it, which is strange and weird but probably useful, yet in the end we probably need to re-anthropomorphize the result. Because if we don't, we don't have AI in the human sense, a conversation machine; we just have a machine that does weird alien stuff pretty efficiently, with rather inscrutable logic.
Why re-anthropomorphize? You have support for modeling other humans because that was selected for, but there's no reason to expect that ability to model humans to be useful for thinking about intelligence abstractly. There's no reason to think about intelligence in human terms; there's only a reason to think about it in terms that let you understand it precisely and, in turn, make it do what you value.
Also, it's neural nets that are inscrutable. Logic only feels inscrutable because you have native support for navigating human social situations and no native support for logic.
What I was asking is how to look at it from the inner view: what the software is on the inside, not what its outputs are. What does intelligence FEEL like? That may give a clue about what intelligent software could actually be like, as opposed to merely what its outputs (optimization) are. To me, a sufficiently challenging item on Raven's Progressive Matrices feels like disassembling a drawing and then reassembling it as a model that predicts what should be in the missing piece. Is that a good approach?
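(As a toy illustration of that "disassemble, infer the rule, reassemble" move: a sketch of my own, not anything from MIRI or a real Raven's solver. The attribute encoding and the `infer_rule` / `predict_missing` helpers are hypothetical names made up for this example.)

```python
# Toy sketch of the "disassemble, infer the rule, reassemble" idea for a
# Raven-style 3x3 matrix. Panels are encoded as attribute dicts purely for
# illustration; a real solver would work on pixels, not hand-made dicts.
from collections import Counter

def infer_rule(row):
    """Infer a per-attribute rule from one complete row: 'constant' if the
    value repeats, 'progression' if the values step by a fixed integer."""
    rules = {}
    for attr in row[0]:
        values = [panel[attr] for panel in row]
        if len(set(values)) == 1:
            rules[attr] = ("constant", None)
        elif (all(isinstance(v, int) for v in values)
              and values[2] - values[1] == values[1] - values[0]):
            rules[attr] = ("progression", values[1] - values[0])
        else:
            rules[attr] = ("unknown", None)
    return rules

def predict_missing(matrix):
    """Predict the ninth panel from the two complete rows and the first two
    panels of the last row."""
    learned = [infer_rule(row) for row in matrix[:2]]   # disassemble
    last_row = matrix[2]
    prediction = {}
    for attr in last_row[0]:
        # Use the rule kind the complete rows agree on most often.
        kind = Counter(r[attr][0] for r in learned).most_common(1)[0][0]
        if kind == "constant":
            prediction[attr] = last_row[0][attr]
        elif kind == "progression":
            step = learned[0][attr][1]
            prediction[attr] = last_row[1][attr] + step  # reassemble/predict
        else:
            prediction[attr] = last_row[1][attr]         # fallback: copy
    return prediction

if __name__ == "__main__":
    matrix = [
        [{"shape": "circle", "count": 1}, {"shape": "circle", "count": 2}, {"shape": "circle", "count": 3}],
        [{"shape": "square", "count": 1}, {"shape": "square", "count": 2}, {"shape": "square", "count": 3}],
        [{"shape": "star", "count": 1}, {"shape": "star", "count": 2}],  # third panel is the puzzle
    ]
    print(predict_missing(matrix))  # -> {'shape': 'star', 'count': 3}
```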
If we knew precisely everything there was to know about intelligence, we would already have AGI. As for what is currently known, you would need to do some studying. I guess I signal more knowledge than I have.
This is AIXI.
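(For reference, Hutter's AIXI is roughly the formalization of that move, extended from prediction to acting for reward. In the standard formulation, at step $k$ the agent picks

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where the $a$ are actions, the $o$ observations, the $r$ rewards, $U$ a universal Turing machine, $\ell(q)$ the length of program $q$, and $m$ the horizon. Every environment program $q$ consistent with the interaction so far is weighted by $2^{-\ell(q)}$: build every model that could have produced what you have seen, weight the simpler ones higher, and act on their aggregate prediction, which is the formal version of reassembling the drawing into a model that predicts the missing piece.)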