I’m having a really hard time modeling your thought process. Like, I don’t know what is generating the things that you are saying; I am confused.
I’m not sure what you mean by inner vs. outer view.
Well, IQ tests test lots of things.
Is IQ like the size of the door on a workshop, determining how big a machine can be brought inside to be broken down?
This seems like a good metaphor for working memory, and even though WM correlates with IQ, it’s also just one component.
I don’t really get what you mean when you say that it’s important how we visualize it.
Well, take, say, AIXI, which sounds like the sort of hypothesis-testing-optimizer-type AI you’re talking about. AIXI takes an action at every timestep. So in a hypothetical where an unbounded AIXI can exist, or where a computable approximation has a whole hell of a lot of resources and a realistic world model, one of those actions could be human natural language, if that happened to be the action that maximized expected reward. So I’d say that you’re anthropomorphizing a bit too much. But AIXI is just the provably best jack-of-all-trades; from what I understand, there could be algorithms that are worse than AIXI in other domains but better in particular ones.
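For reference, the action rule usually written down for AIXI (Hutter’s formulation; I’m reproducing it from memory, so treat the notation as a sketch) is an expectimax over all environment programs, weighted by simplicity. Here U is a universal Turing machine, q ranges over environment programs of length ℓ(q), and m is the horizon:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Nothing in that rule cares what the action channel is; “emit this string of natural language” is just another action, chosen whenever it happens to maximize expected reward.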
I think the keyword to my thought process is anthropomorphizing. The intuitive approach to intelligence is that it is a human characteristic, almost like handsomeness or richness. Hence the pop-culture AI is always an anthropomorphic conversation machine, from Space Odyssey to Matrix 3 to Knight Rider. For example, it should probably have a sense of humor.
The approach EY/MIRI seems to take is to de-anthropomorphize even human intelligence, modeling it as an optimization engine. A pretty machine-like thing. This is what I mentioned I am not sure I can buy, but am willing to accept as a working hypothesis. So the starting position is that intelligence is anthropomorphic; MIRI has a model that de-anthropomorphizes it, which is strange and weird but probably useful; yet at the end we probably need something re-anthropomorphized. Because if not, then we don’t have AI in the human sense, a conversation machine; we just have a machine that does weird alien stuff pretty efficiently, with a rather inscrutable logic.
Looking at humans, the traits considered part of intelligence besides optimization, such as a sense of humor or easily understanding difficult ideas in a conversation, are parts of it too, and they lie outside the optimization domain. The outer view is that we can observe intelligent humans optimizing things, this being one of their characteristics, though not an exhaustive one. But that does not lead to a full understanding of intelligence, just one facet of it, the optimization facet. Optimization is merely an output, an outcome of intelligence: not the process but its result.
So when a human with a high IQ tells you to do something in a different way, that is not intelligence; intelligence was the process that resulted in this optimization. To understand the process, you need to look at something other than optimization, the same way you cannot understand software by looking only at its output.
What I was asking is how to look at it from the inner view. What is the software on the inside, not what its outputs are? What does intelligence FEEL like? That may give a clue about what intelligent software could actually be like, as opposed to merely what its outputs (optimization) are. To me, a sufficiently challenging item on Raven’s Progressive Matrices feels like disassembling a drawing and then reassembling it as a model that predicts what should be in the missing piece. Is that a good approach?
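That decompose-and-predict loop can be written down almost literally. Here is a toy sketch (my own construction, not a real Raven’s item or solver, assuming a made-up 3×3 puzzle where each cell reduces to a couple of attributes): fit a simple per-attribute rule from the two complete rows, then apply it to predict the missing piece.

```python
# Toy sketch of "disassemble the drawing, then rebuild a model that
# predicts the missing piece". Each cell of a made-up 3x3 matrix is
# reduced to two attributes; we fit one rule per attribute from the
# two complete rows and use it to predict the missing cell.

# Hypothetical puzzle: shape is constant within a row, count goes up
# by one from left to right.
matrix = [
    [{"shape": "circle", "count": 1}, {"shape": "circle", "count": 2}, {"shape": "circle", "count": 3}],
    [{"shape": "square", "count": 2}, {"shape": "square", "count": 3}, {"shape": "square", "count": 4}],
    [{"shape": "star",   "count": 3}, {"shape": "star",   "count": 4}, None],  # missing piece
]

def infer_row_rules(rows):
    """For each attribute, decide whether it is constant across a row
    or follows a fixed arithmetic step."""
    rules = {}
    for attr in rows[0][0]:
        per_row = [[cell[attr] for cell in row] for row in rows]
        if all(len(set(values)) == 1 for values in per_row):
            rules[attr] = ("constant", None)
        else:
            steps = {values[i + 1] - values[i]
                     for values in per_row for i in range(len(values) - 1)}
            assert len(steps) == 1, f"no simple rule found for {attr}"
            rules[attr] = ("step", steps.pop())
    return rules

def predict_missing(matrix):
    complete_rows = [row for row in matrix if None not in row]
    rules = infer_row_rules(complete_rows)
    partial_row = next(row for row in matrix if None in row)
    prev_cell = partial_row[partial_row.index(None) - 1]
    return {attr: prev_cell[attr] if kind == "constant" else prev_cell[attr] + step
            for attr, (kind, step) in rules.items()}

print(predict_missing(matrix))  # {'shape': 'star', 'count': 5}
```

Real items obviously use richer transformations, but the shape of the loop, taking the drawing apart into features, fitting a model of how the features change, and reading the missing cell off the model, is the one I mean.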
“IQ” is just a term for something on the map. It’s what we measure. It’s not a platonic idea, and it’s a mistake to treat it as such.
On the other hand, it’s a useful measurement. It correlates with a lot of quantities that we care about, and we know that because people did scientific studies. That allows us to see things we wouldn’t see if we just reasoned from an armchair with concepts we developed as we went along in our daily lives.
Scientific thinking needs well-defined concepts like IQ, concepts that have a precise meaning and don’t just mean whatever we feel they mean.
Those concepts have value when you move in areas where the naive map breaks down and doesn’t describe the territory well anymore.
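Going back to “it correlates with a lot of quantities that we care about”: as a minimal sketch of what that means operationally, here is a Pearson correlation over made-up paired measurements. The numbers are purely illustrative; real IQ-outcome correlations come from large, controlled samples and are far more modest than on this toy data.

```python
# Minimal sketch of what "IQ correlates with outcomes we care about"
# means operationally: a Pearson correlation over paired measurements.
# The data below is made up for illustration only.
from statistics import mean, stdev

iq = [88, 95, 100, 104, 110, 118, 125, 131]   # hypothetical IQ scores
income = [31, 35, 38, 36, 44, 49, 55, 61]     # hypothetical outcome, $k/year

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(round(pearson_r(iq, income), 2))  # ~0.98 on this cherry-picked toy data
```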
Why re-anthropomorphize? You have support for modeling other humans because that was selected for, but there’s no reason to expect that ability to model humans to be useful for thinking about intelligence abstractly. There’s no reason to think about it in human terms; there’s only a reason to think about it in terms that allow you to understand it precisely and thereby make it do what you value.
Also, neural nets are inscrutable. Logic just feels inscrutable because you have native support for navigating human social situations and no native support for logic.
If we knew precisely everything there was to know about intelligence, there would be AGI. As for what is now known, you would need to do some studying. I guess I signal more knowledge than I have.
What is AIXI?
This is AIXI.