They’re not exactly patrolling Reddit for critics, but I’ll bite.
From what I understand, Bostrom’s only premise is that intelligent machines can in principle perform any intellectual task that a human can, and this includes the design of intelligent machines. Juliano says that Bostrom takes hard-takeoff as a premise:
The premise of Bostrom’s book is based on the assumption that the moment an advanced AI is created that it will overcome the human race within minutes to hours.
He doesn’t do that. Chapter 4 of Superintelligence addresses both hard- and soft-takeoff scenarios. However, Bostrom does consider medium- to hard-takeoff scenarios more likely than soft-takeoff scenarios.
Another thing, when he says:
This extraordinary claim is neither explained nor substantiated with any evidence. It is just expected that the reader will take it at face value that this will happen. It’s an incoherent premise from a scientific standpoint, but the idea is so sensational that people who don’t understand the technical issues behind why that is not going to happen don’t care.
There can’t be direct evidence of an intelligence explosion because one hasn’t happened yet. The prediction of an intelligence explosion is an extrapolation from our current scientific generalizations. This sort of criticism can be made against anything that is possible in principle but has not yet happened. If he wanted to argue against the possibility of an intelligence explosion, he would need to explain how it isn’t in line with our current generalizations. You need a more sophisticated algorithm for evaluating claims than “evidence = good & no-evidence = bad” to avoid mistakes like this. He does sort of imply that he doesn’t think it’s in line with our generalizations when he says “people [...] don’t understand the technical issues behind why that is not going to happen”, which would be a step in the right direction, but he never actually says where he disagrees.
Also, Bostrom has a whole section in Chapter 14 on whether or not AGI should be a collaborative effort, and he’s strongly in favor of collaboration. Race dynamics penalize safety-conscious AGI projects, and collaboration mitigates the risk of a race dynamic. Also, most people’s preferences are resource-satiable: there’s not much more that someone could do with a billion galaxies’ worth of resources than with one galaxy’s worth. So it’s better for everyone to collaborate and maximize their chances of getting something (which in this scenario is necessarily a lot) than to take on a large risk of getting nothing for a small chance of getting far more than they would probably ever want.
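To make the satiability point concrete, here’s a toy expected-utility comparison. The probabilities and payoffs are made up purely for illustration; the only structural assumption is that utility stops growing once you have more resources than you could ever use:

```python
# Toy illustration of why satiable preferences favor collaboration.
# All numbers are hypothetical; utility 1.0 means "as many resources as
# you could ever meaningfully use" (say, one galaxy's worth).

def satiable_utility(resources, cap=1.0):
    """Utility stops growing once resources exceed the cap."""
    return min(resources, cap)

# Racing: a small chance of grabbing everything, a large chance of nothing.
p_win = 0.1
race_ev = p_win * satiable_utility(1_000_000.0) + (1 - p_win) * satiable_utility(0.0)

# Collaborating: a near-certain share that already exceeds the cap.
p_success = 0.9
collab_ev = p_success * satiable_utility(10.0) + (1 - p_success) * satiable_utility(0.0)

print(f"expected utility if racing:        {race_ev:.2f}")  # 0.10
print(f"expected utility if collaborating: {collab_ev:.2f}")  # 0.90
```

With a utility cap, the extra resources you might win in a race buy you nothing, so all that matters is the probability of ending up with anything at all.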
But this is a very different conception from Juliano’s, because I guess Juliano doesn’t think that machines could become far more intelligent than any human. His recommendations make sense if you think that strong AI is sort of like really smart computer viruses, and all we need to do is have an open community that collaborates to enact countermeasures like we do with modern computer viruses. But if you think that superintelligent machines are in line with our current generalizations, then his suggestions are wholly inadequate.
Can you recommend an article about the inner view of intelligence? The outer view seems to be an optimization ability, which I am not sure I buy but won’t challenge either; let’s say I accept it as a working hypothesis. But what is it on the inside? Can we say that it is like a machine shop, where ideas are first disassembled, and this is called understanding them: taking them apart and seeing their connections (Latin: intelligo = to understand), and then reassembling them, e.g. to generate a prediction? Is IQ the size of the door on the shop that determines how big a machine can be brought in for breaking down?
For example, randomly generating hypotheses and testing them, while it may be very efficient for optimization, does not really sound like textbook intelligence. Textbook intelligence must have a feature of understanding, and understanding is IMHO idea-disassembly, model-disassembly. Intelligence-as-understanding (intelligo), interpreted as the ability to understand ideas proposed by other minds and hence as conversational ability, has this disassembly feature.
From this angle one could build an efficient hypothesis-generator-and-tester type optimizer that is not intelligent in the textbook sense: it is not good at “intelligo” and could not discuss Kant’s philosophy. I am not sure I would call that AI, and it is not simply a question of terminology; most popular AI fiction is about conversation machines, not “silent optimizers”, so it matters how we visualize it.
I’m having a really hard time modeling your thought process. Like, I don’t know what is generating the things that you are saying; I am confused.
I’m not sure what you mean by inner vs. outer view.
Well, IQ tests test lots of things.
Is IQ the size of the door on the shop that determines how big a machine can be brought in for breaking down?
This seems like a good metaphor for working memory, and even though WM correlates with IQ, it’s also just one component.
I don’t really get what you mean when you say that it’s important how we visualize it.
Well, take, say, AIXI, which sounds like the sort of hypothesis-testing-optimizer-type AI you’re talking about. AIXI takes an action at every timestep, so if you consider a hypothetical where unbounded AIXI can actually exist, or maybe a computable approximation with a whole hell of a lot of resources and a realistic world model, one of those actions could be human natural language, if that happened to be the action that maximized expected reward. So I’d say that you’re anthropomorphizing a bit too much. But AIXI is just the provably-best jack-of-all-trades; from what I understand there could be algorithms that are worse than AIXI in other domains but better in particular domains.
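For reference, here is roughly what AIXI’s definition looks like; I’m writing this from memory of Hutter’s papers, so take the notation as a sketch rather than gospel. At each step it picks the action that maximizes expected future reward, where the expectation runs over every program q consistent with the interaction history so far, weighted by 2 to the minus length of q:

\[
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[r_t + \cdots + r_m\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

Here U is a universal Turing machine, \(\ell(q)\) is the length of program q, and m is the horizon. Nothing in there is about conversation or humor; natural language only shows up if emitting it happens to be the reward-maximizing action.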
I think the keyword to my thought process is anthropomorphizing. The intuitive approach to intelligence is that it is a human characteristic, almost like handsomeness or richness. Hence pop-culture AI is always an anthropomorphic conversation machine, from Space Odyssey to Matrix 3 to Knight Rider. For example, it should probably have a sense of humor.
The approach EY/MIRI seems to take is to de-anthropomorphize even human intelligence into an optimization engine, a pretty machine-like thing. This is what I mentioned I am not sure I can buy, but am willing to accept as a working hypothesis. So the starting position is that intelligence is anthropomorphic; MIRI has a model that de-anthropomorphizes it, which is strange and weird but probably useful; yet at the end we probably need something re-anthropomorphized. Because if not, then we don’t have AI in the human sense, a conversation machine; we just have a machine that does weird alien stuff pretty efficiently with rather inscrutable logic.
Looking at humans, the traits that are considered part of intelligence besides optimization, such as a sense of humor or easily understanding difficult ideas in a conversation, lie outside the optimization domain. The outer view is that we can observe intelligent humans optimizing things, this being one of their characteristics, though not an exhaustive one. But that does not lead to a full understanding of intelligence, just one facet of it, the optimization facet. Optimization is merely an output, an outcome of intelligence; it is not the process but its result.
So when a human with a high IQ tells you to do something in a different way, that is not intelligence itself; intelligence was the process that produced the optimization. To understand the process, you need to look at something other than the optimization, the same way you cannot understand software just by looking at its output.
What I was asking is how to look at it from the inner view: what the software is on the inside, not what its outputs are. What does intelligence FEEL like? That may give a clue about what intelligent software could actually be like, as opposed to merely what its outputs (optimization) are. To me a sufficiently challenging item on Raven’s Progressive Matrices feels like disassembling a drawing and then reassembling it as a model that predicts what should be in the missing piece. Is that a good approach?
“IQ” is just a term for something on the map. It’s what we measure. It’s not a platonic idea. It’s a mistake to treat it as such.
On the other hand, it’s a useful measurement. It correlates with a lot of quantities that we care about. We know that because people did scientific studies. That allows us to see things that we wouldn’t see if we just reasoned from an armchair with concepts that we developed as we went along in our daily lives.
Scientific thinking needs well-defined concepts like IQ that have a precise meaning and that don’t just mean whatever we feel they mean.
Those concepts have value when you move in areas where the naive map breaks down and doesn’t describe the territory well anymore.
The approach EY/MIRI seems to take is to de-anthropomorphize even human intelligence into an optimization engine, a pretty machine-like thing. This is what I mentioned I am not sure I can buy, but am willing to accept as a working hypothesis. So the starting position is that intelligence is anthropomorphic; MIRI has a model that de-anthropomorphizes it, which is strange and weird but probably useful; yet at the end we probably need something re-anthropomorphized. Because if not, then we don’t have AI in the human sense, a conversation machine; we just have a machine that does weird alien stuff pretty efficiently with rather inscrutable logic.
Why re-anthropomorphize? You have native support for modeling other humans because that was selected for, but there’s no reason to expect that ability to be useful for thinking about intelligence abstractly. There’s no reason to think about intelligence in human terms; there’s only a reason to think about it in terms that allow you to understand it precisely and thereby make it do what you value.
Also, neural nets are inscrutable. Logic just feels inscrutable because you have native support for navigating human social situations and no native support for logic.
What I was asking is how to look at it from the inner view: what the software is on the inside, not what its outputs are. What does intelligence FEEL like? That may give a clue about what intelligent software could actually be like, as opposed to merely what its outputs (optimization) are. To me a sufficiently challenging item on Raven’s Progressive Matrices feels like disassembling a drawing and then reassembling it as a model that predicts what should be in the missing piece. Is that a good approach?
If we knew precisely everything there was to know about intelligence, there would be AGI. As for what is now known, you would need to do some studying. I guess I signal more knowledge than I have.
What is AIXI?
This is AIXI.