I haven’t offered a definition of intelligence as far as I know, so I’m a little bewildered to suddenly be talking about what does or doesn’t match it.
I infer from the rest of your comment that what you’re taking to be my definition of intelligence is “a process that modifies its environment to more effectively achieve a pre-determined result”, which I neither intended nor endorse as a definition of intelligence.
That aside, though, I think I now understand the context of your initial response… thanks for the clarification. It almost completely fails to overlap with the context I intended in the comment you were responding to.
Well, the point is that if we stop using ‘intelligence’ to describe it and instead use ‘really powerful optimization process’ or the like, we get to things like:
“a process that modifies its environment to more effectively achieve a pre-determined result”
which is a very apt description of scary AI, but not of anything that is normally described as intelligent. This way, the scary AI has more in common with gray goo than with anything normally described as intelligent.
I infer from this that your preferred label for the class of things we’re talking around is “intelligence”. Yes?
Edit: I subsequently infer from the downvote: no. Or perhaps irritation that I still want my original question answered. Edit: Repudiating previous edit.
I didn’t downvote this comment.
The preferred label for things like a seed AI, or a giant neural network sim, or the like, should be ‘intelligence’, unless they are actually written as a “really powerful optimization process”, in which case it is useful to refer to what exactly they optimize (which is something within themselves, not outside). The scary-AI idea arises from a lack of understanding of what intelligences do, and from latching onto the first plausible definition: that they optimize towards a goal defined from the start. It may be a good idea to refer to the scary AIs as really powerful optimization processes that optimize the real world towards some specific state, but don’t confuse these with intelligences in general, of which they are a tiny, and so far purely theoretical, subset.
OK, cool.
So, suppose I am looking at a system in the world… call it X. Perhaps X is a bunch of mold in my refrigerator. Perhaps it’s my neighbor’s kid. Perhaps it’s a pile of rocks in my backyard. Perhaps it’s a software program on my computer. Perhaps it’s something on an alien planet I’m visiting. Doesn’t matter.
Suppose I want to know whether X is intelligent.
What would you recommend I pay attention to in X’s observable behavior in order to make that decision? That is, what observable properties of X are evidence of intelligence?
Well, if you observe it optimizing something very powerfully, it might be intelligent, or it might be a thermostat hooked up to a high-powered heater and cooler with a PID controller. I define intelligence as the capability to solve problems, which is about choosing a course of action out of a giant space of possible courses of action based on some sort of criteria, where normally there is no obvious polynomial-time solution. One could call that a ‘powerful optimization process’, but that brings in the connotation of the choice having some strong effect on the environment (which you yourself mentioned), while one could just as well posit an agent whose goal includes preservation of the status quo (i.e. the way things would have been without it) and minimization of its own impact, to the detriment of other goals that appeal more to us. That agent could still be very intelligent even though its modification of the environment would be smaller than that of some dumber agent working under the exact same goals with the exact same weights: the smarter agent searches a larger space of possible solutions and finds solutions that satisfy both goals better. The agents may even run an identical algorithm at different CPU speeds, and the faster one may end up visibly ‘optimizing’ its environment less.
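The thermostat/search distinction above can be made concrete with a toy sketch. Everything here (the 1-D task, the scoring weights, the function names) is my own illustrative invention, not anything from the thread: a thermostat applies a fixed feedback rule with no search at all, while the problem-solving notion of intelligence picks a course of action out of an exponentially large space of candidates scored by criteria, here including a crude penalty on the agent’s own impact, as in the status-quo-preserving agent described.

```python
from itertools import product

# A thermostat "optimizes" temperature with a fixed feedback rule:
# no search over a space of plans, just a reaction to the current reading.
def thermostat(temp, target=20.0):
    return "heat" if temp < target else "cool"

# Problem solving as described above: choose a course of action out of
# a large space of candidate courses of action, scored by criteria.
# The "world" is a toy 1-D position task; one criterion penalizes the
# agent's impact on the environment (total disturbance it causes).
def score(plan, start=0, goal=3, impact_weight=0.5):
    pos, impact = start, 0
    for step in plan:          # each step is -1, 0, or +1
        pos += step
        impact += abs(step)    # crude proxy for disturbance caused
    return -abs(pos - goal) - impact_weight * impact

def choose_plan(horizon):
    # Brute-force search: the space grows as 3**horizon, so even this
    # toy has no obvious shortcut past enumerating candidates.
    return max(product((-1, 0, 1), repeat=horizon), key=score)

best = choose_plan(6)
print(best, score(best))
```

Note how the best plan reaches the goal with only the three moves it strictly needs, idling otherwise: the impact penalty makes the "smarter" (more exhaustive) searcher disturb the toy world no more than necessary, whereas the thermostat never weighs alternatives at all.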
edit: I imagine there can be different definitions of intelligence. Eliezer grew up in a religious family, is himself an atheist, and seems to see the function of intelligence as primarily forming correct beliefs; something I don’t find so plausible, given that there are very intelligent people who believe in some really odd things, which are so defined as to have no impact on their lives. That’s similar to believing that all behaviours under MWI should equate to normality, while believing in MWI. There are also intelligent people whose strange beliefs do have an impact on their lives. The one thing common to the people I’d call intelligent is that they are good at problem solving. The problems being solved vary, and some people’s intelligence is tasked with making an unfalsifiable theory of the dragon in their garage. Few people would think of this behaviour when they hear the phrase ‘optimization process’. But the majority of intelligent people’s intelligence is tasked with something very silly most of the time.
OK.
So, echoing that back to you to make sure I understand so far: one important difference between “intelligence” and “optimization process” is that the latter (at least connotatively) implies affecting the environment whereas the former doesn’t. We should be more concerned with the internal operations of the system than with its effects on the environment, and therefore we should talk about “intelligence” rather than “optimization process.” Some people believe “intelligence” refers to the ability to form correct beliefs, but it properly refers to the ability to choose a specific course of action out of a larger space of possibilities based on how well it matches a criterion.
Is that about right, or have I misunderstood something key?
Well, ‘optimization process’ has the connotation of making something more optimal, the connotations of a certain productivity and purpose. ‘Optimal’ is a positive word. Intelligence, on the other hand, can have goals even less productive than tiling the universe with paperclips. The problem with the word ‘intelligence’ is that it may or may not carry positive moral connotations. The internal operation shouldn’t really be very relevant in theory, but in practice, you can have a dumb brick that is just sitting there, and you can have a brick of computronium inside which an entire boxed society lives that for some reason decided to go solipsist and deny the outside of the brick. Or you can have a brick that is sitting there plotting to take over the world but hasn’t made a single move yet (and is going to chill out for another million years, because it is patient and isn’t really in a hurry: its goal is bounded, and it is, e.g., safely in orbit).
If you start talking about powerful optimization processes that do something in the real world, you leave out all the simple, probable, harmless goal systems that an AI can have (while still being immensely useful). External goals are enormously difficult to define for a system that builds its own model of the world.
Agreed that “optimization process” connotes purpose and making something more optimal in the context of that purpose.
Agreed that “optimal” has positive connotations.
Agreed that an intelligence can have goals that are unproductive, in the colloquial modern cultural sense of “unproductive”.
Agreed that “intelligence” may or may not have positive moral connotations.
Agreed that internal operations that don’t affect anything outside the black box are of at-best-problematic relevance to anything outside that box.
Completely at a loss for how any of that relates to any of what I said, or answers my question.
I think I’m going to tap out of the conversation here. Thanks for your time.
Well, I think you understood what I meant; it just felt as if you made a short summary partially out of context. People typically (i.e. virtually always) do that for the purpose of twisting other people’s words later on. Arguments over definitions are usually (virtually always) a debate technique designed to obscure the topic and substitute meanings so as to edge towards some predefined conclusion. In particular, most typically, one would want to substitute ‘powerful optimization process’ for ‘intelligence’ to create support for the notion of scary AI.
I do it, here and elsewhere, because most of your comments seem to me entirely orthogonal to the thing they ostensibly respond to, and the charitable interpretation of that is that I’m failing to understand your responses the way you meant them, and my response to that is typically to echo back those responses as I understood them and ask you to either endorse my echo or correct it.
Which, frequently, you respond to with yet another comment that seems to me entirely orthogonal to my request.
But I can certainly appreciate why, if you’re assuming that I’m trying to twist your words and otherwise being malicious, you’d refuse to cooperate with me in this project.
That’s fine; you’re under no obligation to cooperate, and your assumption isn’t a senseless one.
Neither am I under any obligation to keep trying to communicate in the absence of cooperation, especially when I see no way to prove my good will, especially given that I’m now rather irritated at having been treated as malicious until proven otherwise.
So, as I said, I think the best thing to do is just end this exchange here.
Not really malicious as such; it is just an extremely common pattern of behaviour. People are goal-driven agents, and their reading is also goal-driven, picking the meanings of words to fit some specific goal, which is surprisingly seldom understanding. Especially on a charged issue like the risks of anything, where people typically choose their position via some mix of political orientation, cynicism, etc., and then defend that position like a lawyer defending a client. edit: I guess it echoes the assumption that an AI typically isn’t friendly if it has pre-determined goals that it optimizes towards. People typically do have pre-determined goals in discussion.
Sure. And sometimes those goals don’t involve understanding, and involve twisting other people’s words, obscuring the topic, and substituting meanings to edge the conversation towards a predefined conclusion, just as you suggest. In fact, that’s not uncommon. Agreed.
If you mean to suggest by that that I ought not be irritated by you attributing those properties to me, or that I ought not disengage from the conversation in consequence, well, perhaps you’re right. Nevertheless I am irritated, and am consequently disengaging.