Yup, I’m not sure I do either. It’s clear to me that “intelligence” has connotations that “optimizer” lacks, with the result that the former label in practice refers to a subset of the latter, but connotations are notoriously difficult to pin down precisely. One approximation is that “intelligent” is often strongly associated with human-like intelligence, so an optimizer that is significantly non-human-like in a way that’s relevant to the domain of discourse is less likely to be labelled “intelligent.”
It seems to me that “optimizer” has picked up a lot of connotations of a specific architecture, and an unworkable architecture, too, for making it care about doing things in the real world.
Interesting. What specific architectural connotations do you see?
Well, the way I see it, you take the possibility of an AI that just, e.g., maximizes the performance of an airplane wing inside a fluid simulator (by conducting a zillion runs of this simulator), and then, after a bit of map-territory confusion and misunderstanding of how that optimizer works, you equate this with optimizing a real-world wing in the real world (without conducting a zillion trials in the real world, evolution-style). The latter raises the issues of symbol grounding, of building a model of the world, of optimizing inside this model and then building the result in the real world, et cetera.
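To make the brute-force picture concrete, here is a minimal sketch of that sort of optimizer; the “simulator”, its parameters, and the scoring function are toy stand-ins I made up, not a real fluid model:

```python
import random

def simulate_wing(chord, camber):
    # Toy stand-in for a fluid simulator: returns a performance score
    # to maximize. (Hypothetical objective with a peak at chord=0.7,
    # camber=0.05; a real simulator would be vastly more complex.)
    return -((chord - 0.7) ** 2 + (camber - 0.05) ** 2)

best_params, best_score = None, float("-inf")
for _ in range(100_000):  # the "zillion runs", all inside the simulator
    candidate = (random.uniform(0.1, 2.0), random.uniform(0.0, 0.2))
    score = simulate_wing(*candidate)
    if score > best_score:
        best_params, best_score = candidate, score
```

Nothing here grounds “wing” in the territory; the search never leaves the map, and that is exactly the gap glossed over when this gets equated with optimizing a real wing in the real world.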
Interesting. It would never have occurred to me to assume that “optimizer” connotes a trial-and-error brute-force-search architecture of this sort, but apparently it does for at least some listeners. Good to know. So on balance do you endorse “intelligence” instead, or do you prefer some other label for a process that modifies its environment to more effectively achieve a pre-determined result?
That is the issue: you assume the conclusion. Let’s just call it scary AI, and agree that scary AI is, by definition, scary.
Then let’s move on to actual implementations other than brute-force nonsense: the actual implementations that need to build a model of the world and have to operate on the basis of this model rather than on the world itself (excluding, e.g., evolution, which doesn’t need this), processes which may or may not be scary.
Certainly agreed that it’s more useful to implement the thing than to label it. If you have ideas for how to do that, by all means share them. I suggest you do so in a new post, rather than in the comments thread of an unrelated post.
To the extent that we’re still talking about labels, I prefer “optimizer” to “scary AI,” especially when used to describe a class that includes things that aren’t scary, things that aren’t artificial, and things that are at-least-not-unproblematically intelligent. Your mileage may vary.
The phrase “assuming the conclusion” is getting tossed around an awful lot lately. I’m at a loss for what conclusion I’m assuming in the phrase you quote, or what makes that “the issue.” And labelling the whole class of things-we’re-talking-about as “scary AIs” seems to be assuming quite a bit, so if you meant that as an alternative to assuming a conclusion, I’m really bewildered.
Agreed that the distinction you suggest between model-based whatever-they-ares and non-model-based whatever-they-ares is a useful distinction in a lot of discussions.
None of the existing things described as intelligent match your definition of intelligence, and of the hypothetical ones, only the scary and friendly AIs do (I see the friendly AI as a subtype of scary AI).
Evolution: doesn’t really work in the direction of doing anything pre-defined to the environment. Mankind: ditto; go ask the ancient Egyptians what exactly we are optimizing about the environment, or what pre-determined result we were working towards. Individual H. sapiens: some individuals might do something like that, but they don’t come very close. Narrow AIs like circuit designers, airplane-wing optimizers, and such: they don’t work on the environment at all.
Only the scary AI fits your definition here. That’s part of why this FAI effort and the scary-AI scare are seen as complete nonsense. There isn’t a single example of general intelligence, natural or otherwise, that works by your definition. Your definition of intelligence is narrowed down to the tiny but extremely scary area right near the FAI, and it excludes all the things anyone normally describes as intelligent.
I haven’t offered a definition of intelligence as far as I know, so I’m a little bewildered to suddenly be talking about what does or doesn’t match it.
I infer from the rest of your comment that what you’re taking to be my definition of intelligence is “a process that modifies its environment to more effectively achieve a pre-determined result”, which I neither intended nor endorse as a definition of intelligence.
That aside, though, I think I now understand the context of your initial response… thanks for the clarification. It almost completely fails to overlap with the context I intended in the comment you were responding to.
Well, the point is that if we stop using “intelligence” to describe it and start using “really powerful optimization process” or the like, we get to things like:
“a process that modifies its environment to more effectively achieve a pre-determined result”
which is a very apt description of a scary AI, but not of anything that is normally described as intelligent. On this view, the scary AI has more in common with gray goo than with anything normally described as intelligent.
I infer from this that your preferred label for the class of things we’re talking around is “intelligence”. Yes?
Edit: I subsequently infer from the downvote: no. Or perhaps irritation that I still want my original question answered. Edit: Repudiating previous edit.
I didn’t downvote this comment.
The preferred label for things like a seed AI, or a giant neural-network sim, or the like, should be intelligence, unless they are actually written as a “really powerful optimization process”, in which case it is useful to refer to what exactly they optimize (which is something within themselves, not outside). The scary-AI idea arises from a lack of understanding of what intelligences do, and from latching onto the first plausible definition: optimizing towards a goal defined from the start. It may be a good idea to refer to the scary AIs as really powerful optimization processes that optimize the real world towards some specific state, but don’t confuse these with intelligences in general, of which they are a tiny, and so far purely theoretical, subset.
OK, cool.
So, suppose I am looking at a system in the world… call it X. Perhaps X is a bunch of mold in my refrigerator. Perhaps it’s my neighbor’s kid. Perhaps it’s a pile of rocks in my backyard. Perhaps it’s a software program on my computer. Perhaps it’s something on an alien planet I’m visiting. Doesn’t matter.
Suppose I want to know whether X is intelligent.
What would you recommend I pay attention to in X’s observable behavior in order to make that decision? That is, what observable properties of X are evidence of intelligence?
Well, if you observe it optimizing something very powerfully, it might be intelligent, or it might be a thermostat with a high-powered heater and cooler and a PID controller. I define intelligence as the capability of solving problems, which is about choosing a course of action out of a giant space of possible courses of action based on some sort of criteria, where normally there is no obvious polynomial-time solution. One could call it a “powerful optimization process”, but that brings the connotation that the choice has some strong effect on the environment (which you yourself mentioned). One could just as well posit an agent whose goal includes preservation of the status quo (i.e. the way things would have been without it) and minimization of its own impact, to the detriment of other goals that appeal more to us. That agent could still be very intelligent even though its modification of the environment would be smaller than that of some dumber agent working under the exact same goals with the exact same weights, since the smarter agent searches a larger space of possible solutions and finds solutions that satisfy both goals better. The two agents may even run the identical algorithm at different CPU speeds, and the faster one may end up visibly “optimizing” its environment less.
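The thermostat case is easy to make concrete. A textbook PID controller (the gains and the one-line “plant” below are toy values I picked for illustration) steers its environment very effectively toward a pre-set state while considering no space of candidate actions at all:

```python
def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=0.1):
    # One step of a standard PID controller: the output is a fixed
    # weighted sum of the error, its accumulated integral, and its
    # rate of change. No search over plans, just arithmetic.
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy plant: drive a room from 5 degrees toward a 20-degree setpoint.
temp, state = 5.0, (0.0, 0.0)
for _ in range(1000):
    power, state = pid_step(20.0 - temp, state)
    temp += 0.1 * power  # temperature responds linearly to heater power
```

It homes in on the setpoint powerfully and reliably, yet there is no problem-solving in the sense above: nothing resembling a giant space of courses of action is ever examined.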
edit: I imagine there can be different definitions of intelligence. Eliezer grew up in a religious family, is himself an atheist, and seems to see the function of intelligence as primarily forming correct beliefs; something I don’t find so plausible, given that there are very intelligent people who believe in some really odd things, which are so defined as to have no impact on their lives. That’s similar to believing in MWI while holding that all behaviours under MWI should equate to normality. There are also intelligent people whose strange beliefs do have an impact on their lives. The one thing common to the people I’d call intelligent is that they are good at problem solving. The problems being solved vary, and some people’s intelligence is tasked with constructing an unfalsifiable theory of the dragon in their garage. Few people would think of this behaviour when they hear the phrase “optimization process”. But the majority of intelligent people’s intelligence is tasked with something very silly most of the time.
OK.
So, echoing that back to you to make sure I understand so far: one important difference between “intelligence” and “optimization process” is that the latter (at least connotatively) implies affecting the environment whereas the former doesn’t. We should be more concerned with the internal operations of the system than with its effects on the environment, and therefore we should talk about “intelligence” rather than “optimization process.” Some people believe “intelligence” refers to the ability to form correct beliefs, but it properly refers to the ability to choose a specific course of action out of a larger space of possibilities based on how well it matches a criterion.
Is that about right, or have I misunderstood something key?
Well, “optimization process” has the connotations of making something more optimal, the connotations of a certain productivity and purpose. Optimal is a positive word. An intelligence, on the other hand, can have goals even less productive than tiling the universe with paperclips. The problem with the word intelligence is that it may or may not have positive moral connotations. The internal operation shouldn’t really be very relevant in theory, but in practice, you can have a dumb brick that is just sitting there, and you can have a brick of computronium inside which an entire boxed society lives, one which for some reason decided to go solipsist and deny the outside of the brick. Or you can have a brick that is sitting there plotting to take over the world but hasn’t made a single move yet (and is going to chill out for another million years, because it has patience and isn’t really in a hurry, since its goal is bounded and it’s, e.g., safely in orbit).
If you start talking about powerful optimization processes that do something in the real world, you leave out all the simple, probable, harmless goal systems that an AI can have (and still be immensely useful). External goals are enormously difficult to define for a system that builds its own model of the world.
Agreed that “optimization process” connotes purpose and making something more optimal in the context of that purpose.
Agreed that “optimal” has positive connotations.
Agreed that an intelligence can have goals that are unproductive, in the colloquial modern cultural sense of “unproductive”.
Agreed that “intelligence” may or may not have positive moral connotations.
Agreed that internal operations that don’t affect anything outside the black box are of at-best-problematic relevance to anything outside that box.
Completely at a loss for how any of that relates to any of what I said, or answers my question.
I think I’m going to tap out of the conversation here. Thanks for your time.
Well, I think you understood what I meant; it just felt as if you made a short summary partially out of context. People typically (i.e. virtually always) do that for the purpose of twisting other people’s words later on. Arguments over definitions are usually (virtually always) a debate technique designed to obscure the topic and substitute meanings so as to edge towards some predefined conclusion. In particular, most typically one would want to substitute “powerful optimization process” for intelligence to create support for the notion of scary AI.
I do it, here and elsewhere, because most of your comments seem to me entirely orthogonal to the thing they ostensibly respond to, and the charitable interpretation of that is that I’m failing to understand your responses the way you meant them, and my response to that is typically to echo back those responses as I understood them and ask you to either endorse my echo or correct it.
Which, frequently, you respond to with yet another comment that seems to me entirely orthogonal to my request.
But I can certainly appreciate why, if you’re assuming that I’m trying to twist your words and otherwise being malicious, you’d refuse to cooperate with me in this project.
That’s fine; you’re under no obligation to cooperate, and your assumption isn’t a senseless one.
Neither am I under any obligation to keep trying to communicate in the absence of cooperation, especially when I see no way to prove my good will, especially given that I’m now rather irritated at having been treated as malicious until proven otherwise.
So, as I said, I think the best thing to do is just end this exchange here.
Not malicious as such; it’s just an extremely common pattern of behaviour. People are goal-driven agents, and their reading is also goal-driven, picking the meanings of words so as to fit some specific goal, which is surprisingly seldom understanding. Especially on a charged issue like the risks of anything, where people typically choose their position via some mix of political orientation, cynicism, etc., and then defend this position like a lawyer defending a client. edit: I guess it echoes the assumption that an AI typically isn’t friendly if it has pre-determined goals that it optimizes towards. People typically do have pre-determined goals in discussion.
Sure. And sometimes those goals don’t involve understanding, and involve twisting other people’s words, obscuring the topic, and substituting meanings to edge the conversation towards a predefined conclusion, just as you suggest. In fact, that’s not uncommon. Agreed.
If you mean to suggest by that that I ought not be irritated by you attributing those properties to me, or that I ought not disengage from the conversation in consequence, well, perhaps you’re right. Nevertheless I am irritated, and am consequently disengaging.
Just by me, right? I deliberately used it like fifty times. (FWIW I’m not sure but I think Dmytry misunderstood you somewhere/somehow.)
I don’t know; I tend to antikibbitz unless I’m involved in the conversation. This most recent time was Dmytry, certainly. The others may have been you.
And I’m not really sure what’s going on between me and Dmytry, really, though we sure do seem to be talking at cross-purposes. Perhaps he misunderstood me, I don’t know.
That said, it’s a failure mode I’ve noticed I get into not-uncommonly. My usual reaction to a conversation getting confusing is to slow all the way down and take very small steps and seek confirmation for each step. Usually it works well, but sometimes interlocutors will neither confirm my step, nor refute it, but rather make some other statement that’s just as opaque to me as the statement I was trying to clarify, and pretty soon I start feeling like they’re having a completely different conversation to which I haven’t even been invited.
I don’t know a good conversational fix for this; past a certain point I tend to just give up and listen.