So, echoing that back to you to make sure I understand so far: one important difference between “intelligence” and “optimization process” is that the latter (at least connotatively) implies affecting the environment whereas the former doesn’t. We should be more concerned with the internal operations of the system than with its effects on the environment, and therefore we should talk about “intelligence” rather than “optimization process.” Some people believe “intelligence” refers to the ability to form correct beliefs, but it properly refers to the ability to choose a specific course of action out of a larger space of possibilities based on how well it matches a criterion.
Is that about right, or have I misunderstood something key?
Well, ‘optimization process’ has the connotations of making something more optimal, the connotations of a certain productivity and purpose. ‘Optimal’ is a positive word. An intelligence, on the other hand, can have goals even less productive than tiling the universe with paperclips. The problem with the word ‘intelligence’ is that it may or may not have positive moral connotations. The internal operation shouldn’t really be very relevant in theory, but in practice you can have a dumb brick that is just sitting there, and you can have a brick of computronium inside which an entire boxed society lives, one which for some reason decided to go solipsist and deny the outside of the brick. Or you can have a brick that sits plotting to take over the world but hasn’t made a single move yet (and is going to chill out for another million years, because it has patience and isn’t really in a hurry, since its goal is bounded and it’s, e.g., safely in orbit).
If you start talking about powerful optimization processes that do something in the real world, you leave out all the simple, probable, harmless goal systems that the AI can have (and still be immensely useful). External goals are enormously difficult to define for a system that builds its own model of the world.
Agreed that “optimization process” connotes purpose and making something more optimal in the context of that purpose.
Agreed that “optimal” has positive connotations.
Agreed that an intelligence can have goals that are unproductive, in the colloquial modern cultural sense of “unproductive”.
Agreed that “intelligence” may or may not have positive moral connotations.
Agreed that internal operations that don’t affect anything outside the black box are of at-best-problematic relevance to anything outside that box.
Completely at a loss for how any of that relates to any of what I said, or answers my question.
I think I’m going to tap out of the conversation here. Thanks for your time.
Well, I think you understood what I meant; it just felt as though you made a short summary partially out of context. People typically (i.e. virtually always) do that for the purpose of twisting other people’s words later on. Arguments over definitions are usually (virtually always) a debate technique designed to obscure the topic and substitute meanings to edge towards some predefined conclusion. In particular, one would most typically want to substitute ‘powerful optimization process’ for ‘intelligence’ to create support for the notion of scary AI.
I do it, here and elsewhere, because most of your comments seem to me entirely orthogonal to the thing they ostensibly respond to. The charitable interpretation of that is that I’m failing to understand your responses the way you meant them, and my response to that is typically to echo back those responses as I understood them and ask you to either endorse my echo or correct it.
Which, frequently, you respond to with yet another comment that seems to me entirely orthogonal to my request.
But I can certainly appreciate why, if you’re assuming that I’m trying to twist your words and otherwise being malicious, you’d refuse to cooperate with me in this project.
That’s fine; you’re under no obligation to cooperate, and your assumption isn’t a senseless one.
Neither am I under any obligation to keep trying to communicate in the absence of cooperation, especially when I see no way to prove my good will, especially given that I’m now rather irritated at having been treated as malicious until proven otherwise.
So, as I said, I think the best thing to do is just end this exchange here.
Not malicious as such; it’s just an extremely common pattern of behaviour. People are goal-driven agents, and their reading is also goal-driven, picking the meanings of words so as to fit some specific goal, which is surprisingly seldom understanding. Especially on a charged issue like the risks of anything, where people typically choose their position via some mix of their political orientation, cynicism, etc., and then defend that position like a lawyer defending a client. edit: I guess it echoes the assumption that an AI typically isn’t friendly if it has pre-determined goals that it optimizes towards. People typically do have pre-determined goals in discussion.
Sure. And sometimes those goals don’t involve understanding, and involve twisting other people’s words, obscuring the topic, and substituting meanings to edge the conversation towards a predefined conclusion, just as you suggest. In fact, that’s not uncommon. Agreed.
If you mean to suggest by that that I ought not be irritated by you attributing those properties to me, or that I ought not disengage from the conversation in consequence, well, perhaps you’re right. Nevertheless I am irritated, and am consequently disengaging.
OK.