As I understand it, the idea is that we want to design an AI that is difficult or impossible to blackmail, but which makes a good trading partner.
In other words there are a cluster of behaviors that we do NOT want our AI to have, which seem blackmailish to us, and a cluster of behaviors that we DO want it to have, which seem tradeish to us. So we are now trying to draw a line in conceptual space between them so that we can figure out how to program an AI appropriately.
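The "draw a line in conceptual space" idea can be sketched as a toy classifier. Everything below is a hypothetical illustration, not anyone's actual proposal: the two features, the scores, and the linear boundary are all assumptions made up for the sketch.

```python
# Toy sketch: each behavior is reduced to two hypothetical features, and a
# hand-picked linear boundary separates "blackmailish" from "tradeish".
# (feature 1: how much the act threatens to destroy value;
#  feature 2: how much the act offers to create value)
behaviors = {
    "threaten_leak_unless_paid": (0.9, 0.1),  # hypothetical scores
    "offer_goods_for_payment":   (0.1, 0.8),
    "warn_then_negotiate":       (0.5, 0.6),
}

def looks_blackmailish(threat, offer, boundary=0.0):
    """Classify by a simple linear rule: more threat than offer."""
    return threat - offer > boundary

for name, (threat, offer) in behaviors.items():
    label = "blackmailish" if looks_blackmailish(threat, offer) else "tradeish"
    print(f"{name}: {label}")
```

Of course, the hard part of the project is exactly what this sketch assumes away: choosing features and a boundary such that the "line" separates the clusters the way we intend.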
You and Stuart seem to have different goals. You want to understand and prevent some behaviors (in which case, start by tabooing culturally-dense words like “blackmail”). He wants to understand linguistic or legal definitions (so tabooing the word is counterproductive).
? No, I have the same goals as kokotajlod.
“You want to understand and prevent some behaviors (in which case, start by tabooing culturally-dense words like “blackmail”)”
In a sense, that’s exactly what Stuart was doing all along. The whole point of this post was to come up with a rigorous definition of blackmail, i.e. to find a way to say what we wanted to say without using the word.
Now I’m really confused. Are you trying to define words, or trying to understand (and manipulate) behaviors? I’m hearing you say something like “I don’t know what blackmail is, but I want to make sure an AI doesn’t do it”. This must be a misunderstanding on my part.
I guess you might be trying to understand WHY some people don’t like blackmail, so you can decide whether you want to guard against it, but even that seems pretty backward.
You make it sound like those two things are mutually exclusive. They aren’t. We are trying to define words so that we can understand and manipulate behavior.
“I don’t know what blackmail is, but I want to make sure an AI doesn’t do it.” Yes, exactly, as long as you interpret it in the way I explained it above.* What’s wrong with that? Isn’t that exactly what the AI safety project is, in general? “I don’t know what bad behaviors are, but I want to make sure the AI doesn’t do them.”
*”In other words there are a cluster of behaviors that we do NOT want our AI to have, which seem blackmailish to us, and a cluster of behaviors that we DO want it to have, which seem tradeish to us. So we are now trying to draw a line in conceptual space between them so that we can figure out how to program an AI appropriately.”
It’s a badly formulated question, likely to lead to confusion.
So, can you specify what this cluster is? Can you list the criteria by which a behaviour would be included in or excluded from this cluster? If you do this, you have defined blackmail.
“It’s a badly formulated question, likely to lead to confusion.” Why? That’s precisely what I’m denying.
“So, can you specify what this cluster is? Can you list the criteria by which a behaviour would be included in or excluded from this cluster? If you do this, you have defined blackmail.”
That’s precisely what I (Stuart really) am trying to do! I said so, you even quoted me saying so, and as I interpret him, Stuart said so too in the OP. I don’t care about the word blackmail except as a means to an end; I’m trying to come up with criteria by which to separate the bad behaviors from the good.
I’m honestly baffled at this whole conversation. What Stuart is doing seems the opposite of confused to me.
To avoid reinventing the wheel, I suggest looking at legal definitions of blackmail, as well as reading a couple of law research articles on the topic. Lawyers and judges have had to deal with this issue for a very long time, and they need criteria that produce a definite answer.
I think there’s also an element that if you know how to identify and resist blackmail yourself, not only do you win more, but it also becomes easier for you to choose not to blackmail others.