A position I think Duncan might hold reminds me of these two pictures that I stole from Wikipedia’s “Isolation Forest” article.
Imagine a statement (maybe Duncan’s example that “People should be nicer”). The dots represent all the possible sentiments conveyed by this statement [1]. The dot being pointed to represents what we meant to say. You can see that a sentiment that is similar to many others takes many additional clarifying statements (the red lines) to pick it out from the other possibilities, but a sentiment that is very different from the rest can be singled out with only a few.
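To make that intuition concrete, here’s a minimal sketch using scikit-learn’s IsolationForest (my choice of tooling, not something from the post; the 2-D embedding and all the numbers are made up): a point sitting far from the cluster of similar points gets isolated by far fewer random splits, which shows up as a more anomalous score.

```python
# Minimal sketch of the isolation-forest intuition (my own illustration):
# a sentiment that sits far from the crowd of other plausible sentiments
# gets isolated by far fewer random splits than one buried in the cluster.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# A cluster of mutually similar sentiments, plus one clearly distinct one,
# embedded as 2-D points purely for illustration.
similar_sentiments = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
distinct_sentiment = np.array([[6.0, 6.0]])
points = np.vstack([similar_sentiments, distinct_sentiment])

forest = IsolationForest(random_state=0).fit(points)

# score_samples: lower (more negative) means easier to isolate / more anomalous.
scores = forest.score_samples(points)
print("typical in-cluster score:", round(float(scores[:200].mean()), 3))
print("distinct sentiment score:", round(float(scores[-1]), 3))
```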
The only reason I say this is to point out that Duncan’s abstract model has been theoretically and empirically studied, and that the advice for how to solve the problem is pretty similar to what Duncan said [2].
Also, a note on clarifying statements that complicates what I just said (Duncan appears to be aware of this too, but I want to bring it up explicitly for discussion).
You don’t actually get to fence things off or rule them out in conversations; you only get to make them less reasonable.
When Duncan has the pictures where he slaps stickers on top to rule things out, that is a simplification. In reality, you get to provide a new statement that represents a new distribution over the possible sentiments.
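Here’s a tiny sketch of what I mean, with made-up readings and weights of my own invention: the “sticker” picture amounts to zeroing a reading out, whereas a real clarification is just another weighting that makes the unwanted reading less reasonable without removing it.

```python
# Sketch of "a clarification is a new distribution over readings, not a sticker
# that deletes some of them" (readings and numbers invented for illustration).
import numpy as np

readings = ["be kinder day-to-day", "you were rude to me", "never criticize anyone"]

statement = np.array([0.5, 0.3, 0.2])  # "People should be nicer"

# The "sticker" idealization: the unwanted reading is simply removed.
stickered = statement * np.array([1.0, 1.0, 0.0])
stickered = stickered / stickered.sum()

# What actually happens: the clarification is just another distribution, and the
# unwanted reading becomes less reasonable rather than impossible.
clarification = np.array([0.60, 0.35, 0.05])

print("sticker idealization: ", stickered.round(2))
print("clarifying statement: ", clarification.round(2))
```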
Different people are going to combine your “layers” differently, like in an image editor.
Some people will stick with the first layer no matter what. Some people will stick with the last layer no matter what. You cannot be nuanced with people who pick one layer.
Some people will pick parts out of each of your layers. They might pick the highest peak of each and add them together, hoping to select what you’re “most talking about” in each and then see what ends up most likely.
The best model might be multiplying layers together, so that combining a very likely region with a kind of unlikely region results in a kind of likely one.
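To make the image-editor analogy concrete, here’s a toy sketch (my framing, with invented numbers; the “add layers” rule is a loose rendering of the peak-picking idea above) putting those combination rules side by side.

```python
# Toy sketch of different ways listeners might combine statement "layers"
# (all rules and numbers are mine, for illustration).
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / v.sum()

layer_1 = normalize([0.50, 0.30, 0.20])  # the original statement
layer_2 = normalize([0.60, 0.35, 0.05])  # a clarification

combiners = {
    "first layer only": lambda a, b: a,                 # never update
    "last layer only":  lambda a, b: b,                 # only the latest counts
    "add layers":       lambda a, b: normalize(a + b),  # loose version of peak-adding
    "multiply layers":  lambda a, b: normalize(a * b),  # Bayesian-flavoured combination
}

for name, combine in combiners.items():
    print(f"{name:16s}", combine(layer_1, layer_2).round(2))
```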
I don’t know if this generalization is helpful, but I hope sharing ideas can lead to developing new ones.
I’d be willing to bet Duncan has heard of Isolation Forests before and has mental tooling for them (though that’s not a claim that the conversational tactic itself is unoriginal!).
Oh, and the second block I talked about is relevant because it means you can’t use an algorithm like the one I discussed in the first block. You can try to combine your statements with statements that have exceptionally low probability on the regions you want to “delete”, but those statements may have odd properties elsewhere.
For example, you might issue a clarification B which scores bad regions at 1/1000000. However, B also incidentally scores a completely different region at 1000. Now you have a different problem.
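Concretely, with those made-up numbers dropped into the same toy setup: B crushes the region you wanted gone, but the incidentally inflated region ends up dominating for anyone who multiplies.

```python
# Worked toy example of clarification B backfiring (all numbers made up).
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / v.sum()

#                           intended,  bad region,  unrelated region
statement_a     = np.array([10.0,      8.0,            0.5])
clarification_b = np.array([ 1.0,      1e-6,        1000.0])  # kills the bad region, but...

combined = normalize(statement_a * clarification_b)
print("after multiplying:", combined.round(3))
# Roughly 0.02 / 0.00 / 0.98: the incidental region now dominates, so a
# "multiplier" listener walks away with a reading you never intended.
```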
You run into a few different risks here.
The people I’ll call “multipliers” (from my note above on combining layers) might multiply these together and find the incidental region more likely than your intended selection.
People who add layers, or who just take the most recent one, might end up selecting only the incidental region.
If you issue a high number of clarifications, you might end up in one of two bad attractors:
A smooth probability space, in which every outcome looks about equally likely. Here, no one has any clue what you are saying.
A really bumpy probability space, in which listeners don’t know which idea you’re pointing at out of many exclusive possibilities.
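If it helps, here’s a toy way to picture the two attractors (my own framing, not anything from the literature): the “smooth” failure shows up as near-maximal entropy, and the “bumpy” failure shows up as several peaks of nearly equal height.

```python
# Toy illustration of the two bad attractors (my own framing).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

clear  = np.array([0.85, 0.05, 0.05, 0.05])  # one dominant reading
smooth = np.array([0.25, 0.25, 0.25, 0.25])  # attractor 1: everything looks equally likely
bumpy  = np.array([0.34, 0.33, 0.31, 0.02])  # attractor 2: several exclusive peaks compete

for name, dist in [("clear", clear), ("smooth", smooth), ("bumpy", bumpy)]:
    competitive_peaks = int((dist > 0.9 * dist.max()).sum())
    print(f"{name:6s} entropy={entropy(dist):.2f} competitive peaks={competitive_peaks}")
```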
Note that, of course, real statements are probably arbitrarily complex because of all the individual differences in how people interpret things.