I can’t speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant. That said, I certainly agree that our emotions and moral judgments are the result of reasoning (for a properly broad understanding of “reasoning”, though I’d be more inclined to say “algorithms” to avoid misleading connotations) of which we’re unaware. And, yes, overtly recapitulating that covert reasoning frequently gives us influence over those judgments. Similar things are true of social behavior when someone articulates the underlying social algorithms that are ordinarily left covert.
I can’t speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant.
Sorry about that; it was a bit of a leak from how the interactions here about AI issues are rather adversarial in nature, in the sense that the ambiguity (unavoidable in human language) of anything that disagrees with the prevailing opinion here is resolved in favour of the interpretation that makes the least amount of sense. AI is, definitely, a very scary risk. Scariness doesn’t result in the most reasonable processing. I do not claim to be immune to this.
I agree that some level of ambiguity is unavoidable, especially on initial exchange. Given iterated exchange, I usually find that ambiguity can be reduced to negligible levels, but sometimes that fails. I agree that some folks here have the habit you describe, of interpreting other people’s comments uncharitably. This is not unique to AI issues; the same occurs from time to time with respect to decision theory, moral philosophy, theology, and various other things. I don’t find it as common here as you describe it to be, either with respect to AI risks or anything else. Perhaps it’s more common here than I think but I attend to the exceptions disproportionately; perhaps it’s less common here than you think but you attend to it disproportionately; perhaps we actually perceive it as equally common but you choose to describe it as the general case for rhetorical reasons; perhaps your notion of “the interpretation that makes the least amount of sense” is not what I would consider an uncharitable interpretation; perhaps something else is going on. I agree that fear tends to inhibit reasonable processing.
Well, I think it is the case that fear is a mind-killer to some extent. Fear rapidly assigns a truth value to a proposition using a heuristic. That is necessary for survival. Unfortunately, that value makes a very bad prior.
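(A minimal sketch of that last point, with invented numbers: if a fear heuristic pins the prior near certainty, a straightforward Bayesian update barely moves it even when the evidence points strongly the other way.)

```python
# Hypothetical illustration only; the probabilities below are made up.
def posterior(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

fearful_prior = 0.999       # "this is dangerous", assigned instantly by the fear heuristic
evidence_ratio = 1 / 100    # observed evidence is 100x likelier if it is NOT dangerous

print(round(posterior(fearful_prior, evidence_ratio), 3))  # 0.909 -- still near-certain of danger
```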
the ambiguity (unavoidable in human language) of anything that disagrees with the prevailing opinion here is resolved in favour of the interpretation that makes the least amount of sense
Ambiguity should be resolved by figuring out the intended meaning, irrespective of that meaning’s merits; the merits should be discussed separately from the procedure of ambiguity resolution.
Yup, that’s one mechanism whereby fear tends to inhibit reasonable processing.
Excellent use of fogging in this conversation, Dave.
Seconding TheOtherDave’s thanks. I stumbled on this technique a couple of days ago; it’s nice to know that it has a name.
Upvoted back to zero for teaching me a new word.