If I read “authentic relationship”, “a relationship which is built on honest premises and communication (i.e. neither party has lied or misled the other about their background, motivations, or relevant personality characteristics)” is my first guess as to what that would mean. My question is: are you incapable of performing this sort of “decryption work” (as in, the examples you generated are your best effort), or is your chief complaint that it’s effortful and error-prone (as in, you could have extrapolated something similar to what I did, but you believe that doing so is epistemically unjustified)?
I don’t know how you generated that guess, so my answer can only be the former.
As for the rest of your comment, I find it baffling. Nothing resembling what you describe is how I think when interpreting people’s writing (nor, as far as I am aware, does anyone else I know think like this). In any case, if this is the type and amount of thought necessary to interpret a term used in a post, then I must say, even more emphatically, that such interpretation attempts are ill-advised. There is just no way such efforts can be justified.
But there is an even simpler response to make. Namely: suppose that your guess (quoted above) had been right; suppose that Vaniver, when he said “authentic relationship”, had indeed meant “a relationship which is built on …” (etc.).
Would it not be easy for him simply to say that? Wouldn’t that have been the easiest thing in the world? Why would he have needed to know anything about the “shape of my confusion”, or my mental models, or any such thing?
(Now, as it turns out, the actual meaning of ‘authentic’, as used in the OP, was rather more complicated. But that is a separate matter entirely!)
I was not describing the process I use to interpret novel linguistic compositions such as “authentic relationship”; my brain does that under the hood, automatically, in a way that is fairly opaque to me. Despite that, the results are sufficiently accurate that I don’t spend hours trying to resolve minutiae, even in highly complex technical domains.
I was attempting to use an analogy with word embeddings in multi-dimensional space to explain why the way you approach information-gathering has asymmetrical costs. I can’t come up with another analogy, because your response is totally non-informative with respect to how/why/where my first analogy failed to land. Did you notice that you didn’t even tell me whether you’re familiar with the concepts used? I have literally zero bytes of information with which to attempt to generate a more targeted analogy.
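(For concreteness, here is a minimal sketch of the kind of picture I had in mind, using tiny toy vectors invented purely for this illustration. It is not a claim about how anyone’s brain actually works, and the specific numbers, glosses, and averaging-plus-cosine scheme are my own assumptions for the example, only a cartoon of composing word vectors and checking which candidate gloss the result lands nearest to.)

```python
import numpy as np

# Toy 3-dimensional "embeddings", invented purely for illustration;
# real word vectors have hundreds of dimensions and are learned from text.
vectors = {
    "authentic":    np.array([0.9, 0.1, 0.2]),  # roughly: honest / genuine
    "relationship": np.array([0.1, 0.8, 0.3]),  # roughly: interpersonal bond
    # Two candidate glosses a reader might consider for the novel phrase:
    "honest, non-deceptive bond": np.array([0.7, 0.7, 0.3]),
    "casual, uncommitted fling":  np.array([0.1, 0.6, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means 'pointing in the same direction'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compose the never-before-seen phrase by averaging its parts
# (one deliberately crude composition scheme).
phrase = (vectors["authentic"] + vectors["relationship"]) / 2

for gloss in ("honest, non-deceptive bond", "casual, uncommitted fling"):
    print(f"{gloss}: {cosine(phrase, vectors[gloss]):.3f}")

# The composed vector lands much closer to the "honest, non-deceptive bond"
# gloss, which is the sense in which a reader can cheaply guess a plausible
# meaning for a phrase they have never seen explicitly defined.
```

(The point of the cartoon is only that composing familiar parts usually lands you near a sensible meaning at low cost, whereas requiring the author to pin down every such composition in advance is where the asymmetrical cost comes in.)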
“Would it not be easy for him simply to say that?”
This doesn’t really seem material to the point I was trying to discuss, but (I imagine) it’s because there can be a trade-off between density and precision when trying to convey information. (And, also, how is he supposed to know which parts of his post are going to be incomprehensible to which people? Again, one could put an unbounded amount of effort into specifying, with ever more clarity and precision, exactly what one means by every word.)
Your response to Habryka also does not seem to materially address his main points (the grossly asymmetrical effort involved, and the fact that the time spent is not free; it is traded off against other pursuits).
You list certain outcomes you consider beneficial, but “things are not easy to explain and have hidden complexities” is true for literally everything given a sufficient level of desired precision. It is a fully general argument in favor of asking arbitrarily vague questions.
EDIT: I did want to thank you for your straightforward answer here:
“I don’t know how you generated that guess, so my answer can only be the former.”
That, at least, would let me move the conversation forward with a tentative conclusion on that question, but unfortunately the answer seems to imply mental machinery sufficiently different from mine that I’m a bit stuck regardless. I’ll come back to this if I come up with something exceptionally clever to try to solve that problem, I suppose.