The other problem is that "opposite" is ill-defined: it requires someone else to know which dimension you're inverting along, as well as what you consider neutral/0 for that dimension.
While this would be an inconvenience during onboarding into a new mode of communication, I actually don't think it's that big of a deal for people who are already used to the dialect (which would probably make up the majority of communication) and have a mutual understanding of what is meant by [inverse(X)], even when X could in principle have more than one inverse.
That makes the concept much less useful, though. Might as well just have two different words that are unrelated. The point of having the inverse idea is to be able to guess words, right?
I'd say the main benefit it provides is making learning easier: instead of learning that "foo" means 'good' and "bar" means 'bad', one only needs to learn "foo" = good and inverse("foo") = bad, which roughly halves the number of word tokens needed to learn a lexicon. One still needs to learn the association between concepts and their canonical inverses, but that information is more easily compressible.
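To make the bookkeeping concrete, here's a minimal Python sketch of that idea (the root words "foo", "baz", "qux" and the concept pairs are invented for illustration, not part of any actual proposal): only one root per concept pair is memorized, and a single shared table of canonical inverses stands in for the "more easily compressible" half of the lexicon.

```python
# Canonical inverse relation over concepts, learned once and shared across
# the whole lexicon (the part that compresses well).
CANONICAL_INVERSE = {
    "good": "bad",
    "big": "small",
    "hot": "cold",
}

# Only one root word per concept pair needs to be memorized.
LEXICON = {
    "foo": "good",
    "baz": "big",
    "qux": "hot",
}

def meaning(word: str) -> str:
    """Look up a word, treating 'inverse(X)' as the canonical opposite of X."""
    if word.startswith("inverse(") and word.endswith(")"):
        base = word[len("inverse("):-1]
        return CANONICAL_INVERSE[LEXICON[base]]
    return LEXICON[word]

print(meaning("foo"))           # good
print(meaning("inverse(foo)"))  # bad
```

The point of the sketch is just that LEXICON holds half as many entries as a flat word list would, while CANONICAL_INVERSE encodes exactly the concept-to-inverse association that still has to be learned separately.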