it’s going to be approximately the same degree of problem for humans as it is for ai. I’m not sure we can assume that’s zero, because current models’ lack of causal prediction makes their words less meaningful, and could on average introduce noise that makes it harder to extract causality from mere text. could be fine, if causal meaning-association can be established for words in the minds of both humans and ais.