While we’re talking about jargon, I have a couple other terms to propose.
Hidden abstraction. For an abstraction that would be useful but is so non-obvious that it doesn’t show up in intelligent agents trained on the environment containing it. Part of the work of science, then, is effortfully discovering hidden abstractions and describing them clearly. For anyone who understands (and believes) the description, the hidden abstraction becomes a natural abstraction.
Illusory abstraction. For an abstraction which seems like a natural abstraction but actually arises from a weird coincidence in the environment. Agents exposed to this particular training environment (or educated by agents that already have the concept) will tend to mistakenly treat this illusory abstraction as a truth about the world.