Also I had never actually floated the hypothesis that “people who are optimistic about HCH-like things generally believe that language is a good interface” before; natural language seems like such an obviously leaky and lossy API that I had never actually considered that other people might think it’s a good idea.
Natural language is lossy because the communication channel is narrow, hence the need for a lower-dimensional representation (see ML embeddings) of what we’re trying to convey. Lossy representations are also what Abstractions are about. But in practice, do you expect that Natural Abstractions (if discovered) cannot be expressed in natural language?
I expect words are usually pointers to natural abstractions, so that part isn’t the main issue—e.g. when we look at how natural language fails all the time in real-world coordination problems, the issue usually isn’t that two people have different ideas of what “tree” means. (That kind of failure does sometimes happen, but it’s unusual enough to be funny/notable.) The much more common failure mode is that a person is unable to clearly express what they want—e.g. a client failing to communicate what they want to a seller. That sort of thing is one reason why I’m highly uncertain about the extent to which human values (or other variations of “what humans want”) are a natural abstraction.
Good description.
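As an aside on the “narrow channel / embeddings” point above, here is a toy sketch of what lossy lower-dimensional representation means in the ML sense (my own illustration, not anything from the thread; the 300-dimensional random vectors are just a stand-in for embedding-like data): squeeze the vectors through a low-dimensional bottleneck and measure how much of the original is unrecoverable on reconstruction.

```python
# Minimal sketch: project high-dimensional vectors down to k dimensions
# (the "narrow channel") and reconstruct, measuring what the bottleneck loses.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))   # stand-in for 300-d "meaning" vectors
X -= X.mean(axis=0)                # center before the SVD

# SVD-based projection to k dimensions
k = 8
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_compressed = X @ Vt[:k].T              # what actually fits through the channel
X_reconstructed = X_compressed @ Vt[:k]  # best linear guess at the original

loss = np.mean((X - X_reconstructed) ** 2) / np.mean(X ** 2)
print(f"fraction of variance lost through the {k}-d channel: {loss:.2f}")
```

The point of the toy example is only that a much lower-dimensional code necessarily throws information away; which information gets thrown away is exactly the question about what a given abstraction does and doesn’t preserve.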