Both philosophers and mathematicians sometimes mistake the formal theoretical constructs they work with professionally for the related informal cognitive structures that existed prior to the development of those constructs.
Both (pre-theoretic) “informal cognitive structures” and “formal theoretical constructs” (clearly things of different kinds) are ways of working with, in many cases, the same ideas, with formal tools improving on informal understanding in accessing the same thing. The tools are clearly different, and the answers they can yield are clearly different, but their purpose may well be identical. To evaluate how each relates to that purpose, we need to look “from the outside” at how the tools relate to the assumed purpose’s properties; we might then judge the suitability of various tools for reflecting a given idea even where they clearly can’t precisely define it, or even in principle compute some of its properties.
This actually seems to be an important point to get right when thinking about FAI/metaethics. The human decision problem (friendliness content), the thing FAI needs to capture more formally, is (on my current understanding) something human minds can’t explicitly represent and can’t use a definition of in their operation. Instead, they perform something only superficially similar.
(See also this comment.)
For the record, I agree with what this comment means to me when I read it. :)