Object-level: This updated me closer to Nate’s view, though I think abstractions-but-not-necessarily-natural-ones would still be valuable enough to justify quite a lot of focus on them.
Meta-level: This reinforces my hunch that the AI alignment field is an absurdly inadequate market, lacking what seems to me like basic infrastructure. In other fields with smart people, informal results don’t sit around, unexamined and undistilled, for months on end.
I’m not even sure where the bottleneck is anymore: do we lack infrastructure because of a lack of funds, or a lack of talent?
(My current answer: more high-talent people may be needed to reach a field paradigm; more medium-talent people and more funding would both help the infrastructure; and funders are waiting on a field paradigm. Gah!)