As a non-expert fan of AI research, I simply wanted to mention that this and other recent papers seem to go a fair way toward addressing one of Karnofsky's criticisms of SI that I remember agreeing with:
Overall disconnect between SI’s goals and its activities. SI seeks to build FAI and/or to develop and promote “Friendliness theory” that can be useful to others in building FAI. Yet it seems that most of its time goes to activities other than developing AI or theory. Its per-person output in terms of publications seems low. Its core staff seem more focused on Less Wrong posts, “rationality training” and other activities that don’t seem connected to the core goals; Eliezer Yudkowsky, in particular, appears (from the strategic plan) to be focused on writing books for popular consumption. These activities seem neither to be advancing the state of FAI-related theory nor to be engaging the sort of people most likely to be crucial for building AGI.
Hopefully more interesting stuff will follow, even if I am not in a position to evaluate its validity.