Sure, this seems reasonable. For example, in my own work I think much of the value I bring to AI alignment discussions is having a different philosophical perspective and deeper knowledge of a set of philosophical ideas not widely considered by most people thinking about the problem. However, it's not clear to me how someone might take the idea you've presented and make it their work, as opposed to doing something more like what I do. Thoughts on how we might operationalize your idea?
I intended to make that clear in the “Concretely, I imagine a project around this with the following stages (each yielding at least one publication)” section. The TL;DR is: do a literature review of analytic philosophy research on (e.g.) honesty.