I have spent *some* time on it (maybe on the order of 10-15 hours, counting discussions, reading, etc.), and I have a vague intention to do so again in the future. At the moment, though, I’m very focused on getting my PhD and trying to land a good professorship ~ASAP.
The genesis of this list is basically me repeatedly noticing that there are crucial considerations I’m ignoring (/more like procrastinating on :P) that I don’t feel like I have a good justification for ignoring, and being bothered by that.
It seemed important enough to at least *flag* these things.
If you think most AI alignment researchers should have some level of familiarity with these topics, it seems like it would be valuable for someone to put together a summary for us. I might be interested in such a project at some point in the next few years.
> The genesis of this list is basically me repeatedly noticing that there are crucial considerations I’m ignoring (/more like procrastinating on :P) that I don’t feel like I have a good justification for ignoring, and being bothered by that.
> It seemed important enough to at least flag these things.
That makes sense. I’d suggest putting this kind of background info in your future posts to give people more context.
> If you think most AI alignment researchers should have some level of familiarity with these topics, it seems like it would be valuable for someone to put together a summary for us.
Hmm, I guess I think that more for moral uncertainty than for infinite ethics. For infinite ethics, it’s more that I think at least some people in AI alignment should have some level of familiarity, and it makes sense for whoever is most interested in the topic (or otherwise motivated to learn it) to learn about it. Others could just have some sense of “this is a philosophical problem that may be relevant, I’ll look into it more in the future if I need to.”
I’m often prioritizing posting over polishing posts, for better or worse.
I’m also sometimes somewhat deliberately underspecific in my statements, because I think it can lead to more interesting / diverse / “outside-the-box” kinds of responses, which I find very valuable from an “idea/perspective generation/exposure” point of view (and in general).