Don’t you think that things being perfectly balanced, in such a way that there is no resolution, is sort of a measure-zero set of outcomes?
I don’t really have any good data on this: my preliminary notion that some such conflicts might be unresolvable is mostly just based on introspection, and we all know how reliable that is. Even if it were reliable, I’m still young, and it could turn out that my conflicts will eventually be resolved as well. So if there are theoretical reasons to expect that there will eventually be a resolution, I will update in that direction.
That said, based on a brief skim of the page you linked, the drift-diffusion model seems to mostly just predict that a person will eventually take some action; I’m not sure whether it excludes the possibility of a person taking an action but still remaining conflicted about whether it was the right one. That seems to often be the case with moral uncertainty.
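To make that concrete, here’s a toy simulation of what I understood the model to say; this isn’t taken from the linked page, and the function name and parameter values are just made up for illustration. Even with zero drift, i.e. the “perfectly balanced” case, the noise alone drives the accumulated evidence to one of the decision boundaries in finite time.

```python
import random

def drift_diffusion(drift=0.0, noise=1.0, threshold=2.0, dt=0.01, max_steps=1_000_000):
    """Simulate one drift-diffusion trial: accumulate noisy evidence until it
    crosses +threshold ("act") or -threshold ("don't act"). Illustrative only."""
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        # Euler step of a Wiener process with drift
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += 1
    return ("act" if x >= threshold else "don't act"), t * dt

# Even with drift=0 (perfectly balanced evidence), essentially every trial
# hits a boundary; the process doesn't stay undecided forever.
random.seed(0)
trials = [drift_diffusion(drift=0.0) for _ in range(1000)]
print(sum(t for _, t in trials) / len(trials))  # mean time to decision
```

But all that tells you is that *some* action eventually gets taken; it says nothing about whether the conflict goes away afterwards.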
For example, my personal conflict gets rather complicated, but basically it’s over the fact that I work in the x-risk field, which part of my brain considers the Right Thing To Do for all the usual reasons you’d expect. But I also have strong negative utilitarian intuitions which “argue” that life going extinct would, in the long run, be the right thing, as it would eliminate suffering. I don’t assign a very high probability to humanity actually surviving the Singularity regardless of what we do, so I don’t exactly feel that my work is actively unethical, but I do feel that it might be a waste of time and that my efforts might be better spent on something that actually did reduce suffering while life on Earth still existed. This conflict keeps eating into my motivation and making me accomplish less, and I don’t see it getting resolved anytime soon. Even if I did switch to another line of work, I expect that I would just end up conflicted and guilty over not working on AI risk.
(I also have other personal conflicts, but that’s the biggest one.)