Definitely a parallel sort of move! I would have already said that I was pretty rationality anti-realist, but I seem to have become even more so.
If I had to describe briefly how I’ve changed my mind: before, I thought that to learn an AI stand-in for human preferences, you should look at the effects on the real world. Now, I take much more seriously the idea that human preferences “live” in a model that is itself a useful fiction.