The fear of anthropomorphising AI is one of the more ridiculous traditional mental blind spots in the LW/rationalist sphere.
You’re really going to love Thursday’s post :).
Jokes aside, I actually am not sure LW is that against anthropomorphising. It seems like a much stronger injunction among ML researchers than it is on this forum.
I personally am not very into using humans as a reference class because it is a reference class with a single data point, whereas e.g. “complex systems” has a much larger number of data points.
In addition, it seems like intuition about how humans behave is already pretty baked into how we think about intelligent agents, so I'd guess that by default we overweight it and have to consciously get ourselves to consider other anchors.
I would agree that it’s better to do this by explicitly proposing additional anchors, rather than never talking about humans.