Sounds related to the failure class I call “living in the should-universe”.
It seems to be a pretty common and easily corrected failure mode. Maybe you could write a post about it? I’m sure you have lots of useful cached thoughts on the matter.
Added: Ah, I’d thought you’d just talked about it at LW meetups, but a Google search reveals that the theme is also in Above-Average AI Scientists and Points of Departure.