Yes, they're correct in assuming success is possible in those situations, but their assumptions about the possible routes to success are highly incorrect. People are making a large error in overestimating how well they understand the situation and failing to consider other possibilities. This logical error seems highly relevant to alignment and AI risk.
I agree.