Fair enough.
I admittedly agree with you on a lot, and that's despite thinking we can make machines that do follow orders/are intent-aligned, à la Seth Herd's definition:
https://www.lesswrong.com/posts/7NvKrqoQgJkZJmcuD/instruction-following-agi-is-easier-and-more-likely-than