How to actually construct the AI was not part of the scope of the essay request, as I understood it. My intention was to describe some conceptual building blocks that are necessary to adequately frame the problem. For example, I address how utility functions are generated in sapient beings, including both humans and AI. Additionally, that explanation works whether or not huge paradigm shifts occur. No amount of technical understanding is going to substitute for an understanding of why we have utility functions in the first place, and what shapes they take. Rather than the tip of the iceberg, these ideas are supposed to be the foundation of the pyramid. I didn’t write about my approach to the problems of external reference and model specification because they were not the subject of the call for ideas, but I can do so if you are interested.
Furthermore, at no point do I describe “programming” the AI to do anything—quite the opposite, actually. I address that point when I rule out the Three Laws approach. The idea is effectively to “raise” an AI in such a way as to instill the values we want it to have. Many concepts specific to humans don’t apply to AIs, but many concepts specific to people do, and those are the ones we’ll need to be aware of. Apparently I was not clear enough on that point.