The second question is: if we are constructing an artificial utility function, whether for an AI or just to make a philosophical point, how should it work? Here I think your answer is spot on. I would hope that people don't really disagree with you on this, and are just getting bogged down in confusion about real brains, in map-territory distinctions, and in importing epistemology where it isn't really needed.
I’m pretty sure that the first reasonably-intelligent machines will work much as illustrated in the first diagram—for engineering reasons: it is so much easier to build them that way. Most animals are wired up that way too—as we can see from their drug-taking behaviour.
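Since the distinction at stake here is easy to mistake for a merely verbal one, here is a toy sketch of it in Python. Everything in it is hypothetical: the actions, the world model, and the numbers are invented purely to illustrate how an agent that maximizes an internal reward signal (the first-diagram design, the one drugs exploit) differs from one whose utility function is evaluated on its model of the world. It is not anyone's actual architecture.

```python
# Toy contrast between the two agent designs discussed above.
# All names and values are made up for illustration.

ACTIONS = ["gather_food", "press_drug_lever"]

def predict_next_state(state, action):
    """Crude world model: predicted state after taking an action."""
    next_state = dict(state)
    if action == "gather_food":
        next_state["food"] = state["food"] + 1
    elif action == "press_drug_lever":
        next_state["reward_signal_hacked"] = True
    return next_state

def reward_signal(state):
    """The internal scalar the first-diagram agent maximizes.
    Tampering with the signal beats everything else -- this is
    the hole that wireheading (and drug-taking) exploits."""
    if state.get("reward_signal_hacked"):
        return 1000.0
    return float(state["food"])

def utility_over_world(state):
    """A utility function defined over the modelled world itself;
    a hacked signal simply doesn't count as valuable."""
    return float(state["food"])

def choose_action(state, evaluate):
    """Pick the action whose predicted outcome scores highest."""
    return max(ACTIONS, key=lambda a: evaluate(predict_next_state(state, a)))

state = {"food": 0}
print(choose_action(state, reward_signal))       # press_drug_lever (wireheads)
print(choose_action(state, utility_over_world))  # gather_food
```

The point of the sketch is that the two agents share the same world model and the same action set; the only difference is whether the thing being maximized lives inside the agent or is defined over its map of the territory.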