I enjoyed reading this, pjeby. It answered and tied together a lot of the things I’d wondered since I started reading about artificial intelligence. I won’t spell out the relationship between your post and the issues (this will be long anyway...), but I’ll list the connections I saw and what it brought to mind:
-How evolution’s “shards of desire” translate into actions.
-What it would mean to, as evolutionary psychology attempts to do, “explain emotions by the behaviors they induce”. And similarly, what feelings an animal would need to have in order to execute complex but innate activities (e.g. cats burying excrement, beavers building dams).
-That paper that got a lot of attention for its five dimensions of moral reasoning, and how, in certain cultures, people feel physically ill at the thought of failing their duties.
-The issue of “men are more rational, women more emotional”. I had thought it would be more accurate to distinguish reductionist from holist thinking, i.e., that what we call “emotional” means basing judgments on a broader array of factors that the brain aggregates automatically through useful heuristics.
-The standard model of an “agent” in AI, whereby it takes in sense data, models its environment, makes predictions, and selects actions that optimize some utility function. This had long seemed to me like the wrong way to go about the problem, not least because of the infinite regress one runs into. I figured that very simple mechanisms already implement crude but effective versions of this (e.g. a mass/spring system oscillating back to its equilibrium position), and that emotions in the sense you mean are another way to build up to a non-regressing agent; a toy sketch of the contrast follows.
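To make that last point concrete, here is a minimal Python sketch of the standard sense-model-predict-optimize loop, run on a toy thermostat world. Everything here (the function names, the action set, the constants) is my own illustration, not anything from your post:

```python
# Hypothetical sketch of the standard AI "agent" loop:
# sense -> model -> predict -> pick the action that maximizes utility.
# All names and numbers are illustrative.

def utility(temp, target=20.0):
    """Utility peaks when the sensed temperature sits at the target."""
    return -abs(temp - target)

def predict(temp, action):
    """Crude world model: each action nudges the temperature a fixed amount."""
    effects = {"heat": +1.0, "cool": -1.0, "idle": 0.0}
    return temp + effects[action]

def act(temp):
    """Select the action whose predicted outcome maximizes utility."""
    return max(["heat", "cool", "idle"], key=lambda a: utility(predict(temp, a)))

# One run of the loop: the agent steers the world toward its optimum.
temp = 23.0
for _ in range(5):
    action = act(temp)            # deliberate: model, predict, optimize
    temp = predict(temp, action)  # (here the model happens to be exact)
    print(action, round(temp, 1))

# Contrast: a damped spring "returns to equilibrium" with no model,
# prediction, or utility function anywhere -- the regress-free baseline.
x, v = 3.0, 0.0
for _ in range(50):
    v += -0.5 * x - 0.2 * v  # Hooke's law plus damping
    x += v
print(round(x, 3))  # close to 0: equilibrium reached without deliberation
```

The point of the contrast is that the spring converges on the same kind of end state as the deliberating agent while containing none of the machinery that invites regress, which is why it reads to me like the bottom rung of a ladder that emotions extend upward.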