Before we continue, one more warning. If you’re not already doing most of your thinking at least half-way along the 3 to 4 transition (which I will hereon refer to as reaching 4⁄3), you will probably also not fully understand what I’ve written below because that’s unfortunately also about how far along you have to be before constructive development theory makes intuitive sense to most people. I know that sounds like an excuse so I can say whatever I want, but before reaching 4⁄3 people tend to find constructive development theory confusing and probably not useful...
I understand this kind of bind. I am over in the AI-Bostrom forum, which I like very much. As it happens, I have been working on a theory with numerous parts that is connected to, and an extension of, ideas and existing theories drawn from several scientific and philosophical subdisciplines. I often find myself trying to give meaningful answers to questions within the forum, with replies I cannot really make transparent and compelling cases for without having the rest of my theory on the table to establish context, motivation, justification, and so on, because the whole theory (and its supporting rationale) is, size-wise, well beyond the word limit and scope of any individual post.
Sometimes I have tried anyway, then squeezed it down, and my comments have ended up looking like cramped word salad for lack of context, in just the sense I presume you intend when you caution that the same applies to your remarks.
So I will have a look at the materials you point to as prerequisite concepts before I go on reading the rest of your remarks.
It is "with difficulty" that I am not reading further down, because agency is one of my central preoccupations, both in general mind-body considerations and in AI most particularly (not narrow AI, but human-equivalent and above, or "real AI" as I privately think of it).
In general, I have volumes to say about agency, and have been struggling to carve out a meaningful and coherent, scientifically and neuroscientifically informed set of concepts relating to “agency” for some time.
You also refer to "existential" issues of some kind, which can of course mean many things to many people. But this also makes me want to jump in whole hog and read what you are going to say, because I have likewise been giving detailed consideration to the role of "existential pressure" in the formation of the features of naturalistic minds and sentience (i.e. in biological organisms, the idea being, of course, to then abstract this to more general systems). I have been trying to see what that pressure might amount to, in both ordinary and more unconventional terms, by viewing it through different templates, some more and some less humanly phenomenological.
A nice route, or stepping-stone path, for examining existential-pressure principles runs from the human case, to terrestrial biology in general, to exobiology (so far as we can reasonably speculate), and then finally on to AIs, once we have educated our intuitions a little.
The results emerging from those considerations may or may not suggest what we need to include in AIs, at least by suitable analogy, to make them "strivers", or "agents", or systems that deliberately do anything and have "motives" (as opposed to mere behaviors), desires, and so on…
We need some theory, or theory cluster, about what all this may or may not contribute to the overall functionality of the AI, to its "understanding" of a world that is (we hope) to be shared with us, so it is also front and center among my key preoccupations.
A timely clarifying question I use frequently in discussing agency, when reminding people that not everything that exhibits behavior automatically qualifies for agency: do Google's self-driving cars have "agency"? Do they have "goals"? My view is: obviously not; that would be using "goal" and "agency" metaphorically.
Going up the ladder of examples, we might consider someone sleepwalking, or a person acting out a sequence of learned, habituated behaviors during an epileptic absence seizure. Are they exhibiting agency?
The answers might be slightly less clear and invite more contention. But there is pretty good evidence that absence seizures are not post-ictal failures to remember agentive states, but genuine automatisms (modern neurologists are remarkably subtle, open-minded to these distinctions, and clever in setting up scenarios that discriminate the two satisfactorily). Given that, the most accurate characterization here too seems to be a lack of attention, intention, and praxis: missing agency.
Indeed, this is apparently satisfactory enough for the experts who understand the phenomena that, even in a contemporary legal environment where insanity-style defenses are out of fashion with judges and the public, a veridical establishment of sleepwalking or absence-seizure status (different cases, of course) at the time of a killing has, even in recent years, won some defendants "not guilty" verdicts on murder or manslaughter charges.
In short, most neurologists not in the grip of behavioristic apologetics would say, here too: no agency, though information-processing behavior occurred.
Indeed, in the case of absence seizures we might further ask about metacognition versus mere cognition. But experimentally this is also well understood. Metacognition, metaconsciousness, and self-awareness are all, by a large consensus, now understood to be correlated with default mode network (DMN) activity.
Absence seizures under witnessed, lab conditions are not just departures from DMN activity; after all, consciously, intentionally directed activity of any complexity that involves attention to external tasks or situations also involves a shutting down of DMN systems. (Look up "default mode network" on PubMed if you want more.)
So absence-seizure behavior, which can be very complex (driving across town, and so on), is not agency misplaced or mislaid. It is genuinely unconscious, "missing-agent" automatism: a brain in a temporary zombie state, in the sense philosophers of mind use the term "zombie".
But back to the self-driving cars, or autopilot Boeing 777s, automatic anythings… even the ubiquitous anti-virus daemons running in the background, automatically "watching" to intercept malware attacks. It seems clear that, while some of the language of agency might be convenient shorthand here, it is not literally true.
Rather, these are cases of mere mechanical, Newtonian-level, deterministic causation from conditionally activated, preprogrammed behavior sequences. The activation conditions are deterministic, and the causal chains they trigger are deterministic, just as the interrupt service routines in an ISR jump table are all deterministic.
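To make the jump-table analogy concrete, here is a minimal, purely illustrative C sketch (the interrupt numbers and handler names are invented for the example): which handler runs is fixed entirely by the index, so the same input always triggers the same preprogrammed behavior, with no deliberation anywhere in the chain.

```c
#include <stdio.h>

/* Illustrative ISR jump table: an array of preprogrammed handlers,
   indexed by interrupt number. Dispatch is purely deterministic. */

typedef void (*isr_t)(void);

static void on_timer(void)    { puts("timer tick handled"); }
static void on_keyboard(void) { puts("keystroke handled"); }
static void on_disk(void)     { puts("disk transfer handled"); }

static isr_t isr_table[] = { on_timer, on_keyboard, on_disk };

static void dispatch(unsigned irq)
{
    /* Conditionally activated, preprogrammed behavior: index in, handler out. */
    if (irq < sizeof isr_table / sizeof isr_table[0])
        isr_table[irq]();
}

int main(void)
{
    dispatch(1);  /* the same "stimulus" always yields the same "behavior" */
    dispatch(1);
    return 0;
}
```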
Anyway… agency is intimately at the heart of AGI-style AI, and we need to be as attentive and rigorous as possible about whether we are using the term literally or metaphorically.
I will check out your references and see if I have anything useful to say after I look at what you mention.