Not clear to me what you mean by free will and where you fall on the philosophical spectrum. I guess my standard question is “Do bacteria have free will? What about fish? Cats? People with brain trauma? Etc.”
Thanks shminux. My apologies for the confusion; part of my point was that we don't have consensus on whether we have free will (in surveys, professional philosophers are roughly 60% compatibilists, but sociologists work with a different conception altogether, as do physicists, and so on). I think this got lost because I was not trying to explain the philosophical positions on free will. [I have added a very brief note in the main text to clarify what I think of as the "free will problem".]
The rest of the post was an attempt to argue that because human action is likely caused by many factors, AI technologies will likely help us uncover the roles of these factors, and that AI misuse, or even aligned AI agents, may act to change human will and intent.
Re: whether bacteria/fish/cats etc. have free will: I propose we'll have better answers soon (if you consider that AGI is coming soon-ish). More precisely, the huge body of philosophical, psychological, and sociological ideas will be tested against empirical findings in this field. I actually work on these types of questions from the empirical side (neuroscience and open behavior). And, to summarize this entire post in one sentence: I am concerned that most organisms, including humans, have quite predictable behaviors (given behavioral plus other internal-state data), and that these causal networks will be under constant pressure from both nefarious and well-meaning agents (including aligned AIs) because of the inherently complex nature of behavior.