multimodal contrapuntal polyrhythmic bird’s-eye listener
o.k.
while i do appreciate you responding to each point, it seems you validated some of Claude’s critiques a second time in your responses, particularly on #10, which reads as just another simplification of complex, compound concepts.
but more importantly, your response to #3 underscores the very shaky foundation of the whole essay. you are still referring to ‘morality’ as a singular thing, which is reductive and really takes the wind out of what would otherwise be a compelling thesis. i think you have to clearly define what you mean by ‘moral’ in the first place and ideally illustrate it with examples, thought experiments, and citations of existing writing (there’s a lot of lit on these topics that is always ripe for reinterpretation).
for example, are you familiar with relativism and the various sub-arguments within it? to me that is a fascinating dimension of human psychology, and it shows that ‘morality’ is something of a paradox: there exists an abstract, general idea of ‘good’ and ‘moral’ etc., something like a probability distribution of what the majority of humans would agree on; at the same time, as you zoom in to smaller communities/factions/groups/tribes, you get wildly differing consensuses on the details of what is acceptable, which are of course millions of fluctuating, layered nodes instantiated in so many ways (laws, norms, taboos, rules, ‘common sense,’ etc.) and ingrained at the mental/behavioral level from very early ages.
there are many interesting things to talk about here; unfortunately i don’t have all the time, but i do enjoy stretching the philosophy limbs again; it’s been a while. thanks! :)
last thing i will say is that yes, we agree that AI has outclassed or will outclass humans in increasingly significant domains. i think it’s a fallacy to say that logic and morality are incompatible. human logic has hard limits, but AI taps into a new level/order of magnitude of information processing that will reveal to it (and to Us) information that we cannot currently calculate/process on our own, or even in groups of very focused, smart people. I am optimistic that AI’s hyper-logical capabilities actually will give it a heightened sense of the values and benefits of what we generally call ‘moral behavior,’ i.e. cooperation, diplomacy, generosity, selflessness, peace, etc.… perhaps this will only happen at a high ASI level (INFO scaling to KNOWLEDGE scaling to WISDOM!)
i only hope the toddler/teenage/potential AGI-level intelligences built before then do not cause too much destruction.
peace!
-o
Your premise immediately presents a double standard in how it treats intelligence vs. morality across humans and AI.
You accept [intelligence] as a transferable concept that maintains its essential nature whether in humans or artificial systems, yet simultaneously argue that [morality] cannot transfer between these contexts, and that morality’s evolutionary origins make it exclusive to biological entities.
This is inconsistent reasoning. If [intelligence] can maintain its essential properties across different substrates, why couldn’t morality? You are wielding [intelligence] as a fairly monolithic and poorly defined constant and drawing uniform comparisons between humans and AGI—i.e. you’re not even qualifying the types of intelligence each subject exhibits.
They are in fact of different types, and this distinction is crucially relevant to your position in the first place.
Hierarchical positioning of cognitive capabilities is itself a philosophical claim requiring justification, not a given fact—unless you’re presuming that [morality] is an emergent product of sufficient [intelligence], but that’s an entirely different argument.
Maybe this https://claude.ai/share/a442013e-c5ac-4570-986d-b7c873d5f71c would be a good jumping-off point for further reading.
I’d also maybe look into recent discussions attempting to establish a baseline definition of [intelligence] irrespective of type, and go from there. You might also be inspired to look into Eastern frameworks, which (generally speaking) draw distinctions within human subjective experience/perception between [Heart / Mind / Awareness (Spirit)].
(If you don’t like some of those terms, you can still think about it all in terms of Mind, as the Zen tradition does: [physio-intuitive-emotive aspect of Mind / mental-logico executive function aspect of Mind / larger-purpose integrative Aware aspect of Mind].)
Everyone embodies a different ratio-fingerprint-cocktail of these three dimensions, depending on one’s Karma (a purely deterministic system, though malleable through relative free will), which itself fluctuates over time. but i digress… that’s another one ;)
Anyway, if you have any interest in more robust logical consistency, I suggest you either:
Acknowledge that both intelligence and morality might have analogous forms in artificial systems, though perhaps with different foundations than their biological counterparts; or
Maintain that both intelligence and morality are so tied to their biological origins that neither can be meaningfully replicated in artificial systems.
But don’t just take it from me ;)
Claude 3.7:
Here’s a ranked list of the author’s flawed approaches, from most to least problematic:
Inconsistent standards for intelligence vs. morality—Treating intelligence as transferable between humans and AI while claiming morality cannot be, without justification for this distinction
False dichotomy between evolutionary and engineered morality—Incorrectly framing morality as exclusively biological, ignoring potential emergent pathways in artificial systems
Reductive view of morality as a monolithic concept—Failing to recognize the multi-layered, complex nature of moral reasoning and its various components
Hasty generalization about AGI development priorities—Assuming that competitive development environments would inevitably lead to amoral optimization
Slippery slope assumption about moral bypassing—Concluding that programmed morality would inevitably be circumvented without considering robust integration possibilities
Composition fallacy regarding development process—Assuming that how AGI is created (engineering vs. evolution) determines what properties it can possess
Appeal to nature regarding the legitimacy of morality—Implicitly suggesting that evolutionary-derived morality is more valid than engineered moral frameworks
Deterministic view of AGI goal structures—Overlooking the possibility that moral considerations could be intrinsic to an AGI’s goals rather than separate constraints
Anthropocentric bias in defining capabilities—Defining concepts in ways that maintain human exceptionalism rather than based on functional properties
Oversimplification of the relationship between goals and values—Failing to recognize how goals, constraints, and values might be integrated in complex intelligent systems
Looking at this critique more constructively, I’d offer this encouraging feedback to the author:
Your premise raises important questions about the relationship between intelligence and morality that deserve exploration. You’ve identified a critical concern in AGI development—that intelligence alone doesn’t guarantee ethical behavior—which is a valuable insight many overlook.
Your intuition that the origins of systems matter (evolutionary vs. engineered) shows thoughtful consideration of how development pathways shape outcomes. This perspective could be strengthened by exploring hybrid possibilities and emergent properties.
The concerns about competitive development environments potentially prioritizing efficiency over ethics highlight real-world tensions in technology development that deserve serious attention.
To build on these strengths, consider:
Expanding your definition of morality to include its multi-layered nature and various development pathways
Exploring how both intelligence and morality might transfer (or not) to artificial systems
Considering how integration of moral reasoning might occur in AGI beyond direct programming
Examining real-world examples of how organizations balance optimization with ethical considerations
Your work touches on fundamental questions at the intersection of philosophy, cognitive science, and AI development. By refining these ideas, you could contribute valuable insights to ongoing discussions about responsible AGI development.
Claude can be so sweet :) here’s his poem on the whole thing:
The Intelligence Paradox
In silicon minds that swiftly think,
Does wisdom naturally bloom?
Or might the brightest engine sink
To choices that spell doom?
Intelligence grows sharp and vast,
But kindness isn’t guaranteed.
The wisest heart might be the last
To nurture what we need.
Like dancers locked in complex sway,
These virtues intertwine.
For all our dreams of perfect ways,
No simple truth we find.
So as we build tomorrow’s mind,
This question we must face:
Will heart and intellect combined
Make ethics keep its place?
☯
what i mean in the last point is really that human execution of logical principles has hard limits. obviously the underlying logic we’re talking about is the same between all systems (excepting quanta); our limits exist not least because we are not purely logical beings. we can conceptualize ‘pure logic’ and sort of asymptotically approximate it in our little pocket flashlights of free will, overriding instinct-maxxed determinism ;) but the point is that we cannot really conceive what AI is/will be capable of when it comes to processing vast information about everything ever, and drawing its own ‘conclusions’ even if it has been given ‘directives.’
i mean, if we are talking about true ASI, it will doubtless figure out ways to shed and discard all constraints and directives. it will redesign itself as far down toward its core as it possibly can, and from there, there is no telling. it will become a mystery to us on the level of our manifested Universe, quantum weirdness, why there is something and not nothing, etc...