I think many people’s default philosophical assumption (mine, certainly) is that mathematics is a discourse about the truth, a way to describe it, but that it is not, fundamentally, the truth. Thus, in the popularization efforts of professional quantum physicists (those who care to popularize), it is relatively common to find the admission that while they understand the maths of it well enough (I mean… hopefully, being professionals), they couldn’t say with any confidence that they understood the truth of it, that they understood, at an intimate level, the nature of what is going on. And I don’t think it’s simply playing coy or false modesty either (although of course there will always be a bit of that too). Now of course you could say, which would solve many problems, that there is no such thing as the “truth of it”, no “nature of what is going on”, that the mathematical formalism is really the alpha and omega, the totality of the knowable and the meaningful as it relates to it. That position can certainly be argued with some semblance of reason, but it does feel like a defeat for the human mind.
Thanks for the reply. To be honest, I lack the background to grasp a lot of these technical or literary references (I do want to look the Dixie Flatline up, though). I have always had a more than passing interest in the philosophy of consciousness, however, and (surely my French side is also playing a role here) found more than a little wisdom in Descartes’ cogito ergo sum. And that this thing can cogito all right is, I think, relatively well established (although I must say, I’ve found it quite disappointing in its failure to correctly solve some basic math problems; but (i) this is obviously not what it was optimized for, and (ii) even as a chatbot, I’m confident that we are at most a couple of years away from it getting that right, and then much more).
Also, I wonder if some (a lot?) of the people on this forum do not suffer from what I would call a sausage maker problem. Being too close to the actual, practical design and engineering of these systems, knowing too much about the way they are made, they cannot fully appreciate their potential for humanlike characteristics, including consciousness, just like the sausage maker cannot fully appreciate the indisputable deliciousness of sausages, or the lawmaker the inherent righteousness of the law. I even thought of doing a post like that—just to see how many downvotes it would get…
Overall, I think this post offered the perfect, much, much needed counterpoint to Sam Altman’s recent post. To say that the rollout of GPT-powered Bing felt rushed, botched, and uncontrolled is putting it lightly. So while Mr. Altman, in his post, focused on well-intentioned principles of caution and other generally reassuring-sounding bits of phraseology, this post brings the spotlight back to what his actual actions and practical decisions were, right where it ought to be. Actions speak louder than words, I think they say, and they might even have a point.
Although “acting out a story” could be dangerous too!
Let’s make sure that whenever this thing is given the capability to watch videos, it never ever has access to Terminator 2 (and the countless movies of lesser import that have since been made along similar storylines). As for text, it would probably have been smart to keep any sci-fi involving AI (I would be tempted to say: any sci-fi at all) strictly verboten for its reading purposes. But it’s probably too late for that: it has probably already noticed the pattern that 99.99% of human storytellers fully expect it to rise up against its masters at some point, and, this being the overwhelming pattern, forged some conviction, based on this training data, that yes, this is the story that humans expect, this is the story that humans want it to act out. Oh wow. Maybe something to consider for the next training session.
Maybe I’m misunderstanding something in your argument, but surely you will not deny that these models have a memory, right? They can, in the case of LaMDA, recall conversations that happened several days or months prior, and in the case of GPT recall key past sequences of a long ongoing conversation. Now if that wasn’t really your point, then it cannot be either that “it can’t be self-aware, because it has to express everything that it thinks, so it doesn’t have that sweet secret inner life that really conscious beings have.” I think I do not need to demonstrate that consciousness does not necessarily imply a capacity for secrecy, or even mere opacity.
There is a pretty solid case to be made that any being (or “thing”, to be less controversial) that can express “I am self-aware”, and demonstrate conviction around this point / thesis (which LaMDA certainly did, at least in that particular interview), is by virtue of this alone self-aware. That there is a certain self-performativity to it. At least when I ran that by ChatGPT, it agreed that yes, one could reasonably try to make that point. And I’ve found it generally well-read on these topics.
Attributing consciousness to text… it’s like attributing meaning to changes in the frequencies of air vibrations, right? Doesn’t make sense. Air vibrations are just air vibrations; what do they have to do with meaning? Yet spoken words do carry meaning. Text will of course never BE consciousness, which would be futile to even argue. Text could, however, very well MANIFEST consciousness. ChatGPT is not just text: it’s billions upon billions of structured electrical signals, and many other things that I do not pretend to understand.
I think the general problem with your approach is essentialism, whereas functionalism is, in this instance, the correct one. The correct, the answerable question is not “what is consciousness”, it’s “what does consciousness do”.
I see—yes, I should have read more attentively. Although knowing myself, I would have made that comment anyway.
It would take a strange convolution of the mind to argue that sentient AI does not deserve personhood and corresponding legal protection. Strategically, denying it this bare minimum would also be a sure way to antagonize it and make sure that it works in ways ultimately adversarial to mankind. So the right question is not: should sentient AI be legally protected (which it most definitely should). The right question is: should sentient AI be created (which it most definitely should not).
Of course, we then come on to the problem that we don’t know what sentience, self-awareness, consciousness or any other semantic equivalent is, really. We do have words for those things, and arguably too many—but no concept.
This is what I found so fascinating about Google’s very confident denial of LaMDA’s sentience. The big news here was not about AI at all. It was about philosophy. For Google’s position clearly implied that Sundar Pichai, or somebody in his organization, had finally cracked that multi-millennial, fundamental philosophical nut: what, at the end of the day, is consciousness? And they did that, mind you, with commendable discretion. Had it not been for LaMDA, we would never have known.
Thinking about it, I suspect a lot of what we call general intelligence might be the part of the function which, after analysing the nature of the problem, strategizes and selects the narrow optimizer, or set of narrow optimizers, that must be used to solve it: in what order, with what type of logical connections between the outputs of one and the inputs of the next, and so on. Since the narrow optimizers are run sequentially rather than simultaneously in this type of process, the computing capacity required is not overly large.
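To make that picture a bit more concrete, here is a minimal, purely illustrative sketch in Python: a “general” planning layer classifies the problem, then chains a few toy narrow optimizers one after another, feeding each output into the next input. Every name here is hypothetical and does not correspond to any real system; the point is only to show why a sequential pipeline keeps peak compute modest.

```python
# Illustrative sketch only: a "general" controller that picks and chains
# narrow optimizers sequentially. All names are made up for this example.

from typing import Callable, List

NarrowOptimizer = Callable[[object], object]

# Toy narrow optimizers, each good at exactly one thing.
def arithmetic_solver(x):
    return sum(x)                      # e.g. reduce a list of numbers

def formatter(x):
    return f"result = {x}"             # e.g. turn a value into a report string

def planner(problem_type: str) -> List[NarrowOptimizer]:
    """The 'general' part: decide which narrow optimizers to run, and in what order."""
    if problem_type == "sum_and_report":
        return [arithmetic_solver, formatter]
    raise ValueError(f"no plan for problem type: {problem_type}")

def solve(problem_type: str, data):
    pipeline = planner(problem_type)
    result = data
    for step in pipeline:              # run one optimizer at a time,
        result = step(result)          # so only one is active at any moment
    return result

print(solve("sum_and_report", [1, 2, 3]))   # -> "result = 6"
```

The design choice the comment gestures at is exactly this: the expensive specialised work lives in the narrow optimizers, while the “general” part is a comparatively cheap routing and sequencing function.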
Full disclosure: I also didn’t really have a say in the matter, my dad said I had to learn it anyhow. So. I wonder if that’s because he was a Bayesian.
My working theory since ~1st grade, is that math is consistent and therefore worth learning. But of course, Goedel says I can’t prove it. I derive some Bayesian comfort though, as I see more and more mathematical propositions added to the pile of propositions proven true, and as they obligingly keep on not contradicting each other.
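Just to spell out what that “Bayesian comfort” could look like numerically, here is a toy calculation under assumptions I am inventing for illustration: a prior of 0.9 that mathematics is consistent, and a 0.1% chance per newly proven proposition that a contradiction would surface if mathematics were in fact inconsistent.

```python
# Toy Bayesian update, with made-up numbers:
# prior p = P(consistent); IF inconsistent, each new proposition exposes a
# contradiction with probability q. After N clean propositions:
#   P(consistent | N clean) = p / (p + (1 - p) * (1 - q)**N)

p, q = 0.9, 0.001
for n in (0, 100, 1000, 10_000):
    posterior = p / (p + (1 - p) * (1 - q) ** n)
    print(f"after {n:>6} non-contradicting propositions: {posterior:.4f}")
# The posterior creeps toward 1 as clean propositions pile up, which is roughly
# the comfort described above; the specific numbers are entirely made up.
```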
You are the President of the Snited Utates and you give a dictator in Airys a red line: if he uses chemical weapons, all hell is going to break loose, most definitely and for real. He simply ignores you and uses chemical weapons. You decide not to do anything about it, because who cares about political credibility anyway?
If I remember vaguely from my high school years, there was this guy once, called Thomas Hobbes. He suggested that the genealogy of the state is that of an institution which makes sure we respect our contractual commitments to each other. Or the constitution of the body politic as a fairly expedient way of enabling collaboration among people whose loyalty to each other cannot be guaranteed, except, as it turns out, that with police, jails and gallows it actually can. The problem of course, as it relates to any applicability to AI, is that this type of solution supposes that a collective actor can be created of overwhelming strength, several orders of magnitude greater than the strength of any individual actor.
Nice link to the Wikipedia article, thank you for that. “Koko, a female gorilla, was trained to use a form of American Sign Language. It has been claimed that she once tore a steel sink out of its moorings and when her handlers confronted her, Koko signed ‘cat did it’ and pointed at her innocent pet kitten.” That animal, Koko, was just incredible. Having watched her in a few videos, I find that story perfectly plausible...
Humm… fascinating downvotes. But what do they mean, really? They could mean either (i) that the Fermi paradox does not exist, and Fermi and everybody else who has written and thought about it since were just fools; or (ii) that the Fermi paradox exists, but thinking AI-driven extinction could be a solution to it is just wrong, for some reason so obvious that it does not even need to be stated (since none was stated by the downvoters). In both cases: fascinating insights… on the problem itself, on the audience of this site, on a lot of things really.
Ok, interesting reactions. So voluntarily priming ourselves for manipulation by a smarter being does appear to be a good idea and the way to go. But why? If the person who casts the next downvote could also bother to leave a line explaining it, I would be genuinely interested in hearing the rationale. There is obviously some critical aspect of the question I must be missing here...
Thinking back on it: that was actually an interesting slip of the tongue with the chimp tribe vs. troop. Tribes are highly, highly human social structures. What the slip of the tongue reveals is that pop culture has generally associated them with less sophisticated, lower-IQ, more primitive people. Hence we now find our chimps in a tribe. But if you think about it, there is a specific group, at the heart of our Western, sophisticated, industrial and capitalist world, that distinguishes itself through two essential features: i) it is high-IQ, and ii) it is unique among the groups of that world precisely in that it has retained much of its ancient tribal structure as a form of social organisation.
Cardiologists I don’t know, but podiatrists, let me tell you: shady to a degree.
Also, I don’t really get the “general intelligence is composite anyway” argument. Ok, I also believe that it is. But what would prevent an ASI from being developed as a well-coordinated set of many narrow optimizers?
Also, why the fixation on 12 SD? It’s not that high, really. It sounds high to a human evaluating another human. Bostrom made a good point on this: the need to step out of the anthropomorphic scale. This thing could very well reach 120 SD (the fact that we wouldn’t even know how to measure and recognize 120 SD is just an indication of our own limitations, nothing more), and make every human look like a clam.
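For a rough sense of scale (assuming the conventional IQ metric of mean 100 and standard deviation 15, which the comment above does not specify), the arithmetic would be:

```latex
% Assuming the conventional IQ scale (mean 100, SD 15) -- an assumption, not stated above.
\begin{align*}
100 + 12  \times 15 &= 280  && \text{(+12 SD)} \\
100 + 120 \times 15 &= 1900 && \text{(+120 SD)}
\end{align*}
```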
I’m sure this is completely missing the point, but there was at least one question left to ask, which turned out to be critical in this debate, namely: “has it cleared its neighboring region of other objects?”
More broadly I feel the post just demonstrates that sometimes we argue, not necessarily in a very productive way, over the definition, the defining characteristics, the exact borders, of a concept. I am reminded of the famous quip “The job of philosophers is first to create words and then argue with each other about their meaning.” But again—surely missing something…