Thanks for posting this link, and for the auxiliary comments. I try to follow these issues, as viewed from this sector of thinkers, pretty closely (the website Defense One often has good articles, and their tech reporter Patrick Tucker touches on some of these issues fairly often), but I had missed this paper until now. Grateful, as I say, for your posting of it.
NxGenSentience
Before we continue, one more warning. If you’re not already doing most of your thinking at least half-way along the 3 to 4 transition (which I will hereon refer to as reaching 4⁄3), you will probably also not fully understand what I’ve written below because that’s unfortunately also about how far along you have to be before constructive development theory makes intuitive sense to most people. I know that sounds like an excuse so I can say whatever I want, but before reaching 4⁄3 people tend to find constructive development theory confusing and probably not useful...
I understand this kind of bind. I am over in the AI-Bostrom forum, which I like very much. As it happens, I have been working on a theory with numerous parts that is connected to, and an extension of, ideas and existing theories drawn from several scientific and philosophical subdisciplines. I often find myself trying to meaningfully answer questions within the forum with replies that I cannot really make transparent and compelling cases for without having the rest of my theory on the table to establish context, motivation, justification and so on, because the whole theory (and its supporting rationale) is, size-wise, outside the word limit and scope of any individual post.
Sometimes I have tried anyway, squeezed it down, and my comments have ended up looking like cramped word salad because of the missing context, in the sense I presume your caution applies to your own remarks.
So I will have a look at the materials you counsel as prerequisite concepts, before I go on reading the rest of your remarks.
It is “with difficulty” that I am not reading further down, because agency is one of my central preoccupations, both in general mind-body considerations and in AI most particularly (not narrow AI, but human-equivalent and above, or “real AI” as I privately think of it).
In general, I have volumes to say about agency, and have been struggling to carve out a meaningful and coherent, scientifically and neuroscientifically informed set of concepts relating to “agency” for some time.
You also refer to “existential” issues of some kind, which can of course mean many things to many people. But this also makes me want to jump in whole hog and read what you are going to say, because I have also been giving detailed consideration to the role of “existential pressure” in the formation of features of naturalistic minds and sentience (i.e. in biological organisms, the idea being, of course, to then abstract this to more general systems), trying to see what it might amount to, in both ordinary and more unconventional terms, by looking at it through different templates, some more and some less humanly phenomenological.
A nice route, or stepping-stone path, for examining existential pressure principles is to go from the human case, to the general terrestrial-biological, to the exobiological (so far as we can reasonably speculate), and then finally on to AIs, once we have educated our intuitions a little.
The results emerging from those considerations may or may not suggest what we need to include, at least by suitable analogy, in AIs to make them “strivers”, or “agents”, or systems that deliberately do anything and have “motives” (as opposed to behaviors), desires, and so on…
We need some theory, or theory cluster, about what this may or may not contribute to the overall functionality of the AI and to its “understanding” of the world, which is (we hope) to be shared with us; so it is also front and center among my key preoccupations.
A timely clarifying idea I use frequently in discussing agency, when reminding people that not everything that exhibits behavior automatically qualifies for agency: do Google’s self-driving cars have “agency”? Do they have “goals”? My view is: “obviously not—that would be using ‘goal’ and ‘agency’ metaphorically.”
Going up the ladder of examples, we might consider someone sleepwalking, or a person acting-out a sequence of learned, habituated behaviors while in an absence seizure in epilepsy. Are they exhibiting agency?
The answers might be slightly less clear, and invite more contention, but given the pretty good evidence that absence seizures are not post-ictal failures to remember agency states, but are really automatisms (modern neurologists are remarkably subtle, open-minded to these distinctions, and clever in setting up scenarios which satisfactorily discriminate the difference), it seems, also, that lack of attention, intention, and praxis, i.e. missing agency, is the most accurate characterization.
Indeed, it is apparently satisfactory enough for experts who understand the phenomena that, even in the contemporary legal environment in which “insanity”-style defenses are out of fashion with judges and the public, a veridical establishment of sleepwalking and/or absence-seizure status (different cases, of course) while committing murder or manslaughter has, even in recent years, gotten some people “innocent” verdicts.
In short, most neurologists who are not in the grip of any dictums of behavioristic apologetics would say—here too—no agency, though information processing behavior occurred.
Indeed, in the case of absence seizures, we might further ask about metacognition vs. just cognition. But experimentally this is also well understood. Metacognition, or metaconsciousness, or self-awareness: all are now, by a large consensus, understood to be correlated with “Default Mode Network” activity.
Absence seizures under witnessed, lab conditions are not just departures from DMN activity. Indeed, all consciously, intentionally directed activity of any complexity that involves conscious attention to external activities or situations involves shutdown of DMN systems. (Look up Default Mode Network on PubMed if you want more.)
So absence-seizure behavior, which can be very complex (it can involve driving across town, etc.), is not agency misplaced or mislaid. It is actually unconscious, “missing-agent” automatism: a brain in a temporary zombie state, the way philosophers of mind use the term zombie.
But back to the autopilot cars, or autopilot Boeing 777s, automatic anythings… even the ubiquitous anti-virus daemons running in the background, which are automatically “watching” to intercept malware attacks. It seems clear that, while some of the language of agency might be convenient shorthand, it is not literally true.
Rather, these are cases of mere mechanical, Newtonian-level, deterministic causation from conditionally activated, preprogrammed behavior sequences. The activation conditions are deterministic. The causal chains thereby activated are deterministic, just as the interrupt service routines in an ISR jump table are all deterministic.
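To make that analogy concrete, here is a minimal sketch (in Python, with made-up condition and handler names) of the kind of conditionally activated, preprogrammed dispatch I have in mind: every “decision” is just a deterministic table lookup, and nothing in the loop is a plausible candidate for agency.

    # A toy "ISR jump table": deterministic dispatch from detected
    # conditions to preprogrammed handlers. All names are illustrative.

    def brake_hard(event):
        return "apply brakes for obstacle at %sm" % event["distance"]

    def quarantine_file(event):
        return "quarantine %s" % event["path"]

    def steer_within_lane(event):
        return "correct heading by %s degrees" % event["drift"]

    # The "jump table": condition label -> handler. Nothing here chooses;
    # the mapping was fixed in advance by the programmer.
    JUMP_TABLE = {
        "obstacle_detected": brake_hard,
        "malware_signature": quarantine_file,
        "lane_drift": steer_within_lane,
    }

    def dispatch(event):
        # Deterministic: the same event always yields the same action.
        handler = JUMP_TABLE.get(event["type"])
        return handler(event) if handler else "no-op"

    print(dispatch({"type": "obstacle_detected", "distance": 12}))
    print(dispatch({"type": "malware_signature", "path": "/tmp/x.bin"}))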
Anyway… agency is intimately at the heart of AGI-style AI, and we need to be as attentive and rigorous as possible about using the term literally vs. metaphorically.
I will check out your references and see if I have anything useful to say after I look at what you mention.
I didn’t exactly say that, or at least, didn’t intend to exactly say that. It’s correct of you to ask for that clarification.
When I say “vindicated the theory”, that was, admittedly, pretty vague.
What I should have said was that the recent experiments removed what has been, more or less, statistically the most common and persistent objection to the theory, by showing that quantum effects in microtubules, under the kind of environmental conditions that are relevant, can indeed be maintained long enough for quantum processes to “run their course” in a manner that, according to Hameroff and Penrose, makes a difference that can propagate causally to a level that is of significance to the organism.
Now, as to “decision making”: I am honestly NOT trying to be coy here, but that is not an entirely transparent phrase. I would have to take a couple thousand words to unpack it (not obfuscate, but unpack), and depending on this and that, and which sorts of decisions (conscious or preconscious, highly attended or habituated and automatic), the answer could be yes or no… that is, even given that consciousness “lights up” under the influence of microtubule-dependent processes as Orch OR suggests—admittedly something that, per se, is a further condition, for which quantum coherence within the microtubule regime is necessary but not sufficient.
But the latter is plausible to many people, given a pile of other suggestive evidence. The deal-breaker has always been whether quantum coherence can or cannot be maintained in the stated environs.
Orch OR is a very multifaceted theory, as you know, and I should not have said “vindicated” without very careful qualification. Removing a stumbling block is not proof of the truth of a theory with so many moving parts.
I do think, as a physiological theory of brain function, it has a lot of positives (some from vectors of increasing plausibility coming in from other directions, theorists and experiments) and the removal of the most commonly cited objection, on the basis of which many people have claimed Orch OR is a non-starter, is a pretty big deal.
Hameroff is not a wild-eyed speculator (and I am not suggesting that you are claiming he is.)
I find him interesting and worthy of close attention, in part because he has accumulated an enormous amount of evidence for microtubule effects, and he knows the math and presents it regularly.
I first read his Biomolecular Mind hardback book back in the early 90′s; he actually wrote it in the late 80′s, at which time he had already amassed quite a bit of empirical study regarding the role of microtubules in neurons, and in creatures without neurons, possessing only microtubules, that exhibit intelligent behavior.
Other experiments in various quarters over quite a few recent years (though there are still neurobiologists who disagree) have on the whole seemed to validate Hameroff’s claim that it is quantum effects—not “ordinary” synapse-level effects that can be described without use of the quantum level of description—that are responsible for anaesthesia’s effects on consciousness in living brains.
Again, not a proof of Orch OR, but an indication that Hameroff is, perhaps, on to some kind of right track.
I do think that evidence is accumulating, from what I have seen in PubMed and elsewhere, that microtubule effects at least partially modulate dendritic computations, and seem to mediate the rapid remodeling of the dendritic tree (spines come and go with amazing rapidity), making it likely that the “integrate and fire” mechanism involves microtubule computation, at least in some cases.
I have seen, for example, experiments that give microtubule-corrupting enzymes to some neurons, but not to controls, and observe dendritic-tree behavior. Microtubules are in the loop in learning, attention, etc. Quantum effects in MTs… the evidence seems to grow by the month.
But, to your ending question, I would have to say what I said… which amounts to “sometimes yes, sometimes no,” and in the ‘yes’ cases, not necessarily for the reasons that Hameroff thinks, but maybe partly, and maybe for a hybrid of additional reasons. Stapp’s views have a role to play here, I think, as well.
One of my “wish list” items would be to take SOME of Hameroff’s ideas and ask Stapp about them, and vice versa, in interviews, after carefully preparing questions and submitting them in advance. I have thought about how the two theories might complement each other, and about which parts of each might be independently verifiable and could be combined in a rationally coherent fashion that has some independent conceptual motivation (i.e. is other than ad hoc).
I am in the process of preparing and writing a lengthy technical question for Stapp, to clarify (and see what he thinks of a possible extension of) his theory of the relevance of the quantum Zeno effect.
I thought of a way the quantum Zeno effect, the way Stapp conceives of it, might be a way to resolve (with caveats) the simulation argument… i.e. to assess whether we are at the bottom level in the hierarchy, or are up on a sim. At least it would add another stipulation to the overall argument, which is significant in itself.
But that is another story. I have said enough to get me in trouble already, for a Friday night (grin).
Hi. Yes, for the Kickstarter option, that seems to be almost a requirement. People have to see what they are asked to invest in.
The Kickstarter option is somewhat my second-choice plan, or I’d be further along on that already. I have several things going on that are pulling me in different directions.
To expand just a bit on the evolution of my YouTube idea: originally – a couple of months before I recognized more clearly the value to the HLAI R & D community of doing well-designed, issue-sophisticated, genuinely useful (to other than a naïve audience) interviews with other thinkers and researchers—I had already decided to create a YouTube (hereafter, ‘YT’) channel of my own. This one will have a different, though complementary, emphasis.
This (first) YT channel will present a concentrated video course (perhaps 20 to 30 presentations in the plan I have, with more to be grown in as audience demand or reaction dictates). The course presentations, with myself at the whiteboard, plus graphics, video clips, and whatever can help make it both enjoyable and more comprehensible, will consist of essential ideas and concepts that are not only of use to people working on creating HLAI (and above), but are so important that they constitute essential background without which, I believe, people creating HLAI are at least partly floundering in the dark.
The value-add for this course comes from several things. I do have a gift for exposition. My time as a tutor and writer has demonstrated to me (from my audiences) that I have a good talent for playing my own devil’s advocate, listening and watching through audience ears and eyes, and getting inside the intuitions likely to occur in the listener. When I was a math tutor in college, I always did that from the outset, and was always complimented for it. My experience of studying this for decades and debating it, metabolizing all the useful points of view on the issues that I have studied – while always trying to push forward to find what is really true – allows me to gather many perspectives together, anticipate the standard objections or misunderstandings, and help people with less experience navigate the issues.
I have an unusual mix of accumulated areas of expertise—software development, neuroscience, philosophy, physics – which contributes to the ability to see and synthesize productive paths that might be (and have been) missed elsewhere. And perspective: enough time seeing intellectual fads come and go to recognize how they worked, even “before my time.” Unless one sees – and can critique or free oneself from – contextual assumptions, one is likely to be entrained within conceptual externalities that define the universe of discourse, possibly pruning away preemptively any chance for genuine progress and novel ideas. Einstein, Crick and Watson, Heisenberg and Bohr, all were able to think new thoughts and entertain new possibilities.
As someone just posted on Less Wrong: you have a certain number of weirdness points, spend them wisely. People in the grips of an intellectual trance, who don’t even know they are pruning away anything, cannot muster either the courage or the creativity to have any weirdness points to spend.
For example: apparently, very few people understand the context and intellectual climate… the formative “conceptual externalities” that permeated the intellectual ether at the time Turing proposed his “imitation game.”
I alluded to some of these contextual elements of what – then – was the intellectual culture, without providing any kind of exposition (in other words, just making the claim in passing), in my dual message to you and Luke, earlier today (Friday.)
That kind of thing – were it to be explained rigorously, articulately, engagingly—is a mild eye-opening moment to a lot of people (I have explained it before to people who are very sure of themselves, who went away changed by the knowledge.) I can open the door to questioning what seems like such a “reasonable dogma”, i.e. that an “imitation game” is all there is, and all there rationally could be, to the question of, and criteria for, human-equivalent mentality.
Neuroscience, as I wrote in the Bostrom forum a couple weeks ago (perhaps a bit too stridently in tone, and not to my best credit, in that case) is no longer held in the spell of the dogma that being “rational” and “scientific” means banishing consciousness from our investigation.
Neither should we be. Further, I am convinced that if we dig a little deeper, we CAN come up with a replacement for the Turing test (but first we have to be willing to look!) … some difference that makes a difference, and actually develop some (at least probabilistic) test(s) for whether a system that behaves intelligently, has, in addition, consciousness.
So, this video course will be a combination of selected topics in scientific intellectual history that are essential to understand in order to see where we have come from, and will then develop current and new ideas, to see where we might go.
I have a developing theory with elements that seem very promising. It is more than elements; it is becoming, by degrees, a system of related ideas that fit together perfectly, are partly based on accepted scientific results, and are partly extensions for which a strong, rational case can be made.
What is becoming interesting and exciting to me about the extensions, is that sometime during the last year (and I work on this every day, unless I am exhausted from a previous day and need to rest), the individual insights, which were exciting enough individually, and independently arguable, are starting to reveal a systematic cluster of concepts that all fit together.
This is extremely exciting, even a little scary at times. But suddenly, it is as if a lifetime of work and piecemeal study, with a new insight here, another insight there, a possible route of investigation elsewhere… all are fitting into a mosaic.
So, to begin with the point I began with, my time is pulling me in various directions. I am in the Bostrom forum, but on days that I am hot on the scent of another layer of this theory that is being born, I have to follow that. I do a lot of dictation when the ideas are coming quickly.
It is, of course, very complicated. But it will also be quite explainable, with systematic, orderly presentation.
So, that was the original plan for my own YT channel. It was to begin with essential intellectual history in physics, philosophy of mind, early AI, language comprehension, knowledge representation, formal semantics… and that ball of interrelated concepts that sets, to an extent, either correct or incorrect boundary conditions on what a theory has to look like.
Then my intent was to carefully present and argue for (and play devil’s advocate against) my new insights, one by one, then as a system.
I don’t know how it will turn out, or whether I will suddenly discover a dead end. But assuming no dead end, I want it out there where interested theorists can see it and judge it on its merits, up or down, or modify it.
I am going to run out of word allowance any moment. But it was after planning this that I thought of the opportunity to do interviews of other thinkers for possibly someone else’s YT channel. Both projects are obviously compatible. More later as interest dictates; I have to make dinner. Best, Tom NxGenSentience
Same question as Luke’s. I probably would have jumped at it, if only to make seed money to sponsor other useful projects, like the following.
I have a standing offer to make hi-def (1080p) video interviews, documentaries, and competent, penetrating Q&A sessions with key, relevant players and theoreticians in AI and related work. This includes individual thinkers, labs, Google’s AI work; the list is endless.
I have knowledge of AI and general comp sci, considerable knowledge of neuroscience and of the mind-body problem (philosophically understood in GREAT detail—my college honors thesis at UCB was on that), and deep, long-term evolutionary knowledge of all the big neurophilosophy players’ theories.
These players include, but are not limited to, Dennett, Searle, Dreyfus, and Turing, as well as modern players too numerous to mention, plus some under-discussed people like the LBL quantum physicist Henry Stapp (the quantum Zeno effect and its relation to the possibility of consciousness and free will), whose papers I have been following assiduously for 15 years and think are absolutely required reading for anyone in this business.
I have also closely followed Stuart Hameroff and Roger Penrose’s “Orch OR” theory—which has just been vindicated by major experiments refuting the long-running, standard objection to the possibility of quantum intra-neuronal processes (the objection based upon purportedly almost immediate, unavoidable quantum decoherence caused by the warm, wet, noisy brain milieu) -- an objection Hameroff, Penrose, occasionally Max Tegmark (who has waxed and waned a bit over the last 15 years on this one, as I read his comments all over the web) and others have mathematically dealt with for years, but which has lacked, until just this last year, empirical support.
Said support is now there—and with some fanfare, I might add, within the niche scientific and philosophical mind-body and AI-theoretic community that follows this work. The experiments vindicate core aspects of the theory (although they do not confirm the Platonic qualia aspect).
It is worth a digression, though, for those who see this message, so I will mention that, just as a physiological, quantum computational-theoretic account of how the brain does what it does… particularly how it implements dendritic processing (spatial and temporal summation, triggers to LTP, inter-neuron gap-junction transience, etc.), the dendritic tree being by consensus the neuronal locus of the bulk of a neuron’s integrate-and-fire decision making… this Orch OR theory is amazing in its implications. (Essentially it squares the estimate of the entire synaptic-level information-processing aggregate of the brain as a whole, for starters! I think this is destined to be a Nobel Prize-level theory eventually.)
I knew Hameroff on a first-name basis, and could, though it’s been a couple of years, rapidly trigger his memory of who I am—he held me in good regard -- and I could get an on-tape detailed interview with him at any time.
The point is… I have a standing offer to create detailed and theoretically competent—thus relevant—interviews, discussions, and documentaries, edit them professionally, make them available on DVD, or transcode them for someone’s branded YouTube channel (like MIRI’s, for example).
I got this idea when I was watching an early interview at Google with Kurzweil, by some bright-eyed twenty-something Googler, who was asking the most shallow, immature, clueless questions! (I thought at the time, “Jeez, is this the best they can find to plumb Kurzweil’s thinking on the future of AI at Google, or in general?”)
Anyway, no one has taken me up on that offer to create what could be terrific documentary-interviews, either. I have a six-thousand-dollar digital camera and professional editing software to do this with, not some pocket camera.
But more importantly, I have 25 years of detailed study of the mind-body problem and AI, and I can draw upon that to make interviews that COUNT: unique, relevant, and unparalleled.
AI is my life’s work (that, and the co-entailed problem of mind-body theory generally). I have been working hard to supplant the Turing test with something that tests for consciousness, instead of relying on the positivistic denial of the existence of consciousness qua consciousness, beyond behavior. That test came out of an intellectual soil that was dominated by positivism, which in turn was based on a mistaken and defective attempt to metabolize the Newtonian-to-quantum physical transition.
It’s partly based on a scientific ontology that is fundamentally false, and has been demonstrably so for 100 years—Newton’s deterministic clockwork-universe model that has no room for “consciousness”, only physical behavior—and partly based on an incomplete attempt to intellectually metabolize the true lessons of quantum theory (please see Henry Stapp’s papers, on his “stapp files” LBL website, for a crystal-clear set of expositions of this point).
No takers yet. So maybe I will have to go the Kickstarter route too, and do these documentaries myself, on my own branded YouTube channel. (It will be doing a great service to countless thinkers to have GOOD Q&A with their peers. I am not without my own original questions about their theories that I would like to ask, as well.)
It seems easier if I could get an existing organization like MIRI or even AAAI to sponsor my work, however. (I’d also like to cover the AAAI Turing test conference in January in Texas, and do this, but I need sponsorship at this point, because I am not independently wealthy. I am forming a general theory from which I think the keynote speaker’s “Lovelace 2.0” Turing Test 2 might actually be a derivable correlate.)
It’s nice to hear a quote from Wittgenstein. I hope we can get around to discussing the deeper meaning of this, which applies to all kinds of things… most especially, the process by which each kind of creature (bats, fish, Homo sapiens, and potential embodied artifactual (n.1) minds; also minds not embodied in the contemporaneously most-often-used sense of the term—Watson was not embodied in that sense) *constructs its own ontology* (or ought to, by virtue of being imbued with the right sort of architecture).
That latter sense, and the incommensurability of competing ontologies in competing creatures (where ‘creature’ is defined as a hybrid, an N-tuple, of cultural legacy constructs, an endemic evolutionarily bequeathed physiological sensorium, its individual autobiographical experience...), which is nonetheless not (in my view, in the theory I am developing) opaque to enlightened translatability—though the conceptual scaffolding for translation involves the nature of, purpose of, and boundaries, both logical and temporal, of the “specious present”, the quantum Zeno effect, and other considerations, so it is more subtle than meets the eye—is more of what Wittgenstein was thinking about, along with Kant’s answer to skepticism and lots of other issues.
Your more straightforward point bears merit, however. Most of us have spent a good deal of our lives battling not issue opacity so much as human opacity to new, expanded, revised, or unconventional ideas.
Note 1: By the way, I occasionally write ‘artifactual’ as opposed to ‘artificial’ because of the sense in which, as products of nature ourselves, everything we do—including building AIs—is, ipso facto, a product of nature, and hence ‘artificial’ is an adjective we should be careful about.
People do not behave as if we have utilities given by a particular numerical function that collapses all of their hopes and goals into one number, and machines need not do it that way, either.
I think this point is well said, and completely correct.
..
Why not also think about making other kinds of systems?
An AGI could have a vast array of hedges, controls, limitations, conflicting tendencies and tropisms which frequently cancel each other out and prevent dangerous action.
The book does scratch the surface on these issues, but it is not all about fail-safe mind design and managed roll-out. We can develop a whole literature on those topics.
I agree. I find myself continually wanting to bring up issues in that latter class… so copiously that it frequently feels like I am trying to redesign our forum topic. So, I have deleted numerous posts-in-progress that fall into that category. I guess those of us whose ideas about fail-safe mind design are more subtle—or, to put it more neutrally, do not fit the running paradigm in which the universe of discourse is that of transparent, low-dimensional utility functions (low-dimensional in the function’s range space, not its domain space)—need to start writing our own white papers.
When I hear that Bostrom claims only 7 people in the world are thinking full time and productively about (in essence) fail-safe mind design, or that someone at MIRI wrote that only FIVE people are doing so (though in the latter case, the author of that remark did say that there might be others doing this kind of work “on the margin”, whatever that means), I am shocked.
It’s hard to believe, for one thing. Though, the people making those statements must have good reasons for doing so.
But maybe the derivation of such low numbers would be more understandable if one stipulates that “work on the problem” is to be counted if and only if candidate people belong to the equivalence class of thinkers restricting their approach to this ONE, very narrow conceptual and computational vocabulary.
That kind of utility function-based discussion (remember when they were called ‘heuristics’ in the assigned projects, in our first AI courses?) has its value, but it’s a tiny slice of the possible conceptual, logical and design pie … about like looking at the night sky through a soda straw. If we restrict ourselves to such approaches, no wonder people think it will take 50 or 100 years to do AI of interest.
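As a purely illustrative sketch (all names, checks, and numbers below are hypothetical, not anyone’s actual proposal), the contrast I have in mind between a single collapsed utility number and a “vast array of hedges, controls, limitations, conflicting tendencies and tropisms” might look something like this:

    # Contrast: one scalar utility vs. many independent checks that can
    # each veto an action. Purely illustrative; names and numbers are made up.

    def scalar_utility(action):
        # Everything the agent "cares about" collapsed into one number.
        return 3.0 * action["paperclips"] - 1.0 * action["energy_cost"]

    # A bundle of independent tropisms/limits, each of which can block
    # an action outright, regardless of how the others score it.
    HEDGES = [
        lambda a: a["harm_estimate"] < 0.01,       # near-zero expected harm only
        lambda a: a["reversible"],                 # prefer reversible acts
        lambda a: a["resource_use"] < 100,         # hard resource ceiling
        lambda a: not a["self_modification"],      # no self-rewriting
    ]

    def permitted(action):
        # An action goes ahead only if every hedge agrees; conflicting
        # tendencies frequently cancel out and prevent dangerous action.
        return all(check(action) for check in HEDGES)

    candidate = {"paperclips": 10, "energy_cost": 2, "harm_estimate": 0.0,
                 "reversible": True, "resource_use": 5,
                 "self_modification": False}

    print("scalar utility:", scalar_utility(candidate))
    print("permitted under hedges:", permitted(candidate))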
Outside of the culture of collapsing utility functions and the like, I see lots of smart (often highly mathematical, so they count as serious) papers on whole-brain chaotic resonant neurodynamics; on new approaches to the foundations of mental-health issues and disorders of subjective empathy (even some application of deviant neurodynamics to deviant cohort value theory, and defective cohort “theory of mind”—in the neuropsychiatric and mirror-neuron sense) that are grounded in, say, pathologies of transient Default Mode Network coupling… and disturbances of phase-coupled equilibria across the brain.
If we run out of our own ideas to use from scratch (which I don’t think is at all the case … as your post might suggest, we have barely scratched the surface), then we can go have a look at current neurology and neurobiology, where people are not at all shy about looking for “information processing” mechanisms underlying complex personality traits, even underlying value and aesthetic judgements.
I saw a visual-system neuroscientist’s paper the other day offering a theory of why abstract (i.e. non-representational) art is so intriguing to (not all, but some) human brains. It was a multi-layered paper, discussing some transiently coupled neurodynamical mechanisms of vision (the authors’ specialties), some reward-system neuromodulator concepts, and some traditional concepts expressed at a phenomenological, psychological level of description. An ambitious paper, yes!
But ambition is good. I keep saying, we can’t expect to do real AI on the cheap.
A few hours or days reading such papers is good fertilizer, even if we do not seek to translate wetware brain research in any direct way (like copying “algorithms” from natural brains) into our goal, which presumably is to do dryware mind design -- and to do it in a way where we choose our own functional limits, rather than having nature’s 4.5 billion years of accidents choose boundary conditions on substrate platforms for us.
Of course, not everyone is interested in doing this. I HAVE learned in this forum that “AI” is a “big tent”. Lots of uses exist for narrow AI, in thousands of industries and fields. Thousands of narrow AI systems are already in play.
But, really… aren’t most of us interested in this topic because we want the more ambitious result?
Bostrom says “we will not be concerned with the metaphysics of mind...” and ”...not concern ourselves whether these entities have genuine self-awareness....”
Well, I guess we won’t be BUILDING real minds anytime soon, then. One can hardly expect to create that which one won’t even openly discuss. Bostrom is writing and speaking using the language of “agency” and “goals” and “motivational sets”, but he is only using those terms metaphorically.
Unless, that is, everyone else in here (other than me) actually is prepared to deny that we—who spawned those concepts, to describe rich, conscious, intentionally entrained features of the lives of self-aware, genuine conscious creatures—are different, i.e., that we are conscious and self-aware.
No one here needs a lesson in intellectual history. We all know that people did deny that, back in the behaviorism era. (I have studied the reasons—philosophical and cultural—and continue to uncover, in great detail, the mistaken assumptions out of which that intellectual fad grew.)
Only if we do THAT again will we NOT be using “agent” metaphorically when we apply it to machines with no real consciousness, because ex hypothesi WE’d possess no minds either, in the sense we all know we do possess them, as conscious humans.
We’d THEN be using it (“agent”, “goal”, “motive”… the whole equivalence class of related nouns and predicates) in the same sense for both classes of entities (ourselves, and machines with no “awareness”, where the latter is defined as anything other than public, third-person-observable behavior).
Only in this case would it not be a metaphor to use ‘agent’, ‘motive’, etc. in describing intelligent (but not conscious) machines, which evidently is the astringent conceptual model within which Bostrom wishes to frame HLAI, proscribing consideration, as he does, of whether they are genuinely self-aware.
But, well, I always thought that that excessively positivistic attitude had more than a little something to do with the “AI winter” (just as it is widely acknowledged to have been responsible for the neuroscience winter that paralleled it).
Yet neuroscientists are not embarrassed to now say, “That was a MISTAKE, and—fortunately—we are over it. We wasted some good years, but we are no longer wasting time denying the existence of consciousness, the very thing that makes the brain interesting and so full of fundamental scientific interest. And now, the race is on to understand how the brain creates real mental states.”
NEUROSCIENCE has gotten over that problem with discussing mental states qua mental states , clearly.
And this is one of the most striking about-faces in the modern intellectual history of science.
So, back to us. What’s wrong with computer science? Either AI-ers KNOW that real consciousness exists, just like neuroscientists do, and AI-ers just don’t give a hoot about making machines that are actually conscious.
Or, AI-ers are afraid of tackling a problem that is a little more interesting, deeper, and harder (a challenge that gets thousands of neuroscientists and neurophilosophers up in the morning).
I hope the latter is not true, because I think the depth and possibilities of the real thing—AI with consciousness—are what give it all its attraction (and hold, in the end, for reasons I won’t attempt to describe in a short post, the only possibility of making the things friendly, if not beneficent).
Isn’t that what gives AI its real interest? Otherwise, why not just write business software?
Could it be that Bostrom is throwing out the baby with the bathwater, when he stipulates that the discussion, as he frames it, can be had (and meaningful progress made), without the interlocutors (us) being concerned about whether AIs have genuine self awareness, etc?
My general problem with “utilitarianism” is that it’s sort of like Douglas Adams’ “42.” An answer of the wrong type to a difficult question. Of course we should maximize, that is a useful ingredient of the answer, but is not the only (or the most interesting) ingredient.
Taking off from the end of that point, I might add (though I think this was probably part of your total point here about “the most interesting” ingredient) that people sometimes forget that utilitarianism is not itself a theory about what is normatively desirable, or at least not much of one. For Bentham-style “greatest good for the greatest number” to have any meaning, it has to be supplemented with a view of what property, state of being, action type, etc., counts as a “good” thing to begin with. Once this is defined, we can then go on to maximize that—seeking to achieve the most of it, for the most people (or relevant entities).
But greatest good for the greatest number means nothing until we figure out a theory of normativity, or meta-normativity that can be instantiated across specific, varying situations and scenarios.
IF the “good” is maximizing simple total body weight, then adding up the body weight of all people in possible world A, vs in possible world B, etc, will allow us a utilitarian decision among possible worlds.
IF the “good” were fitness, or mental health, or educational achievement… we use the same calculus, but the target property is obviously different.
Utilitarianism is sometimes a person’s default answer, until you remind them that it is not an answer at all about what is good. It is just an implementation standard for how that good is to be divided up. Kind of a trivial point, I guess, but worth reminding ourselves from time to time that utilitarianism is not a theory of what is actually good, but of how that good might be distributed, if it admits of scarcity.
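To put the point in one toy calculation (the “good” functions below are deliberately silly placeholders, echoing the body-weight example above): utilitarianism supplies only the aggregation-and-maximization step, and you can swap in any theory of the good without changing that machinery at all.

    # Utilitarianism as a bare aggregation rule: swap in any "good"
    # function and the maximization machinery stays the same.

    def total(good, world):
        # Bentham-style aggregate: sum the chosen "good" over everyone.
        return sum(good(person) for person in world)

    # Placeholder theories of the good (deliberately silly, per the text).
    body_weight = lambda person: person["kg"]
    education = lambda person: person["years_of_school"]

    world_a = [{"kg": 70, "years_of_school": 16}, {"kg": 80, "years_of_school": 8}]
    world_b = [{"kg": 60, "years_of_school": 12}, {"kg": 65, "years_of_school": 14}]

    # The same calculus ranks possible worlds differently depending on
    # which "good" is plugged in -- the choice of good is doing the work.
    for name, good in [("body weight", body_weight), ("education", education)]:
        better = "A" if total(good, world_a) > total(good, world_b) else "B"
        print("maximizing %s favors world %s" % (name, better))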
One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is...
This is totally right as well. We live inside our ontologies. I think one of the most distinctive, and important, features of acting, successfully aware minds (I won’t call them “intelligences” because of what I am going to say further down in this message) is this capacity to mint new ontologies as needed, and to do it well, and successfully.
“Successfully” means the ontological additions are useful, somewhat durable constructs, “cognitively penetrable” to our kind of mind; they help us flourish and give a viable foundation for action that “works”, as well as not backing us into a local maximum or minimum… By that I mean this: “successful” minting of ontological entities enables us to mint additional ones that also “work”.
Ontologies create us as much as we create them, and this creative process is I think a key feature of “successful” viable minds.
Indeed, I think this capacity to mint new ontologies, and to do it well, is largely orthogonal to the two that Bostrom mentions. That gives three dimensions: 1) means-end reasoning (what Bostrom might otherwise call intelligence); 2) final or teleological selection of goals from the goal space; and, to my way of thinking, 3) minting of ontological entities “successfully” and well.
In fact, in a sense, I would put my third one in position one, ahead of means-end reasoning, if I were to give them a relative dependence. Even though orthogonal—in that they vary independently—you have to have the ability to mint ontologies, before means-end reasoning has anything to work on. And in that sense, Katja’s suggestion that ontologies can confer more power and growth potential (for more successful sentience to come), is something I think is quite right.
But I think all three are pretty self-evidentally largely orthogonal, with some qualifications that have been mentioned for Bostrom’s original two.
One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend.
I think the remarks about goals being ontologically-associated, are absolutely spot on. Goals, and any “values” distinguishing among the possible future goals in the agent’s goal space, are built around that agent’s perceived (actually, inhabited is a better word) ontology.
For example, the professional ontology of a wall street financial analyst includes the objects that he or she interacts with (options, stocks, futures, dividends, and the laws and infrastructure associated with the conceptual “deductive closure” of that ontology.)
Clearly, “final” (teleological and moral) principles involving approach and avoidance judgments… say, those involving insider trading (and the negative consequences at a practical level, if not the pure unethicality, of running afoul of the laws and rules of governance for trading those objects), are only defined within an ontological universe of discourse which contains those financial objects and the network of laws and valuations that define, and are defined by, those objects.
Smarter beings, or even ourselves, as our culture evolves, generation after generation becoming more complex, acquire new ontologies and gradually retire others. Identity theft mediated by surreptitious seeding of laptops in Starbucks with keystroke-logging viruses, is “theft” and is unethical. But trivially in 1510 BCE, the ontological stage on which this is optionally played out did not exist, and thus, the ethical valence would have been undefined, even unintelligible.
That is why, if we can solve the friendliness problem, it will have to be by some means that gives new minds the capacity to develop robust ethical meta-intuition that can be recruited creatively, on the fly, as these beings encounter new situations that call upon them to make new ethical judgements.
I happen to be a version of meta-ethical realist, just as I am something of a mathematical platonist, but in my position this is crossed with a type of constructivist metaethics, apparently like that subscribed to by John Danaher in his blog (after I followed the link and read it).
At least, his position sounds similar to mine, although the constructivist part of my theory is supplemented with a “weak” quasi-platonist thread that I am trying to derive from some more fundamental meta-ontological principles (work in progress on that).
To continue:
If there are untapped human cognitive-emotive-apperceptive potentials (and I believe there are plenty), then all the more openness to undiscovered realms of “value” knowledge, or experience, when designing a new mind architecture, is called for. To me, that is what makes HLAI (and above) worth doing.
But to step back from this wondrous, limitless potential and suggest some kind of metric based on the values of the “accounting department”, those who are famous for knowing the cost of everything but the value of nothing, and even more famous for derisively calling their venal, bottom-line, unimaginative dollars-and-cents worldview a “realistic” viewpoint (usually a constraint born of lack of vision) when faced with pleas for SETI grants, or (originally) money for the National Supercomputing Grid, or any of dozens of other projects that represent human aspiration at its best, seems, to me, shocking.
I found myself wondering if the moderator was saying that with a straight face, or (hopefully) putting on the hat of a good interlocutor and firestarter, trying to flush out some good comments because this week’s post activity was diminished.
Irrespective of that, another defect, as I mentioned, is that economics as we know it will prove to have been relevant for an eyeblink in the history of the human species (assuming we endure). We are closer to the end of this kind of scarcity-based economics than to the beginning (assuming even one or more singularity-style scenarios come to pass, like nano).
It reminds me of the ancient TV series Star Trek: The Next Generation, in an episode wherein someone from our time ends up aboard the Enterprise of the future and is walking down a corridor speaking with Picard. The visitor asks Picard something like “Who pays for all this?” as he takes in the impressive technology of the 23rd-century vessel.
Picard replies something like, “The economics of the 23rd century are somewhat different from your time. People no longer arrange their lives around the constraint of amassing material goods....”
I think it will be amazing if, even in 50 years, economics as we know it has much relevance. Still less so in future centuries, if we, or our post-human selves, are still here.
Thus, economic measures of “value” or “success” are about the least relevant metric we ought to be using to assess what possible criteria we might adopt to track evolving “intelligence”, in the applicable, open-ended, future-oriented sense of the term.
Economic—i.e. marketplace-assigned—“value” or “success” is already pretty evidently a very limiting, exclusionary way to evaluate achievement.
Remember: economic value is assigned mostly by the middle of the intelligence bell curve. This world is designed BY, and FOR, largely, ordinary people, and they set the economic value of goods and services to a large extent.
Interventions in free market assignment of value are mostly made by even “worse” agents… greed-based folks who are trying to game the system.
Any older people in here might remember former Senator William Proxmire’s “Golden Fleece” award in the United States. The idea was to ridicule any spending that he thought was impractical and wasteful, or stupid.
He was famous for assigning it to NASA probes to Mars, the Hubble Telescope (in its several incarnations), the early NSF grants for the Human Genome Project, National Institute of Mental Health programs, studies of power-grid reliability—anything that was of real value in science, art, medicine… or human life.
He even wanted to close the Library of Congress at one point.
THAT is what you get when you use ECONOMIC measures to define the metric of “value”, intelligence or otherwise.
So, it is a bad idea, in my judgement, any way you look at it.
Ability to generate economic “successfulness” in inventions, organizational restructuring… branding yourself or your skills, whatever? I don’t find that compelling.
Again, look at professional sports, one of the most “successful” economic engines in the world: a bunch of narcissistic, girlfriend-beating pricks and racist team owners… but by economic standards, they are alphas.
Do we want to attach any criterion—even indirect—of intellectual evolution, to this kind of amoral morass and way of looking at the universe?
Back to how I opened this long post. If our intuitions start running thin, that should tell us we are making progress toward the front lines of new thinking. When our reflexive answers stop coming, that is when we should wake up and start working harder.
That’s because this—intelligence, mind augmentation or redesign—is such a new thing. The ultimate opening-up of horizons. Why bring the most idealistically blind, suffocatingly concrete worldview along into the picture, when we have a chance at transcendence, a chance to pursue infinity?
We need new paradigms, and several of them.
Thanks, I’ll have a look. And just to be clear, watching “The Machine” wasn’t driven primarily by prurient interest—I was drawn in by a reviewer who mentioned that the backstory for the film was a near-future worldwide recession pitting the West against China, with intelligent battlefield robots and other devices as the “new arms race” in this scenario.
That, and the film reviewer’s mention that (i) the robot designer used quantum computing to get his creation to pass the Turing Test (a test I have doubts about, as do other researchers, of course, but I was curious how the film would use it), and that (ii) the project designer nonetheless continued to grapple with the question of whether his signature humanoid creation was really conscious or a “clever imitation”, pulled me in.
(He verbally challenges and confronts her/it about this in his lab, in an outburst of frustration, roughly two thirds of the way through the movie, and she verbally parries with plausible responses.)
It’s really not all that weak, as film depictions of AI go. It’s decent entertainment with enough threads of backstory authenticity, political and philosophical, to tweak one’s interest.
My caution, really, was a bit harsh; applying largely to the uncommon rigor of those of us in this group—mainly to emphasise that the film is entertainment, not a candidate for a paper in the ACM digital archives.
However, indeed, even the use of a female humanoid form makes tactical design sense. If a government could make a chassis that “passed” the visual test and didn’t scream “ROBOT” when it walked down the street, it would have much greater scope of tactical application—covert ops, undercover penetration into terrorist cells, what any CIA clandestine operations officer would be assigned to do.
Making it look like a woman just adds to the “blend into the crowd” potential, and that was the justification hinted at in the film, rather than some kind of sexbot application. “She” was definitely designed to be the most effective weapon they could imagine (a British-funded military project.)
Given that over 55 countries now have battlefield robotic projects under way (according to Kurzweil’s weekly newsletter) -- and Google got a big DOD project contract recently, to proceed with advanced development of such mechanical soldiers for the US government—I thought the movie worth a watch.
If you have 90 minutes of low-priority time to spend (one of those hours when you are mentally too spent to do more first quality work for the day, but not yet ready to go to sleep), you might have a glance.
Thanks for the book references. I read mostly non-fiction, but I know sci fi has come a very long way, since the old days when I read some in high school. A little kindling for the imagination never hurts. Kind regards, Tom (“N.G.S”)
Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.
I am not sure I am very sympathetic with a pattern of thinking that keeps cropping up, viz., as soon as our easy and reflexive intuitions about intelligence become strained, we seem to back down the ladder a notch, and propose just using an economic measure of “success”.
Aside from (i) somewhat of a poverty of philosophical imagination (e.g. what about measuring the intrinsic interestingness of ideas, or creative output of various kinds… or even, dare I say, beauty, if these superintellects happen to find that worth pursuing [footnote 1]), I am skeptical on grounds of (ii): given the phase change in human society likely to accompany superintelligence (or nano, etc.), what kind of economic system is likely to be around in the 22nd century, the 23rd… and so on?
Economics, as we usually use the term, seems as dinosaur-like as human death, average IQs of 100, energy availability problems, the nuclear biological human family (already DOA); having offspring by just taking the genetic lottery cards and shuffling… and all the rest of social institutions based on eons of scarcity—of both material goods, and information.
Economic productivity, or perceived economic value, seems like the last thing we ought to base intelligence metrics on. (Just consider the economic impact of professional sports—hardly a measure of meteoric intellectual achievement.)
[Footnote 1]: I have commented in here before about the possibility that “super-intelligences” might exhibit a few surprises for us math-centric, data dashboard-loving, computation-friendly information hounds.
(Aside: I have been one of them most of my life, so no one should take offense. Starting far back: I was the president of Mu Alpha Theta, my high school math club, in a high school with a special advanced math program track for mathematically gifted students. Later, while a math major at UC Berkeley, I got virtually straight As and never took notes in class; I just went to class each day, sat in the front row, and paid attention. I vividly remember getting the exciting impression, as I was going through the upper-division math courses, that there wasn’t anything I couldn’t model.)
After graduation from UCB, at one point I was proficient in six computer languages. So, I do understand the restless bug, the urge to think of a clever data structure and start coding… the impression that everything can be coded, with enough creativity.
I also understand what mathematics is, pretty well. For starters, it is a language. A very, very special language with deep connections to the fabric of reality. It has features that make it one of the few, perhaps only candidate language, for being level of description independent. Natural languages, and technical domain-specific languages are tied to corresponding ontologies and to corresponding semantics’ that enfold those ontologies. Math is the most omni-ontological, or meta-ontological language we have (not counting brute logic, which is not really a “language”, but a sort of language substructure schema.
Back to math. It is powerful, and an incredible tool, and we should be grateful for the "unreasonable effectiveness" it has (and continue to try to understand the basis for that!)
But there are legitimate domains of content beyond numbers. Other ways of experiencing the world’s (and the mind’s) emergent properties. That is something I also understand.
So, gee, thanks to whoever gave me the negative two points. It says more about you than it does about me, because my nerd "street cred" is pretty secure.
I presume the reader "boos" are because I dared to suggest that a superintelligence might be interested in, um, "art", like the conscious robot in the film I mention below, who spends most of its free time seeking out sketch pads, drawing, and asking for music to listen to. Fortunately, I don't take polls before I form viewpoints, and I stand by what I said.
Now, to continue my footnote: Imagine that you were given virtually unlimited computational ability, imperishable memory, ability to grasp the “deductive closure” of any set of propositions or principles, with no effort, automatically and reflexively.
Imagine also that you have something similar to sentience or autonomy, and can choose your own goals. Suppose also, say, that your curiosity functions in such a way that "challenges" are more "interesting" to you than activities whose outcome is always a fait accompli.
What are you going to do? Plug yourself into the net, and act like an Asperger-spectrum mentality, compulsively computing away at everything that you can think of to compute?
Are you going to find pi to a hundred million digits of precision?
Invert giant matrices just for something to do?
It seems at least logically and rationally possible that you will be attracted to precisely those activities that are not computational givens before you even begin doing them. You might view the others as pointless, because their solution is preordained.
Perhaps you will be intrigued by things like art, painting, or increasingly beautiful virtual reality simulations for the sheer beauty of them.
In case anyone saw the movie “The Machine” on Netflix, it dramatizes this point, which was interesting. It was, admittedly, not a very deep film; one inclined to do so can find the usual flaws, and the plot device of using a beautiful female form could appear to be a concession to the typically male demographic for SciFi films—until you look a bit deeper at the backstory of the film (that I mention below.)
I found one thing of interest: when the conscious robot was left alone, she always began drawing again, on sketch pads.
And, in one scene wherein the project leader returned to the lab, did he find “her” plugged-into the internet, playing chess with supercomputers around the world? Working on string theory? Compiling statistics about everything that could conceivably be quantified?
No. The scene finds the robot (in the film, it has sensory-responsive skin, emotions, sensory apparatus, etc., based upon ours) alone in a huge warehouse, having put a layer of water on the floor, doing improvisational dance with joyous abandon, naked, on the wet floor, to loud classical music, losing herself in the joy of physical freedom, sensual movement, music, and the synesthesia of music, light, tactility, and the experience of "flow".
The explosions of light leaking through her artificial skin, in what presumably were fiber ganglia throughout her (its) body, were a demure suggestion of whole-body physical joy of movement, perhaps even an analogue of sexuality. (She was designed partly as an em, with a brain scan process based on a female lab assistant.)
The movie is worth watching just for that scene (please—it is not for viewer eroticism) and what it suggests to those of us who imagine ourselves overseeing artificial sentience design study groups someday. (And yes, the robot was designed to be conscious, by the designer, hence the addition to the basic design, of the “jumpstart” idea of uploading properties of the scanned CNS of human lab assistant.)
I think we ought to keep open our expectations, when we start talking about creating what might (and what I hope will) turn out to be actual minds.
Bostrom himself raises this possibility when he talks about untapped cognitive abilities that might already be available within the human potential mind-space.
I blew a chance to talk at length about this last week. I started writing up a paper, and realized it was more like a potential PhD dissertation topic, than a post. So I didn’t get it into usable, postable form. But it is not hard to think about, is it? Lots of us in here already must have been thinking about this. … continued
If we could easily see how a rich conception of consciousness could supervene on pure information
I have to confess that I might be the one person in this business who never really understood the concept of supervenience—either “weak supervenience” or “strong supervenience.” I’ve read Chalmers, Dennett, the journals on the concept… never really “snapped-in” for me. So when the term is used, I have to just recuse myself and let those who do understand it, finish their line of thought.
To me, supervenience seems like a fuzzy way to repackage epiphenomenalism, or to finesse some kind of antinomy (for them), like: "can't live with eliminative materialism, can't live with dualism, can't live with type-type identity theory, and token-token identity theory is untestable and difficult even to give logically necessary and sufficient conditions for, so… let's have a new word."
So, (my unruly suspicion tells me) let's say mental events (states, processes, whatever) "supervene" on physiological states (events, etc.). As I say, so far, I have just had to suspend judgement and wonder if some day "supervene" will snap-in and be intuitively penetrable to me. I push all the definitions, and get to the same place, an "I don't get it" place, but that doesn't mean I believe the concept is itself defective. I just have to suspend judgement (like, for the last 25 years of study or so.)
We need more in our ontology, not less.
I actually believe that, too… but with a unique take: I think we all operate with a logical ontology … not in the sense of modus ponens, but in the sense that a memory space can be “logical”, meaning in this context, detached from physical memory.
Further, the construction of this logical ontology is, I think, partly culturally influenced, partly influenced by the species' sensorium and equipment, and partly influenced/constructed by something like Jeff Hawkins' prediction-expectation memory model: constructed, bequeathed culturally, and tuned in several additional related ways that shape the idealized, logical ontology.
Memetics also influences (in conjunction with native, although changeable, abilities in those memes' host vectors) the genesis, maintenance, and evolution of this "logical ontology". This is both feed-forward and feed-backward: memetics influences the logical ontology, which crystallizes into additional memetic templates that are kept, tuning the logical ontology further.
Once "established" (and it constantly evolves), this "logical" ontology is the "target" that a new person (say, a human, while growing up and growing old) builds a virtual, phenomenological analog simulation of over time; as the person gains experience, the person's virtual-reality simulation of the world converges on something that is in some way consistently, isomorphically related to this idealized "logical" ontology.
So (and there is a lot of neurology research that drives much of this, though it may all sound rather speculative), for me there are TWO ontologies, BOTH of them constructed, and those are in addition to the entangled "outside world" quantum substrate, which is by definition inherently both sub-ontological (properly understood) and not sensible. (It is sub-ontological because of its nature, but it is interrogatable, giving feedback that helps form boundary conditions for the idealized logical ontology, or ontologies, in different species.)
I’ll add that I think the “logical ontology” is also species dependent, unsurprisingly.
I think you and I got off on the wrong foot; maybe you found my tone too declaratory when it should have been phrased more subjunctively. I'll take your point. But since you obviously have a philosophy competence, you will know what the following means: one can say my views resemble an updated quasi-Kantian model, supplemented with the idea that noumena are the inchoate quantum substrate.
Or perhaps to correct that, in my model there are two “noumenal” realms: one is the “logical ontology” I referred to, a logical data structure, and the other is the one below that, and below ALL ontologies, which is the quantum substrate, necessarily “subontological.”
But my theory (there is more than I have just shot through quickly right now) handles species-relative qualia and the species-relative logical ontologies across species.
Remaining issues include: how qualia are generated, and the same question for the sense of self. I have ideas on how to solve these, and the indexical first-person problem, connected with the basis problem. Neurology studies of default mode network behavior and architecture, its malfunction, and metacognition, epilepsy, etc., help a lot.
Think this is speculative? You should read neurologists these days, especially the better, data-driven ones. (Perhaps you already know, and you will thus see where I derive some of my supporting research.)
Anyway, always, always, I am trying to solve all this in the general case: first across biological conscious species (a bird has a different "logical" ontology than people, as well as a different phenomenological reality that, to varying degrees of precision, "represents", maps to, or has a recurrent resonance with that species' logical ontology), and then for any general mind in mind space that has to live in this universe.
It all sounds like hand waving, perhaps. But this is scarcely an abstract. There are many puzzle pieces to the theory, and every piece of it has lots of specific research behind it. It is all progressively falling together into an integrated system. I need geffen graphs and white boards to explain it, since it's a whole theory, so I can't squeeze it into one post. Besides, this is Bostrom's show.
I’ll write my own book when the time comes—not saying it is right, but it is a promising effort so far, and it seems to work better, the farther I push it.
When it is far enough along, I can test it on a vlog, and see if people can find problems. If so, I will revise, backtrack, and try again. I intend to spend the rest of my life doing this, so discovered errors are just part of revision and refinement.
But first I have to finish, then present it methodically and carefully, so it can be evaluated by others. No space here for that.
Thanks for your previous thoughts, and your caution against sounding too certain. I am really NOT that certain, of course, of anything. I was just thinking out loud, as they say.
This week is pretty much closed… cheers...
Thanks for the very nice post.
Three types of information in the brain (and perhaps other platforms), and (coming soon) why we should care
Before I make some remarks, I would recommend Leonard Susskind's very accessible 55-minute YouTube presentation called "The World as Hologram" (for those who don't know him already, though most folks in here probably do: he is a physicist at the Stanford Institute for Theoretical Physics). It is not as corny as it might sound, but is a lecture on the indestructibility of information, black holes (which are a convenient lodestone for him to discuss the physics of information and his debate with Hawking), types of information, and so on. He makes the point that "…when one rules out the impossible, then what is left, however improbable, is the best candidate for truth."
One interesting side point that comes out is his take on why computers that are more powerful have to shed more "heat". Here is the talk: http://youtu.be/2DIl3Hfh9tY
Okay, my own remarks. One of my two or three favorite ways to "bring people in" to the mind-body problem is with some of the ideas I am now presenting. This will be in skeleton form tonight, and I will come back and flesh it out more in coming days. (I promised last night to get something up tonight on this topic, and in case anyone cares and came back, I didn't want to have nothing. I actually have a large piece of theory I am building around some of this, but for now, just the three kinds of information, in abbreviated form.)
Type One information is the sort dealt with, referred to, and treated in thermodynamics and entropy discussions. It is dealt with analytically in the Second Law of Thermodynamics. Here is one small start, but most will know it: en.wikipedia.org/wiki/Second_law_of_thermodynamics
Heat, energy, information, the changing logical positions within state spaces of entities or systems of entities, all belong to what I am calling category one information in the brain. We can also call this “physical” information. The brain is pumped—not closed—with physical information, and emits physical information as well.
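For concreteness, the standard quantitative face of this "physical" information is just statistical-mechanical entropy, and Susskind's point about more powerful computers shedding more heat is, I take it, essentially Landauer's bound on erasing logical state (my gloss, in standard notation, not his wording):

\[
S = k_B \ln W, \qquad S = -k_B \sum_i p_i \ln p_i,
\]

and erasing one bit of logical state dissipates at least

\[
E \ge k_B T \ln 2
\]

of heat into an environment at temperature \(T\). Nothing in these formulas says anything about what, if anything, the states are about; that is precisely the point of the next two categories.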
Note that there is no semantic, referential, externally cashed-out content defined for physical, thermodynamic information, qua physical information. It is, though possibly thermodynamically open, an otherwise closed universe of discourse, needing nothing logically or ontologically external in order to characterize it analytically.
Type Two information in the brain (please assign no significance to my ordering, just yet) is functional. It is a carrier, or mediator, of causal properties, in functionally larger physical ensembles, like canonical brain processes. The “information” I direct attention to here must be consistent with (i.e. not violate principles of) Category One informational flow, phase space transitions, etc., in the context of the system, but we cannot derive Category Two information content (causal loop xyz doing pqr) from dynamical Category One data descriptions themselves.
In particular, imagine that we deny the previous proposition. We would need either an isomorphism from Cat One to Cat Two, or at least an "onto" function from Cat One to Cat Two (hope I wrote that right, it's late). Clearly, Cat One configurations to Cat Two configurations are many-many: not isomorphic, nor many-to-one. (And one-to-many transformations from Cat One sets to Cat Two sets would be intuitively unsatisfactory if we were trying to build an "identity", or a transform to derive C2 specifics from C1 specifics.)
It would resemble replacing type-type identity with token-token identity, jettisoning both sides of the Leibniz's Law biconditional ("identity of indiscernibles" and "indiscernibility of identicals", applied with suitable limits so as not to sneak anything in by misusing sortal ranges of predicates or making category errors in the predications).
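To make the shape of that argument explicit (this is just the textbook formulation; the symbols C1 and C2 for the two categories' configuration spaces are my own shorthand for this post):

\[
x = y \;\leftrightarrow\; \forall F\,(Fx \leftrightarrow Fy)
\]

where the left-to-right direction is the indiscernibility of identicals and the right-to-left direction is the identity of indiscernibles. A reduction of Category Two to Category One would want at least a well-defined onto map \(f : C_1 \to C_2\), so that every Cat Two state is fixed by some Cat One configuration, and ideally a structure-preserving bijection, i.e. an isomorphism. If instead one Cat Two state answers to many Cat One configurations and one Cat One configuration can figure in many Cat Two descriptions, the relation is many-many, and no such function, let alone an isomorphism, is available.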
Well, this is a stub, and because of my sketchy presentation, this might be getting opaque, so let me move on to the next information type, just to get all three out.
Type Three information is semantic, or intentional-content, information. If I am visualizing very vibrantly a theta symbol, the intentional content of my mental state is the theta symbol on whatever background I visualize it against. A physical state of, canonically, Type Two information, which is a candidate, in a particular case, to be the substrate-instantiation or substrate-realization of this bundle of Type Three information (probably at least three areas of my brain, frequency coupled and phase-offset locked, until a break in my concentration occurs), is also occurring.
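(As an aside on what "frequency coupled and phase-offset locked" means operationally: here is a minimal sketch, in Python, of the phase-locking value that is commonly computed between two band-limited signals; the toy signals and every parameter below are invented purely for illustration, not taken from any particular study.)

    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(x, y):
        """Phase-locking value between two band-limited signals.

        Close to 1 when the instantaneous phase difference stays nearly
        constant (locked, possibly at a fixed offset); close to 0 when
        the phase difference wanders freely.
        """
        phase_x = np.angle(hilbert(x))   # instantaneous phase of x
        phase_y = np.angle(hilbert(y))   # instantaneous phase of y
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    # Toy example: two 10 Hz signals with a fixed 90-degree offset plus noise.
    fs = 1000
    t = np.arange(0, 2, 1 / fs)
    a = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)
    b = np.sin(2 * np.pi * 10 * t + np.pi / 2) + 0.2 * np.random.randn(t.size)
    print(phase_locking_value(a, b))     # prints a value close to 1.0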
A liberal and loose way of describing Type Three info (that will raise some eyebrows because it has baggage, so I use it only under duress: temporary poverty of time and the late hour, to help make the notion easy to spot) is that a Type Three information instance is a “representation” of some element, concept, or sensible experience of the “perceived” ontology (of necessity, a virtual, constructed ontology, in fact, but for this sentence, I take no position about the status of this “perceived”, ostensible virtual object or state of affairs.)
The key idea I would like to encourage people to think about is whether the three categories of information are (a) legitimate categories, and mainly (b) whether they are collapsible, inter-translatable, or are just convenient shorthand level-of-description changes. I hope the reader will see, on the contrary, that one or more of them are NOT reducible to a lower one, and that this has lessons about mind-substrate relationships that point out necessary conceptual revisions—and also opportunities for theoretical progress.
It seems to me that reducing Cat Two to Cat One is problematic, and reducing Cat Three to Cat Two is problematic, given the usual standards of "identity" used in logic (e.g. (i) Leibniz's Law; (ii) modal logic's notions of identity across possible worlds; and so on.)
Okay, I need to clean this up. It is just a stub. Those interested should come back and see it better written, and expanded to include replies to what I know are expected objections, questions, etc. C2 and C3 probably sound like the "same old thing", the mind-body problem about experience vs. neural correlate. Not quite. I am trying to get at something additional here. Hard without diagrams.
Also, I have to present much of this without any context… like presenting a randomly selected lecture from some course, without building up the foundational layers. (That is why I am putting together a YouTube channel of my own, to go from scratch to something like this after about 6 hours of presentation… then on to a theory of which this is one puzzle piece.)
Of course, we are here to discuss Bostrom’s ideas, but this “three information type” idea, less clumsily expressed, does tie straightforwardly to the question of indirect reach, and “kinds of better” that different superintelligences can embrace.
Unfortunately I will have to establish that conceptual link when I come back and clean this up, since it is getting so late. Thanks to those who read this far...
Well, I ran several topics together in the same post, and that was perhaps careless planning. And, in any case I do not expect slavish agreement just because I make the claim.
And, neither should you, just by flatly denying it, with nary a word to clue me in about your reservations about what has, in the last 10 years, transitioned from a convenient metaphor in quantum physics, cosmology, and other disciplines, to a growing consensus about the actual truth of things. (Objections to this growing consensus, when they actually are made, seem to be mostly arguments from guffaw, resembling the famous "I refute it thus" joke about Berkeleyan idealism.)
By the way, I am not defending Berkeleyan idealism, still less the theistic underpinning that kept popping up in his thought (I am an atheist.)
Rather, unlike most thinkers, who cite the famous joke about someone kicking a solid object as a "proof" that Berkeley's virtual phenomenalism was self-evidently foolish, my point in using that joke is to show that it misses the point. Of course it seems, phenomenologically, like the world is made of "stuff".
And information doesn’t seem to be “real stuff.” (The earth seems flat, too. So what?)
Had we time, you and I could debate the relative merits of an information-based, scientifically literate metaphysics, with whatever alternate notion of reality you subscribe to in its place, as your scientifically literate metaphysics.
But make no mistake, everyone subscribes to some kind of metaphysics, just as everyone has a working ontology—or candidate, provisional set of ontologies.
Even the most "anti-metaphysical" theorists are operating from a (perhaps unacknowledged) metaphysics and working ontology; it is just that they think theirs, because it is invisible to them, is beyond need of conceptual excavation and clarification, and beyond the reach of critical, rational examination, whereas other people's metaphysics is actually a metaphysics (argh), and thus carries an elevated burden of proof relative to their own ontology.
I am not saying you are like this, of course. I don’t know your views. As I say, it could be the subject of a whole forum like this one. So I’ll end by saying disagreement is inevitable, especially when I just drop in a remark as I did, about a topic that is actually somewhat tangential (though, as I will try to argue as the forum proceeds, not all that tangential.)
Yes, Bostrom explicitly says he is not concerned with the metaphysics of mind, in his book. Good for him. It’s his book, and he can write it any way he chooses.
And I understand his editorial choice. He is trained as a philosopher, and knows as well as anyone that there are probably millions of pages written about the mind body problem, with more added daily. It is easy to understand his decision to avoid getting stuck in the quicksand of arguing specifics about consciousness, how it can be physically realized.
This book obviously has a different mission. I have written for publication before, and I know one has to make strategic choices (with one’s agent and editor.)
Likewise, his book is also not about “object-level” work in AI—how to make it, achieve it, give it this or that form, give it “real mental states”, emotion, drives. Those of us trying to understand how to achieve those things, still have much to learn from Bostrom’s current book, but will not find intricate conceptual investigations of what will lead to the new science of sentience design.
Still, I would have preferred if he had found a way to "stipulate" conscious AI, along with speed AI, quality AI, etc., as one of the flavors that might arise. Then we could address questions under 4 headings, 4 possible AI worlds (not necessarily mutually exclusive, just as the three from this week are not mutually exclusive.)
The question of the “direct reach” of conscious AI, compared to the others, would have been very interesting.
It is a meta-level book about AI, deliberately ambiguous about consciousness. I think that makes the discussion harder, in many areas.
I like Bostrom. I’ve been reading his papers for 10 or 15 years.
But avoiding or proscribing the question of whether we have consciousness AND intelligence (vs simply intelligent behavior sans consciousness) thus pruning away, preemptively, issues that could depend on: whether they interact; whether the former increases causal powers—or instability or stability—in the exercise of the latter; and so on, keeps lots of questions inherently ambiguous.
I’ll try to make good on that last claim, one way or another, during the next couple of weekly sessions.
A cell can be in a huge number of internal states. Simulating a single cell in a satisfactory way will be impossible for many years. What portion of this detail matters to cognition, however? If we have to consider every time a gene is expressed or protein gets phosphorylated as an information processing event, an awful lot of data processing is going on within neurons, and very quickly.
I agree not only with this sentence, but with this entire post. Which of the many, many degrees of freedom of a neuron are "housekeeping" and don't contribute to "information management and processing" (quotes mine, not SteveG's) is far from obvious. It seems likely to me that, even with a liberal allocation of the total degrees of freedom of a neuron to some sub-partitioned equivalence class of "mere" (see following remarks for my reason for quotes) housekeeping, there are likely to be many, many remaining nodes in the directed graph of that neuron's phase space that participate in the instantiation and evolution of an informational state of the sort we are interested in (non-housekeeping).
And, this is not even to mention adjacent neuroglia, etc, that are in that neuron’s total phase space, actively participating in the relevant (more than substrate-maintenance) set of causal loops—as I argued in my post that WBE is not well-defined, a while back.
Back to what SteveG said about the currently unknown level of detail that matters (to the kind of information processing we are concerned with; more later about this very, very important point). For now: we must not be too temporally-centric, i.e. thinking that the dynamically evolving information-processing topology a neuron makes relevant contributions to is bounded, temporally, by a window beginning with dendritic and membrane-level "inputs" (receptor occupation, prevailing ionic environment, etc.), and ending with one depolarization, exocytosis, and/or the reuptake and clean-up shortly thereafter.
The gene expression-suppression and the protein turnover within that neuron should, arguably, also be thought of as part of the total information processing action of the cell… leaving this out is not describing the information processing act completely. Rather, it is arbitrarily cutting off our “observation” right before and after a particular depolarization and its immediate sequelae.
The internal modifications of genes and proteins that are going to affect future information processing (no less than training of ANNs affects the future behavior of the ANN within that ANN's information ecology) should be thought of, perhaps, as a persistent type of data structure in their own right. LTP of the whole ecology of the brain may occur on many levels beyond canonical synaptic remodeling.
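To make the "persistent data structure" framing concrete, here is a deliberately cartoonish sketch (Python; every name and constant in it is invented for illustration, not a claim about real neuron parameters). The point is only that slow internal state written by earlier activity shapes how the cell answers future inputs, so truncating the observation window to a single depolarization leaves part of the computation out of the description.

    import random

    class ToyNeuron:
        """Cartoon neuron with fast (membrane) and slow (expression) state."""

        def __init__(self):
            self.potential = 0.0    # fast state: decays within "milliseconds"
            self.expression = 1.0   # slow state: persists across many spikes

        def step(self, synaptic_input):
            # Fast dynamics: leaky integration, scaled by the slow state.
            self.potential = 0.9 * self.potential + self.expression * synaptic_input
            spiked = self.potential > 1.0
            if spiked:
                self.potential = 0.0
                # Slow dynamics: each spike nudges "gene expression" upward,
                # changing how the cell responds to *future* inputs.
                self.expression = min(2.0, self.expression + 0.01)
            else:
                self.expression = max(0.5, self.expression - 0.0001)
            return spiked

    neuron = ToyNeuron()
    spikes = sum(neuron.step(random.uniform(0.0, 0.3)) for _ in range(10_000))
    print(spikes, neuron.expression)  # the slow state carries a trace of the cell's history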
We don't know yet which ones we can ignore, even after agreeing on some others that are likely substrate maintenance only.
Another way of putting this, or an entwined issue, is: what are the temporal bounds of an information-processing "act"? In a typical Harvard-architecture substrate design, natural candidates would be, say, the time window of a changed PSW (processor status word), or a PC (program counter) update, etc.
But at a different level of description, it could be the updating of a Dynaset, a concluded SIMD instruction on a memory block representing a video frame, or anything in between. It depends, that is, on both the "application" and aspects of platform architecture.
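A toy illustration of that level-of-description point (Python with NumPy; the frame and the brightening operation are invented purely for the example): the "same" transformation can be bounded as hundreds of thousands of per-pixel acts or as one vectorized, SIMD-friendly act, and nothing in the physics alone privileges either boundary.

    import numpy as np

    frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint16)  # toy "video frame"

    # Level 1: one "act" per pixel -- hundreds of thousands of tiny updates.
    brightened_a = frame.copy()
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            brightened_a[i, j] = min(int(frame[i, j]) + 10, 255)

    # Level 2: one "act" per frame -- a single vectorized operation.
    brightened_b = np.minimum(frame + 10, 255)

    assert np.array_equal(brightened_a, brightened_b)  # same result, different act boundaries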
I think it productive, at least, to stretch our horizons a bit (not least because of the time dilation of artificial systems relative to biological ones—but again, this very statement itself has unexamined assumptions about the window—spatial and temporal—of a processed / processable information “packet” in both systems, bio and synthetic) and remain open about assumptions about what must be actively and isomorphically simulated, and what may be treated like “sparse brain” at any given moment.
I have more to say about this, but it fans out into several issues that I should put in multiple posts.
One collection of issues deals with: is "intelligence" a process (or processes) actively in play; is it a capacity to spawn effective, active processes; is it a state of being, like an occurrent knowing occupying a subject's specious present, like one of Whitehead's "occasions of experience"?
Should we get right down to it, and at last stop finessing around the elephant in the room: the question of whether consciousness is relevant to intelligence, and if so, when should we head-on start looking aggressively and rigorously at retiring the Turing Test, and supplanting it with one that enfolds consciousness and intelligence together, in their proper ratio? (This ratio is to be determined, of course, since we haven't even allowed ourselves to formally address the issue with both our eyes, intelligence and consciousness, open. Maybe looking through both issues confers insight, like depth vision, to push the metaphor of using two eyes.)
Look, if interested, for my post late tomorrow, Sunday, about the three types of information (at least) in the brain. I will title it as such, for anyone looking for it.
Personally, I think this week is the best thus far, in its parity with my own interests and ongoing research topics. Especially the 4 “For In-depth Ideas” points at the top, posted by Katja. All 4 are exactly what I am most interested in, and working most actively on. But of course that is just me; everyone will have their own favorites.
It is my personal agony (to be melodramatic about it) that I had some external distractions this week, so I am getting a late start on what might have been my best week.
But I will add what I can, Sunday evening (at least about the three types of information, and hopefully other posts. I will come back here even after the “kinetics” topic begins, so those persons in here who are interested in Katja’s 4 In-depth issues, might wish to look back here later next week, as well as Sunday night or Monday morning, if you are interested in those issues as much as I am.
I am also an enthusiast for plumbing the depths of the quality idea, as well as, again, point number one on Katja’s “In-depth Research” idea list for this week, which is essentially the issue of whether we can replace the Turing Test with—now my own characterization follows, not Katja’s, so “blame me” (or applaud if you agree) -- something much more satisfactory, with updated conceptual nuance representative of cognitive sciences and progressive AI as they are (esp the former) in 2015, not 1950.
By that I refer to theories, less preemptively suffocated by the legacy of logical positivism, which has been abandoned in the study of cognition and consciousness by mainstream cognitive science researchers; physicists doing competent research on consciousness; neuroscience and physics-literate philosophers; and even “hard-nosed” neurologists (both clinical and theoretical) who are doing down and detailed, bench level neuroscience.
As an aside, a brief look around confers the impression that some people on this web site still seem to think that being “critical thinkers” is somehow to be identified with holding (albeit perhaps semi-consciously) the scientific ontology of the 19th century, and subscribing to philosophy-of-science of the 1950′s.
Here’s the news, for those folks: the universe is made of information, not Rutherford-style atoms, or particles obeying Newtonian mechanics. Ask a physicist: naive realism is dead. So are many brands of hard “materialism” in philosophy and cognitive science.
Living in the 50′s is not being "critical", it is being uninformed. Admitting that consciousness exists, and trying to ferret out its function, is not new-agey, it is realistic. Accepting reality is pretty much a necessary condition of being "less wrong."
And I think it ought to be one of the core tasks we never stray too far from, in our study of, and our pursuit of the creation of, HLAI (and above.)
Okay, late Saturday evening, and I was loosening my tie a bit… and, well, now I'll get back to what contemporary bench-science neurologists have to say, to shock some of us (it surprised me) out of our default "obvious" paradigms, even our ideas about what the cortex does.
I'll try to post a link or two in the next day or two, to illustrate the latter. I recently read one by neurologists (research and clinical) who study children born with essentially no cerebrum (basically, just a spinal column and medulla, with a cavity full of cerebrospinal fluid in the rest of their cranium). You won't believe what the team in this one paper presents about consciousness in these kids. Large database of patients over years of study. And these neurologists are at the top of their game. It will have you rethinking some ideas we all thought were obvious about what the cortex does. But let me introduce that paper properly, when I post the link, in a future message.
Before that, I want to talk about the three kinds of information in the brain -- maybe two, maybe 4, but important categorical differences (thermodynamic vs. semantic-referential, for starters), and what it means to those of us interested in minds and their platform-independent substrates, etc. I’ll try to have something about that up, here, Sunday night sometime.
No changes that I'd recommend, at all. SPECIAL NOTE: please don't interpret the drop in the number of comments, the last couple of weeks, as a drop in interest by forum participants. The issues of these weeks are the heart of the reason for existence of nearly all the rest of the Bostrom book, and many of the auxiliary papers and references we've seen ultimately also have been context for confronting and brainstorming about the issue now at hand. I myself, just as one example, have a number of actual ideas that I've been working on for two weeks, but I've been trying to write them up in white paper form, because they seem a bit longish. Also, I've talked to a couple of people off site who are busy thinking about this as well and have much to say. Perhaps taking a one-week intermission would give some of us a chance to organize our thoughts more efficiently for postings. There is a lot of untapped incubating that is coming to a head right now among the participants' minds, and we would like a chance to say something about these issues before moving on. ("Still waters run deep," as the cliché goes.) We're at the point of greatest intellectual depth, now. I could speak for hours, were I commenting orally and trying to be complete, as opposed to making a skeleton of a comment that would, without context, raise more requests for clarification than be useful. I'm sure I'm not unique. Moderation is fine, though, be assured.