A cell can be in a huge number of internal states. Simulating a single cell in a satisfactory way will be impossible for many years. What portion of this detail matters to cognition, however? If we have to consider every time a gene is expressed or protein gets phosphorylated as an information processing event, an awful lot of data processing is going on within neurons, and very quickly.
I agree not only with this sentence, but with this entire post. Which of the many, many degrees of freedom of a neuron are “housekeeping” and don’t contribute to “information management and processing” (quotes mine, not SteveG’s) is far from obvious. Even with a liberal allocation of a neuron’s total degrees of freedom to some sub-partitioned equivalence class of “mere” housekeeping (see the following remarks for my reason for the quotes), it seems likely to me that many, many remaining nodes in the directed graph of that neuron’s phase space participate in the instantiation and evolution of an informational state of the sort we are interested in (non-housekeeping).
And this is not even to mention adjacent neuroglia, etc., which are in that neuron’s total phase space, actively participating in the relevant (more-than-substrate-maintenance) set of causal loops—as I argued a while back in my post that WBE is not well-defined.
Back to what SteveG said about the currently unknown level of detail that matters (to the kind of information processing we are concerned with; more later about this very, very important point). For now: we must not be too temporally centric, i.e., we should not assume that the dynamically evolving information-processing topology a neuron contributes to is bounded by a temporal window beginning with dendritic and membrane-level “inputs” (receptor occupation, prevailing ionic environment, etc.) and ending with one depolarization—exocytosis and/or the reuptake and clean-up shortly thereafter.
The gene expression–suppression and the protein turnover within that neuron should, arguably, also be thought of as part of the total information-processing action of the cell; leaving them out does not describe the information-processing act completely. Rather, it arbitrarily cuts off our “observation” right before and after a particular depolarization and its immediate sequelae.
The internal modifications of genes and proteins that will affect future information processing (no less than training affects the future behavior of an ANN within that ANN’s information ecology) should perhaps be thought of as a persistent type of data structure in its own right. LTP of the whole ecology of the brain may occur on many levels beyond canonical synaptic remodeling.
We don’t know yet which ones we can ignore—even after agreeing that some others are likely substrate maintenance only.
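To make the “persistent data structure” analogy concrete, here is a minimal toy sketch in Python (the class, constants, and update rule are illustrative assumptions of mine, not a biophysical model): fast, per-event dynamics modulated by slow state that past activity writes and future processing reads.

```python
class ToyNeuron:
    """Toy cell: fast state within one 'event', slow state across events."""

    def __init__(self):
        self.potential = 0.0    # fast state: the canonical input-to-spike window
        self.expression = 1.0   # slow state: stand-in for gene expression / protein turnover

    def receive(self, stimulus: float) -> bool:
        """One canonical 'event': input -> possible depolarization."""
        self.potential += stimulus * self.expression
        fired = self.potential >= 1.0
        if fired:
            self.potential = 0.0
            # Activity writes back into the slow state, changing how *future*
            # inputs are processed: a persistent, LTP-like residue that lives
            # outside the usual input-to-depolarization window.
            self.expression *= 1.05
        return fired

neuron = ToyNeuron()
spikes = [neuron.receive(0.4) for _ in range(20)]
print(spikes)             # later events fire more readily than earlier ones
print(neuron.expression)  # the persistent residue of past processing
```

If you cut the “observation” off at each depolarization, the expression variable looks like mere housekeeping; across many events it is plainly doing information processing.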
Another way of putting this, or an entwined issue: what are the temporal bounds of an information-processing “act”? In a typical Harvard-architecture substrate, natural candidates would be, say, the time window of a changed PSW (processor status word), or of the PC (program counter), etc. But at a different level of description, it could be the updating of a Dynaset, a concluded SIMD instruction over a memory block representing a video frame, or anything in between.
It depends, that is, on both the “application” and aspects of the platform architecture.
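A toy illustration of how the count and bounds of “acts” shift with the level of description (the array, its size, and the bookkeeping are arbitrary assumptions, chosen only for the contrast):

```python
import numpy as np

frame = np.zeros((4, 4), dtype=np.uint8)

# Fine-grained level: every scalar update is one "act".
acts_fine = 0
for i in range(frame.shape[0]):
    for j in range(frame.shape[1]):
        frame[i, j] += 1
        acts_fine += 1       # 16 acts, each with narrow temporal bounds

# Coarse-grained level: one SIMD-style update of the whole block is one "act".
frame2 = np.zeros((4, 4), dtype=np.uint8)
frame2 += 1                  # 1 act at this level of description
acts_coarse = 1

assert (frame == frame2).all()   # identical end state
print(acts_fine, acts_coarse)    # 16 vs. 1: same change, different bounds
```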
I think it productive, at least, to stretch our horizons a bit (not least because of the time dilation of artificial systems relative to biological ones—though this very statement itself has unexamined assumptions about the spatial and temporal window of a processed, or processable, information “packet” in both kinds of system, biological and synthetic), and to remain open about what must be actively and isomorphically simulated, and what may be treated like “sparse brain” at any given moment.
I have more to say about this, but it fans out into several issues that I should put in multiple posts.
One collection of issues deals with: is “intelligence” a process (or processes) actively in play; is it a capacity to spawn effective, active processes; or is it a state of being, like occurrent knowing occupying a subject’s specious present, like one of Whitehead’s “occasions of experience”?
Should we get right down to it, and at last stop finessing around the elephant in the room: the question of whether consciousness is relevant to intelligence? And if so, when should we start looking, head-on, aggressively and rigorously, at retiring the Turing Test and supplanting it with one that enfolds consciousness and intelligence together, in their proper ratio? (This ratio is to be determined, of course, since we haven’t yet allowed ourselves to formally address the issue with both eyes—intelligence and consciousness—open. Maybe looking through both issues confers insight, like depth vision, to push the metaphor of using two eyes.)
Look, if interested, for my post late tomorrow, Sunday, about the (at least) three types of information in the brain. I will title it as such, for anyone looking for it.
Personally, I think this week is the best thus far in its overlap with my own interests and ongoing research topics, especially the 4 “For In-depth Ideas” points at the top, posted by Katja. All 4 are exactly what I am most interested in, and working most actively on. But of course that is just me; everyone will have their own favorites.
It is my personal agony (to be melodramatic about it) that I had some external distractions this week, so I am getting a late start on what might have been my best week.
But I will add what I can on Sunday evening (at least about the three types of information, and hopefully in other posts). I will come back here even after the “kinetics” topic begins, so those interested in Katja’s 4 in-depth issues may wish to look back here later next week, as well as Sunday night or Monday morning.
I am also an enthusiast for plumbing the depths of the quality idea, as well as, again, point number one on Katja’s “In-depth Research” idea list for this week. That is essentially the issue of whether we can replace the Turing Test with—now my own characterization follows, not Katja’s, so “blame me” (or applaud if you agree)—something much more satisfactory, with conceptual nuance representative of the cognitive sciences and progressive AI as they stand (especially the former) in 2015, not 1950.
By that I refer to theories less preemptively suffocated by the legacy of logical positivism, which has been abandoned in the study of cognition and consciousness by mainstream cognitive-science researchers; by physicists doing competent research on consciousness; by neuroscience- and physics-literate philosophers; and even by “hard-nosed” neurologists (both clinical and theoretical) doing detailed, bench-level neuroscience.
As an aside, a brief look around gives the impression that some people on this web site still seem to think that being a “critical thinker” is somehow to be identified with holding (albeit perhaps semi-consciously) the scientific ontology of the 19th century, and subscribing to the philosophy of science of the 1950s.
Here’s the news, for those folks: the universe is made of information, not Rutherford-style atoms or particles obeying Newtonian mechanics. Ask a physicist: naive realism is dead. So are many brands of hard “materialism” in philosophy and cognitive science.
Living in the ’50s is not being “critical”; it is being uninformed. Admitting that consciousness exists, and trying to ferret out its function, is not new-agey; it is realistic. Accepting reality is pretty much a necessary condition of being “less wrong.”
And I think it ought to be one of the core tasks we never stray too far from, in our study of, and our pursuit of the creation of, HLAI (and above.)
Okay, it is late Saturday evening, and I was loosening my tie a bit… and, well, now I’ll get back to what contemporary bench-science neurologists have to say, to shock some of us (it surprised me) out of our default “obvious” paradigms, even our ideas about what the cortex does.
I’ll try to post a link or two in the next day or two, to illustrate the latter. I recently read one by neurologists (research and clinical) who study children born hydranencephalic (basically just a spinal column and medulla, with the rest of the cranium an empty cavity full of cerebrospinal fluid). You won’t believe what the team in this one paper presents about consciousness in these kids: a large database of patients over years of study, and these neurologists are at the top of their game. It will have you rethinking some ideas we all thought were obvious about what the cortex does. But let me introduce that paper properly when I post the link, in a future message.
Before that, I want to talk about the three kinds of information in the brain—maybe two, maybe four, but with important categorical differences (thermodynamic vs. semantic-referential, for starters)—and what it means to those of us interested in minds and their platform-independent substrates, etc. I’ll try to have something about that up, here, Sunday night sometime.
“No, information ontology isn’t a done deal.”

Well, I ran several topics together in the same post, and that was perhaps careless planning. In any case, I do not expect slavish agreement just because I make the claim.
And neither should you expect agreement just by flatly denying it, with nary a word to clue me in about your reservations about what has, in the last 10 years, transitioned from a convenient metaphor in quantum physics, cosmology, and other disciplines to a growing consensus about the actual truth of things. (Objections to this growing consensus, when they actually are made, seem to be mostly arguments from guffaw, resembling the famous “I refute it thus” joke about Berkeleyan idealism.)
By the way, I am not defending Berkeleyan idealism, still less the theistic underpinning that kept popping up in his thought (I am an atheist.)
Rather, the point of my using that joke is to show that it misses the point—as it does for most thinkers who cite the famous story of someone kicking a solid object as a “proof” that Berkeley’s virtual phenomenalism was self-evidently foolish. Of course it seems, phenomenologically, like the world is made of “stuff.”
And information doesn’t seem to be “real stuff.” (The earth seems flat, too. So what?)
Had we time, you and I could debate the relative merits of an information-based, scientifically literate metaphysics against whatever alternate notion of reality you subscribe to in its place, as your scientifically literate metaphysics.
But make no mistake, everyone subscribes to some kind of metaphysics, just as everyone has a working ontology—or candidate, provisional set of ontologies.
Even the most “anti-metaphysical” theorists are operating from a (perhaps unacknowledged) metaphysics and working ontology; it is just that they think theirs, because it is invisible to them, is beyond need of conceptual excavation and clarification, and beyond the reach of critical, rational examination—whereas other people’s metaphysics is actually a metaphysics (argh), and thus carries an elevated burden of proof relative to their own ontology.
I am not saying you are like this, of course. I don’t know your views. As I say, it could be the subject of a whole forum like this one. So I’ll end by saying disagreement is inevitable, especially when I just drop in a remark as I did, about a topic that is actually somewhat tangential (though, as I will try to argue as the forum proceeds, not all that tangential.)
Yes, Bostrom explicitly says he is not concerned with the metaphysics of mind, in his book. Good for him. It’s his book, and he can write it any way he chooses.
And I understand his editorial choice. He is trained as a philosopher, and knows as well as anyone that there are probably millions of pages written about the mind–body problem, with more added daily. It is easy to understand his decision to avoid getting stuck in the quicksand of arguing specifics about consciousness and how it can be physically realized.
This book obviously has a different mission. I have written for publication before, and I know one has to make strategic choices (with one’s agent and editor.)
Likewise, his book is also not about “object-level” work in AI—how to make it, achieve it, give it this or that form, give it “real mental states,” emotion, drives. Those of us trying to understand how to achieve those things still have much to learn from Bostrom’s current book, but will not find intricate conceptual investigations of what will lead to the new science of sentience design.
Still, I would have preferred if he had found a way to “stipulate” conscious AI, along with speed AI, quality AI, etc., as one of the flavors that might arise. Then we could address questions under 4 headings, 4 possible AI worlds (not necessarily mutually exclusive, just as the three from this week are not mutually exclusive).
The question of the “direct reach” of conscious AI, compared to the others, would have been very interesting.
It is a meta-level book about AI, deliberately ambiguous about consciousness. I think that makes the discussion harder, in many areas.
I like Bostrom. I’ve been reading his papers for 10 or 15 years.
But avoiding or proscribing the question of whether we have consciousness AND intelligence (vs. simply intelligent behavior sans consciousness)—thus pruning away, preemptively, issues that could depend on whether they interact, whether the former increases causal powers (or instability, or stability) in the exercise of the latter, and so on—keeps lots of questions inherently ambiguous.
I’ll try to make good on that last claim, one way or another, during the next couple of weekly sessions.
A growing consensus isn’t a done deal. It’s a matter of fact that information ontology isn’t the established consensus in the way that evolution is. You are entitled to opinions, but not to pass off opinions as fact. There is enough confusion about physics already.
You bring in the issue of objections to information ontology. The unstated argument seems to be that since there are no valid objections, there is nothing to stop it becoming the established consensus, so it is as good as established.
What would a universe in which information is not fundamental look like, as opposed to one where it is? I would expect a universe where information is not fundamental to look like one where information always requires some physical, material or energetic, medium or carrier—a sheet of paper, a radio wave, a train of pulses going down a T1 line. That appears to be the case.
I am not sure why you brought Bostrom in. For what it’s worth, I don’t think a Bostrom-style mathematical universe is quite the same as a single-universe information ontology.
“But avoiding or proscribing the question of whether we have consciousness…”
I don’t know who you think is doing that, or why you brought it in. Do you think information ontology helps with the mind–body problem? I think you need to do more than subtract the stuffiness from matter. If we could easily see how a rich conception of consciousness could supervene on pure information, we would easily be able to see how computers could have qualia, which we can’t. We need more in our ontology, not less.
“If we could easily see how a rich conception of consciousness could supervene on pure information…”
I have to confess that I might be the one person in this business who never really understood the concept of supervenience—either “weak supervenience” or “strong supervenience.” I’ve read Chalmers, Dennett, the journals on the concept… it never really “snapped in” for me. So when the term is used, I have to just recuse myself and let those who do understand it finish their line of thought.
To me, supervenience seems like a fuzzy way to repackage epiphenomenalism, or to finesse some kind of antinomy: “can’t live with eliminative materialism, can’t live with dualism, can’t live with type–type identity theory; and token–token identity theory is untestable and difficult even to give logically necessary and sufficient conditions for; so… let’s have a new word.” So (my unruly suspicion tells me) let’s say mental events (states, processes, whatever) “supervene” on physiological states (events, etc.).
As I say, so far I have just had to suspend judgement and wonder if some day “supervene” will snap in and become intuitively penetrable to me. I push all the definitions and get to the same place—an “I don’t get it” place—but that doesn’t mean I believe the concept is itself defective. I just have to suspend judgement (as I have for the last 25 years of study or so).
“We need more in our ontology, not less.”
I actually believe that, too… but with a particular take: I think we all operate with a logical ontology—not “logical” in the sense of modus ponens, but in the sense that a memory space can be “logical,” meaning, in this context, detached from physical memory.
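To unpack the analogy (a deliberately crude sketch; the names and mechanics are illustrative only, in the spirit of virtual memory): a logical address space stays stable for its user while its physical backing is remapped underneath.

```python
class LogicalSpace:
    """Toy 'logical memory': stable logical addresses over movable physical frames."""

    def __init__(self):
        self.page_table = {}   # logical page -> physical frame
        self.physical = {}     # physical frame -> contents

    def write(self, page, value):
        frame = self.page_table.setdefault(page, len(self.physical))
        self.physical[frame] = value

    def read(self, page):
        return self.physical[self.page_table[page]]

    def remap(self, page, new_frame):
        # The physical substrate moves; the logical view is unchanged.
        self.physical[new_frame] = self.read(page)
        self.page_table[page] = new_frame

mem = LogicalSpace()
mem.write(0, "ontology entry")
mem.remap(0, 99)        # relocate the physical backing
print(mem.read(0))      # same logical content: "ontology entry"
```

The claim, then, is that an ontology can be “logical” in just this sense: individuated by its structure and role, not by any particular physical realization.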
Further, the construction of this logical ontology is, I think, partly culturally influenced, partly influenced by the species’ sensorium and equipment, and partly influenced/constructed by something like Jeff Hawkins’ prediction-expectation memory model—constructed, bequeathed culturally, and tuned in several additional, related ways toward the idealized, logical ontology.
Memetics also influences (in conjunction with native, although changeable, abilities in those memes’ host vectors) the genesis, maintenance, and evolution of this “logical ontology.” This works feed-forward and feed-back: memetics influences the logical ontology, which crystallizes into additional memetic templates that are kept, tuning the logical ontology further.
Once “established” (and it constantly evolves), this “logical” ontology is the “target” for which a new person (say, a human, while growing up and growing old) creates a virtual, phenomenological analog simulation. As the person gains experience, that virtual-reality simulation of the world converges on something that is in some way consistently, isomorphically related to the idealized “logical” ontology.
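Here is a toy rendering of that convergence (arbitrary numbers; a crude stand-in for the prediction-expectation machinery, not a model of it): an internal simulation pulled, by prediction error, toward a fixed “logical ontology” it can only interrogate through noisy samples.

```python
import random

logical_ontology = [0.2, 0.7, 0.5]   # idealized target structure (assumed fixed here)
simulation = [0.0, 0.0, 0.0]         # the agent's virtual, phenomenological analog
rate = 0.05

for _ in range(2000):
    i = random.randrange(len(logical_ontology))
    observed = logical_ontology[i] + random.gauss(0, 0.1)  # noisy interrogation
    error = observed - simulation[i]                       # prediction error
    simulation[i] += rate * error                          # nudge toward the target

print([round(x, 2) for x in simulation])  # converges near [0.2, 0.7, 0.5]
```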
So (and there is lots of neurology research that drives much of this, though it may all sound rather speculative), for me there are TWO ontologies, BOTH of them constructed—and those are in addition to the entangled “outside world” quantum substrate, which is by definition inherently both sub-ontological (properly understood) and not sensible. (It is sub-ontological because of its nature, but it is interrogatable, giving feedback that helps form boundary conditions for the idealized logical ontology—or ontologies, in different species.)
I’ll add that I think the “logical ontology” is also species dependent, unsurprisingly.
I think you and I got off on the wrong foot; maybe you found my tone too declaratory when it should have been phrased more subjunctively. I’ll take your point. But since you obviously have philosophical competence, you will know what the following means: one can say my views somewhat resemble an updated quasi-Kantian model, supplemented with the idea that noumena are the inchoate quantum substrate.
Or, to correct that: in my model there are two “noumenal” realms. One is the “logical ontology” I referred to, a logical data structure; the other is the one below that, and below ALL ontologies, which is the quantum substrate, necessarily “sub-ontological.”
But my theory (there is more than I have just shot through quickly right now) handles species-relative qualia and the species-relative logical ontologies across species.
Remaining issues include how qualia are generated, and the same question for the sense of self. I have ideas for solving these, and for the indexical first-person problem connected with the basis problem. Neurology studies of default-mode-network behavior and architecture, its malfunction, metacognition, epilepsy, etc., help a lot.
Think this is speculative? You should read neurologists these days, especially the better, data-driven ones. (Perhaps you already know them, and you will thus see where I derive some of my supporting research.)
Anyway, always, always, I am trying to solve all this in the general case—first across biological conscious species (a bird has a different “logical” ontology than people, as well as a different phenomenological reality that, to varying degrees of precision, “represents,” maps to, or has a recurrent resonance with that species’ logical ontology)—and then trying to solve it for any general mind in mind space that has to live in this universe.
It all sounds like hand waving, perhaps. But this is scarcely an abstract: there are many puzzle pieces to the theory, and every piece of it has lots of specific research behind it. It is all progressively falling together into an integrated system. I need geffen graphs and whiteboards to explain it, since it is a whole theory, so I can’t squeeze it into one post. Besides, this is Bostrom’s show.
I’ll write my own book when the time comes—not saying it is right, but it is a promising effort so far, and it seems to work better, the farther I push it.
When it is far enough along, I can test it on a vlog, and see if people can find problems. If so, I will revise, backtrack, and try again. I intend to spend the rest of my life doing this, so discovered errors are just part of revision and refinement.
But first I have to finish, then present it methodically and carefully, so it can be evaluated by others. No space here for that.
Thanks for your previous thoughts, and your caution against sounding too certain. I am really NOT that certain, of course, of anything. I was just thinking out loud, as they say.
This week is pretty much closed… cheers.
Supervenience is not a claim like epiphenomenalism; it is a set of constraints that represent some broad naturalistic conclusions.