Well, I ran several topics together in the same post, and that was perhaps careless planning. In any case, I do not expect slavish agreement just because I make a claim.
Nor should you expect agreement just by flatly denying it, with nary a word to clue me in about your reservations about what has, over the last ten years, transitioned from a convenient metaphor in quantum physics, cosmology, and other disciplines into a growing consensus about the actual truth of things. (Objections to this growing consensus, when they actually are made, seem to be mostly arguments from guffaw, resembling the famous “I refute it thus” joke about Berkeleyan idealism.)
By the way, I am not defending Berkeleyan idealism, still less the theistic underpinning that kept popping up in his thought. (I am an atheist.)
Rather, unlike most thinkers, who cite the famous joke about someone kicking a solid object as a “proof” that Berkeley’s virtual phenomenalism was self-evidently foolish, I use that joke to show that it misses the point. Of course it seems, phenomenologically, like the world is made of “stuff.”
And information doesn’t seem to be “real stuff.” (The earth seems flat, too. So what?)
Had we time, you and I could debate the relative merits of an information-based, scientifically literate metaphysics against whatever alternate notion of reality you subscribe to in its place as your own scientifically literate metaphysics.
But make no mistake: everyone subscribes to some kind of metaphysics, just as everyone has a working ontology—or a candidate, provisional set of ontologies.
Even the most “anti-metaphysical” theorists are operating from a (perhaps unacknowledged) metaphysics and working ontology; it is just that they think theirs, because it is invisible to them, is beyond need of conceptual excavation and clarification, and beyond the reach of critical, rational examination—whereas other people’s metaphysics is actually a metaphysics (argh), and thus carries an elevated burden of proof relative to their own ontology.
I am not saying you are like this, of course. I don’t know your views. As I say, it could be the subject of a whole forum like this one. So I’ll end by saying disagreement is inevitable, especially when I just drop in a remark, as I did, about a topic that is actually somewhat tangential (though, as I will try to argue as the forum proceeds, not all that tangential).
Yes, Bostrom explicitly says in his book that he is not concerned with the metaphysics of mind. Good for him. It’s his book, and he can write it any way he chooses.
And I understand his editorial choice. He is trained as a philosopher, and knows as well as anyone that there are probably millions of pages written about the mind-body problem, with more added daily. It is easy to understand his decision to avoid getting stuck in the quicksand of arguing specifics about consciousness and how it could be physically realized.
This book obviously has a different mission. I have written for publication before, and I know one has to make strategic choices (with one’s agent and editor).
Likewise, his book is also not about “object-level” work in AI—how to make it, achieve it, give it this or that form, give it “real mental states,” emotion, drives. Those of us trying to understand how to achieve those things still have much to learn from Bostrom’s current book, but we will not find in it intricate conceptual investigations of what will lead to the new science of sentience design.
Still, I would have preferred it if he had found a way to “stipulate” conscious AI, along with speed AI, quality AI, etc., as one of the flavors that might arise. Then we could address questions under four headings, four possible AI worlds (not necessarily mutually exclusive, just as the three from this week are not mutually exclusive).
The question of the “direct reach” of conscious AI, compared to the others, would have been very interesting.
It is a meta-level book about AI, deliberately ambiguous about consciousness. I think that makes the discussion harder in many areas.
I like Bostrom. I’ve been reading his papers for 10 or 15 years.
But avoiding or proscribing the question of whether we have consciousness AND intelligence (versus simply intelligent behavior sans consciousness), thus preemptively pruning away issues that could depend on whether the two interact, whether the former increases causal powers (or stability, or instability) in the exercise of the latter, and so on, keeps lots of questions inherently ambiguous.
I’ll try to make good on that last claim, one way or another, during the next couple of weekly sessions.
A growing consensus isn’t a done deal. It’s a matter of fact that information ontology isn’t the established consensus in the way that evolution is. You are entitled to opinions, but not to pass off opinions as fact. There is enough confusion about physics already.
You bring in the issue of objections to information ontology. The unstated argument seems to be that since there are no valid objections, there is nothing to stop it becoming the established consensus, so it is as good as established.
What would a universe in which information is not fundamental look like, as opposed to one where it is? I would expect a universe where information is not fundamental to look like one where information always requires some physical (material or energetic) medium or carrier—a sheet of paper, a radio wave, a train of pulses going down a T1 line. That appears to be the case.
I am not sure why you brought Bostrom in. For what it’s worth, I don’t think a Bostrom-style mathematical universe is quite the same as a single-universe information ontology.
But avoiding or proscribing the question of whether we have consciousness
I don’t know who you think is doing that, or why you brought it in. Do you think IO helps with the mind-body problem? I think you need to do more than subtract the stuffiness from matter. If we could easily see how a rich conception of consciousness could supervene on pure information, we would easily be able to see how computers could have qualia, which we can’t. We need more in our ontology, not less.
If we could easily see how a rich conception of consciousness could supervene on pure information
I have to confess that I might be the one person in this business who never really understood the concept of supervenience—either “weak supervenience” or “strong supervenience.” I’ve read Chalmers, Dennett, the journals on the concept… it never really “snapped in” for me. So when the term is used, I have to recuse myself and let those who do understand it finish their line of thought.
To me, supervenience seems like a fuzzy way to repackage epiphenomenalism, or to finesse an antinomy they face: “can’t live with eliminative materialism, can’t live with dualism, can’t live with type-type identity theory, and token-token identity theory is untestable and difficult even to give necessary and sufficient conditions for, so… let’s have a new word.” So (my unruly suspicion tells me) let’s just say mental events (states, processes, whatever) “supervene” on physiological states (events, etc.).
As I say, so far I have just had to suspend judgement and wonder if some day “supervene” will snap in and become intuitively penetrable to me. I push all the definitions around and end up in the same place, an “I don’t get it” place, but that doesn’t mean I believe the concept itself is defective. I just have to suspend judgement (as I have for the last 25 years of study or so).
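For concreteness, here is my own paraphrase of the textbook (Kim-style) definitions I keep pushing around, offered as a sketch of the standard formulations, not gospel:

```latex
% Weak supervenience: within any single world, B-indiscernible things
% are A-indiscernible.  A-properties weakly supervene on B-properties iff:
\[
  \Box\,\forall x\,\forall y\,\bigl[\,\forall G \in B\,(Gx \leftrightarrow Gy)
    \;\rightarrow\; \forall F \in A\,(Fx \leftrightarrow Fy)\,\bigr]
\]

% Strong supervenience: the B-level fixes the A-level even across
% possible worlds.  A-properties strongly supervene on B-properties iff:
\[
  \Box\,\forall x\,\forall F \in A\,\bigl[\,Fx \;\rightarrow\;
    \exists G \in B\,\bigl(Gx \;\wedge\; \Box\,\forall y\,(Gy \rightarrow Fy)\bigr)\bigr]
\]
```

The intended difference, as I read it: weak supervenience only rules out within-world mismatches, while strong supervenience rules them out across worlds, which is why the strong version is usually the one wheeled out for mind-body dependence.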
We need more in our ontology, not less.
I actually believe that, too… but with a unique take: I think we all operate with a logical ontology… “logical” not in the sense of modus ponens, but in the sense that a memory space can be “logical,” meaning, in this context, detached from physical memory.
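To unpack that analogy with a toy sketch (every name here is mine and purely illustrative): a logical address space stays put while its physical backing is remapped underneath it.

```python
# Toy illustration of a "logical" memory space: the logical layout stays
# stable even though the physical backing can be remapped underneath it.
physical_memory = {"frame_0": "datum A", "frame_7": "datum B"}
page_table = {0: "frame_0", 1: "frame_7"}  # logical page -> physical frame

def read(logical_page):
    # Resolve a logical page through the page table to physical storage.
    return physical_memory[page_table[logical_page]]

print(read(0))             # "datum A"
page_table[0] = "frame_7"  # remap: the physical substrate changes...
print(read(0))             # "datum B" -- the logical address never moved
```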
Further, the construction of this logical ontology is, I think, partly culturally influenced, partly influenced by the species’ sensorium and equipment, and partly influenced/constructed by something like Jeff Hawkins’ prediction-expectation memory model… constructed and bequeathed culturally, and tuned in several additional, related ways toward the idealized, logical ontology.
Memetics also influences (in conjunction with native—although changeable—abilities in those memes’ host vectors) the genesis, maintenance, and evolution of this “logical ontology.” This works both feed-forward and feed-back: memetics influences the logical ontology, which crystallizes into additional memetic templates that are kept, tuning the logical ontology further.
Once “established” (and it constantly evolves), this “logical” ontology is the “target” whose data structures a new human, while growing up and growing old, creates a virtual, phenomenological analog simulation of; as the person gains experience, that virtual-reality simulation of the world converges on something that is in some way consistently, isomorphically related to the idealized “logical” ontology.
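A toy sketch of that convergence claim (purely illustrative; every name and number here is hypothetical): the simulation never sees the target directly, only noisy prediction errors, yet it settles on a structure isomorphic to the target.

```python
import random

# Hypothetical stand-in for the idealized "logical ontology": a target
# structure the agent never observes directly.
logical_ontology = [0.8, -0.3, 0.5]

simulation = [0.0, 0.0, 0.0]  # the agent's phenomenological analog model
learning_rate = 0.1

for _ in range(1000):
    # Interrogating the substrate yields only noisy prediction errors,
    # in the spirit of a Hawkins-style prediction-expectation model.
    errors = [t - s + random.gauss(0.0, 0.05)
              for t, s in zip(logical_ontology, simulation)]
    # Nudge the simulation in whatever direction reduces the error.
    simulation = [s + learning_rate * e for s, e in zip(simulation, errors)]

print([round(s, 2) for s in simulation])  # converges near [0.8, -0.3, 0.5]
```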
So (and there is lots of neurology research that drives much of this, though it may all sound rather speculative) for me there are TWO ontologies, BOTH of them constructed, and those are in addition to the entangled “outside world” quantum substrate, which is by definition inherently both sub-ontological (properly understood) and not sensible. (It is sub-ontological because of its nature, but it is interrogatable, giving feedback that helps form boundary conditions for the idealized logical ontology, or ontologies, in different species.)
I’ll add that I think the “logical ontology” is also species dependent, unsurprisingly.
I think you and I got off on the wrong foot; maybe you found my tone too declaratory when it should have been phrased more subjunctively. I’ll take your point. But since you obviously have philosophical competence, you will know what the following means: one can say my views somewhat resemble an updated, quasi-Kantian model, supplemented with the idea that the noumena are the inchoate quantum substrate.
Or, to correct that: in my model there are two “noumenal” realms. One is the “logical ontology” I referred to, a logical data structure; the other is the one below that, and below ALL ontologies: the quantum substrate, necessarily “sub-ontological.”
But my theory (there is more to it than what I have just shot through quickly now) handles species-relative qualia and species-relative logical ontologies across species.
Remaining issues include how qualia are generated, and the same question for the sense of self. I have ideas about how to solve these, along with the indexical first-person problem, connected with the basis problem. Neurological studies of default-mode-network behavior and architecture, its malfunctions, metacognition, epilepsy, etc., help a lot.
Think this is speculative? You should read the neurologists these days, especially the better, data-driven ones. (Perhaps you already know them, and you will thus see where I derive some of my supporting research.)
Anyway, always, always, I am trying to solve all this in the general case—first across biological conscious species (a bird has a different “logical” ontology than people do, as well as a different phenomenological reality that, to varying degrees of precision, “represents,” maps to, or has a recurrent resonance with that species’ logical ontology), and then for any general mind in mind space that has to live in this universe.
It all sounds like hand-waving, perhaps. But this is scarcely an abstract. There are many puzzle pieces to the theory, and every piece of it has lots of specific research behind it. It is all progressively falling together into an integrated system. I need geffen graphs and whiteboards to explain it, since it’s a whole theory, so I can’t squeeze it into one post. Besides, this is Bostrom’s show.
I’ll write my own book when the time comes—not saying it is right, but it is a promising effort so far, and it seems to work better the farther I push it.
When it is far enough along, I can test it on a vlog, and see if people can find problems. If so, I will revise, backtrack, and try again. I intend to spend the rest of my life doing this, so discovered errors are just part of revision and refinement.
But first I have to finish, then present it methodically and carefully, so it can be evaluated by others. No space here for that.
Thanks for your previous thoughts, and your caution against sounding too certain. I am really NOT that certain, of course, of anything. I was just thinking out loud, as they say.
This week is pretty much closed… cheers.
Supervenience is not a claim like epiphenomenalism; it is a set of constraints that represents some broad naturalist conclusions.