I’m not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to “feel” something from the inside. “Inside” is a geometrical concept, not an algorithmic one.
Well, that’s just the title, you know? The original article was talking about cognitive algorithms (an algorithm, not any algorithm).
Unless you assume some kind of unphysical substance having a causal effect on your brain, or your continued existence after death, “you” is what your cognitive algorithm feels like when it’s run on your brain’s wetware.
“Inside” is a geometrical concept, not an algorithmic one.
That’s not true: every formal system that can produce a model of a subset of its axioms might be considered as having an ‘inside’ (as in set theory, where constructible models are called ‘inner models’), and that’s just one possible definition.
The original article was talking about cognitive algorithms (an algorithm, not any algorithm).
So what’s the difference between cognitive algorithms with the ability of “feeling from the inside” and the non-cognitive algorithms which can’t “feel from the inside”?
Unless you assume some kind of unphysical substance having a causal effect on your brain, or your continued existence after death, “you” is what your cognitive algorithm feels like when it’s run on your brain’s wetware.
Please don’t construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such. Actually, from my perspective it seems that it is you who are referring to unphysical substances: “algorithms”, “models”, the “inside”, etc. All these seem to me to be on the map, not in the territory.
And to say that I am my algorithm running on my brain doesn’t dissolve the question of qualia for me any more than if some religious guy had said that I’m the soul controlling my body.
So what’s the difference between cognitive algorithms with the ability of “feeling from the inside” and the non-cognitive algorithms which can’t “feel from the inside”?
If I knew, I would have already written an AI. It’s like an NP problem, easy to verify but hard to solve: I know that the algorithm running on my brain is of the first kind, and one spouting Fibonacci numbers is not, but I can only guess that the difference involves some kind of self-representation.
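To make the contrast concrete, this is the kind of thing I mean by an algorithm “spouting Fibonacci numbers”: its complete state is a pair of integers, and nothing in it represents the algorithm itself (a minimal illustration, not anyone’s proposed criterion for cognition):

```python
import itertools

def fib():
    """Yield Fibonacci numbers forever.

    The algorithm's entire state is the pair (a, b); nothing in it
    represents or models the algorithm itself.
    """
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(itertools.islice(fib(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Checking that a given algorithm lacks any self-representation is easy here; saying in general what self-representation would have to look like is the hard part.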
Please don’t construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such.
Sorry if I seemed to do so, I wasn’t attributing those beliefs to you, I was just listing the possible escape routes from the argument.
Actually, from my perspective it seems that it is you who are referring to unphysical substances: “algorithms”, “models”, the “inside”, etc. All these seem to me to be on the map, not in the territory.
Well, if you don’t already accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions. I thought we already had “algorithm” covered by your “Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be”.
And to say that I am my algorithm running on my brain doesn’t dissolve the question of qualia for me any more than if some religious guy had said that I’m the soul controlling my body.
That’s because that was not the question my sentence was answering. You have to admit that writing “I’m not sure I understand what it means for an algorithm to have an inside” is a rather strange way to ask “Please justify the way the sequence has, in your opinion, dissolved the qualia problem”. If that is what you’re asking me, I might just want to write an entire separate post, in the hope of being clearer and more convincing.
I think this is confusing qualia with intelligence. There’s no big confusion about how an algorithm run on hardware can produce something we identify as intelligence—there’s a big confusion about such an algorithm “feeling things from the inside”.
Well, if you don’t already accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions.
It seems to me that in a physical universe, the concept of “algorithms” is merely an abstract representation in our minds of groupings of physical happenings, and therefore algorithms are no more ontologically fundamental than the category of “fruits” or “dinosaurs”.
Now, starting instead from a mathematical ontology like Tegmark’s Level IV Mathematical Universe Hypothesis, it’s physical particles that are concrete representations of algorithms (very simple algorithms, in the case of particles). In that ontology, where algorithms are ontologically fundamental and physical particles aren’t, you can perhaps cleanly define qualia as the inputs of the much-more-complex algorithms which are our minds...
That’s sort-of the way that I would go about dissolving the issue of qualia if I could. But in a universe which is fundamentally physical it doesn’t get dissolved by positing “algorithms” because algorithms aren’t fundamentally physical...
I’m going to write a full-blown post so that I can present my view more clearly. If you want, we can move the discussion there when it’s ready (in a couple of days, I think).