Great post, I felt it really defined and elaborated on a phenomenon I’ve seen recur on a regular basis.
It’s funny how consciousness is so difficult to understand, to the point that it seems pre-paradigmatic to me. At this point I, like presumably many others, evaluate claims of consciousness by setting the prior that I’m personally conscious to near 1, and then evaluating the consciousness of other entities primarily by their structural similarity to my own computational substrate, the brain.
So another human is almost certainly conscious, most mammals are likely conscious, and so on, and while I wouldn’t go so far as to say that novel or unusual computational substrates, such as, say, an octopus, aren’t conscious, I strongly suspect their consciousness is internally different from ours.
Or more precisely, it’s not really the substrate but the algorithm running on it that’s the crux of the matter; it’s just that the substrate’s arrangement constrains our expectations of what kind of algorithm runs on it. I expect a human brain’s consciousness to be radically different from an octopus’s, because the different structure demands a different algorithm, in the latter case a far more diffuse one.
I’d go so far as to say that substrate can be irrelevant in practice, since I think a human brain emulation experiences consciousness near-identical to one running on head cheese, and not akin to an octopus or some AI trained by modern ML.
Do I know this for a fact? Hell no, and at this point I expect it to be an AGI-complete problem to solve; it’s just that I need an operational framework to live by in the meantime, and this is the best I’ve got.