The potential to enhance the information complexity of another agent, where the degree of this potential and the degree of the complexity provided indicate the degree of moral significance.
Which reduces the problem to the somewhat less difficult one of estimating complexity, and so estimating potential complexity influences among agents. By this, I mean something more nuanced than algorithmic or Kolmogorov complexity. We need something that takes fun theory into account, and the fact that both simple systems and random noise are innately less complex than systems with non-trivial structure and dynamics; or, to put it another way, systems that interest and enrich.
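To make that intuition concrete, here is a toy sketch (my own illustration, not a measure anyone in this thread proposed) using compressibility as a crude proxy for Kolmogorov complexity. A simple "structure score" that peaks between the two extremes gives low values both to a trivially repetitive string and to incompressible noise, while ordinary English prose lands in between and scores higher:

```python
import random
import zlib

def compress_ratio(data: bytes) -> float:
    """Crude proxy for algorithmic (Kolmogorov) complexity: compressed size
    relative to original size, clamped to [0, 1]."""
    return min(1.0, len(zlib.compress(data)) / len(data))

def structure_score(data: bytes) -> float:
    """Toy 'structured complexity': peaks when data is neither trivially
    compressible (a simple system) nor incompressible (random noise)."""
    r = compress_ratio(data)
    return 4.0 * r * (1.0 - r)  # maximized at r = 0.5, zero at both extremes

simple = b"ab" * 500                       # a simple, highly regular system
noise = random.Random(0).randbytes(1000)   # seeded stand-in for random noise
prose = (b"We need something that takes fun theory into account, and the "
         b"fact that both simple systems and random noise are innately less "
         b"complex than systems with non-trivial structure and dynamics, "
         b"systems that interest and enrich the agents around them.")

for name, data in [("simple", simple), ("noise", noise), ("prose", prose)]:
    print(name, round(structure_score(data), 3))
```

This is only an illustration of the shape of the idea; a serious version would need something closer to effective complexity or logical depth rather than a compression ratio.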
Also note: don’t make the error of equating the presence of complexity with the potential to enhance complexity in other agents.
As for suffering: in this context, you can define it as the inverse of complexity enhancement, namely the sapping of the agent’s innate complexity.
Can you explain why “the inverse of complexity enhancement” would be a good definition of “suffering” that would share the other features we mean by the word?
Possibly, could you list some of the features you had in mind?
Well, I just don’t see any connection at all, and I assume that has something to do with the −1 karma status of the comment.
People usually use “suffering” to mean something along the lines of “experiencing subjectively unpleasant qualia” and having negative utility associated with it.
Where does complexity come in?
Building on some of the more non-trivial theories of fun (specifically, cognitive science research on the human response to learning), there is a direct relationship between the human perception of subjectively unpleasant qualia and the complexity impact of those qualia on the human.
Admittedly, extending this concept of suffering beyond humanity is a bit questionable. But it’s better than a tautological or innately subjective definition, because with this model it is possible to make estimates and compare them against more intuitive expectations.
One nice effect of defining suffering as the sapping of complexity is that it handles the question of which pain counts as suffering fairly elegantly: “subjectively” interesting pain is not suffering, while “subjectively” uninteresting pain is.
Of course, that is only a small part of the process of making these distinctions. It’s important to estimate both the subject of the qualia and the structure of the sequence of qualia as it relates to the current state of the entity in question before you can estimate whether the stream of qualia will induce suffering.
It is a very powerful approach, but it is by no means simple, so I don’t begrudge some karma loss in trying to explain it to folks here. At least the loss is feedback that my explanations are unclear.
I don’t mean to suggest that anything that subtracts a karma point isn’t worth doing, just that it’s evidence that you’re not accomplishing what you’d like.
You’ve made some claims (in other comments too) which would be very interesting if true, but weren’t backed up enough for me to make the inferential jump.
I’d like to see a full top level post on this idea, as it seems quite interesting if true, but it also seems to need more space to give the details and full supporting arguments.
You’re right that I owe a top-level post on this, among other topics.
Although one worry I have about trying to lay out the inferential steps is that some of these ideas (this one included) seem to run into a sort of Zeno’s paradox of comprehension: it stops being enough to be willing to take the next step; it becomes necessary to take the inferential limit to get to the other side.
Which means that until I find a way to guide people around that phenomenon, I’m hesitant to give it a large-scale treatment. Just because it was the route I took doesn’t mean it’s a good way to explain things generally, à la the Typical Mind Fallacy borne out by evidence.
But in any case I will return to it when I have the time.
Laying out the route you took might be a lot easier than looking for another route. Also, the feedback from comments might be a better way to look for another route than modeling other minds on your own.
I suspect that people are voting you down because you sound like you’re attempting to show off, rather than attempting to communicate. Several of your posts seem to be simple assertions that you possess knowledge or a theory. I did vote down the comment at the top of this thread, but I don’t remember if that’s why. I was surprised that I didn’t vote down other comments of yours where I remember having that reaction, so this theory-from-introspection isn’t even a good theory of me. But it might work better for people who vote more. (The simple theory of when I vote you up is 21 May and 6 June, which disturbs me.)