You’re not engaging with his point. By writing what he did, he was inviting you to consider the idea of a genuinely arbitrary complex concept, without the bother of writing one out explicitly (because there is no compact way to do so; the compact ways of representing complex concepts in English are reserved for the complex concepts that we actually do care about).
If he had picked an actually complex concept, there would be some autistic person happily doing just this, and we would be happy for that person that he had found something he likes to do, something that engages his brain in precisely the way the GCD does not. Or at least, we would have found it far less objectionable than the GCD example.
Would you be happier for him than for someone with Down syndrome who was just always happy?
If I offered you a pill that would modify you so that you used some hash function to determine your happiness, one far more complex than what you have now but completely unrelated to it, would you take it?
I don’t see why it would engage it differently. More perhaps, but not differently.
You are still sort of agreeing with me here, even though your explicit notion of “complex” is very different from your implicit one.
Hash functions are simple; visual recognition is hard. Breaking encryption codes and hash functions, on the other hand, is very hard, and that is what is complex about hash functions: they are easy to compute, not easy to reverse. If I spent my life trying to break encryption, and were intellectually happy whenever I broke some cipher, that would generally be considered quite a respectable thing to do, even though it is a case of happiness from some really difficult nonsense.
Edit: and I do agree that we don’t consider something entirely random to be complex.
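The easy-to-compute, hard-to-reverse asymmetry is simple to demonstrate. A minimal Python sketch (the 4-digit PIN “4271” is just an illustrative stand-in for a secret; any real preimage space would be astronomically larger):

```python
import hashlib

# Computing a hash is easy: a single pass over the input.
digest = hashlib.sha256(b"hello").hexdigest()

# Reversing one is hard: even knowing the preimage is a 4-digit PIN,
# the only general approach is brute-force search over all candidates.
target = hashlib.sha256(b"4271").hexdigest()
found = next(
    pin
    for pin in (f"{i:04d}" for i in range(10_000))
    if hashlib.sha256(pin.encode()).hexdigest() == target
)
print(found)  # recovered only by trying candidates until one matches
```

The forward direction is one function call; the reverse direction already costs thousands of trials for a four-digit secret, and scales exponentially with the size of the preimage space.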
Hash functions tend to be simple, but they don’t have to be. If you came up with something of a certain complexity at random, it would look like an extremely complex hash function, not image recognition.
In that case, you are using an entirely different definition of complexity than I am. Define complexity.
I’ll link other post, that I didn’t know of, which explains it considerably better:
http://lesswrong.com/lw/196/boredom_vs_scope_insensitivity/
Ultimately, complexity in the sense of Kolmogorov complexity is not computable (nor always useful). There are various complexity metrics that are more practical. The interesting thing about complexity metrics is that under certain conditions the complexity of the concatenation of A and B is close to the sum of their complexities, and under other conditions it is far from it; that generally has to do with how much A and B have in common. The problem, of course, is that we can’t quite pin down which exact metric is being used here.
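The concatenation property is easy to see with a practical, computable proxy: compressed size. A rough sketch, using zlib’s compressed length as a crude stand-in for the uncomputable Kolmogorov complexity (the sentences and sizes are arbitrary choices for illustration):

```python
import os
import zlib

def c(data: bytes) -> int:
    # Compressed size as a crude, computable stand-in for the
    # uncomputable Kolmogorov complexity of the data.
    return len(zlib.compress(data, 9))

a = b"the quick brown fox jumps over the lazy dog " * 50
b_similar = b"the quick brown fox jumps over the lazy cat " * 50
b_random = os.urandom(len(b_similar))

# A and B share almost everything: C(A+B) is far below C(A) + C(B).
print(c(a + b_similar), c(a) + c(b_similar))

# A and B share nothing: C(A+B) stays close to C(A) + C(B).
print(c(a + b_random), c(a) + c(b_random))
```

This is the same idea behind the normalized compression distance: how far the concatenation’s complexity falls below the sum measures how much the two pieces have in common.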
One sort of complexity is the size of the internal representation inside the head. We don’t know how we represent things internally; that is a very complicated problem. It does seem that we use compression, implying that the “size” of a thing inside the head depends on its complexity in terms of its repetitiveness, while ignoring randomness. It may be that, our hardware being faulty and all, the size of the internal representation plays a role. It is clearly the case that the more abstractly we represent strangers, the less we care about them.
I don’t see how that’s relevant.
What complexity metric are you using? I suspect it involves only counting information that you find interesting, or something to that extent. Otherwise, I don’t see how random data could possibly have low complexity.
We compress random data into “random data” (along with its standard deviation, etc.) because we don’t care about the exact content or find it irrelevant. Maybe a bit like an image of random noise after it has been blurred.
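Both halves of that analogy can be sketched in a few lines of Python (the sample size, block size, and seed are arbitrary choices for illustration): the lossy “compression” keeps only the summary statistics, and “blurring” (block-averaging) makes the exact content unrecoverable while the statistics survive.

```python
import random
import statistics

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Lossy "compression": all we keep of the random data is its summary.
summary = (statistics.mean(noise), statistics.stdev(noise))

# "Blurring": replace each block of 100 samples with its average.
blurred = [statistics.mean(noise[i:i + 100])
           for i in range(0, len(noise), 100)]

# The exact samples are gone; the spread collapses toward the mean,
# and only the summary statistics remain recoverable.
print(round(statistics.stdev(noise), 2), round(statistics.stdev(blurred), 2))
```

The blurred signal has roughly a tenth of the original spread: exactly the sense in which blurred noise carries only “random data, mean 0, deviation 1” and nothing else.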
That changes a lot.
Before, I thought you were saying that people favor moral values that have high K-complexity. This essentially means that people favor moral values that don’t seem arbitrary. I think I agree with that.
Not the moral values, actually… the idea is that when making moral comparisons, the perceived complexity (the length of the internal representation, perhaps) may play a big role. Evolution also tends to pick the easiest routes; if the size of the internal representation correlated with tribal importance or genetic proximity, then caring more for those who are represented most complexly would be a readily available way to discriminate between in-tribe and out-tribe.
I think you could have found a nicer way to make your point… a better example.
In California, autism rates have reached 1 in 88 (propaganda… or the real rate? Hard to tell. Nonetheless, it is high), and they are steadily increasing all over the world.
This disorder is so prevalent now that when you speak on any issue at all, someone in your audience has probably been affected by autism.
Using traits of the disabled as a sort of caricature to espouse your unrelated opinions is not only unproductive… it also makes you look like a jerk.
I am absolutely NOT in support of a ‘politically correct’ society, but your example was in poor taste.