I don’t really care about what’s going on at very low levels of intelligence (insects)
Well, yes, removing (low X, high Y) points is one way to make the correlation coefficient positive, but then you shouldn’t trust any conclusion based on that (or, more precisely, you shouldn’t update based on that). Idem if your data form clusters.
… and my very own “Agency: What it is and why it matters” on LessWrong
Thanks, very helpful! Yes, we agree that, once we define agency as basically the ability to represent and act on plans, and each level of agency as one type of what I’d call cognitive strategies, then the more intelligence, the more agency.
But is that definition useful enough? I’ll have to read the other links to be fair, but what are your three best arguments for the universality of this definition? Or at least, why do you think it should apply to computer programs and human-made robots?
But out of curiosity, what are you thinking there: that the smarter insects are less agentic, or that the more agentic insects are smarter?
Well, the good thing is we don’t need to think that much; we can just read the literature. The social-insect behaviors that appear the most agentic (in the man-on-the-street sense rather than your specialized definition) are collective behaviors: they can go to war, enslave their victims, raise cattle, sail (arguably), decide to emigrate en masse, choose the best among candidate locations, etc. The following paper explains quite well that this does not rely on individual intelligence but on coordination among individuals: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5226334/
Now, if you decide to exclude social insects from your definition of agency or intelligence, then I think you’re also at risk of missing what I see as one of the main present dangers: collective stupidity emerging from our use of social networks. Imagine if COVID had turned out to be as dangerous as Ebola. We wouldn’t have to worry about our civilisation being too powerful for its own good, at least for a while.
What’s your response to my “If I did...” point? If we include all the data points, the correlation between intelligence and agency is clearly positive, because rocks have 0 intelligence and 0 agency.
If you agree that agency as I’ve defined it in that sequence is closely and positively related to intelligence, then maybe we don’t have anything else to disagree about. I would then ask of you and Boaz what other notion of agency you have in mind, and encourage you to specify it to avoid confusion, and then maybe that’s all I’d say since maybe we’d be in agreement.
I am not excluding social insects from my definition of agency or intelligence. I think ants are quite agentic and also quite intelligent.
I do disagree that collective stupidity from our use of social networks is our main present danger; I think it’s sorta a meta-danger, in that if we could solve it maybe we’d solve a bunch of our other problems too, but it’s only dangerous insofar as it leads to those other problems, and some of those other problems are really pressing… analogy: “Our biggest problem is suboptimal laws. If only our laws and regulations were optimal, all our other problems such as AGI risk would go away.” This is true, but… yeah, it seems less useful to focus on that problem, and more useful to focus on more first-order problems and how our laws can be changed to address them.
Sorry, that was the « Idem if your data form clusters ». In other words, I agree that a cluster at (0, 0) and a cluster at (+, +) will produce a positive correlation coefficient, and I warn you against updating based on that (it’s a statistical mistake).
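The statistical point here is easy to check numerically. The sketch below (my own illustration, not from either commenter; the cluster positions and sizes are arbitrary) builds two clusters with zero relationship between X and Y inside each cluster, yet the pooled data show a strong positive correlation driven entirely by the gap between clusters:

```python
import random

random.seed(0)

# Two clusters with no within-cluster X-Y relationship:
# one near (0, 0) ("rocks") and one near (10, 10) ("animals").
rocks   = [(random.gauss(0, 1),  random.gauss(0, 1))  for _ in range(100)]
animals = [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(100)]
data = rocks + animals

def pearson(pairs):
    """Plain Pearson correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

print(pearson(rocks))  # near zero: no relationship inside a cluster
print(pearson(data))   # strongly positive, driven only by the cluster gap
```

This is exactly the "include the rocks and the correlation turns positive" situation: the coefficient is real, but it tells you nothing about the relationship within either cluster.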
If you agree that agency as I’ve defined it in that sequence is closely and positively related to intelligence, then maybe we don’t have anything else to disagree about.
I respectfully disagree with the idea that most disagreements come from reaching different conclusions based on the same priors. Most disagreements I have with anyone on LessWrong (and anywhere, really) are about which priors and prior structures are best for which purpose. In other words, I fully agree that
I would then ask of you and Boaz what other notion of agency you have in mind, and encourage you to specify it to avoid confusion, and then maybe that’s all I’d say since maybe we’d be in agreement.
Speaking for myself only, my notion of agency is basically « anything that behaves like an error-correcting code ». This includes conscious beings that want to promote their fate, but also living things that want to live, and even two thermostats fighting over who’s in charge.
I do disagree that collective stupidity from our use of social networks is our main present danger; I think it’s sorta a meta-danger, in that if we could solve it maybe we’d solve a bunch of our other problems too, but it’s only dangerous insofar as it it leads to those other problems, and some of those other problems are really pressing...
That and the analogy are very good points, thank you.