What’s your response to my “If I did...” point? If we include all the data points, the correlation between intelligence and agency is clearly positive, because rocks have 0 intelligence and 0 agency.
If you agree that agency as I’ve defined it in that sequence is closely and positively related to intelligence, then maybe we don’t have anything else to disagree about. I would then ask of you and Boaz what other notion of agency you have in mind, and encourage you to specify it to avoid confusion, and then maybe that’s all I’d say since maybe we’d be in agreement.
I am not excluding social insects from my definition of agency or intelligence. I think ants are quite agentic and also quite intelligent.
I do disagree that collective stupidity from our use of social networks is our main present danger; I think it’s sorta a meta-danger, in that if we could solve it maybe we’d solve a bunch of our other problems too, but it’s only dangerous insofar as it leads to those other problems, and some of those other problems are really pressing… analogy: “Our biggest problem is suboptimal laws. If only our laws and regulations were optimal, all our other problems such as AGI risk would go away.” This is true, but… yeah it seems less useful to focus on that problem, and more useful to focus on more first-order problems and how our laws can be changed to address them.
Sorry, that was what I meant by « Idem if your data forms clusters ». In other words, I agree that a cluster at (0,0) and a cluster at (+,+) will produce a positive correlation coefficient, and I warn you against updating based on that (it’s a statistical mistake).
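To make the cluster point concrete, here is a minimal numpy sketch (my own illustration, not from either comment; the cluster locations and sizes are arbitrary assumptions): within each cluster, x and y are generated independently, so the true within-cluster correlation is zero, yet pooling the two clusters yields a large positive Pearson r driven entirely by the separation between cluster means.

```python
# Sketch: two clusters with zero within-cluster correlation still
# produce a large pooled Pearson correlation coefficient.
import numpy as np

rng = np.random.default_rng(0)

# Cluster near (0, 0): x and y drawn independently, so r ≈ 0 within it.
rocks = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Cluster near (10, 10): x and y also independent within the cluster.
agents = rng.normal(loc=10.0, scale=1.0, size=(500, 2))

# Pool both clusters into one dataset.
pooled = np.vstack([rocks, agents])

r_rocks = np.corrcoef(rocks[:, 0], rocks[:, 1])[0, 1]
r_agents = np.corrcoef(agents[:, 0], agents[:, 1])[0, 1]
r_pooled = np.corrcoef(pooled[:, 0], pooled[:, 1])[0, 1]

print(f"within-cluster r (cluster at 0,0):   {r_rocks:+.3f}")   # ≈ 0
print(f"within-cluster r (cluster at 10,10): {r_agents:+.3f}")  # ≈ 0
print(f"pooled r:                            {r_pooled:+.3f}")  # ≈ 0.96
```

With the clusters ten standard deviations apart, the between-cluster variance swamps the within-cluster variance, so the pooled r lands near 0.96 even though neither cluster shows any relationship between the two variables.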
If you agree that agency as I’ve defined it in that sequence is closely and positively related to intelligence, then maybe we don’t have anything else to disagree about.
I respectfully disagree with the idea that most disagreements come from drawing different conclusions from the same priors. Most disagreements I have with anyone on LessWrong (and anywhere, really) are about which priors and prior structures are best for which purpose. In other words, I fully agree that
I would then ask of you and Boaz what other notion of agency you have in mind, and encourage you to specify it to avoid confusion, and then maybe that’s all I’d say since maybe we’d be in agreement.
Speaking for myself only, my notion of agency is basically « anything that behaves like an error-correcting code ». This includes conscious beings that want to shape their fate, but also living things that want to keep living, and even two thermostats fighting over who’s in charge.
I do disagree that collective stupidity from our use of social networks is our main present danger; I think it’s sorta a meta-danger, in that if we could solve it maybe we’d solve a bunch of our other problems too, but it’s only dangerous insofar as it leads to those other problems, and some of those other problems are really pressing...
That and the analogy are very good points, thank you.