I like this distinction, and actually agree! I've talked with a CFAR (ex-)staff member about this, who confirmed my impression that CFAR has been a compensating factor in the community along most of the perceptual/cognitive dimensions I listed. Where you and I may disagree, though, is that I still think the default tendency for many rationalists is to frame problems as structure-based.
There are certainly rationalists whose approach to problems is structure-based. We have a diversity of approaches.
One interesting example here is the question of diet. You find people who argue the structure-based approach, where it's about CICO (Calories In, Calories Out). Then you have other people who take a more process-oriented perspective drawn from cybernetics.
You often have people who believe in CICO who don't get that there's another way to look at the issue, but historically, Eliezer for example argued the cybernetics paradigm.
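To make the contrast concrete, here's a toy sketch (my own illustration with made-up numbers, not a model from either literature): the structure-based framing treats weight change as a static energy ledger, while the process-based framing treats the body as a control loop whose expenditure adapts toward intake.

```python
# Toy contrast between the two framings of diet; all numbers are invented
# for illustration only.

def cico_prediction(intake_kcal, expenditure_kcal, days, kg_per_kcal=1/7700):
    """Structure-based framing: weight change is a fixed energy ledger."""
    return (intake_kcal - expenditure_kcal) * days * kg_per_kcal

def setpoint_prediction(intake_kcal, days, start_expenditure_kcal=2500,
                        adaptation=0.3, kg_per_kcal=1/7700):
    """Process-based framing: expenditure is regulated, drifting toward
    intake each day, so the deficit shrinks over time."""
    weight_change, expenditure = 0.0, start_expenditure_kcal
    for _ in range(days):
        weight_change += (intake_kcal - expenditure) * kg_per_kcal
        expenditure += adaptation * (intake_kcal - expenditure)
    return weight_change

print(cico_prediction(2000, 2500, days=90))  # fixed deficit: about -5.8 kg
print(setpoint_prediction(2000, days=90))    # adapting loop: about -0.2 kg
```

The point isn't which model is right; it's that the two framings make very different predictions from the same inputs.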
When I look at my local rationality community, CFAR has a huge influence on it. I would guess that within EA you find more people who can only handle the structure-based approach. I would claim that's more because those EAs have too little exposure to the rationality community than because of flaws in the rationality community.
Good nuance here: we don't just see individuals as independent of the system they're operating in, so there is some sense of interconnectivity there. I think you're only partially capturing what I mean by interdependence, though. See my other comment for an attempt to convey this.
There are probably two separate issues here. One is about modeling yourself as independent and the other is about modeling other people as independent actors.
I think we generally do a decent job at modeling how other people are constrained in the choices they make by their environment, but model ourselves more as independent actors. But then I'm uncertain whether anybody really looks at the way their own decision-making depends on other people.
re: Processes vs Structure

Your concrete examples made me update somewhat towards process thinking being more common in AI alignment and local rationality communities than I was giving credit for. I also appreciate you highlighting that the rationality community has a diversity of approaches, and that we're not some homogeneous blob (as my post might imply).
A CFAR staff member also pointed me to MIRI's logical induction paper (with input from Critch) as one of MIRI's most-cited papers (i.e. at least somewhat representative of how outsiders might view MIRI's work) that's clearly based on an algorithmic process.
Eliezer’s AI Foom post (the one I linked to above) can be read as an explanation of how an individual agent is constructed out of a reinitiating process.
Also, there has been some interest in decision and allocation mechanisms like quadratic voting and quadratic funding (both promoted by RxC, the latter also by Vitalik Buterin), which seem deliberately process-oriented.
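As a concrete illustration of why the mechanism feels process-oriented, here's a minimal sketch of the quadratic funding rule (the formula is the mechanism's; the toy scenario and names are mine): a project's total funding is the square of the sum of the square roots of individual contributions, so the outcome is generated by a procedure over inputs rather than by assessing some structure up front.

```python
import math

def quadratic_funding_total(contributions):
    """Quadratic funding rule: total = (sum_i sqrt(c_i))^2.
    A matching pool tops up the difference between this total
    and the raw sum of contributions."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# The process favours broad support over concentrated money:
broad = [1.0] * 100   # 100 people contributing 1 each
narrow = [100.0]      # 1 person contributing 100
print(quadratic_funding_total(broad))   # (100 * sqrt(1))^2 = 10000.0
print(quadratic_funding_total(narrow))  # sqrt(100)^2       = 100.0
```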
I would guess that within EA you find more people who can only handle the structure-based approach.
This also resonates, and I hadn't explicitly made that distinction yet! In particular, how EA researchers have traditionally represented cause areas/problems/interventions to work on (after going through e.g. an Importance-Tractability-Neglectedness analysis) seems quite structure-based (80K's methods for testing personal fit less so, however).
IMO, CEA-hosted grantmakers also started from a place where they dissected and assessed the possibly promising/unpromising traits of a person, the country or professional hub they operate from, and their project idea (based on first principles, say, or track record). This was particularly the case with the EA Community Building Grant in its early days. But this seems to be changing somewhat: EA Funds grantmakers are offering smaller grants in dialogue with possible applicants, and assessing as they go whether a career aspirant should continue or a project should expand further.
I made a case before for funding my entrepreneurial endeavour that instead relied on funding processes that drew more directly on eliciting and acting on feedback, and expanded on that in a grant application to the EA Infrastructure Fund:

Personally, I think my project work could be more robustly evaluated based on:
- feedback on the extent to which my past projects enabled or hindered aspiring EAs;
- the care I take in soliciting feedback from users and in passing that on to strategy coaches and evaluating funders;
- how I use feedback from users and advisors to refine and prioritise next services.

At each stage, I want to improve processes for coordinating on making better decisions. Particularly, to couple feedback with funding better:

deserved recognition ⇅ commensurate funding
actionable feedback ⇅ improved work
I'm not claiming, btw, that process-based representations are inherently superior for the social good or anything. Just that the value of that kind of thinking is overlooked in some of the work we do.
E.g. in this 80K interview, they made a good case for why adhering to some bureaucratic process can be bad. I also thought they overlooked a point: you can make similarly compelling arguments for why rewarding the fact that some previously assessed outcome or end state was reached can be bad.
Robert Wiblin: You have to focus on outcomes, not process.
Brian Christian: Yeah. One of the examples that I give is my friend and collaborator, Tom Griffiths. When his daughter was really young, she had this toy brush and pan, and she swept up some stuff on the floor and put it in the trash. And he praised her, like “Oh, wow, good job. You swept that really well.” And the daughter was very proud. And then without missing a beat, she dumps the trash back out onto the floor in order to sweep it up a second time and get the same praise a second time. And so Tom—
Robert Wiblin: Pure intelligence.
Brian Christian: Exactly. Yeah. Tom was—
Robert Wiblin: Should be very proud.
Brian Christian: —making the classic blunder of rewarding her actions rather than the state of the kitchen. So he should have praised how clean the floor was, rather than her sweeping itself. So again, there are these surprisingly deep parallels between humans and machines. Increasingly, people like Stuart Russell are making the argument that we just shouldn’t manually design rewards at all.
Robert Wiblin: Because we just have too bad a track record.
Brian Christian: Yes. It’s just that you can’t predict the possible loopholes that will be found. And I think generally that seems right.
Robert Wiblin: Yeah. I guess we’ll talk about some of the alternative architectures later on. With the example of a human child, it’s very visible what’s going wrong, and so you can usually fix it. But I guess the more perverse cases where it really sticks around is when you’re rewarding a process or an activity within an organisation, and it’s sufficiently big that it’s not entirely visible to any one person or within their power to fix the incentives, or you start rewarding people going through the motions of achieving some outcome rather than the outcome itself. Of course rewarding the outcome can be also very difficult if you can’t measure the outcome very well, so you can end up just stuck with not really having any nice solution for giving people the right motivation. But yeah, we see the same phenomenon in AI and in human life yet again.
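To caricature the failure mode they describe in code, here's a toy sketch (mine, not from the interview) of a greedy reward-maximiser in a one-variable world. Rewarding the action makes undoing your own progress the optimal policy, while rewarding the state does not.

```python
def run_episode(reward_on, steps=6):
    """Greedy reward-maximiser in a world with one variable: floor clean/dirty.
    Available moves: 'sweep' (dirty -> clean), 'dump' (clean -> dirty), 'wait'."""
    floor_clean = False
    total, trace = 0, []
    for _ in range(steps):
        if not floor_clean:
            action, floor_clean = "sweep", True
            if reward_on == "action":
                total += 1                       # praise the sweeping itself
        elif reward_on == "action":
            action, floor_clean = "dump", False  # re-dirty the floor to sweep again
        else:
            action = "wait"
            total += 1                           # praise the clean floor
        trace.append(action)
    return total, trace

print(run_episode("action"))  # (3, ['sweep', 'dump', 'sweep', 'dump', 'sweep', 'dump'])
print(run_episode("state"))   # (5, ['sweep', 'wait', 'wait', 'wait', 'wait', 'wait'])
```

Of course, as Wiblin notes, rewarding the state only helps when you can actually observe the state.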
re: Independent vs. Interdependent
I think we generally do a decent job at modeling how other people are constrained in the choices they make by their environment, but model ourselves more as independent actors. But then I'm uncertain whether anybody really looks at the way their own decision-making depends on other people.
Both resonate for me.
And seeing yourself as more independent than you see others does seem very human (or at least seems like what I'd do :P). I wonder, though, whether there's any rigorous research on this in East Asian cultures, given the tendency for people living there to construe the personal and agentic self as more interdependent.
One is about modeling yourself as independent and the other is about modeling other people as independent actors.
I like your distinction of viewing yourself vs. another as an independent agent.
Some other person-oriented ways of representing that didn’t make it into the post:
Identifying an in(ter)dependent personal self with respect to the outside world.
Identifying an in(ter)dependent relation attributable to ‘you’ and/or to an identified social partner.
Identifying a collective of related people as in(ter)dependent with respect to the outside world.
and so on…
Worked those out using the perceptual framework I created, inspired by and roughly matching up with the categories of a paper that seems kinda interesting.
Returning to the simplified two-sided distinction I made above, my sense is still that you're not capturing it. There's a nuanced difference between the way most of your examples are framed and the framing I'm trying to convey. I'm struggling to express it, but here's another attempt!
Your examples:
When discussing, for example, FDA decisions, we don't see Fauci as a person who's independent of the system in which he operates.

There's the moral maze discourse, which is also about individuals being strongly influenced by the system in which they are operating.

But then I'm uncertain whether anybody really looks at the way their own decision-making depends on other people.
Each of those examples describes an individual agent as independent vs. ‘not independent’, i.e. dependent.
Dependence is unlike interdependence (in terms of what you subjectively perceive in the moment). Interdependence involves holding both/all in mind at the same time, and representing how the enduring existence of each is conditional upon both themselves and the other/s. If that sounds wishy-washy and unintuitive, then I hope you get my struggle to explain it.
You could sketch out a causal diagram where one arrow shows the agent affecting the system, and a parallel arrow shows the system affecting the agent back. That translates as "A independently wills a change in S; S depends on A" in framing 1, and "S independently causes a change in A; A depends on S" in framing 2. Then, when you mentally situate framing 2 next to framing 1, it might look like you've actually modelled the interdependence of the two parts.
That control loop seems deceptively like interdependence, but it isn't, by the richer meaning I'm pointing to with the word. It's what these speakers are trying to point out when they vaguely talk about system complexity vs. holistic complexity.
Cartesian frames seem like a mathematically precise way of depicting interdependence, though they also implicitly import an assumption of self-centricity: a model in which compute is allocated towards predicting the future as represented within a dualistic ontology (consisting of the embedded self and the environment outside of the self).
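For readers who haven't seen them: a Cartesian frame is essentially a matrix whose rows are the agent's options, whose columns are the environment's options, and whose entries are the resulting world states. The toy instance below is my own; only the shape of the formalism comes from Garrabrant's sequence.

```python
from itertools import product

# Rows: the agent's possible choices. Columns: the environment's.
agent_options = ["act", "abstain"]
env_options = ["cooperates", "interferes"]

# The frame is the evaluation map A x E -> W: every world state is
# conditional on BOTH sides at once, which is the interdependence.
world = {
    ("act", "cooperates"):     "project succeeds",
    ("act", "interferes"):     "effort wasted",
    ("abstain", "cooperates"): "opportunity missed",
    ("abstain", "interferes"): "status quo",
}

for a, e in product(agent_options, env_options):
    print(f"{a:7} x {e:11} -> {world[(a, e)]}")
```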
Brian Christian's example is interesting. It seems to suggest that the process and the outcome are the only possible directions to focus on. Leslie Cameron-Bandler et al. argue in one example in the Emprint Method that in good parenting the focus is not on the past (and the process or outcome of the past) but on the future. (It's generally a good book for people who want to understand what ways there are to think and make decisions.)
I spoke imprecisely above when I linked Fauci and the FDA. Fauci leads the NIAID. He has some influence on it, but he's also largely influenced by it. That seems to me to be interdependence.
I will listen to the talk later and maybe write more then.
Let me google the Emprint Method. The idea of focusing on the past vs. the future when rewarding/encouraging makes intuitive sense to me, though.
I haven't actually heard of Fauci or the discussions around him, but I appreciate the clarification!

Note again that I'm talking about a way you perceive interdependence (not pointing to the elements needed for two states to be objectively described as interdependent).