Is it not at all concerning that aliens with no knowledge of Earth or humanity could plausibly guess that a movement dedicated to a maximizing, impartial, welfarist conception of the good would also be intrinsically attracted to learning about idealized reasoning procedures?
This is not at all concerning. If we are concerned about this, then we should also be concerned that aliens could plausibly guess that a movement dedicated to space exploration would be intrinsically attracted to learning about idealized dynamical procedures. It seems to me this just reflects the prior that groups with a goal investigate instrumentally useful things.
My model of your model so far is this: because the EA community is interested in LessWrong, and because LessWrong facilitated the group that works on HRAD research, the EA community will move its practices closer to the implications of this research even if that research is wrong. Is that accurate?
My expectation is that EAs will give low weight to the details of HRAD research, even if it turns out to be a successful program. The biggest factor is timelines: HRAD research is in service of the long-term goal of reasoning correctly about AGI, whereas EA is about doing as much good as possible, as soon as possible. The iconic feature of the EA movement is the giving pledge, which is largely predicated on the idea that money given now is more impactful than money given later. There is a lot of discussion about alternatives and different practices, for example the donor's dilemma and mission hedging, but these are operational concerns rather than theoretical or idealized ones.
Even if I assume HRAD is a productive line of research, I strongly expect that the path to changing EA practice leads from some surprising result through evaluation all the way up to the level of employment and investment decisions. This means the result would need to be surprising, it would then need to withstand scrutiny, and it would then need to lead to conclusions big enough to shift activity like donations, employment, and investments, with the cost of change included. I would be deeply shocked if this happened, and further shocked if it had a broad enough impact to change the course of EA as a group.
I don’t understand the source of your concern.