Alice is, indeed, a fictional character—but clearly some people exist who are extremely ethical. There are people who go around donating 50% of their income, giving kidneys to strangers, volunteering to get diseases in human challenge trials, working on important things rather than their dream career, thinking about altruism in the shower, etc.
Where do you think is the optimal realistic point on the spectrum between Alice and Bob?
Do you think it’s definitely true that Bob would be doing it already if he could? Or do you think there exist some people who could but don’t want to, or who have mistaken beliefs where they think they couldn’t but could if they tried, or who currently can’t but could if they got stronger social support from the community?
That will vary with the person. All of these things are imaginable, but imaginability constrains nothing. Bob is presented as someone who talks the EA talk but has no heart for walking the walk. If he lets Alice badger him into greatly upping his efforts, I would not expect it to go well.
What specifically would you expect to not go well? What bad things will happen if Bob greatly ups his efforts? Why will they happen?
Are there things we could do to mitigate those bad things? How could we lower the probability of the bad things happening? If you don’t think any risk reduction or mitigation is possible at all, how certain are you about that?
Can we test this?
Do you think it’s worthwhile to have really precise, careful, detailed models of this aspect of the world?
I would expect Bob, as you have described him, to never reach Alice’s level of commitment and performance, and after some period of time, with or without some trauma along the way, to drop out of EA. But these are imaginary creatures, and we can make up any story we like about them. There is no question of making predictions. If Alice — or you — want to convert people like Bob, you and she will have to observe the results obtained and steer accordingly.
"really precise, careful, detailed"
Four intensifiers in a row! Is it worthwhile to have, simply, models? Expectations about how things will go? Yes, as long as you track how well they're fitting.