To the contrary, this does not get you one iota closer to “ought”.
This is true, but I do think there’s something being pointed at that deserves acknowledging.
I think I’d describe it as: you don’t get an ought, but you do get to predict what oughts are likely to be acknowledged. (In future/in other parts of the world/from behind a veil of ignorance.)
That is, an agent who commits suicide is unlikely to propagate; so agents who hold suicide as an ought are unlikely to propagate; so you don’t expect to see many agents with suicide as an ought.
And agents with cooperative tendencies do tend to propagate (among other agents with cooperative tendencies); so agents who hold cooperation as an ought tend to propagate (among...); so you expect to see agents who hold cooperation as an ought (but only in groups).
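To make the selection story concrete, here's a minimal simulation sketch. Everything in it is assumed for illustration: the three strategy labels, the standard one-shot prisoner's dilemma payoffs, and an `assortment` parameter standing in for "only in groups". It just illustrates the claimed dynamics: "suicide" never propagates, and "cooperate" outcompetes "defect" only when cooperators mostly meet each other.

```python
import random

# Standard one-shot prisoner's dilemma payoffs (assumed for illustration).
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

def step(pop, assortment):
    """One generation: suicidal agents drop out of the pool, and the rest
    reproduce in proportion to their expected payoff."""
    survivors = [s for s in pop if s != "suicide"]
    if not survivors:
        return []
    x = survivors.count("cooperate") / len(survivors)  # cooperator frequency

    def fitness(s):
        if s == "cooperate":
            # With probability `assortment`, a cooperator meets its own type
            # ("only in groups"); otherwise it meets a random survivor.
            p_coop = assortment + (1 - assortment) * x
            return p_coop * R + (1 - p_coop) * S
        p_coop = (1 - assortment) * x  # defectors meet cooperators only by chance
        return p_coop * T + (1 - p_coop) * P

    return random.choices(survivors, weights=[fitness(s) for s in survivors], k=len(pop))

pop = ["suicide"] * 100 + ["defect"] * 100 + ["cooperate"] * 100
for _ in range(50):
    pop = step(pop, assortment=0.8)
print({s: pop.count(s) for s in set(pop)})
# High assortment: "cooperate" dominates and "suicide" is gone after one step.
# Rerun with assortment=0.0 and "defect" takes over instead.
```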
And for someone who acknowledges suicide as an ought, this can’t convince them not to act on it; and for someone who doesn’t acknowledge cooperation as an ought, it doesn’t convince them to. So I wouldn’t describe it as “getting an ought from an is”. But I’d say you’re at least getting something of the same type as an ought?
First of all, there isn’t anything that’s “of the same type as an ought” except an ought. So no, you’re not getting any oughts, nor anything “of the same type”. It’s “is” all the way through, here.
More to the point, I think you’re missing a critical layer of abstraction/indirection: namely, that what you can predict, via the adaptive/game-theoretic perspective, isn’t “what oughts are likely to be acknowledged”, but “what oughts will the agent act as if it follows”. Those will usually not be the same as what oughts the agent acknowledges, or finds persuasive, etc.
This is related to “Adaptation-Executers, Not Fitness-Maximizers”. An agent who commits suicide is indeed unlikely (though not entirely unable!) to propagate, but who says that an agent who doesn’t commit suicide can’t believe that suicide is good, can’t advocate for suicide, etc.? In fact, such agents (actual people, alive today) can and do believe and advocate all of these things!