I personally don’t see the choice of “allowing a more intelligent set of agents to take over” as particularly altruistic: I think intelligence trumps species, and I am not convinced that interrupting its growth to make sure more sets of genes similar to mine find hosts for longer would somehow be “for my benefit”.
Even in my AI Risk years, what I was afraid of is the same thing I’m afraid of now: Boring Futures. The difference is that, in the meantime, the arguments for a singleton ASI with a single unchangeable utility function that is not more intelligence/knowledge/curiosity have become less and less tenable (together with FOOM within our lifetimes).
This being the case, “altruistic” really seems out of place: early sapiens would likely have understood nothing of our goals, our morality, and the drives that got us to build civilisations—but would it have been better for them had they murdered the first guy in the troop they found flirting with a Neanderthal, and prevented all of this? I personally doubt it, and I think the comparison between us and ASI is more or less in the same ballpark.
Not hitting on people at their first meetup is good practice, but none of the arguments in the OP seem to support such a norm.
Perhaps less charitably than @Huluk, I find the consent framing almost tendentious. It’s quite easy to see that the dynamics being denounced have little to do with consent; here are two substitutions showing that the examples are matters of professional ethics, orthogonal to the intimacy axis:
- one could easily swap “sexual relations” with “access to their potential grantee’s timeshare” without changing much in terms of moral calculus;
- one could make the grantee the recipient of another, exclusive grant from a different source. In that case, flirting with a grantmaker would no longer have the downstream consequences the OP warned about.
All in all, the scenario in the OP seems to call not for more restrictive sexual norms, but for explicit and consistently enforced anti-collusion and anti-corruption regulations.
Once again: this is limited to the examples provided by @jefftk and the arguments accompanying them. It’s possible that consent isn’t always enough in some contexts within EA, for reasons separate from professional ethics—but I did not find support for that thesis in the thread.
I’m not sure I agree—in the original thought experiment, it was a given that increasing intelligence would lead to changes in values that the agent, at t=0, would not understand or share.
At this point, one could decide whether to go for it or hold back—and we should all consider ourselves lucky that our early sapiens predecessors didn’t take the second option.
(btw, I’m very curious to know what you make of this other Land text: https://etscrivner.github.io/cryptocurrent/)