I’m interpreting “realize” colloquially, as in “be aware of”. I don’t think the people discussed in the post simply haven’t had it occur to them that pre-singularity wealth doesn’t matter because a win-singularity society very likely wouldn’t care much about it. Instead, someone might, for example...
...care a lot about their own and their people’s lives in the next few decades.
...view [wealth mattering] as dependent on human coordination, and not trust others to coordinate like that. (In other words: the “stakeholders” would all have to agree to cede de facto power from themselves to humanity.)
...not agree that humanity will or should treat wealth as not mattering, and instead intend to pursue a wealthy and powerful position mid-singularity, expecting that strategy to have large payoffs.
...be in some sort of mindbroken state (in the genre of Moral Mazes), such that they aren’t really (say, in higher-order derivatives) modeling the connection between actions and long-term outcomes, and instead are, I don’t know, doing something else, maybe involving arbitrary obeisance to power.
I don’t know what’s up with people, but I think it’s potentially important to understand deeply what’s up with people, without making whatever assumption goes into thinking that IF someone merely became aware of this vision of the future, THEN they would adopt it.
(If Tammy responded that “realize” was supposed to mean the etymological sense of “making real”, then I’d have to concede.)
Isn’t the central one “you want to spend money to make a better long-term future more likely, e.g. by donating it to fund AI safety work now”?
Fair enough if you think the marginal value of money is negligible, but that isn’t exactly obvious.
That’s another main possibility. I don’t buy the reasoning in general, though; integrity is just super valuable. (Separately, I’m aware of projects that are very important and neglected (legibly so) without being funded, so I don’t overall believe that there are a bunch of people strategically capitulating to anti-integrity systems in order to fund key projects.) Anyway, my main interest here is to say that there are real, large-scale, ongoing problems with the social world which increase X-risk; it would be good for some people to think clearly about that; and it’s not good to be satisfied with false, vague, or superficial stories about what’s happening.