sigh I wish people realized how useless it is to have money when the singularity happens. Either we die or we get a utopia in which it’s pretty unlikely that pre-singularity wealth matters. What you want to maximize is not your wealth but your utility function, and you sure as hell are gonna get more from LDT handshakes with aligned superintelligences in saved worlds, if you don’t help OpenAI reduce the number of saved worlds.
downvote and agree. but being financially ruined makes it harder to do other things, and it’s probably pretty aversive to go through even if you expect things to turn out better in expectation because of it. the canaries thing seems pretty reasonable to me in light of this.
I wish you would realize that whatever we’re looking at, it isn’t people not realizing this.
?
I’m interpreting “realize” colloquially, as in, “be aware of”. I don’t think the people discussed in the post just haven’t had it occur to them that pre-singularity wealth doesn’t matter because a win singularity society very likely wouldn’t care much about it. Instead someone might, for example...
...care a lot about their and their people’s lives in the next few decades.
...view it as being the case that [wealth mattering] is dependent on human coordination, and not trust others to coordinate like that. (In other words: the “stakeholders” would have to all agree to cede de facto power from themselves, to humanity.)
...not agree that humanity will or should treat wealth as not mattering; and instead intend to pursue a wealthy and powerful position mid-singularity, with the expectation of this strategy having large payoffs.
...be in some sort of mindbroken state (in the genre of Moral Mazes), such that they aren’t really (say, in higher-order derivatives) modeling the connection between actions and long-term outcomes, and instead are, I don’t know, doing something else, maybe involving arbitrary obeisance to power.
I don’t know what’s up with people, but I think it’s potentially important to understand deeply what’s up with people, without making whatever assumption goes into thinking that IF someone only became aware of this vision of the future, THEN they would adopt it.
(If Tammy responded that “realize” was supposed to mean the etymonic sense of “making real” then I’d have to concede.)
Isn’t the central one “you want to spend money to make a better long term future more likely, e.g. by donating it to fund AI safety work now”?
Fair enough if you think the marginal value of money is negligible, but this isn’t exactly obvious.
That’s another main possibility. I don’t buy the reasoning in general though—integrity is just super valuable. (Separately, I’m aware of projects that are very important and neglected (legibly so) without being funded, so I don’t overall believe that there are a bunch of people strategically capitulating to anti-integrity systems in order to fund key projects.) Anyway, my main interest here is to say that there are real, large-scale, ongoing problems with the social world, which increase X-risk; it would be good for some people to think clearly about that; and it’s not good to be satisfied with false / vague / superficial stories about what’s happening.
I care about my wealth post-singularity and would be willing to make bets consistent with this preference, e.g. I pay 1 share of QQQ now, you pay me 3 shares of QQQ 6 months after world GDP has 10xed, if we are not all dead then.
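The implied odds of this bet can be sketched as follows. Ignoring time-discounting and counterparty risk (a simplification; the function name `bet_ev` and the numbers below are illustrative, not from the comment):

```python
def bet_ev(p_payout: float, stake: float = 1.0, payout: float = 3.0) -> float:
    """Expected value, in QQQ shares, for the side that receives `stake` now
    and owes `payout` if world GDP 10xes and everyone is still alive.

    p_payout: probability assigned to the "GDP 10xes and we survive" scenario.
    """
    return stake - p_payout * payout

# Taking the other side of the bet is positive-EV only if you assign
# less than stake / payout = 1/3 probability to the payout scenario.
print(bet_ev(0.2))   # positive EV at p = 0.2
print(bet_ev(1 / 3)) # break-even
print(bet_ev(0.5))   # negative EV at p = 0.5
```

So accepting the offered terms amounts to betting that the joint probability of a 10x-GDP, everyone-survives world (in which post-singularity shares are still honored) is below one third, before any discounting.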
Based on your recent post here: https://www.lesswrong.com/posts/55rc6LJcqRmyaEr9T/please-stop-publishing-ideas-insights-research-about-ai
Can I mark you down as in favor of AI-related NDAs? In your ideal world, would a perfect solution be for a single large company to hire all the capable AI researchers, give them aggressive non-disclosure and non-compete agreements, then shut down every part of the company except the legal department that enforces the agreements?
I’m a different person but I would support contracts which disallow spread of capabilities insights, but not contracts which disallow criticism of AI orgs (and especially not surprise ones).
IIUC the latter is what the OAI-NonDisparagement controversy has been about.
I’m not confident the following is true, but it seems to me that your first question was written under the belief that the controversy was about both of those at once. It seems like it was trying (under that world model) to ‘axiomatically’ elicit a belief in disagreement with an ongoing controversy, which would be non-truthseeking.
That seems like a misgeneralization, and I’d like to hear how your view would change depending on the various answers that could be given within the framework you raise. There are many ways a person could be limited in what they choose to say, and being threatened for speaking is a different situation from voluntarily staying silent: the latter, for example, still leaves them free to criticize.