Thought #6: Listen to the Married Graduate Students and Ignore the Unmarried Students Who Live in the Dorms
Students with families have perspective on life and friends outside of the university. They tend to be happy and productive and think sleeping on the futon in your office is childish. They also bathe every day. Which is a nice bonus. The students who are unmarried and living in the dorm have probably escaped, thus far, exposure to the real world in any meaningful form, and because of this they are likely to have a warped sense of personal worth and work habits, and suffer from weird guilt issues. Ignore them.
In other words, don’t try to be some sort of software ronin: this is less effective than having enough balance and boundaries to maintain some relationships that aren’t about your special interest. If you would rather do programming than be around people, that’s OK, but it’s still good to do other activities with other people even if they are not “useful”. What is meant by “usefulness” if not you and others enjoying what you have created? Generally speaking, if you are doing work to “save the world” rather than for cash money, you are being lied to and underpaid, and the dollar amount that you are being underpaid is the amount you value feeling like you are “saving the world”.
Also, and this is not a popular opinion on this forum, I think Elon Musk has the right idea about AI Safety. This is heavily cultural, and Elon’s proposal (let everyone grid-link themselves to their own all-powerful AI) is in line with culturally Protestant values, while the LW proposal (appoint an all-powerful council of elders who decree who is and is not worthy to use AI technology, based on their own research into the doctrine) is in line with culturally Catholic values. I will never give up my heritage of freedom, my right of self-defense, my right to privacy on my own computer in my own home, and my cultural ideal of equality of all before the law and before the Creator. I look forward to healthy debate with the AI Safety Experts. The American heritage of “fair play” and civil rights is a defense against totalitarian government. The AI Safety Expert Panel would be in a position to cause the AI equivalent of the Irish Potato Famine by hoarding all the AI and distributing it in an “equitable” way that does not include my fellow Irish. The great thing about freedom is that I get to make up my own mind about what software I want to use, create, or buy; the AI Safety Expert Panel does not and will never have the right to confiscate my rightful property; and this heritage of freedom will save the AI Safety Expert Panel from accidentally becoming the dystopia that they seek to prevent.
I am not convinced that “the LW proposal” is to appoint an all-powerful council of elders who decree who is and who isn’t worthy to use AI technology, and in fact I don’t recall ever seeing anything resembling that. (Though of course I might well have missed it.)
What I think I have seen suggested or implied is that something like that might be beneficial for the development of possibly-superhumanly-intelligent AIs, on the basis that random individuals are simply not competent to judge whether what they’re doing is safe and that if it isn’t the results might be catastrophic.
To whatever extent it’s true that (1) humans are capable of producing superhumanly intelligent AIs and (2) superhumanly intelligent AIs are likely to have or acquire vastly superhuman power and (3) even conditional on being able to make the superhuman AIs, making them so that they don’t use that power in ways we’d consider catastrophic is a Very Hard Problem (and I think it’s fair to say that (1-3), or at least their possibility, is pretty central to the LW community’s thinking on this), a permissively libertarian position on possibly-superhuman AI development seems uncomfortably close to a permissively libertarian position on, say, nuclear bombs.
Whether (1-3) are right, and whether a “council of elders” is the best solution if they are, are debatable. But I don’t think it should be even slightly controversial that conditional on (1-3) it’s unconscionably dangerous to say “everyone should try to make their own superhuman AI and no one should try to stop them, because Freedom”.
The most freedom-positive society in human history is probably the United States of America. Even there, there are few people arguing that the Second Amendment confers on all the right to keep and bear nuclear warheads.
Of course, if free-for-all AI development is in fact perfectly safe (at least in the sense of being vanishingly unlikely to result in outright catastrophe) then “everyone has to be free to do it because Freedom” is a much more reasonable position. But then the key point in your argument, at least around these parts where most people endorse (1-3) and lean at least somewhat libertarian, is not “Freedom!” but “having everyone develop their own superhuman AI is unlikely to be catastrophic, because …”. Which requires an actual argument, not just a scattering of boo-words like “council of elders” and “totalitarian” and “famine” and “dystopia” and yay-words like “freedom”, “privacy”, “equality”, “fair play”, “freedom”, “rightful”, “freedom”, and “freedom”.
(I feel like I should repeat a key point from earlier: you write as if the question is who will decide who gets to own/use superhuman AIs once they exist, but so far as I know “the LW proposal” doesn’t involve anything remotely like a “council of elders” for that. The point at which something of the sort might be appropriate is in the development of possibly-superhuman AIs.)
This is heavily cultural, and Elon’s proposal (let everyone grid-link themselves to their own all-powerful AI) is in line with culturally Protestant values, while the LW proposal (appoint an all-powerful council of elders who decree who is and is not worthy to use AI technology, based on their own research into the doctrine) is in line with culturally Catholic values.
Deciding between the two approaches based on which values they align with misunderstands the problem. A good strategy depends on what’s actually possible.
The idea that human/AI hybrids are competitive at acquiring resources in an environment with strong AGIs is doubtful. That means that over time all the resources and power go to the AGIs.
I’m not sure if you thought of it while reading my comment or if it’s generally your go-to advice, but I may have accidentally given the wrong impression about how much I prioritize work over being around other people. It’s good to be actively reminded about it, though, for entropy reasons, so I appreciate it.
I admit that what I know about AI Safety comes from reading posts about it rather than talking with the experts about their meta-level ideas, but that doesn’t match the impression I got. CEV, for example, deals with the ethical mess of deciding whose values are worth including, and from what I could see the discussion around it had a very negative prior toward anyone having the power to decide whose values are good enough. Elon’s proposal comes with its own set of problems; a couple that stick out to me are coordination problems between multiple AGIs, and the fact that grid-linking doesn’t completely solve the alignment problem, because we’ll still be far inferior to a good AGI.
Good luck man. I did a different kind of engineering, but here is some advice I wish I had heard 15 years ago:
https://www.calnewport.com/blog/2009/03/12/some-thoughts-on-grad-school/
Human nature suggests that an all-powerful council of elders always becomes corrupt, so that approach might not be possible either.
Human nature is relatively irrelevant to the behavior of AIs. At the same time, that’s basically saying that alignment is a hard problem.
The alignment problem is one of the key AI safety problems.
Thanks.