I think it is rationally optimal for me not to give any money away, since I need all of it to pursue rationally considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI: given the very small number of people now working on the problem, and the small number of people capable of working on it, giving that money away would be irrational of him.) There's nothing wrong with believing in what you're doing, and believing that it is optimal. Perhaps it is optimal. If it's not, then why do it? And if money, a fungible asset, won't help you do it, it's likely you're doing it wrong.
Socratic questioning helps. So does considering the opposite of a statement, or what would invalidate it.
Most people I've met lack rational high-level goals and have no prioritization schemes that hold up to even cursory questioning. They could therefore burn their money, or give it to the poor, and produce a better system-wide outcome than buying another piece of consumer electronics or whatever else they were going to buy for themselves. Indeed, if most people had vastly more money, they'd kill themselves with it, possibly via high-glycemic-index carbohydrates, or heroin. Before they get to effective altruism, they have to get to rational self-interest, and disavow coercion as a one-size-fits-all problem solver.
Since that's not going to happen, and since most people, including many LW members, are actively involved in worsening the plight of humanity, I'd suggest that a strong dose of the Hippocratic prescription is in order:
First, do no harm.
Sure, the tiny human-level brains are enamored with the modern equivalents of medical bloodletting. But you're an early adopter and a thinker, so you don't join them. First, do no harm!
Sure, your tiny-brained relatives, over for Thanksgiving, vote for "tough on crime" politicians. But you patiently explain jury nullification to them, noting that a year before Colorado legalized marijuana by popular vote, marijuana was already de facto legal there, because prosecutors were facing too much jury nullification to save face while prosecuting marijuana offenders. Then you show them Sanjay Gupta's heartbreaking video documentary on why marijuana prohibition is morally wrong.
You do what you have to do to change their minds. You present ideas that challenge them, because they are human beings who need something other than a bland ocean of conformity to destruction and injustice. You help them become better people, taking the place of a strong, benevolent Friendly AI in their lives.
In fact, for simple dualist moral decisions, the people on this board can function as FAI.
The software for the future we want is ours to evolve, and the hardware designers’ to build.