Thanks for taking the time to do this. I’m not really a fan of the way you approach writing up your thoughts here. The post seems high on snark, rhetoric and bare assertion, and low on clarity, reasoning transparency, and quality of reasoning. The piece feels like you are leaning on your reputation to make something like a political speech, which will get you credit among certain groups, rather than a reasoned argument designed to persuade anyone who doesn’t already like you. For example, you say:
But at least the crazy kids are trying. At all. They get to be wrong, where most others are not even wrong.
Also, future children in another galaxy? Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future.
But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now. They are for people alive today. You, yes you, and your loved ones and friends and if you have them children, are at risk of dying from AI or from a pandemic. Nor are these risks so improbable that one needs to cite future generations for them to be worthy causes.
I fight the possibility of AI killing everyone, not (only or even primarily) because of a long, long time from now in a galaxy far, far away. I fight so I and everyone else will have grandchildren, and so that those grandchildren will live. Here and now.
As I understand it, this is meant to be a critique of longtermism. The claims you have made here just seem to assert that longtermism is not true, without argument, which is what pretty much every journalist does now that journalists have turned against EA. But EA philosophers are field leaders in population ethics and have published papers in leading journals on it, and you can’t just dismiss it by saying things which look on the face of it to be inconsistent, such as “Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future. But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now.” In what sense is the potential value of humanity in the future if the benefits are not in the future?
Similarly, on whether personhood intrinsically matters, you say:
This attitude drives me bonkers. Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea. But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality. Combine that with all the other EA beliefs, set this as a maximalist goal, and you get… well, among other things, you get FTX. Also you get people worried about wild animal or electron suffering and who need hacks put in to not actively want to wipe out humanity.
If you do not love life, and you do not love people, or anything or anyone within the world, and instead wholly rely on a proxy metric? If you do not have Something to Protect? Oh no.
Again, you are just asserting here without argument and with lots of rhetoric that you believe personhood matters independently of subjective experience. I don’t see why you would think this would convince anyone. A lot of EAs I know have actually read the philosophical literature on personal identity, and your claims seem highly non-serious by comparison.
On Alameda, you say:
It was the flood of effective altruists out of the firm that was worrisome. It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good. They proved themselves neither honest nor loyal. Neither was ‘part of their utility function.’
I agree that setting up Alameda was a very bad idea for lots of reasons. However, you claim here that the people who joined Alameda aside from Sam weren’t actually doing it for the common good. To my knowledge, this is false—they did honestly believe they were doing it for the common good and were going to give the money away. Do you have evidence that they didn’t actually donate the money they made?
When you say they proved that they were not loyal, are you saying they should have been loyal to SBF, or that they should have been loyal to Jane Street? Both claims seem false. Even if they should have stayed at Jane Street, loyalty is not a good reason to do so, and they shouldn’t have been loyal to SBF because he was a psychopath.
These general points aside, I agree that management of bad actors and emphasis on rule following are extremely important and should receive much more emphasis than they do.
This seems to misunderstand several points I was attempting to make, so I’ll clear those up here. Apologies if I gave the wrong idea.
On longtermism, I was responding to Lewis’s critique, saying that you do not need full longtermism to care about the issues longtermists care about: there are also (medium-term?) highly valuable issues at stake that would already be sufficient to care about such matters. It was not intended as an assertion that longtermism is false, nor do I believe that.
I am asserting there that I believe that things other than subjective experience of pleasure/suffering matter, and that I think the opposite position is nuts both philosophically and in terms of it causing terrible outcomes. I don’t think this requires belief in personhood mattering per se, although I would indeed say that it matters. And when people say ‘I have read the philosophical literature on this and that’s why nothing you think matters matters, why haven’t you done your homework’… well, honestly, that’s basically why I almost never talk philosophy online and most other people don’t either, and I think that sucks a lot. But if you want to know what’s behind that on a philosophical level? I mean, I’ve written quite a lot of words both here and in various places. But I agree that this was not intended to persuade someone who has read 10 philosophy books and bought Benthamite Utilitarianism to switch.
On Alameda, I was saying this from the perspective of Jane Street Capital. Sorry if that was unclear. As in, Lewis said JS looked at EAs suspiciously for not being greedy. Whereas I said no, that’s false, EAs got looked at suspiciously because they left in the way they did. Nor is this claiming they were not doing it for the common good—it is saying that from the perspective of JSC, them saying it was ‘for the common good’ doesn’t change anything, even if true. My guess, as is implied elsewhere, is that the EAs did believe this consciously. As for whether they ‘should have been’ loyal to JSC, my answer is they shouldn’t have stayed out of loyalty, but they should have left in a more cooperative fashion.
My point in raising the philosophy literature was that you seemed to be professing anger at the idea that subjective experience is all that matters morally—it drives you ‘bonkers’ and is ‘nuts’. I was saying that people with a lot more expertise than you in philosophy think it is plausible and you haven’t presented any arguments, so I don’t think it should drive you bonkers. I think the default position would be to update a bit towards that view and to not think it is bonkers.
Similarly, if I wrote a piece saying that a particular reasonably popular view on quantitative trading was bonkers, I might reasonably get called out by someone (like you) who has more expertise than me in it. I also don’t think me saying “this is why I never have online discussions with people who have expertise in quantitative trading” should reassure you. Your response confirms my sense that much of the piece was not meant to persuade but more to use your own reputation to decrease the status of various opinions among your readers without offering any arguments.
In the passage I quote, you also make a couple of inconsistent statements. You say “Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea”. Then you say “Also you get people worried about wild animal or electron suffering”. If you think suffering is bad, then why would you not think wild animals can suffer? Or if you do think that they can suffer, then aren’t you committed to the view that preventing wild animal suffering is good? Same for digital suffering. I think mistakes like this should lead to some humility about labelling reasonably popular philosophical views as nuts without argument.
I also don’t understand why you think the view that subjective wellbeing is all that matters implies you get FTX. FTX seems to have stemmed from naive consequentialism, which is distinct from the view that subjective experience is all that matters. Indeed, FTX was ex ante and ex post very, very bad from the point of view of a worldview which says that subjective experience is all that matters (hedonistic total utilitarianism). This dynamic has happened repeatedly in various different places since the fall of FTX. ‘Here is some idiosyncratic feature f of FTX, FTX is bad, therefore f is bad’ is not a good argument but keeps coming up, cf. arguments that FTX wasn’t focused on progress, wasn’t democratic, they believed in the total view, they think preventing existential risks is a good idea, etc. Again, I also don’t see an argument for why you think this; you just assert it.
Can you say more about how they could have left in a more cooperative fashion? My default take would be that, as long as you give notice, from the point of view of common sense morality, there is nothing wrong with leaving a company. In the case of Jane Street, I think from the point of view of common sense morality, since the social benefits of the company are small, most people would think that a dozen people leaving at once just doesn’t matter at all. It might be different if it was a refugee charity or something. Is there some detail about how they left that I am missing?
Why do you think they weren’t honest?
This passage: “It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good” strongly reads as saying the EAs who left Jane Street for Alameda did so out of greed. If you didn’t intend this, I would suggest editing the main text to clarify that when you said “it was the effective altruists who were the greedy ones” you meant they were actually doing it for the common good. A lot of the people who read this forum will know a lot of the people who left to join Alameda, so if you are unintentionally calling them greedy, dishonest and disloyal, that could be quite bad. Unless you intend to do that, in which case fair enough.