What made Charles Manson’s cult crazy in the eyes of the rest of society was not that they (allegedly) believed that a race war was inevitable, and that white people needed to prepare for it & be the ones that struck first. Many people throughout history who we tend to think of as “sane” have evangelized similar doctrines or agitated in favor of them. What made them “crazy” was how nonsensical their actions were even granted their premises, i.e. the decision to kill a bunch of prominent white people as a “false flag”.
Likewise, you can see how LaSota’s “surface” doctrine sort of makes sense, I guess. It would be terrible if we made an AI that only cared about humans and not animals or aliens, and that led to astronomical suffering. The Nuremberg trials were a good idea, probably for reasons that have their roots in acausally blackmailing people not to commit genocide. If the only things I knew about the Zizcult were that they believed we should punish evildoers, and that factory farms were evil, I wouldn’t call them crazy. But then they go and (allegedly) waste Jamie Zajko’s parents in a manner that doesn’t further their stated goals at all and makes no tactical sense to anyone thinking coherently about their situation. Ditto for FTX, which, when one business failed, decided to commit multi-billion dollar fraud via their other, actually successful business, instead of just shutting down Alameda and hoping that the lenders wouldn’t be able to repo too much of the exchange.
If instead of supposing that these behaviors were motivated by “belief”, we suppose they’re primarily socially motivated behaviors—in LaSota’s case, for deepening her ties with her followers and her status over them as a leader, for ultimately the same reasons all gang leaders become gang leaders; in FTX’s case, for maintaining the FTX team’s public image as wildly successful altruists—that seems like it actually tracks. The crazy behaviors were, in theory and in practice, absurdly counterproductive, ideologically speaking. But status anxiety is a hell of a drug.
But then they go and (allegedly) waste Jamie Zajko’s parents in a manner that doesn’t further their stated goals at all and makes no tactical sense to anyone thinking coherently about their situation.
And yet that seems entirely in line with the “Collapse the Timeline” line of thinking that Ziz advocated.
Ditto for FTX, which, when one business failed, decided to commit multi-billion dollar fraud via their other, actually successful business, instead of just shutting down Alameda and hoping that the lenders wouldn’t be able to repo too much of the exchange.
And yet, that seems like the correct action if you sufficiently bite the bullet on expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.
And yet, that seems like the correct action if you sufficiently bite the bullet on expected value and the St. Petersburg Paradox, which SBF did repeatedly in interviews.
I am not making the argument that the crime was +EV and SBF was simply dealt a bad hand. Turning your entire business into the second-largest Ponzi scheme ever in order to save the smaller half was patently stupid in EV terms and ran an overwhelming chance of failure. There is no EV calculus on which SBF’s decision is a good one, except maybe one in which he ignores the externalities to EA and is simply trying to prop up his own status, and even then I hardly understand it.
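To make the shape of that calculus explicit (the variables are placeholders for illustration, not claims about FTX’s actual books): say the exchange alone is worth $V$, Alameda’s salvageable value is $A < V$, and the fraud avoids collapse with probability $p$. The choice is roughly

$$\underbrace{V}_{\text{shut Alameda down}} \quad\text{vs.}\quad \underbrace{p\,(V+A) + (1-p)\cdot 0}_{\text{backstop Alameda with customer funds}},$$

and the fraud only comes out ahead if $p > V/(V+A) > 1/2$, i.e. only if the cover-up is more likely than not to succeed, which is the opposite of running an overwhelming chance of failure.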
And yet that seems entirely in line with the “Collapse the Timeline” line of thinking that Ziz advocated.
Right, it is possible that something like this was what they told themselves, but it’s bananas. Imagine you’re Ziz. You believe the entire lightcone is at risk of becoming a torture zone for animals at the behest of Sam Altman and Demis Hassabis. This threat is foundational to your worldview and is the premier casus belli for action. Instead of doing anything about that, you completely ignore this problem to go on the side quest of enacting retributive justice against Jamie’s parents. What kind of acausal reasoning could possibly motivate you to assume this level of risk of being completely wiped out as a group, for an objective so small?!
Scratch that. Imagine you believe you’re destined to reduce ordinary, plebeian historical tragedies via Ziz acausal technobabble, with an expected ~2 opportunities to retroactively kill anybody in the world, and that this is the strategy you must take. You’ve just succeeded at the (IMO really silly) instrumental objective of amassing a group of people willing to help you with this. Then you say: pass on the Pancasila Youth, pass on the Sinaloa Cartel, pass on anybody in the federal government, I need to murder Jamie’s parents.
My understanding of your point is that Manson was crazy because his plans didn’t follow from his premises and had nothing to do with his core ideas. I agree, but I do not think that’s relevant.
I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double-or-nothing bets, perhaps you are more likely to make tragic betting decisions, and that’s because you’re taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV (see the toy sketch at the end of this comment).
I am pushing back because, if you believe that you are constantly being simulated to see what sort of decision agent you are, you are going to react extremely to every slight and that’s because you’re taking certain ideas seriously. If you have galaxy brained the idea that you’re being simulated to see how you react, killing Jamie’s parents isn’t even really killing Jamie’s parents, it’s showing what sort of decision agent you are to your simulators.
In both cases, “they did X because they believe Y which implies X” seems like a more parsimonious explanation for their behaviour.
(To be clear: I endorse neither of these ideas, even if I was previously positive on MIRI style decision theory research.)
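To put a number on the double-or-nothing point above, here is the toy sketch I mentioned. The 55% win probability and the all-in sizing are invented for illustration; nothing here corresponds to anything Alameda actually traded. The point is just that the expected multiple compounds while the probability of never having gone bust collapses:

```python
import random

# Toy sketch: a bettor with linear utility in money who goes all-in, every round,
# on a double-or-nothing bet that is individually +EV. Parameters are invented
# for illustration only.
P_WIN = 0.55       # assumed per-bet win probability (each bet returns 2 * 0.55 = 1.1x the stake in expectation)
ROUNDS = 10        # consecutive all-in bets
TRIALS = 100_000   # Monte Carlo trials

def run_once(start: float = 1.0) -> float:
    wealth = start
    for _ in range(ROUNDS):
        wealth = wealth * 2 if random.random() < P_WIN else 0.0
        if wealth == 0.0:
            break
    return wealth

results = [run_once() for _ in range(TRIALS)]
mean_wealth = sum(results) / TRIALS                  # tracks (2 * P_WIN) ** ROUNDS ≈ 2.6
survival = sum(r > 0 for r in results) / TRIALS      # tracks P_WIN ** ROUNDS ≈ 0.0025

print(f"expected multiple (theory): {(2 * P_WIN) ** ROUNDS:.2f}")
print(f"simulated mean multiple:    {mean_wealth:.2f}")
print(f"P(not bust after {ROUNDS} bets): {survival:.4f}  (theory: {P_WIN ** ROUNDS:.4f})")
```

That gap between “the mean outcome grows” and “almost every path ends at zero” is the sense in which galaxy-braining linear EV can endorse near-certain ruin.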
I am pushing back because, if you are St. Petersburg Paradox-pilled like SBF and make public statements that actually you should keep taking double-or-nothing bets, perhaps you are more likely to make tragic betting decisions, and that’s because you’re taking certain ideas seriously. If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV.
This is conceding a big part of your argument. You’re basically saying, yes, SBF’s decision was -EV according to any normal analysis, but according to a particular incorrect (“galaxy-brained”) analysis, it was +EV.
(Aside: what was actually the galaxy-brained analysis that’s supposed to have led to SBF’s conclusion, according to you? I don’t think I’ve seen it described, and I suspect this lack of a description is not a coincidence; see below.)
There are many reasons someone might make an error of judgement—but when the error in question stems (allegedly) from an incorrect application of a particular theory or idea, it makes no sense to attribute responsibility for the error to the theory. And as the mistake in question grows more and more outlandish (and more and more disconnected from any result the theory could plausibly have produced), the degree of responsibility that can plausibly be attributed to the theory correspondingly shrinks (while the degree of responsibility of specific brain-worms grows).
In other words,
they did X because they believe Y which implies X
is a misdescription of what happened in these cases, because in these cases the “Y” in question actually does not imply X, cannot reasonably be construed to imply X, and if somehow the individuals in question managed to bamboozle themselves badly enough to think Y implied X, that signifies unrelated (and causally prior) weirdness going on in their brains which is not explained by belief in Y.
In short: SBF is no more an indictment of expected utility theory (or of “taking ideas seriously”) than Deepak Chopra is of quantum mechanics; ditto Ziz and her corrupted brand of “timeless decision theory”. The only reason one would use these examples to argue against “taking ideas seriously” is if one already believed that “taking ideas seriously” was bad for some reason or other, and was looking for ways to affirm that belief.
If people inevitably sometimes make mistakes when interpreting theories, and theory-driven mistakes are more likely to be catastrophic than the mistakes people make when acting according to “atheoretical” learning from experience and imitation, then unusually theory-driven people are more likely to make catastrophic mistakes. In the absence of a way to prevent people from sometimes making mistakes when interpreting theories, this seems like a pretty strong argument in favor of atheoretical learning from experience and imitation!
This is particularly pertinent if, in a lot of cases where more sober theorists tend to say, “Well, the true theory wouldn’t have recommended that,” the reason the sober theorists believe that is because they expect true theories to not wildly contradict the wisdom of atheoretical learning from experience and imitation, rather than because they’ve personally pinpointed the error in the interpretation.
(“But I don’t need to know the answer. I just recite to myself, over and over, until I can choose sleep: It all adds up to normality.”)
And that’s even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money. (Even while sober people with sublinear utility functions shake their heads at the allegedly foolish spectacle of the bankruptcy in 89% of possible worlds.)
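Concretely, with those made-up numbers (and a floor $f$ on how much of the bankroll survives the “lose it all” branch, assumed only so the logarithm is defined):

$$\mathbb{E}[\text{money}] = 0.11 \cdot 10W + 0.89 \cdot 0 = 1.1\,W > W,$$
$$\mathbb{E}[\log \text{wealth}] - \log W = 0.11 \log 10 + 0.89 \log f > 0 \iff f > 10^{-0.11/0.89} \approx 0.75.$$

So the linear-utility financier takes the bet, while a log-utility one refuses unless “losing it all” somehow leaves about three-quarters of the bankroll intact.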
I think the causality runs the other way, though; people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation show us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.
And that’s even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money.
But of course no human financier has a utility function, let alone one that can be expressed only in terms of money, let alone one that’s linear in money. So in this hypothetical, yes, there is an error.
(SBF said his utility was linear in money. I think he probably wasn’t confused enough to think that was literally true, but I do think he was confused about the math.)
And that’s even if there is an error. A reckless financier who accepts an 89% chance of losing it all for an 11% chance of dectupling their empire would be rational if they truly had linear utility for money. (Even while sober people with sublinear utility functions shake their heads at the allegedly foolish spectacle of the bankruptcy in 89% of possible worlds.)
This is related to a very important point: without more assumptions, there is no way to distinguish via outcomes between these two cases: being irrational while pursuing your values, and being rational but having very different or strange values.
(Also, I dislike the implication that it all adds up to normality, unless something else is meant or it’s trivial, since you can’t define normality without a context.)
There are many reasons someone might make an error of judgement—but when the error in question stems (allegedly) from an incorrect application of a particular theory or idea, it makes no sense to attribute responsibility for the error to the theory.
Eh, I’m a little concerned in general, because this, without restrictions, could be used to redirect blame away from the theory, even in cases where the implementation of a theory is evidence against the theory.
The best example is historical non-capitalist societies, especially communist ones: when responding to criticism, communists roughly said that those societies weren’t truly communist, and thus that communism could still work if it were truly tried.
It’s the clearest instance I know of, but I’m sure there are other examples of the same phenomenon.
If you have galaxy-brained the idea of the St. Petersburg Paradox, it seems like Alameda-style fraud is +EV.
I don’t think so. At the very least, it seems debatable. Biting the bullet in the St Petersburg paradox doesn’t mean taking negative-EV bets. House of cards stuff ~never turns out well in the long run, and the fallout from an implosion also grows as you double down. Everything that’s coming to light about FTX indicates it was a total house of cards. Seems really unlikely to me that most of these bets were positive even on fanatically risk-neutral, act utilitarian grounds.
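For reference, the gamble the paradox is about: flip a fair coin until it lands heads; if the first heads comes on flip $k$, the payout is $2^k$ dollars, so the expected payout diverges:

$$\mathbb{E}[\text{payout}] = \sum_{k=1}^{\infty} \frac{1}{2^k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty.$$

“Biting the bullet” here means being willing to pay any finite price for that gamble; it says nothing about accepting bets priced above their expectation.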
Maybe I’m biased because it’s convenient to believe what I believe (that the instrumentally rational action is almost never “do something shady according to common sense morality.”) Let’s say it’s defensible to see things otherwise. Even then, I find it weird that because Sam had these views on St Petersburg stuff, people speak as though this explains everything about FTX epistemics. “That was excellent instrumental rationality we were seeing on display by FTX leadership, granted that they don’t care about common sense morality and bite the bullet on St Petersburg.” At the very least, we should name and consider the other hypothesis, on which the St Petersburg views were more incidental (though admittedly still “characteristic”). On that other hypothesis, there’s a specific type of psychology that makes people think they’re invincible, which leads to them taking negative bets on any defensible interpretation of decision-making under uncertainty.
Instead of doing anything about that, you completely ignore this problem to go on the side quest of enacting retributive justice against Jamie’s parents.
It sounds to me like they thought that Jamie would inherit a significant amount of money if they did that. They might have done it not only for reasons of retributive justice but also to fund their whole operation.
Who were you responding to? I didn’t make the argument that you were responding to.
Oh, I was replying to Iceman – mostly the part that I quoted.
(I think I’ve seen similar takes by other posters in the past.)
I should have mentioned that I’m not replying to you.
I think I took such a long break from LW that I forgot that you can make subthreads rather than just continue piling on at the end of a thread.