That’s fair, yeah
We need a proper mathematical model to study this further. I expect it to be difficult to set up because the situation is so unrealistic/impossible as to be hard to model. But if you do have a model in mind I’ll take a look
It would help to have a more formal model, but as far as I can tell the oracle can only narrow down its predictions of the future to the extent that those predictions are independent of the oracle’s output. That is to say, if the people in the universe ignore what the oracle says, then the oracle can give an informative prediction.
This would seem to exactly rule out any type of signal which depends on the oracle’s output, which is precisely the type of signal that nostalgebraist was concerned about.
The problem is that the act of leaving the message depends on the output of the oracle (if it didn’t, you wouldn’t need the oracle at all, but then you also wouldn’t know how to leave the message). If the behavior of the universe depends on the oracle’s output, then we have to be careful about what the fixed point will be.
For example, if we try to fight the oracle and do the opposite, we get the “noise” situation from the grandfather paradox.
But if we try to cooperate with the oracle and do what it predicts, then there are many different fixed points and no telling which the oracle would choose (this is not specified in the setting).
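To illustrate the two cases, here is a minimal toy sketch (my own construction, not a real model of the setting; the function names and the single-bit setup are purely illustrative): the oracle announces a probability p that some binary event occurs, the universe reacts to the announcement, and a fixed point is any p that matches the event’s actual probability under that reaction.

```python
# Toy sketch (illustrative only, not a model of the universal prior):
# the oracle announces a probability p for a single binary event; the
# universe reacts to the announcement; a fixed point is any p equal to
# the event's actual probability under that reaction.

def defy(p):
    # The universe samples the oracle's predicted bit and outputs the
    # opposite, so the event actually happens with probability 1 - p.
    return 1.0 - p

def cooperate(p):
    # The universe copies the prediction, so the actual probability is p.
    return p

def fixed_points(reaction, steps=1000, tol=1e-9):
    grid = [i / steps for i in range(steps + 1)]
    return [p for p in grid if abs(reaction(p) - p) < tol]

print(fixed_points(defy))            # [0.5]: the only consistent prediction is coin-flip noise
print(len(fixed_points(cooperate)))  # 1001: every grid point is a fixed point, nothing picks one
```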
It would be great to see a formal model of the situation. I think any model in which such message transmission would work is likely to require some heroic assumptions which don’t correspond much to real life.
Thanks for the link to reflective oracles!
On the gap between the computable and uncomputable: It’s not so bad to trifle a little. Diagonalization arguments can often be avoided with small changes to the setup, and a few of Paul’s papers are about doing exactly this.
I strongly disagree with this: diagonalization arguments often cannot be avoided at all, no matter how you change the setup. This is what vexed logicians in the early 20th century: no matter how you change your formal system, you won’t be able to avoid Gödel’s incompleteness theorems.
There is a trick that reliably gets you out of such paradoxes, however: switch to probabilistic mixtures. This is easily seen in a game setting: in rock-paper-scissors, there is no deterministic Nash equilibrium. Switch to mixed strategies, however, and suddenly there is always a Nash equilibrium.
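As a concrete check of the rock-paper-scissors claim, here is a small self-contained sketch (the payoff matrix and helper names are just the standard zero-sum encoding, nothing from the original discussion):

```python
import itertools

# Rock-paper-scissors payoffs for player 1 (rows) against player 2 (columns);
# 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = [[ 0, -1,  1],
          [ 1,  0, -1],
          [-1,  1,  0]]

def expected_payoff(row_mix, col_mix):
    return sum(row_mix[i] * col_mix[j] * PAYOFF[i][j]
               for i, j in itertools.product(range(3), repeat=2))

# No deterministic (pure-strategy) Nash equilibrium: in every cell of the
# payoff matrix, at least one player can improve by deviating.
for i, j in itertools.product(range(3), repeat=2):
    p1_improves = any(PAYOFF[k][j] > PAYOFF[i][j] for k in range(3))
    p2_improves = any(-PAYOFF[i][k] > -PAYOFF[i][j] for k in range(3))
    assert p1_improves or p2_improves

# The uniform mixture earns exactly 0 against every pure strategy, so neither
# player gains by deviating from it: (uniform, uniform) is a Nash equilibrium.
uniform = [1/3, 1/3, 1/3]
for pure in range(3):
    indicator = [1.0 if k == pure else 0.0 for k in range(3)]
    assert abs(expected_payoff(indicator, uniform)) < 1e-12
print("no pure equilibrium, but uniform mixing is a (mixed) Nash equilibrium")
```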
This is the trick that Paul is using: he is switching from deterministic Turing machines to randomized ones. That’s fine as far as it goes, but it has some weird side effects. One of them is that if a civilization is trying to predict the universal prior that is simulating it, and tries to send a message, then it is likely that with “reflective oracles” in place, the only message it can send is random noise. That is, Paul shows reflective oracles exist in the same way that Nash equilibria exist; but there is no control over what the reflective oracle actually is, and in paradoxical situations (like rock-paper-scissors) the Nash equilibrium is the boring “mix everything together uniformly”.
The underlying issue is that a universe that can predict the universal prior, which in turn simulates the universe itself, can encounter a grandfather paradox. It can see its own future by looking at the simulation, and then it can do the opposite. The grandfather paradox is where the universe decides to kill the grandfather of a child whose existence the simulation predicts.
Paul solves this by only letting it see its own future using a “reflective oracle”, which essentially finds a fixed point (which is a probability distribution). The fixed point of a grandfather paradox is something like “half the time the simulation shows the grandchild alive, causing the real universe to kill the grandfather (so no grandchild is ever born); the other half the time, the simulation shows the grandfather dead and the grandchild never existing, so the real universe leaves the grandfather alone and the grandchild is born”. Such a fixed point exists even when the universe tries to do the opposite of the prediction.
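Spelled out as a one-line consistency condition (my paraphrase of the fixed point just described): if the simulation shows the grandchild alive with probability q, the universe kills the grandfather exactly when the grandchild is shown alive, so the grandchild is actually born with probability 1 − q, and consistency forces

$$q = 1 - q \;\Longrightarrow\; q = \tfrac{1}{2}.$$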
The thing is, this fixed point is boring! Repeat this enough times, and it eventually just says “well my prediction about your future is random noise that doesn’t have to actually come true in your own future”. I suspect that if you tried to send a message through the universal prior in this setting, the message would consist of essentially uniformly random bits. This would depend on the details of the setup, I guess.
I think the problem to grapple with is that I can cover the rationals in [0,1] with countably many intervals of total length only 1⁄2 (e.g. enumerate the rationals in [0,1], and place an interval of length 1⁄4 around the first rational, an interval of length 1⁄8 around the second, etc.). This is not possible with reals—that’s the insight that makes measure theory work!
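For concreteness, the total length of that covering is just the geometric series behind the construction above: the n-th rational gets an interval of length $1/2^{n+1}$, so

$$\sum_{n=1}^{\infty} \frac{1}{2^{n+1}} = \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = \frac{1}{2}.$$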
The covering means that the rationals in an interval cannot have a well-defined length or measure which behaves reasonably under countable unions. This is a big barrier to doing probability theory. The same problem happens with ANY countable set—the reals only avoid it by being uncountable.
Weirdly aggressive post.
I feel like maybe what’s going on here is that you do not know what’s in The Bell Curve, so you assume it is some maximally evil caricature? Whereas what’s actually in the book is exactly Scott’s position, the one you say is “his usual ‘learn to love scientific consensus’ stance”.
If you’d stop being weird about it for just a second, could you answer something for me? What is one (1) position that Murray holds about race/IQ and Scott doesn’t? Just name a single one; I’ll wait.
Or maybe what’s going on here is that you have a strong “SCOTT GOOD” prior as well as a strong “MURRAY BAD” prior, and therefore anyone associating the two must be on an ugly smear campaign. But there’s actually zero daylight between their stances and both of them know it!
Relatedly, if you cannot outright make a claim because it is potentially libellous, you shouldn’t use vague insinuation to imply it to your massive and largely-unfamiliar-with-the-topic audience.
Strong disagree. If I know an important true fact, I can let people know in a way that doesn’t cause legal liability for me.
Can you grapple with the fact that the “vague insinuation” is true? Like, assuming it’s true and that Cade knows it to be true, your stance is STILL that he is not allowed to say it?
Your position seems to amount to the epistemic equivalent of ‘yes, the trial was procedurally improper, and yes the prosecutor deceived the jury with misleading evidence, and no the charge can’t actually be proven beyond a reasonable doubt, but he’s probably guilty anyway, so what’s the issue’. I think the issue is journalistic malpractice. Metz has deliberately misled his audience in order to malign Scott on a charge which you agree cannot be substantiated, because of his own ideological opposition (which he admits). To paraphrase the same SSC post quoted above, he has locked himself outside of the walled garden. And you are “Andrew Cord”, arguing that we should all stop moaning because it’s probably true anyway so the tactics are justified.
It is not malpractice, because Cade had strong evidence for the factually true claim! He just didn’t print the evidence. The evidence was of the form “interview a lot of people who know Scott and decide who to trust”, which is a difficult type of evidence to put into print, even though it’s epistemologically fine (in this case IT LED TO THE CORRECT BELIEF so please give it a rest with the malpractice claims).
Here is the evidence of Scott’s actual beliefs:
https://twitter.com/ArsonAtDennys/status/1362153191102677001
As for your objections:
First of all, this is already significantly different, more careful and qualified than what Metz implied, and that’s after we read into it more than what Scott actually said. Does that count as “aligning yourself”?
This is because Scott is giving a maximally positive spin on his own beliefs! Scott is agreeing that Cade is correct about him! Scott had every opportunity to say “actually, I disagree with Murray about...” but he didn’t, because he agrees with Murray just like Cade said. And that’s fine! I’m not even criticizing it. It doesn’t make Scott a bad person. Just please stop pretending that Cade is lying.
Relatedly, even if Scott did truly believe exactly what Charles Murray does on this topic, which again I don’t think we can fairly assume, he hasn’t said that, and that’s important. Secretly believing something is different from openly espousing it, and morally it can be much different if one believes that openly espousing it could lead to it being used in harmful ways (which from the above, Scott clearly does, even in the qualified form which he may or may not believe). Scott is going to some lengths and being very careful not to espouse it openly and without qualification, and clearly believes it would be harmful to do so, so it’s clearly dishonest and misleading to suggest that he “aligns himself” with Charles Murray on this topic. Again, this is even after granting the very shaky proposition that he secretly does align with Charles Murray, which I think we have established is a claim that cannot be substantiated.
Scott so obviously aligns himself with Murray that I knew it before that email was leaked or Cade’s article was written, as did many other people. At some point, Scott even said that he will talk about race/IQ in the context of Jews in order to ease the public into it, and then he published this. (I can’t find where I saw Scott saying it though.)
Further, Scott, unlike Charles Murray, is very emphatic about the fact that, whatever the answer to this question, this should not affect our thinking on important issues or our treatment of anyone. Is this important addendum not elided by the idea that he ‘aligned himself’ with Charles Murray? Would that not be a legitimate “gripe”?
Actually, this is not unlike Charles Murray, who also says this should not affect our treatment of anyone. (I disagree with the “thinking on important issues” part, which Scott surely does think it affects.)
The epistemology was not bad behind the scenes, it was just not presented to the readers. That is unfortunate, but it is hard to write an NYT article (there are limits on how many receipts you can put in an article, and some of the sources may have been off the record).
Cade correctly informed the readers that Scott is aligned with Murray on race and IQ. This is true and informative, and at the time some people here doubted it before the one email leaked. Basically, Cade’s presented evidence sucked but someone going with the heuristic “it’s in the NYT so it must be true” would have been correctly informed.
I don’t know if Cade had a history of “tabloid rhetorical tricks” but I think it is extremely unbecoming to criticize a reporter for giving true information that happens to paint the community in a bad light. Also, the post you linked by Trevor uses some tabloid rhetorical tricks: it says Cade sneered at AI risk but links to an article that literally doesn’t mention AI risk at all.
What you’re suggesting amounts to saying that on some topics, it is not OK to mention important people’s true views because other people find those views objectionable. And this holds even if the important people promote those views and try to convince others of them. I don’t think this is reasonable.
As a side note, it’s funny to me that you link to Against Murderism as an example of “careful subtlety”. It’s one of my least favorite articles by Scott, and while I don’t generally think Scott is racist, that one almost made me change my mind. It is just a very bad article. It tries to define racism out of existence. It doesn’t even really attempt to give a good definition—Scott is a smart person, he could do MUCH better than those definitions if he tried. For example, a major part of the rationalist movement was originally about cognitive biases, yet “racism defined as cognitive bias” does not appear in the article at all. Did Scott really not think of it?
What Metz did is not analogous to a straightforward accusation of cheating. Straightforward accusations are what I wish he did.
It was quite straightforward, actually. Don’t be autistic about this: anyone reasonably informed who is reading the article knows what Scott is accused of thinking when Cade mentions Murray. He doesn’t make the accusation super explicit, but (a) people here would be angrier if he did, not less angry, and (b) that might actually pose legal issues for the NYT (I’m not a lawyer).
What Cade did reflects badly on Cade in the sense that it is very embarrassing to cite such weak evidence. I would never do that because it’s mortifying to make such a weak accusation.
However, Scott has no possible gripe here. Cade’s article makes embarrassing logical leaps, but the conclusion is true and the reporting behind the article (not featured in the article) was enough to show it true, so even a claim of being Gettier Cased does not work here.
Scott thinks very highly of Murray and agrees with him on race/IQ. Pretty much any implication one could reasonably draw from Cade’s article regarding Scott’s views on Murray or on race/IQ/genes is simply factually true. Your hypothetical author in Alabama has Greta Thunberg posters in her bedroom here.
Wait a minute. Please think through this objection. You are saying that if the NYT encountered factually true criticisms of an important public figure, it would be immoral of them to mention this in an article about that figure?
Does it bother you that your prediction didn’t actually happen? Scott is not dying in prison!
This objection is just ridiculous, sorry. Scott made it an active project to promote a worldview that he believes in and is important to him—he specifically said he will mention race/IQ/genes in the context of Jews, because that’s more palatable to the public. (I’m not criticizing this right now, just observing it.) Yet if the NYT so much as mentions this, they’re guilty of killing him? What other important true facts about the world am I not allowed to say according to the rationalist community? I thought there was some mantra of like “that which can be destroyed by the truth should be”, but I guess this does not apply to criticisms of people you like?
The evidence wasn’t fake! It was just unconvincing. “Giving unconvincing evidence because the convincing evidence is confidential” is in fact a minor sin.
I assume it was hard to substantiate.
Basically it’s pretty hard to find Scott saying what he thinks about this matter, even though he definitely thinks this. Cade is cheating with the citations here but that’s a minor sin given the underlying claim is true.
It’s really weird to go HOW DARE YOU when someone says something you know is true about you, and I was always unnerved by this reaction from Scott’s defenders. It reminds me of a guy I know who was cheating on his girlfriend, and she suspected this, and he got really mad at her. Like, “how can you believe I’m cheating on you based on such flimsy evidence? Don’t you trust me?” But in fact he was cheating.
I think for the first objection about race and IQ I side with Cade. It is just true that Scott thinks what Cade said he thinks, even if that one link doesn’t prove it. As Cade said, he had other reporting to back it up. Truth is a defense against slander, and I don’t think anyone familiar with Scott’s stance can honestly claim slander here.
This is a weird hill to die on because Cade’s article was bad in other ways.
What position did Paul Christiano get at NIST? Is it a leadership position?
The problem with that is that it sounds like the common error of “let’s promote our best engineer to a manager position”, which doesn’t work because the skills required to be an excellent engineer have little to do with the skills required to be a great manager. Christiano is the best of the best in technical work on AI safety; I am not convinced putting him in a management role is the best approach.
Eh, I feel like this is a weird way of talking about the issue.
If I didn’t understand something and, after a bunch of effort, I managed to finally get it, I will definitely try to summarize the key lesson to myself. If I prove a theorem or solve a contest math problem, I will definitely pause to think “OK, what was the key trick here, what’s the essence of this, how can I simplify the proof”.
Having said that, I would NOT describe this as asking “how could I have arrived at the same destination by a shorter route”. I would just describe it as asking “what did I learn here, really”. Counterfactually, if I had to solve the math problem again without knowing the solution, I’d still have to try a bunch of different things! I don’t have any improvement on this process, not even in hindsight; what I have is a lesson learned, but it doesn’t feel like a shortened path.
Anyway, for the dates thing, what is going on is not that EY is super good at introspecting (lol), but rather that he is bad at empathizing with the situation. Like, go ask EY if he never slacks on a project; he has in the past said he is often incapable of getting himself to work even when he believes the work is urgently necessary to save the world. He is not a person with a 100% solved, harmonious internal thought process; far from it. He just doesn’t get the dates thing, so he assumes it is trivial.
This is interesting, but how do you explain the observation that LW posts are frequently much, much longer than they need to be to convey their main point? They take forever to get started (“what this is NOT arguing: [list of 10 points]”, etc.) and take forever to finish.
Given o1, I want to remark that the prediction in (2) was right. Instead of training LLMs to give short answers, an LLM is trained to give long answers and another LLM summarizes.