Eliezer prioritized a donor (presumably long-term and one he knew personally) over an article.
Note that the post in question has already been seen by the donor, and it effectively advocated donating all spare money to SI. I imagine the donor was not a mind upload and the point was not deleted from the donor’s memory; what I do know is that deleting it from public view left it without rebuttals.
In any case, my point was not that censorship was bad, but that a nonsense threat utterly lacking in any credibility was taken very seriously (to the point of nightmares, you say?). It is dangerous to have anyone seriously believe your project is going to kill everyone, even if that person is a pencil-necked white nerd.
“he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Strawman. A Bayesian reasoner should update on such evidence, especially as the combination of ‘high school dropout’ and ‘no impressive technical accomplishments’ is a very strong indicator of a lack of world-class genius for that age category. Post-update, this evidence shifts estimates significantly in the direction of ‘completely wrong, or not even wrong’ for all insights that require world-class-genius-level intelligence, such as, incidentally, forming an opinion on AI risk that most world-class geniuses did not form.
In any case, I did not even say what you implied. To me the Roko incident is evidence that some people here take that kind of nonsense seriously enough to have nightmares about it (to delete it, and so on), and as such it is unsafe if such people are told that a particular software project is going to kill us all. The list of accomplishments was something to update on when evaluating that probability.
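(A minimal sketch of the update being argued about above, with entirely made-up numbers: the prior and both likelihoods are my own illustrative assumptions, not figures from this discussion. The point is only the mechanics: evidence that is more likely under ‘not a genius’ than under ‘genius’ moves the posterior down.)

    # Sketch of a Bayesian update on "no notable tested accomplishments by ~30".
    # All numbers are invented for illustration only.
    prior_genius = 1e-4                 # assumed prior: a random person is a "world-class genius"
    p_evidence_given_genius = 0.05      # assumed: a genius rarely reaches ~30 with nothing tested
    p_evidence_given_not_genius = 0.95  # assumed: most non-geniuses also have no such record

    # Bayes' rule: P(genius | evidence)
    numerator = p_evidence_given_genius * prior_genius
    denominator = numerator + p_evidence_given_not_genius * (1 - prior_genius)
    posterior_genius = numerator / denominator

    print(f"prior:     {prior_genius:.6f}")      # 0.000100
    print(f"posterior: {posterior_genius:.6f}")  # about 0.000005, i.e. the update is downward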
I have never seen where the person-with-nightmares was revealed as a donor, or indeed any clue as to who they were other than ‘someone Eliezer knows’. I would like some evidence, if there is any.
Also, Eliezer did not drop out of high school; he never attended in the first place, commonly known as ‘skipping it’, which is more common among “geniuses” (though I dislike that description).
I sent you 3 pieces of evidence via private message. Including two names.
Thank you for the links.
Please note that none of the evidence shows the donor status of the anonymous people/person who actually had nightmares, and the two named individuals did not say it gave them nightmares, but used a popular TVTropes idiom, “Nightmare Fuel”, as an adjective.
Very few people are so smart that they fall into the category of ‘too smart for high school and any university’… many more are less smart, and some have practical issues (needing to work to feed a family, for example). There are some very strong priors from the normal distribution for the evidence to shift. Successful self-education is fairly uncommon, especially outside the context of ‘had to feed the family’.
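(To make the bell-curve point concrete, here is a small illustration; the conventional IQ scale of mean 100 and standard deviation 15, and the specific cutoffs, are my own choices for the example, not numbers from this thread.)

    # How thin the upper tail of a normal distribution is.
    # The N(100, 15) IQ convention and the cutoffs are illustrative choices only.
    from math import erf, sqrt

    def fraction_above(threshold, mean=100.0, sd=15.0):
        """Fraction of a normally distributed population above a given score."""
        z = (threshold - mean) / sd
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))

    for cutoff in (130, 145, 160):
        print(f"above {cutoff}: roughly 1 in {1 / fraction_above(cutoff):,.0f}")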
Your criticism shifts as the wind.
What is your purpose?
Does it really? Do I have to repeat myself more? Is it against some unwritten rule to mention the bell-curve prior which I have had from the start?
What do you think? Feedback. I do actually think he’s nuts, you know? I also think he’s terribly miscalibrated, which is probably the cause of the overconfidence in his foom belief (and it is ultimately the overconfidence that is nutty; the same beliefs held with appropriate confidence would be just mildly weird, in a good way). It is also probably the case that politeness results in biased feedback.
If your purpose is “let everyone know I think Eliezer is nuts”, then you have succeeded, and may cease posting.
Well, there’s also the matter of why I’d think he’s nuts when facing the “either he’s a supergenius or he’s nuts” dilemma created by overly high confidence expressed in overly speculative arguments. But yeah, I’m not sure it’s getting anywhere; the target audience is just EY himself, and I do expect he’d read this, at least out of curiosity to see how he’s being defended, but with low confidence, so I’m done.
Most “world-class geniuses” have not offered an opinion on AI risk. So “forming an opinion on AI risk that most world-class geniuses did not form” is hardly a task that requires “world-class-genius-level intelligence”.
For a “Bayesian reasoner”, a piece of writing is itself sufficient evidence concerning its qualities. Said reasoner does not need to rely much on indirect evidence about the author after having read the actual writing itself.
Nonetheless, the risk in question is also a personal risk of death for every genius… now, I don’t know how we define geniuses here, but obviously most geniuses could be presumed to be pretty good at preventing their own deaths, or the deaths of their families. I should have said: forming a valid opinion.
Assuming that absolutely nothing in the writing had to be taken on faith. True for mathematical proofs. False for almost everything else.
That seems like a pretty questionable presumption to me. High IQ is linked to reduced mortality according to at least one study, but that needn’t imply that any particular fatal risk is likely to be uncovered, let alone prevented, by any particular genius; there’s no physical law stating that lethal threats must be obvious in proportion to their lethality. And that’s especially true for existential threats, which almost by definition are without experiential precedent.
You’d have a stronger argument if you narrowed your reference class to AI researchers. Not a terribly original one in this context, but a stronger one.
Numbers?
Go dig for numbers yourself, and assume he is a genius until you find numbers; that will be very rational. Meanwhile, most people have a general feel for how rare it would be for a person with supposedly genius-level untested insights into a technical topic (insofar as most geniuses fail to have those insights) to have nothing impressive that was tested, at the age of, what, 32? Edit: then also, geniuses know of that feeling and generally produce the accomplishments in question if they want to be taken seriously.
Starting a nonprofit on a subject unfamiliar to most and successfully soliciting donations, starting an 8.5-million-view blog, writing over 2 million words on wide-ranging controversial topics so well that the only sustained criticism to be made is “it’s long” and minor nitpicks, writing an extensive work of fiction that dominated its genre, and making some novel and interesting inroads into decision theory all seem, to me, to be evidence in favour of genius-level intelligence. These are evidence because the overwhelming default in every case for simply ‘smart’ people is to fail.
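(The argument in the preceding comment is essentially a likelihood-ratio one: each accomplishment is more probable for a genius than for a merely ‘smart’ person, so the pieces multiply up in odds form. A small sketch with invented numbers; neither the prior odds nor any of the ratios below come from the comment, and treating the items as independent is itself a strong assumption.)

    # Combining several pieces of evidence in odds form (illustrative numbers only).
    prior_odds = 1 / 10_000   # assumed prior odds of "genius-level intelligence"

    # Assumed likelihood ratios: P(evidence | genius) / P(evidence | merely smart).
    likelihood_ratios = {
        "successful nonprofit on an unfamiliar subject": 3.0,
        "widely read blog": 4.0,
        "2M+ words with little sustained criticism": 5.0,
        "genre-dominating fiction": 3.0,
        "inroads into decision theory": 4.0,
    }

    posterior_odds = prior_odds
    for item, ratio in likelihood_ratios.items():
        posterior_odds *= ratio   # assumes the items are independent

    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"posterior odds {posterior_odds:.3f}, probability {posterior_prob:.3f}")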
Many a con man accomplishes this.
The overwhelming default for those capable of significant technical accomplishment is not to spend time on such activities.
Ultimately there are many more successful ventures like this, such as Scientology, and if I use this kind of metric on L. Ron Hubbard...
It provides evidence in favour of him being correct. If there weren’t other sources of information on Hubbard’s activities, I’d expect him to be of genius-level intelligence.
You’re familiar with the concept that someone looking like Hitler doesn’t make them fascist, right?
Honestly, I wouldn’t be surprised if he was; he clearly had an almost uniquely good understanding of what it takes to build a successful cult (though his early links with the OTO probably helped). New religious movements start all the time, and not one in a hundred reaches Scientology’s level of success. You can be both a genius and a charlatan. It’s easier to be the latter if you’re the former, actually.
Although his writing’s admittedly pretty terrible.
I wouldn’t expect genius-level technical intelligence. Self-deception is an important part of effective deception; you have to believe a lie to build a good lie. Avoiding self-deception is an important part of technical accomplishment.
Furthermore, knowing that someone has no technical accomplishments is very different from not knowing if someone has technical accomplishments.
This does not seem obvious to me, in general. Do you have experience making technical accomplishments?
Yes. I worked at 3 failed start-ups and founded a successful start-up (and know of several more failed ones). Self-deception is incredibly destructive to any accomplishment that does not involve deceiving other people. You need to know how good your skill set is, how good your product is, how good your idea is. You can’t be falling in love with brainfarts.
In any case, talents require extensive practice with feedback (they are massively enhanced by it), and having no technical accomplishments past the age of 30 pretty much excludes any possibility of technical talent of any significance nowadays. (Yes, the odd person may discover they are an awesome inventor past 30, but they suffer from the lack of earlier practice, and it would be incredibly foolish of anyone who has known of their own natural talent since their teens not to practice properly.)
I’d also point out that if you read the investigative Hubbard biographies, you see many classic signs of con artistry: constant changes of location, careers, ideologies, bankruptcies or court cases in their wake, endless lies about their credentials, and so on. Most of these do not match Eliezer at all—the only similarities are flux in ideas and projects which don’t always pan out (like Flare), but that could be said of an ordinary academic AI researcher as well. (Most academic software is used for some publications and abandoned to bitrot.)