I have not only been warned, but I have stared the basilisk in the eyes, and I’m still here typing about it.
The point we are trying to make is that we think the people who stared the basilisk in the eyes and metaphorically turned to stone are stronger evidence.
I get that. But I think it’s important to consider both positive and negative evidence: if someone’s testimony that they got turned to stone is important, so are the testimonies of people who didn’t get turned to stone.
The question to me is whether the basilisk turns people to stone or people turn themselves into stone. I prefer the second because it requires no magic powers on the part of the basilisk. It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse.
Indeed, that makes it somewhat useful to know what sort of things shock other people. Calling this idea ‘dangerous’ instead of ‘dangerous to EY’ strikes me as mind projection.
But I think it’s important to consider both positive and negative evidence: if someone’s testimony that they got turned to stone is important, so are the testimonies of people who didn’t get turned to stone.
It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse.
I generally find myself in support of people who advocate a policy of keeping people from seeing Goatse.
I’m not sure how to evaluate this statement. What do you mean by “keeping people from seeing Goatse”? Banning? Voluntarily choosing not to spread it? A filter like the one proposed in Australia that checks every request to the outside world?
I am much more sympathetic to “keeping goatse off of site X” than “keeping people from seeing goatse,” and so that’s a reasonable policy. If your site is about posting pictures of cute kittens, then goatse is not a picture of a cute kitten.
However, it seems to me that suspected Langford basilisks are part of the material of LessWrong. Imagine someone posted in the discussion “hey guys, I really want to be an atheist but I can’t stop worrying about whether or not the Rapture will happen, and if it does life will suck.” It seems to me that we would have a lot to say to them about how they could approach the situation more rationally.
And, if Langford basilisks exist, religion has found them. Someone got a nightmare because of Roko’s idea, but people fainted upon hearing Sinners in the Hands of an Angry God. Why are we not looking for the Perseus for this Medusa? If rationality is like an immune system, and we’re interested in refining our rationality, we ought to be looking for antibodies.
However, it seems to me that suspected Langford basilisks are part of the material of LessWrong.
It seems to me that Eliezer’s response as moderator of LessWrong strongly implies that he does not believe this is the case. Your goal, then, would be to convince Eliezer that it ought to be part of the LessWrong syllabus, as it were. Cialdini’s Influence and other texts would probably advise you to work within his restrictions and conform to his desires as much as practical—on a site like LessWrong, though, I am not sure how applicable the advice would be, and in any case I don’t mean to be prescriptive about it.
Okay, but more than four people have engaged with the idea. Should we take a poll?
The problem of course is that majorities often believe stupid things. That is why a free marketplace of ideas free from censorship is a really good thing! The obvious thing to do is exchange information until agreement but we can’t do that, at least not here.
Also, the people who think it should be censored all seem to disagree about how dangerous the idea really is, suggesting it isn’t clear how it is dangerous. It also seems plausible that some people have influenced the thinking of other people: for example, it looks like Roko regretted posting after talking to Eliezer. While Roko’s regret is evidence that Eliezer is right, it isn’t the same as independent/blind confirmation that the idea is dangerous.
The problem of course is that majorities often believe stupid things.
When you give all agents equal weight, sure. Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for. Others are more sidelined than supporting a particular side.
The obvious thing to do is exchange information until agreement but we can’t do that, at least not here.
Aumann agreement works in the case of hidden information—all you need are the posteriors and common knowledge of those posteriors.
While Roko’s regret is evidence that Eliezer is right, it isn’t the same as independent/blind confirmation that the idea is dangerous.
Roko increased his estimation and Eliezer decreased his estimation—and the amounts by which they did so are balanced according to the strength of their private signals. Looking at two Aumann-agreed conclusions gives you the same evidence as looking at the pre-Aumann (differing) conclusions—the same way that 10, 10 gives you the same average as 5, 15.
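A minimal numeric sketch of that last point. The numbers 5, 15 and 10, 10 come straight from the comment; everything else here is illustrative only:

```python
# Equal-weight pooling of two agents' point estimates, before and
# after an idealized agreement process that moves both to the mean.

def pooled(estimates):
    """What an outside observer sees when averaging the estimates."""
    return sum(estimates) / len(estimates)

pre_aumann = [5, 15]    # differing conclusions before agreement
post_aumann = [10, 10]  # conclusions after an idealized Aumann agreement

# Pooling before or after the agreement yields the same answer.
print(pooled(pre_aumann), pooled(post_aumann))  # 10.0 10.0
```

The design point is just that agreement redistributes the evidence between the two agents without changing its total weight, so a third party pooling both views learns the same thing either way.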
Others are more sidelined than supporting a particular side.
I would prefer you not treat people avoiding a discussion as evidence that people don’t differentially evaluate the assertions made in that discussion.
Doing so creates a perverse incentive whereby chiming in to say “me too!” starts to feel like a valuable service, which would likely chase me off the site altogether. (Similar concerns apply to upvoting comments I agree with but don’t want to see more of.)
If you are seriously interested in data about how many people believe or disbelieve certain propositions, there exist techniques for gathering that data that are more reliable than speculating.
If you aren’t interested, you could just not bring it up.
I would prefer you not treat people avoiding a discussion as evidence that people don’t differentially evaluate the assertions made in that discussion.
I treat them as not having given me evidence either way. I honestly don’t know how I could treat them otherwise.
Okay. It is not that they give no evidence by remaining out of the discussion—it is that the evidence they give is spread equally over all possibilities. I don’t know enough about these people to say that discussion-abstainers are uniformly in support or in opposition to the idea. The best I can do is assume they are equally distributed between support and opposition, and not incorrectly constrain my anticipations.
the best I can do is assume they are equally distributed between support and opposition
You can do better than that along a number of different dimensions.
But even before getting there, it seems important to ask whether our unexpressed beliefs are relevant.
That is, if it turned out that instead of “equally distributed between support and opposition”, we are 70% on one side, or 90%, or 99%, or that there are third options with significant membership, would that information significantly affect your current confidence levels about what you believe?
If our unexpressed opinions aren’t relevant, you can just not talk about them at all, just like you don’t talk about millions of other things that you don’t know and don’t matter to you.
If they are relevant, one thing you could do is, y’know, research. That is, set up a poll clearly articulating the question and the answers that would affect your beliefs and let people vote for their preferred answers. That would be significantly better than assuming equal distribution.
Another thing you could do, if gathering data is unpalatable, is look at the differential characteristics of groups that express one opinion or another and try to estimate what percentage of the site shares which characteristics.
would that information significantly affect your current confidence levels about what you believe?
Yes. In the absence of actual evidence (which seems dangerous to gather in the case of this basilisk), I pretty much have to go by expressed opinions. To my mind, it was like trying to count the results of experiments that haven’t been performed yet.
I did not seek out more information because it was a throwaway line in an argument attempting to explain to people why it appears their voices are being ignored. I personally am on the side of censoring the idea, not having understood it at all when it was first posted, and that may have bled into my posts (I should have exercised stronger control over that), but I am not arguing for censorship. I am arguing why, when someone says “it’s not dangerous!”, some people aren’t coming around to their perspective.
I don’t intend to argue for the censorship of the idea unless sorely pressed.
** I’m confused. On the one hand, you say knowing the popularity of various positions is important to you in deciding your own beliefs about something potentially dangerous to you and others. On the other hand, you say it’s not worth seeking more information about and was just a throwaway line in an argument. I am having a hard time reconciling those two claims… you seem to be trying to have it both ways. I suspect I’ve misunderstood something important.
** I didn’t think you were arguing for censorship. Or against it. Actually, I have long since lost track of what most participants in this thread are arguing for, and in some cases I’m not sure they themselves know.
** I agree with you that the existence of knowledgeable people who think something is dangerous is evidence that it’s dangerous.
** Since it seems to matter: for my own part, I rate the expected dangerousness of “the basilisk” very low, and the social cost to the group of the dispute over “censoring” it significantly higher but still low.
** I cannot see why that should be of any evidentiary value whatsoever, to you or anyone else. Whether I’m right or wrong, my position is a pretty easy-to-reach one; it’s the one you arrive at in the absence of other salient beliefs (like, for example, the belief that EY/SIAI is a highly reliable estimator of potential harm done by “basilisks” in general, or the belief that the specific argument for the harmfulness of this basilisk is compelling). And most newcomers will lack those other beliefs. So I expect that quite a few people share my position—far more than 50%—but I can’t see why you ought to find that fact compelling. That a belief is very widely shared among many, many people like me who don’t know much about the topic isn’t much evidence for anything.
(nods) I’m a great believer in it. Especially in cases where a disagreement has picked up momentum, and recognizable factions have started forming… for example, if people start suggesting that those who side with the other team should leave the group. My confidence in my ability to evaluate an argument honestly goes up when I genuinely don’t know what team that argument is playing for.
I suspect I’ve obfuscated it, actually. The popularity of various positions is not intrinsically important to me—in fact, I give professions of belief about as little credit as I can get away with. This specific case is such that every form of evidence I find stronger (reasoning through the argument logically for flaws; statistical evidence about its danger) is not available. With a dearth of stronger evidence, I have to rely on weak evidence—but “the evidence is weak” is not an argument for privileging my own unsubstantiated position.
I don’t feel the need to collect weak evidence … I should, in this case. I was following a heuristic of not collecting weak evidence (waste of effort) without noticing that there was no stronger evidence.
Why are people’s beliefs of any value? Everyone has the ability to reason. All (non-perfect) reasoners fail in some way or another; if I look at many (controlling for biased reasoning) it gives me more of a chance to spot the biases—I have a control to compare it to.
This case is a special case; some people do have evidence. They’ve read the basilisk, applied their reasoning and logic, and deduced that it is / is not dangerous. These people’s beliefs are to be privileged over those of people who have not read the basilisk. I can’t access private signals like that—I don’t want to read a potential basilisk. So I make a guess at how strong their private signal is (this is why I care about their rationality) and use that as weak evidence for or against.
If seeking harder evidence wasn’t dangerous (and it usually isn’t) I would have done that instead.
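That weighing procedure can be sketched as a simple weighted sum. Every name, verdict, and weight below is a placeholder assumption for illustration, not an assessment of any actual person:

```python
# Aggregate weak evidence: each reader's verdict on the idea
# (+1 "dangerous", -1 "not dangerous") weighted by a rough guess at
# how reliably they reasoned in this case. All values are invented.

verdicts = [
    ("reader_A", +1, 0.8),  # engaged the idea, judged it dangerous
    ("reader_B", +1, 0.7),  # engaged the idea, judged it dangerous
    ("reader_C", -1, 0.6),  # engaged the idea, judged it harmless
]

score = sum(sign * weight for _name, sign, weight in verdicts)

# A positive score means the weighted testimony leans toward "dangerous".
print(score > 0)  # True
```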
The sentence I quoted sounded to me as though you were treating those of us who’ve remained “sidelined” as evidence of something. But if you were instead just bringing us up as an example of something that provides no evidence of anything, and if that was clear to everyone else, then I’m content.
Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for.
I’m for. I believe Tim Tyler is for.
Aumann agreement works in the case of hidden information—all you need are the posteriors and common knowledge of those posteriors.
Humans have this unfortunate feature of not being logically omniscient. In cases where people don’t see all the logical implications of an argument, we can treat those implications as hidden information. If this wasn’t the case, then the censorship would be totally unnecessary, as Roko’s argument didn’t actually include new information. We would have all turned to stone already.
Roko increased his estimation and Eliezer decreased his estimation—and the amounts by which they did so are balanced according to the strength of their private signals.
There is no way for you to have accurately assessed this. Roko and Eliezer aren’t idealized Bayesian agents, so it is extremely unlikely they performed a perfect Aumann agreement. If one is more persuasive than the other for reasons other than the evidence they share, then their combined support for the proposition may not be worth the same as that of two people who independently came to support it. Besides which, according to you, what information did they share exactly?
I had a private email conversation with Eliezer that did involve a process of logical discourse, and another with Carl.
Also, when I posted the material, I hadn’t thought it through. Once I had thought it through, I realized that I had accidentally said more than I should have done.
David_Gerard, Jack, timtyler, waitingforgodel, and Vaniver do not currently outweigh Eliezer_Yudkowsky, FormallyknownasRoko, Vladimir_Nesov, and Alicorn, as of now, in my mind.
It does not need to be a perfect Aumann agreement; a merely good one will still reduce the chances of overcounting or undercounting either side’s evidence well below the acceptable limits.
There is no way for you to have accurately assessed this. Roko and Eliezer aren’t idealized Bayesian agents, so it is extremely unlikely they performed a perfect Aumann agreement.
They are approximations of Bayesian agents, and it is extremely likely they performed an approximate Aumann agreement.
To settle this particular question, however, I will pay money. I promise to donate 50 dollars to the Singularity Institute for Artificial Intelligence, independent of other plans to donate, if Eliezer confirms that he did revise his estimate down; or if he confirms that he did not revise his estimate down. Payable within two weeks of Eliezer’s comment.
I’m curious: if he confirms instead that the change in his estimate, if there was one, was small enough relative to his estimate that he can’t reliably detect it or detect its absence, although he infers that he updated using more or less the same reasoning you use above, will you donate or not?
I would donate even if he said that he revised his estimate upwards.
I would then seriously reconsider my evaluation of him, but as it stands the offer is for him to weigh in at all, not weigh in on my side.
edit: I misparsed your comment. That particular answer would dance very close to ‘no comment’, but unless it seemed constructed that way on purpose, I would still donate.
Yeah, that’s fair. One of the things I was curious about was, in fact, whether you would take that answer as a hedge, but “it depends” is a perfectly legitimate answer to that question.
For the posterior to be equal to or lower than the prior, Vaniver would have to be more of a rationalist than Eliezer, Roko, and you put together.
How many of me would there have to be for that to work?
Also, why is rationalism the risk factor for this basilisk? Maybe the basilisk only turns to stone people with brown eyes (or the appropriate mental analog).
How many of me would there have to be for that to work?
Only one; I meant ‘you’ in that line to refer to Vlad. It does raise the question “how many people disagree before I side with them instead of Eliezer/Roko/Vlad?” And the answer to that is … complicated. Each person’s rationality, modified by how much it was applied in this particular case, is the weight I give to their evidence; the full calculation of evidence for and against would then have to bring the posterior to within epsilon of, and preferably below, my original prior for me to decide the idea is safe.
Also, why is rationalism the risk factor for this basilisk?
Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium, then the immune system would be the risk factor.
Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium, then the immune system would be the risk factor.
Generally, if your immune system is fighting something, you’re already sick. Most pathogens are benign or don’t have the keys to your locks. This might be a similar situation: the idea is only troubling if your lock fits it, and it seems like there would then be rational methods to erode that fear (the way the immune system mobs an infection).
The analogy definitely breaks down, doesn’t it? What I had in mind was Eliezer, Roko, and Vlad saying “I got sick from this infection” and you saying “I did not get sick from this infection”—I would look at how strong each person’s immune system is.
So if Eliezer, Roko, and Vlad all had weak immune systems and yours was quite robust, I would conclude that the bacterium in question is not particularly virulent. But if three robust immune systems all fell sick, and one robust immune system did not, I would be forced to decide between some hypotheses:
1. the first three are actually weak immune systems
2. the fourth was not properly exposed to the bacterium
3. the fourth has a condition that makes it immune
4. the bacterium is not virulent, the first three got unlucky
On the evidence I have, the middle two seem more likely than the first and last hypotheses.
I agree—my money is on #3 (but I’m not sure whether I would structure it as “fourth is immune” or “first three are vulnerable”; both are correct, but which wording is more natural depends on the demographic response).
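For concreteness, the choice among those four hypotheses can be framed as a small Bayesian model comparison. Every prior and likelihood below is invented purely to show the shape of the calculation, not to assert anything about the actual case:

```python
# Posterior over the four hypotheses given the observation
# "three robust immune systems fell sick, one did not".
# The (prior, likelihood-of-observation) pairs are illustrative only.

hypotheses = {
    "first three actually weak":   (0.2, 0.05),
    "fourth not properly exposed": (0.3, 0.30),
    "fourth has immunity":         (0.3, 0.40),
    "not virulent, three unlucky": (0.2, 0.02),
}

# Total probability of the observation, summed over hypotheses.
evidence = sum(p * l for p, l in hypotheses.values())
posterior = {h: p * l / evidence for h, (p, l) in hypotheses.items()}

# Under these made-up numbers, the middle two hypotheses dominate.
best = max(posterior, key=posterior.get)
print(best)  # fourth has immunity
```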
Sorry, I should not have included censoring specifically. Change the “read:”s to ‘engages, reacts negatively’, ‘engages, does not react negatively’ and the argument still functions.
The point we are trying to make is that we think the people who stared the basilisk in the eyes and metaphorically turned to stone are stronger evidence.
I get that. But I think it’s important to consider both positive and negative evidence: if someone’s testimony that they got turned to stone is important, so are the testimonies of people who didn’t get turned to stone.
The question to me is whether the basilisk turns people to stone or people turn themselves into stone. I prefer the second because it requires no magic powers on the part of the basilisk. It might be that some people turn to stone when they see goatse for the first time, but that tells you more about humans and how they respond to shock than about goatse.
Indeed, that makes it somewhat useful to know what sort of things shock other people. Calling this idea ‘dangerous’ instead of ‘dangerous to EY’ strikes me as mind projection.
I am considering both.
I generally find myself in support of people who advocate a policy of keeping people from seeing Goatse.
I’m not sure how to evaluate this statement. What do you mean by “keeping people from seeing Goatse”? Banning? Voluntarily choosing not to spread it? A filter like the one proposed in Australia that checks every request to the outside world?
Censoring posts that display Goatse on LessWrong.
Generally, censoring posts that display Goatse on non-Goatse websites.
I am much more sympathetic to “keeping goatse off of site X” than “keeping people from seeing goatse,” and so that’s a reasonable policy. If your site is about posting pictures of cute kittens, then goatse is not a picture of a cute kitten.
However, it seems to me that suspected Langford basilisks are part of the material of LessWrong. Imagine someone posted in the discussion “hey guys, I really want to be an atheist but I can’t stop worrying about whether or not the Rapture will happen, and if it does life will suck.” It seems to me that we would have a lot to say to them about how they could approach the situation more rationally.
And, if Langford basilisks exist, religion has found them. Someone got a nightmare because of Roko’s idea, but people fainted upon hearing Sinners in the Hands of an Angry God. Why are we not looking for the Perseus for this Medusa? If rationality is like an immune system, and we’re interested in refining our rationality, we ought to be looking for antibodies.
It seems to me that Eliezer’s response as moderator of LessWrong strongly implies that he does not believe this is the case. Your goal, then, would be to convince Eliezer that it ought to be part of the LessWrong syllabus, as it were. Cialdini’s Influence and other texts would probably advise you to work within his restrictions and conform to his desires as much as practical—on a site like LessWrong, though, I am not sure how applicable the advice would be, and in any case I don’t mean to be prescriptive about it.
Right. I see a few paths to do that that may work (and no, holding the future hostage is not one of them).
Is Goatse supposed to be a big deal? Someone showed it to me and I literally said “who cares?”
I totally agree. There are far more important internet requests that my (Australian) government should be trying to filter. Priorities, people!
Yes.
I feel like reaction videos are biased towards people who have funny or dramatic reactions, but point taken.
I don’t understand this. (Play on conservation of expected evidence? In what way?)
Normal updating.
Original prior for basilisk-danger.
Eliezer_Yudkowsky stares at basilisk, turns to stone (read: engages idea, decides to censor). Revise pr(basilisk-danger) upwards.
FormallyknownasRoko stares at basilisk, turns to stone (read: appears to truly wish he had never thought it). Revise pr(basilisk-danger) upwards.
Vladimir_Nesov stares at basilisk, turns to stone (read: engages idea, decides it is dangerous). Revise pr(basilisk-danger) upwards.
Vaniver stares at basilisk, is unharmed (read: engages idea, decides it is not dangerous). Revise pr(basilisk-danger) downwards.
Posterior is higher than original prior.
For the posterior to be equal to or lower than the prior, Vaniver would have to be more of a rationalist than Eliezer, Roko, and you put together.
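The update sequence above can be written out in odds form. The prior and the likelihood ratios below are invented for illustration; the thread gives no actual numbers:

```python
# Odds-form Bayesian updating of pr(basilisk-danger) on four
# observers' reactions. All numeric values are made up.

prior = 0.1                  # assumed original prior for basilisk-danger
odds = prior / (1 - prior)   # convert probability to odds

# Three observers react as if harmed (LR > 1); one is unharmed (LR < 1)
# with the same evidential strength.
likelihood_ratios = [3.0, 3.0, 3.0, 1.0 / 3.0]

for lr in likelihood_ratios:
    odds *= lr               # each observation multiplies the odds

posterior = odds / (1 + odds)

# Three equal-strength updates up and one down leave the posterior
# above the original prior.
print(posterior > prior)  # True
```

Only if the single downward likelihood ratio outweighed the three upward ones combined would the posterior come back to or below the prior, which is the point made above.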
Okay, but more than four people have engaged with the idea. Should we take a poll?
The problem of course is that majorities often believe stupid things. That is why a free marketplace of ideas free from censorship is a really good thing! The obvious thing to do is exchange information until agreement but we can’t do that, at least not here.
Also, the people who think it should be censored all seem to disagree about how dangerous the idea really is, suggesting it isn’t clear how it is dangerous. It also seems plausible that some people have influenced the thinking of other people: for example, it looks like Roko regretted posting after talking to Eliezer. While Roko’s regret is evidence that Eliezer is right, it isn’t the same as independent/blind confirmation that the idea is dangerous.
When you give all agents equal weight, sure. Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for. Others are more sidelined than supporting a particular side.
Aumann agreement works in the case of hidden information—all you need are the posteriors and common knowledge of those posteriors.
Roko increased his estimation and Eliezer decreased his estimation—and the amounts by which they did so are balanced according to the strength of their private signals. Looking at two Aumann-agreed conclusions gives you the same evidence as looking at the pre-Aumann (differing) conclusions—the same way that 10, 10 gives you the same average as 5, 15.
I would prefer you not treat people avoiding a discussion as evidence that people don’t differentially evaluate the assertions made in that discussion.
Doing so creates a perverse incentive whereby chiming in to say “me too!” starts to feel like a valuable service, which would likely chase me off the site altogether. (Similar concerns apply to upvoting comments I agree with but don’t want to see more of.)
If you are seriously interested in data about how many people believe or disbelieve certain propositions, there exist techniques for gathering that data that are more reliable than speculating.
If you aren’t interested, you could just not bring it up.
I treat them as not having given me evidence either way. I honestly don’t know how I could treat them otherwise.
It is extremely hard to give no evidence by making a decision, even a decision to do nothing.
Okay. It is not that they give no evidence by remaining out of the discussion—it is that the evidence they give is spread equally over all possibilities. I don’t know enough about these people to say that discussion-abstainers are uniformly in support or in opposition to the idea. The best I can do is assume they are equally distributed between support and opposition, and not incorrectly constrain my anticipations.
You can do better than that along a number of different dimensions.
But even before getting there, it seems important to ask whether our unexpressed beliefs are relevant.
That is, if it turned out that instead of “equally distributed between support and opposition”, we are 70% on one side, or 90%, or 99%, or that there are third options with significant membership, would that information significantly affect your current confidence levels about what you believe?
If our unexpressed opinions aren’t relevant, you can just not talk about them at all, just like you don’t talk about millions of other things that you don’t know and don’t matter to you.
If they are relevant, one thing you could do is, y’know, research. That is, set up a poll clearly articulating the question and the answers that would affect your beliefs and let people vote for their preferred answers. That would be significantly better than assuming equal distribution.
Another thing you could do, if gathering data is unpalatable, is look at the differential characteristics of groups that express one opinion or another and try to estimate what percentage of the site shares which characteristics.
Yes. In the absence of actual evidence (which seems dangerous to gather in the case of this basilisk), I pretty much have to go by expressed opinions. To my mind, it was like trying to count the results of experiments that haven’t been performed yet.
I did not seek out more information because it was a throwaway line in an argument attempting to explain to people why it appears their voices are being ignored. I personally am on the side of censoring the idea, not having understood it at all when it was first posted, and that may have bled into my posts (I should have exercised stronger control over that), but I am not arguing for censorship. I am arguing why, when someone says “it’s not dangerous!”, some people aren’t coming around to their perspective.
I don’t intend to argue for the censorship of the idea unless sorely pressed.
A few things:
** I’m confused. On the one hand, you say knowing the popularity of various positions is important to you in deciding your own beliefs about something potentially dangerous to you and others. On the other hand, you say it’s not worth seeking more information about and was just a throwaway line in an argument. I am having a hard time reconciling those two claims… you seem to be trying to have it both ways. I suspect I’ve misunderstood something important.
** I didn’t think you were arguing for censorship. Or against it. Actually, I have long since lost track of what most participants in this thread are arguing for, and in some cases I’m not sure they themselves know.
** I agree with you that the existence of knowledgeable people who think something is dangerous is evidence that it’s dangerous.
** Since it seems to matter: for my own part, I rate the expected dangerousness of “the basilisk” very low, and the social cost to the group of the dispute over “censoring” it significantly higher but still low.
** I cannot see why that should be of any evidentiary value whatsoever, to you or anyone else. Whether I’m right or wrong, my position is a pretty easy-to-reach one; it’s the one you arrive at in the absence of other salient beliefs (like, for example, the belief that EY/SIAI is a highly reliable estimator of potential harm done by “basilisks” in general, or the belief that the specific argument for the harmfulness of this basilisk is compelling). And most newcomers will lack those other beliefs. So I expect that quite a few people share my position—far more than 50%—but I can’t see why you ought to find that fact compelling. That a belief is very widely shared among many, many people like me who don’t know much about the topic isn’t much evidence for anything.
Sometimes that isn’t a bad state to be in. Not having an agenda to serve frees up the mind somewhat! :)
(nods) I’m a great believer in it. Especially in cases where a disagreement has picked up momentum, and recognizable factions have started forming… for example, if people start suggesting that those who side with the other team should leave the group. My confidence in my ability to evaluate an argument honestly goes up when I genuinely don’t know what team that argument is playing for.
I suspect I’ve obfuscated it, actually. The popularity of various positions is not intrinsically important to me—in fact, I give professions of belief about as little credit as I can get away with. This specific case is such that every form of evidence I find stronger (reasoning through the argument logically for flaws; statistical evidence about its danger) is not available. With a dearth of stronger evidence, I have to rely on weak evidence—but “the evidence is weak” is not an argument for privileging my own unsubstantiated position.
I don’t feel the need to collect weak evidence … I should, in this case. I was following a heuristic of not collecting weak evidence (waste of effort) without noticing that there was no stronger evidence.
Why are people’s beliefs of any value? Everyone has the ability to reason. All (non-perfect) reasoners fail in some way or another; if I look at many (controlling for biased reasoning) it gives me more of a chance to spot the biases—I have a control to compare it to.
This case is a special case; some people do have evidence. They’ve read the basilisk, applied their reasoning and logic, and deduced that it is / is not dangerous. These people’s beliefs are to be privileged over those of people who have not read the basilisk. I can’t access private signals like that—I don’t want to read a potential basilisk. So I make a guess at how strong their private signal is (this is why I care about their rationality) and use that as weak evidence for or against.
If seeking harder evidence wasn’t dangerous (and it usually isn’t) I would have done that instead.
The sentence I quoted sounded to me as though you were treating those of us who’ve remained “sidelined” as evidence of something. But if you were instead just bringing us up as an example of something that provides no evidence of anything, and if that was clear to everyone else, then I’m content.
I think I had a weird concept of what ‘sidelined’ meant in my head when I was writing that. Certainly, it seems out of place to me now.
I’m for. I believe Tim Tyler is for.
Humans have this unfortunate feature of not being logically omniscient. In such cases where people don’t see all the logical implications of an argument, we can treat those implications as hidden information. If this weren’t the case then the censorship would be totally unnecessary, as Roko’s argument didn’t actually include new information. We would have all turned to stone already.
There is no way for you to have accurately assessed this. Roko and Eliezer aren’t idealized Bayesian agents, and it is extremely unlikely they performed a perfect Aumann agreement. If one is more persuasive than the other for reasons other than the evidence they share, then their combined support for the proposition may not be worth the same as that of two people who independently came to support it. Besides which, according to you, what information did they share exactly?
I had a private email conversation with Eliezer that did involve a process of logical discourse, and another with Carl.
Also, when I posted the material, I hadn’t thought it through. Once I had thought it through, I realized that I had accidentally said more than I should have done.
David_Gerard, Jack, timtyler, waitingforgodel, and Vaniver do not, as of now, outweigh Eliezer_Yudkowsky, FormallyknownasRoko, Vladimir_Nesov, and Alicorn in my mind.
It does not need to be a perfect Aumann agreement; a merely good one will still reduce the chances of overcounting or undercounting either side’s evidence well below the acceptable limits.
They are approximations of Bayesian agents, and it is extremely likely they performed an approximate Aumann agreement.
To settle this particular question, however, I will pay money. I promise to donate 50 dollars to the Singularity Institute for Artificial Intelligence, independent of other plans to donate, if Eliezer confirms that he did revise his estimate down; or if he confirms that he did not revise his estimate down. Payable within two weeks of Eliezer’s comment.
I’m curious: if he confirms instead that the change in his estimate, if there was one, was small enough relative to his estimate that he can’t reliably detect it or detect its absence, although he infers that he updated using more or less the same reasoning you use above, will you donate or not?
I will donate.
I would donate even if he said that he revised his estimate upwards.
I would then seriously reconsider my evaluation of him, but as it stands the offer is for him to weigh in at all, not weigh in on my side.
edit: I misparsed your comment. That particular answer would dance very close to ‘no comment’, but unless it seemed constructed that way on purpose, I would still donate.
Yeah, that’s fair. One of the things I was curious about was, in fact, whether you would take that answer as a hedge, but “it depends” is a perfectly legitimate answer to that question.
How many of me would there have to be for that to work?
Also, why is rationalism the risk factor for this basilisk? Maybe the basilisk only turns to stone people with brown eyes (or the appropriate mental analog).
Only one; I meant ‘you’ in that line to refer to Vlad. It does raise the question “how many people disagree before I side with them instead of Eliezer/Roko/Vlad”. And the answer to that is … complicated. Each person’s rationality, modified by how much it was applied in this particular case, is the weight I give to their evidence; then the full calculation of evidence for and against should bring my posterior to within epsilon of (but preferably below) my original prior before I decide the idea is safe.
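The weighting scheme described here can be sketched as a log-odds update, with each person’s testimony counting as weak evidence scaled by an assessed reliability weight. To be clear, this is only an illustrative sketch: the weights, the prior, and the per-witness evidence strength are all hypothetical numbers, not actual assessments of anyone in this thread.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Hypothetical testimonies: (says_dangerous, reliability weight in [0, 1]).
# The weight stands in for "rationality, modified by how much it was
# applied in this particular case".
testimony = [
    (True, 0.9),   # engaged deeply, reacted negatively
    (True, 0.8),   # engaged, reacted negatively
    (False, 0.6),  # engaged, did not react negatively
]

prior = 0.05                # illustrative prior that the idea is dangerous
evidence_strength = 0.5     # log-odds contributed by one fully reliable witness

log_odds = logit(prior)
for says_dangerous, weight in testimony:
    log_odds += weight * evidence_strength * (1 if says_dangerous else -1)

posterior = sigmoid(log_odds)
print(posterior)
```

On these made-up numbers the net testimony points toward danger, so the posterior ends up above the prior; by the rule above, one would only call the idea safe if the posterior came back to within epsilon of (or below) the original prior.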
Rationalism is the ability to think well, and this is a dangerous idea. If it were a dangerous bacterium, then the immune system would be the risk factor.
Generally, if your immune system is fighting something, you’re already sick. Most pathogens are benign or don’t have the keys to your locks. This might be a similar situation- the idea is only troubling if your lock fits it- and it seems like there would then be rational methods to erode that fear (just as the immune system mobs an infection).
The analogy definitely breaks down, doesn’t it? What I had in mind was Eliezer, Roko, and Vlad saying “I got sick from this infection” and you saying “I did not get sick from this infection”—I would look at how strong each person’s immune system is.
So if Eliezer, Roko, and Vlad all had weak immune systems and yours was quite robust, I would conclude that the bacterium in question is not particularly virulent. But if three robust immune systems all fell sick, and one robust immune system did not, I would be forced to decide between some hypotheses:
the first three are actually weak immune systems
the fourth was not properly exposed to the bacterium
the fourth has a condition that makes it immune
the bacterium is not virulent, the first three got unlucky
On the evidence I have, the middle two seem more likely than the first and last hypotheses.
I agree- my money is on #3 (but I’m not sure whether I would structure it as “fourth is immune” or “first three are vulnerable”- both are correct, but which is the more natural word to use depends on the demographic response).
Er, are you describing rationalism (I note you say that and not “rationality”) as susceptible to autoimmune disorders? More so than in this post?
This equivocates on the intended meaning of ‘turning to stone’ in the original discussion you replied to. Fail. (But I understand what you meant now.)
Sorry, I should not have included censoring specifically. Change the “read:”s to ‘engages, reacts negatively’, ‘engages, does not react negatively’ and the argument still functions.
The argument does seem to function, but you shouldn’t have used the term in a sense that conflicts with the intended one.