What do you think the probability is of Zoltan getting elected? I’d put it lower than 5%.
Lower than .00005%.
I’ll take those odds.
That would still make him more likely to be elected than a president picked at random from the adult population. I think that’s untrue.
You can pretty easily think of “apocalyptic” scenarios in which Zoltan would end up getting elected in a fairly normal way. Picking a president at random from the adult population would require even more improbable events.
I loved this comment, but then realized I may not have understood it—is the apocalyptic scenario one where a bunch of people die, but somehow those remaining tend to be Zoltan supporters?
I actually meant it more generally, in the sense of highly unusual situations. So gjm’s suggested path would count.
But more straightforwardly apocalyptic situations could also work. So a whole bunch of people die, then those remaining become concerned about existential risk—given what just happened—and this leads to people becoming convinced Zoltan would be a good idea. This is more likely than a virus that kills non-Zoltan supporters.
I think it’s unlikely that someone actively campaigning to be president is less likely than someone who isn’t.
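As a rough sanity check on the comparison with picking a president at random, here is a minimal sketch (Python) contrasting the “.00005%” bound with uniform random selection from the adult population. The ~250 million figure for U.S. adults is an assumption made for illustration, not a number taken from the thread.

    # Rough comparison of the two estimates discussed above.
    # ADULT_POPULATION is an assumed round figure (~250 million U.S. adults),
    # not a number taken from the thread.
    ADULT_POPULATION = 250_000_000

    p_zoltan_bound = 0.00005 / 100         # ".00005%" as a plain probability
    p_random_adult = 1 / ADULT_POPULATION  # a president picked uniformly at random

    print(f"Bound on Zoltan:  {p_zoltan_bound:.1e}")    # 5.0e-07
    print(f"Random adult:     {p_random_adult:.1e}")    # 4.0e-09
    print(f"Ratio:            {p_zoltan_bound / p_random_adult:.0f}x")  # 125x

Even at the “lower than .00005%” bound, the estimate sits roughly two orders of magnitude above uniform random selection, which is the point the replies above are making.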
Why did you pick 5%? That number seems very high to me.
I did the equivalent bet test, and came up with about 5%. I suspect that, because of the kinds of problems I’ve done calibration training on, I have a very hard time working with extremely low probabilities.
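For readers who haven’t seen it, the equivalent bet test elicits a probability by finding the reference lottery you would be indifferent to swapping the proposition for. The sketch below is a hypothetical illustration of that conversion, not the commenter’s actual procedure; the urn sizes are made up.

    # Hypothetical illustration of the equivalent bet test: compare
    # "win a prize if Zoltan is elected" against "win the same prize if a
    # ball drawn from an urn of `total` balls is one of `winners`", and
    # adjust the urn until you feel indifferent between the two tickets.
    def elicited_probability(winners: int, total: int) -> float:
        # At the point of indifference, your implied probability for the
        # proposition is just the odds of the reference lottery.
        return winners / total

    # Feeling indifferent at a 1-in-20 urn implies roughly the 5% above.
    print(elicited_probability(1, 20))   # 0.05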
Where did you do your calibration training? On PredictionBook, I think most people would put 0% in the box for Zoltan getting elected in the next election.
I’ve used PredictionBook only rarely; I mostly use the calibration game and the updating game.
What do you mean by “updating game”?
http://rationality.org/apps/
The page lists the calibration game with a link but lists no link for the updating game. Is the updating game something that CFAR uses internally?
http://www.patheos.com/blogs/unequallyyoked/2012/07/play-along-with-rationality-camp-at-home.html has a link
Edit: https://groups.google.com/forum/#!topic/lesswrongslc/DuWDe_km88w has more links. They seem to be malformed by Google, but manually fixing them works.
Mac: https://dl.dropbox.com/u/30954211/RationalityGames/UpdatingGame%28Mac%29.app.zip Android: https://dl.dropbox.com/u/30954211/RationalityGames/UpdatingGame%28And%29.apk
I actually can’t recall how I got the updating game… I believe it’s in the Android store somewhere, but it’s really hard to find.
We all do, er, all but .001% or whatever of us.
But calibration training should theoretically fix these exact issues—I’m going to try to find a better calibration question set that can help me with this.
I am not sure about that—why do you think so?
Because it’s deliberate practice in debiasing—it’s specifically created to train out those biases.
Edit: To be clear, I’m not sure about it either, but theoretically, that’s what’s supposed to happen.
Bias is not the only source of errors. It is notoriously hard to come up with probability estimates for rare events, ones that are way out in the tails of the distribution.
Yes, I don’t think calibration training will let me tell the difference between something with a .00005% chance and something with a .000005% chance, but it should keep me from estimating something at 5% when logic says the probability is orders of magnitude below that.
I think he may be elected in 2024, but the main point of the campaign is to raise awareness of life extension and FAI topics.
By associating them with extreme weirdness?
What makes that 2024 thing even remotely theoretically possible?
Zoltan is articulate, extremely good looking, and willing to put in a lot of work to become president. Imagine one or both of the major U.S. political parties becomes discredited and Zoltan gets significant financial support from a high-tech billionaire. He could then have a non-trivial chance of becoming president, although the odds of this ever happening are still under .1%.
But what he talks about is completely unaligned with what 99% of the electorate gives half a shit about. Even though I suppose recent political-theater events in the United States prove that giving off a strong crackpot vibe is not an automatic disqualification, there is that to contend with.
I’d guess less than 5% chance for each major party to get discredited, maybe 50% chance that after that a high-tech billionaire decides it’s a good time to try to shape politics, maybe a 2% chance that s/he chooses Zoltan, and no more than a 20% chance that Zoltan wins after all that happens. I make that about a 0.0005% chance, being quite generous.
So, yeah, “remotely theoretically possible” is about as far as it goes.
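The arithmetic behind the 0.0005% figure can be reproduced with a short sketch. Note that the stated result only comes out if the “less than 5%” factor is applied to each major party separately (i.e., both parties have to be discredited); that reading is an inference, not something spelled out in the comment.

    # Reproducing the chained estimate from the comment above.
    p_party_discredited = 0.05   # "less than 5% chance for each major party"
    p_billionaire_tries = 0.50   # a high-tech billionaire tries to shape politics
    p_picks_zoltan      = 0.02   # that billionaire backs Zoltan specifically
    p_zoltan_wins       = 0.20   # Zoltan wins given all of the above

    p_total = (p_party_discredited ** 2   # both major parties discredited
               * p_billionaire_tries
               * p_picks_zoltan
               * p_zoltan_wins)

    print(f"{p_total:.1e}")          # 5.0e-06
    print(f"{p_total * 100:.4f}%")   # 0.0005%

With only one 5% factor (only one party needs to be discredited), the same chain works out to 0.01%, so the 0.0005% figure effectively assumes both parties fail independently.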
Billionaires attempt to shape politics right now, and I don’t see why they would stop. I think the 50% chance is actually a 100% chance. However, the probability of choosing specifically Zoltan I would estimate as considerably less than 2%.
If both parties become discredited I say at least 80% chance that more than one high-tech billionaire will try to shape politics, but otherwise a good estimate.
A chance under 0.1% sounds trivial to me.