This is a terrible misrepresentation. SI does not argue for donations on these grounds; Eliezer and other SI staff have explicitly rejected such Pascalian reasons, but instead argued that the risks that they wish to avert are quite probable.
It also has to be probable that their work averts those risks, which seems incredibly improbable by any reasonable estimate. If an alternative Earth were to adopt a strategy of ignoring prophetic groups of ‘idea guys’ similar to SI, and ignore their pleas for donations so that they can hire competent researchers to pursue their ideas, I do not think that decision would have increased the risk by more than a minuscule amount.
People currently understand the physical world sufficiently to see that supernatural claims are bogus, and so there is certainty about the impossibility of developments predicated on the supernatural. People know robust and general laws of physics that imply the impossibility of perpetual motion, and so we can conclude in advance with great certainty that any perpetual motion engineering project is going to fail. Some long-standing problems in mathematics were attacked unsuccessfully for a long time, and so we know that making further progress on them is hard. In all these cases, there are specific pieces of positive knowledge that enable the inference of impossibility or futility of certain endeavors.
In contrast, a lot of questions concerning Friendly AI remain confusing and unexplored. It might turn out to be impossibly difficult to make progress on them, or else a simple matter of figuring out how to apply standard tools of mainstream mathematics. We don’t know, but neither do we have positive knowledge that implies impossibility or extreme difficulty of progress on these questions. In particular, the enormity of consequences does not imply extreme improbability of influencing those consequences. It looks plausible that the problem can be solved.
This kind of seems like political slander to me. Maybe I’m miscalibrated? But it seems like you’re thinking of “reasonable estimates” as things produced by groups or factions, treating SI as a single “estimate” in this sense, and lumping them with a vaguely negative but non-specified reference class of “prophetic groups”.
The packaged claims function to reduce SI’s organizational credibility, and yet they reference no external evidence and make no testable claims. For your “prophetic groups” reference class, does it include 1930s nuclear activists, 1950s environmentalists, or 1970s nanotechnology activists? Those examples come from the socio-political reference class I generally think of SI as belonging to, and I think of them in a mostly positive way.
Personally, I prefer to think of “estimates” as specific predictions produced by specific processes at specific times, and they seem like they should be classified as “reasonable” or not on the basis of their mechanisms and grounding in observables in the past and the future.
The politics and social dynamics surrounding an issue can give you hints about what’s worth thinking about, but ultimately you have to deal with the object-level issues, and the object-level issues will screen off the politics and social dynamics once you process them. The most reasonable publicly available tool I’m aware of for extracting a “coherent opinion” from someone on the subject of AGI is The Uncertain Future.
(Endgame: Singularity is a more interesting tool in some respects. It’s interesting for building intuitions about certain kinds of reality/observable correlations because it has you play as a weak but essentially benevolent AGI rather than as humanity, but (1) it is ridiculously over-specific as a prediction tool, and (2) it seems to give the AGI certain unrealistic advantages and disadvantages for the sake of making it more fun as a game. I’ve had a vague thought to fork it, try to change it to be more realistic, write a bot for playing it, and use that as an engine for a Monte Carlo simulator of singularity scenarios. Alas: a day job prevents me from having the time, and if that constraint were removed I bet I could find many higher-value things to work on, reality being what it is, and people being motivated to action the way they are.)
Do you know of anything more epistemically helpful than The Uncertain Future? If so, can you tell me about it? If not, could you work through it and say how it affected your model of the world?
(Note that the Uncertain Future software is mostly supposed to be a conceptual demonstration; as mentioned in the accompanying conference paper, a better probabilistic forecasting guide would take historical observations and uncertainty about constant underlying factors into account more directly, with Bayesian model structure. The most important part of this would be stochastic differential equation model components that could account for both parameter and state uncertainty in nonlinear models of future economic development from past observations, especially of technology performance curves and learning curves. Robin Hanson’s analysis of the random properties of technological growth modes has something of a similar spirit.)
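(To make that parenthetical concrete, here is a minimal sketch of that kind of model component, with invented numbers and not the Uncertain Future's actual code: a stochastic performance curve whose drift and volatility are themselves uncertain parameters, propagated by Monte Carlo so that both parameter and state uncertainty show up in the forecast.)

```python
# Toy sketch, not the Uncertain Future's actual model: forecast a technology
# performance curve with both parameter uncertainty (unknown drift/volatility,
# hypothetically fitted to historical observations) and state uncertainty
# (the stochastic path itself), via Monte Carlo over a simple SDE.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_years, dt = 10_000, 30, 1.0

# Parameter uncertainty: drift and volatility drawn from (made-up) priors.
drift = rng.normal(loc=0.10, scale=0.03, size=n_paths)        # mean yearly log-growth
vol = np.abs(rng.normal(loc=0.15, scale=0.05, size=n_paths))  # yearly log-volatility

# State uncertainty: simulate log-performance paths, dX = mu*dt + sigma*dW.
log_perf = np.zeros((n_paths, n_years + 1))
for t in range(n_years):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    log_perf[:, t + 1] = log_perf[:, t] + drift * dt + vol * dW

final = np.exp(log_perf[:, -1])  # distribution of relative performance after 30 years
print("median growth factor:", np.median(final))
print("80% interval:", np.percentile(final, [10, 90]))
```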
I think your estimate of their chances of success is low. But even given that estimate, I don’t think it’s Pascalian. To me, it’s Pascalian when you say “my model says the chances of this are zero, but I have to give it non-zero odds because there may be an unknown failing in my model”. I think Heaven and Hell are actually impossible, I’m just not 100% confident of that. By contrast, it would be a bit odd if your model of the world said “there is this risk to us all, but the odds of a group of people causing a change that averts that risk are actually zero”.
It is not just their chances of success. For the donations to matter, you need SI to succeed where without SI there is failure. You need to get a basket of eggs and have all the good-looking eggs be rotten inside but one fairly rotten-looking egg be fresh. Even if a rotten-looking egg is nonetheless more likely to be fresh inside than one would believe, that is a highly unlikely situation.
I’m afraid I’m not getting your meaning. Could you fill out what corresponds to what in the analogy? What are all the other eggs? In what way do they look good compared to SI?
All the other people and organizations that are no less capable of identifying the preventable risks (if those exist) and addressing them have to be unable to prevent the destruction of mankind without SI. Just as in Pascal’s original wager, Thor and the other deities are to be ignored by omission.
As for how SI does not look good: well, it does not look good to Holden Karnofsky, or to me for that matter. His point about resistance to feedback loops is an extremely strong one.
On the rationality movement, here’s a quote from Holden.
Apparent poorly grounded belief in SI’s superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.
Yet I’m not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.
Could you give me some examples of other people and organizations trying to prevent the risk of an Unfriendly AI? Because for me, it’s not like I believe that SI has a great chance to develop the theory and prevent the danger, but rather that they are the only people who even care about this specific risk (which I believe to be real).
As soon as the message becomes widely known, and smart people and organizations start rationally discussing the dangers of Unfriendly AI and how to make a Friendly AI (avoiding some obvious errors, such as “a smart AI simply must develop a human-compatible morality, because it would be too horrible to think otherwise”), there is a pretty good chance that some of those organizations will be more capable than SI of reaching that goal: more smart people, better funding, etc. But at this moment, SI seems to be the only one paying attention to this topic.
It’s a crooked game, but it’s the only game in town?
None of that is evidence that SI would be more effective if it had more money. Assign odds to hostile AI becoming extant given low funding for SI, and compare the odds of hostile AI becoming extant given high funding for SI. The difference between those two is proportional to the value of SI (with regards to preventing hostile AI).
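(A toy version of that comparison, with all numbers invented purely for illustration:)

```python
# The case for marginal funding rests on the *difference* between
# P(hostile AI | low funding) and P(hostile AI | high funding). Numbers are made up.
p_hostile_low_funding = 0.100   # hypothetical
p_hostile_high_funding = 0.099  # hypothetical
value_of_averting = 1.0         # normalize the value of averting hostile AI to 1 unit

marginal_value = (p_hostile_low_funding - p_hostile_high_funding) * value_of_averting
print(round(marginal_value, 6))  # 0.001 of a unit: the whole donation case rides on this gap
```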
SI being the only one ought to lower your probability that this whole enterprise is worthwhile in any way.
With regards to the ‘message’, I think you grossly overestimate the value of a rather easy insight that anyone who has watched Terminator could have. With regards to “rationally discussing”, what I have seen so far here is pure rationalization and very little, if any, rationality. What SI has on its track record is, once again, a lot of rationalizations, and not enough rationality to even have had an accountant through its first 10 years and more than 2 million dollars of other people’s money.
Note that that second paragraph is one of Holden Karnofsky’s objections to SIAI: a high opinion of its own rationality that is not so far substantiable from the outside view.
Yes. I am sure Holden is being very polite, which is generally good, but I’ve been getting the impression that the point he was making did not fully carry across the same barrier that produced the above-mentioned high opinion of their own rationality, despite a complete lack of results for which rationality would be a better explanation than irrationality (and the presence of results which set a rather low ceiling on that rationality). The ‘resistance to feedback’ is an even stronger point, suggesting that the belief in their own rationality is, at least to some extent, combined with an expectation that it won’t pass the test, and with subsequent avoidance (rather than seeking) of tests; as when psychics do believe in their powers but avoid any reliable test.
Eliezer and other SI staff have explicitly rejected such Pascalian reasons, but instead argued that the risks that they wish to avert are quite probable.
Really, last time I checked Eliezer was refusing to name either a probability or a time scale.
I’m not seeing how you get from “doesn’t state an explicit probability or timescale publicly” to “argues that SI should be supported on Pascalian grounds”.
In Pascal’s original wager the ‘risk’ is 50%, due to our natural tendency to see two alternatives, once presented in A vs B form, as closer to even odds when information is absent. That’s what the wager is about: screwing up probabilities in the absence of knowledge of how to calculate them, and arguing that something is quite probable when it’s not quite probable. Not so much a problem with the decision process as a problem with ‘let’s call vague feelings probabilities’. Between other gods and the possibility of better deals in the future, I have the impression that mathematically it would work out to ‘do not pay’, if someone could actually do the math. With made-up probabilities that tend towards 50/50 when there are unknowns, when the propositions are maliciously generated to relieve you of your wallet, and when you are simply inserting a hypothesis into your graph because you were told something (a zero-to-nonzero update), it is little wonder that some people get exploited.
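(For what it's worth, a toy version of 'actually doing the math', with finite made-up payoffs rather than Pascal's infinities, and a rival deity included:)

```python
# Toy decision table with invented, finite payoffs: once mutually exclusive gods
# are on the table, naive expected value no longer automatically favours paying
# the first wager offered.
options = ["pay_pascal", "pay_rival", "do_not_pay"]
worlds = {"pascal_god": 0.001, "rival_god": 0.001, "no_god": 0.998}  # hypothetical

payoff = {  # payoff[world][option]
    "pascal_god": {"pay_pascal": 100, "pay_rival": -100, "do_not_pay": -100},
    "rival_god":  {"pay_pascal": -100, "pay_rival": 100, "do_not_pay": -100},
    "no_god":     {"pay_pascal": -1,   "pay_rival": -1,  "do_not_pay": 0},
}

for option in options:
    ev = sum(p * payoff[world][option] for world, p in worlds.items())
    print(option, round(ev, 3))
# With these numbers the two "pay" options cancel against each other and both come
# out worse than "do_not_pay", because the sure cost in the 99.8%-probable
# no-god world dominates.
```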
In Pascal’s original wager the ‘risk’ is 50%, due to our natural tendency to see two alternatives, once presented in A vs B form, as closer to even odds when information is absent. That’s what the wager is about: screwing up probabilities in the absence of knowledge of how to calculate them, and arguing that something is quite probable when it’s not quite probable.
Er. We’re talking about Pascal’s wager, right? The one published in Pensees? The one which explicitly invokes infinities, where it doesn’t matter if the odds are 1 to 1 or 1,000,000,000 to 1, the argument still goes through?
The point is that it is still a Pascal’s wager even if you have mis-estimated probabilities and argued that it is actually likely that God exists.
In the case of SI, even if we assume that the risk exists, it is still the case that one is to donate to a group of people who, in all likelihood, are entirely incapable of affecting the risk in any way whatsoever (and are only offering risk reduction due to their incompetence combined with the Dunning-Kruger effect. It has never happened in history that the first people to take money for a cure were anything but either self-deluded or confidence tricksters).
and are only offering risk reduction due to their incompetence combined with the Dunning-Kruger effect.
You realize DK is a narrow effect which only obtains in certain conditions, is still controversial, and invoking it just makes you look like you’ll grab at any thing at all no matter how dubious in order to attack SI, right? (About on the same level as ‘Hitler was an atheist!’)
It has never happened in history that the first people to take money for a cure were anything but either self-deluded or confidence tricksters
Seriously. In no area of research, medicine, engineering, or whatever, did the first group to tackle a problem ever succeed? Such a world would be far poorer than the one we actually live in, and still stuck in the Dark Ages. I realize this may be a hard concept, but sometimes, the first person to tackle a problem succeeds! In fact, sometimes multiple people tackling the problem all succeed simultaneously! (This is very common; it’s called multiple discovery.)
Not every problem is as hard as fusion, or to put it another way, most hard problems are made of other, easier, problems, while if your hyperbolic statement were true, no progress would ever be made.
The Dunning-Kruger effect is likely a product of some general deficiency in the meta-reasoning faculty, leading both to failure of reasoning itself and to failure of evaluation of that reasoning; it is extremely relevant to people who proclaim themselves more rational, more moral, and so on than anyone else, but do not seem to accomplish above-mediocre performance at fairly trivial yet quantifiable things.
Seriously. In no area of research, medicine, engineering, or whatever, did the first group to tackle a problem ever succeed? Such a world would be far poorer than the one we actually live in, and still stuck in the Dark Ages. I realize this may be a hard concept, but sometimes, the first person to tackle a problem succeeds!
Ghmm. He said first people to take money, not first people to tackle.
The first people to explain the universe (and take some contributions for it) produced something of negative value, nearly all medicine until the last couple hundred years was not only ineffective but actively harmful, and so on.
If you look at very narrow definitions, of course, the first to tackle nuclear bomb creation did succeed; but the first to tackle the general problem of weapons of mass destruction were various shamans sending curses. If saving people from AI is an easy problem, then we’ll survive without SI; if it’s a hard problem, at any rate SI doesn’t start with a letter from Einstein to the government: it starts with a person with no quantifiable accomplishments cleverly employing himself. As far as I am concerned, there’s literally no case for donations here; the donations happen via a sort of decision noise, similar to how NASA has spent millions on various antigravity devices, the power companies have spent millions on putting electrons into hydrogen orbitals below the ground state (see Mills’s hydrinos), and millions were invested in Steorn’s magnetic engine.
Ghmm. He said first people to take money, not first people to tackle.
Speaking of yourself in the third person?
Dmytry, you are abusing sockpuppet accounts. Your use of private_messaging was questionable, but at least you declared it as an alias early on. Right now you are using a sockpuppet with the intent to deceive.
You know, I’ve been wondering about that for a while now, but it never occurred to me to look for hapaxes (nor had I been aware of the phrase). I have just learned a new technique, for which I have learned a cool new word, which helps solve a problem I actually had. If I endorsed upvoting multiple times, I would upvote you multiple times; as it is, you’ll have to settle for an upvote and my gratitude.
It’s just one of the little-known advantages to reading critical analysis of classical or Biblical literature! Although I’d be hard-pressed to name a second advantage.
The embarrassing thing, now that you mention it, is that I’m acquainted with this technique in that context (identifying source texts and common authors and so forth) but it still didn’t occur to me to apply it here, despite it being the same problem even at a surface level.
(sigh) Corrupted hardware sucks. It ain’t the things I don’t know that irritate me. It’s not even the things I do know that just ain’t so. It’s the things I know, that are so, and that somehow don’t present themselves to be reasoned with when I need them.
The Dunning-Kruger effect is likely a product of some general deficiency in the meta-reasoning faculty, leading both to failure of reasoning itself and to failure of evaluation of that reasoning;
That seems unlikely. Leading both?
it is extremely relevant to people who proclaim themselves more rational, more moral, and so on than anyone else, but do not seem to accomplish above-mediocre performance at fairly trivial yet quantifiable things.
Mediocrity is sufficient to push them entirely out of the DK gap; your thinking DK applies is just another example of what I mean by these being fragile, easily over-interpreted results.
(Besides blatant misapplication, please keep in mind that even if DK had been verified by meta-analysis of dozens of laboratory studies, which it has not, that still only gives a roughly 75% chance that the effect applies outside the lab.)
The first people to explain the universe (and take some contributions for it) produced something of negative value, nearly all medicine until the last couple hundred years was not only ineffective but actively harmful, and so on.
Without specifics, one cannot argue against that.
If you look at very narrow definitions, of course, the first to tackle nuclear bomb creation did succeed; but the first to tackle the general problem of weapons of mass destruction were various shamans sending curses.
So you’re just engaged in reference class tennis. (‘No, you’re wrong because the right reference class is magicians!’)
Seems straightforward to me: Eliezer’s unwarranted self-importance resulted in his not pursuing an education, or for that matter proper self-education, and simultaneously in his believing he’s awesome and selling existential-risk reduction that nobody else would sell. edit: The alternative explanation is a level of resistance to self-deception so high that the process of self-education transcended the necessity of seeking objective feedback on the progress (which one gets if one e.g. tries to prove mathematical theorems, as there an unintelligent process of checking a proof can validate one’s powers of reasoning).
So you’re just engaged in reference class tennis. (‘No, you’re wrong because the right reference class is magicians!’)
Did it ever occur to you that one has to actually do something incompatible with the broad reference class to get into a much, much smaller reference class? E.g. you are in the reference class ‘people’, not the reference class ‘people with IQ>=150’, unless you take an IQ test or some other test with a very low false-positive rate. Likewise, the reference class is ‘people with grand promises’ until you actually do something that moves you into the microscopic subclass of ‘people with grand promises who deliver’.
Seems straightforward to me: Eliezer’s unwarranted self-importance resulted in his not pursuing an education, or for that matter proper self-education, and simultaneously in his believing he’s awesome and selling existential-risk reduction that nobody else would sell. edit: The alternative explanation is a level of resistance to self-deception so high that the process of self-education transcended the necessity of seeking objective feedback on the progress (which one gets if one e.g. tries to prove mathematical theorems, as there an unintelligent process of checking a proof can validate one’s powers of reasoning).
Suppose one were to grant that for Eliezer. Out of curiosity, I would be interested in hearing how Nick Bostrom & FHI are similarly deluded and in the reference class of magicians.
Speculation about sufficiently advanced future technologies is indistinguishable from magical thinking.
Unless there is scientific method in what you’re doing, and unless you’re producing something testable and testing it, you are certainly not in the reference class of scientists. Unless there is plenty of rigour, you are not in the reference class of people using mathematical methods (even if you have formulas in your papers). Not up for grabs here either. Maybe the reference class is philosophers. If you wish, philosophers with PhDs.
As they do tend to honestly make actual arguments, rather than just trying to manipulate for profit the people who already agree with the core ideas, one can examine the actual argumentation. Which is not very good. E.g. his simulation argument looks watertight at first glance but really relies on assumptions about physics (suppose MWI is correct; now the counting patently doesn’t work for probabilities), and does not account for the potentially enormous number of simulated beings that can easily tell their reality is simulated because the simulator cuts corners. A typical example of philosophy: making arguments that seem true merely for lack of alternative propositions to made-up assertions. You only spend time formalizing such stuff if you can’t see that you are building false precision, making far too many assumptions for the results to be meaningful in any way (in a field where intuition is unlikely to work, too). If you don’t see that you are making a lot of extremely shaky assumptions when you are making a lot of extremely shaky assumptions, you’ll be susceptible to generating false precision, precisely as per Nick Szabo’s article.
Their musings about the singularity are in precise agreement with the hypothesis that the current point in time is early enough that the only people ‘working’ on this are those who for some reason fail to see when they aren’t making progress. I’m pretty sure all this movement will look very silly in 100 years—there will be dangers they did not see and there won’t be dangers they focused on.
suppose MWI is correct; now the counting patently doesn’t work for probabilities
What is the correct counting for MWI, exactly?
does not account for the potentially enormous number of simulated beings that can easily tell their reality is simulated because the simulator cuts corners.
I think you are misunderstanding the SA, which is surprising since it’s formally pretty simple.
The SA is a trilemma; finding evidence that strongly supports one leg of the trilemma is not a problem with the trilemma itself. It just gives you reasons to bite a particular bullet: “the SA says ‘either X or Y or Z’, and here our reality looks like a cutrate simulation, so I guess Z was right after all!”
So the trilemma remains ‘watertight’ even if the specific paper, in enumerating reasons to believe X (or Y, or Z), fails to cover some favorite bit of reasoning of yours. The reasoning still fits inside the trilemma framework and is not an argument against the framework itself.
(My own impression is that your cutrate suggestion wouldn’t go very far since it’s not clear—to me, anyway—what a cutrate simulation would look like, and whether our own universe is not cutrate. One could validly argue that our failure to find good clear evidence that we’re in a simulation is evidence against being in a simulation, but quantifying how much evidence this would be is even harder. And given how many crude simulations we run for science & business & pleasure, and the Fermi paradox, it seems especially unlikely that this point is strong enough to move someone from biting the we’re-in-a-simulation bullet to biting one of the other bullets.)
I’m pretty sure all this movement will look very silly in 100 years—there will be dangers they did not see and there won’t be dangers they focused on.
Everyone looks silly from 100 years on. That’s not a useful point to make.
MWI: we don’t know what it is that works, but we can tell when something doesn’t work. Probabilities don’t seem to work out if you just count distinct observers. Plus, the number of distinct observers grows very rapidly with time, so you get an extreme case of the doomsday paradox. If you aren’t just counting distinct observers but count copies twice, then your probabilities could just as well depend on, e.g., the thickness of the wires in the computer, not just the raw number of simulated realities.
Furthermore, and more significantly, under MWI it is not even clear what the first two statements would even mean.
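(A toy illustration of the counting problem, using only textbook quantum probabilities rather than anything specific to the argument above: for a biased qubit measured repeatedly, counting each distinct branch once disagrees with the Born rule unless the bias happens to be 50/50.)

```python
# Branch counting vs. Born-rule weights for n measurements of a biased qubit.
# Counting each distinct outcome-sequence ("branch") once gives a uniform
# distribution over 2^n sequences; the Born rule weights each sequence by its
# squared amplitude. They diverge badly unless p = 0.5.
from math import comb

p, n = 0.9, 10  # Born-rule probability of "1" per measurement; number of measurements

born = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]  # P(k ones)
count = [comb(n, k) / 2**n for k in range(n + 1)]                    # each branch counted once

print("P(9 ones)  Born: %.3f   branch-count: %.3f" % (born[9], count[9]))
print("P(5 ones)  Born: %.3f   branch-count: %.3f" % (born[5], count[5]))
```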
I think you are misunderstanding the SA, which is surprising since it’s formally pretty simple.
No civilization will reach a level of technological maturity capable of producing simulated realities.
No civilization reaching aforementioned technological status will produce a significant number of simulated realities, for any of a number of reasons, such as diversion of computational processing power for other tasks, ethical considerations of holding entities captive in simulated realities, etc.
Any entities with our general set of experiences are almost certainly living in a simulation.
I assumed that the last statement is to be taken as ‘we should expect to be in a sim if the first two conditions are false, and given our general set of experiences’, by the assumption of at least rudimentary relevance of this trilemma.
In that case there is a fourth possibility, with probability overwhelmingly higher than that of this entire argument: the wild guess that there would ever be a good reason to believe that we should be among the most numerous, with the same weight for the real thing and the simulator (or the same weight for different types of simulator), is not spot on. Corroborated also by us not being among those in weird sims of any kind (we’d detect a god speaking to us every day).
Furthermore, the distinction between a perfect simulator and reality strikes me as nonsensical. Until there is a measurement showing that we are in a simulation, we may most sensibly assume we are in both (this time merely drawing inspiration from the sort of intuitions we might have had if we believed in MWI). As for the probability of a measurement showing that we are in a simulation, well, that has an exceptionally good chance of being a much more complicated matter than assumed.
edit: To clarify, my point is that even putting this sort of stuff in words or equations is a great example of false precision that Nick Szabo complains about. Too many assumptions have to be made without noticing that those assumptions are made, for the statements to have meaning at all.
Everyone looks silly from 100 years on. That’s not a useful point to make.
Those who aren’t grossly wrong (Newton, for example) don’t look silly in the way I am speaking of.
Wikipedia is pretty bad on philosophy (the SEP is much better), and in this case, there’s no reason not to read Bostrom’s original paper and the correction: he writes clearly, and they are readily available on his website.
In that case there is a fourth possibility, with probability overwhelmingly higher than that of this entire argument: the wild guess that there would ever be a good reason to believe that we should be among the most numerous, with the same weight for the real thing and the simulator (or the same weight for different types of simulator), is not spot on. Corroborated also by us not being among those in weird sims of any kind (we’d detect a god speaking to us every day).
What? Could you write this more clearly, I have no idea what you’re trying to say.
Those who aren’t grossly wrong (Newton, for example) don’t look silly in the way I am speaking of.
Newton is actually a great example; if you don’t choose to ignore the areas which make him look bad, he looks like an incredible fool in many respects. His constant pursuit of alchemy even then was the object of derision, and while we no longer would hang or exile him for his bizarre theology & eschatology, we (even most Christians) would regard them as hilarious. Then there are his priority disputes...
If even Newton looks this foolish, what hope can the rest of us have? No, the suggestion ‘would this make me look foolish in 100 years?’ does us no good in practice.
Then we can pardon Bostrom for not taking them into account.
When pondering the possibilities of where we live, given the lack of a grand unified theory ‘of everything’, you can’t assume your physical intuitions hold true. In fact you should assume the opposite. The MWI is just an example of how a philosophical argument that looks entirely watertight admits defeat from possible physics, in a way in which the mathematics (which philosophy mimics) does not. That means the validity of the argument requires the validity of the intuitions, which are very unlikely to be valid in any grand sense. There’s also a historical example: a lot of philosophers assumed Euclidean geometry was the only logically possible kind of geometry, without even noticing that they were making such an assumption, up until mathematicians came up with an alternative.
What? Could you write this more clearly, I have no idea what you’re trying to say.
You assume that the probability of being among either group depends on the number of yous within that group (rather than on something entirely different), in order to do anthropic reasoning beyond the tautological ‘we can’t observe universes where we can’t be alive’. In my opinion this is a case of wild guessing over unknowns, total false precision.
I came up with a clearer example of how something totally different may actually make a lot more sense: Solomonoff induction on codes that model various universes. A model of the universe outputting the data matching your internal self-perception must not only contain you, but must also include the code that finds you within that model, so that the output begins with your sense data. It is then clear that the code that picks one of the yous out of a model full of all sorts of simulations may easily be larger than the code that picks you out of a world where there is just one you but the number of various others is much smaller.
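(A toy rendering of that idea with invented bit counts, just to show the trade-off: weight each 'world model plus locator code' by 2^-(total length), in the spirit of a Solomonoff-style prior.)

```python
# Each hypothesis is (world-model bits + bits to locate your observations in it),
# weighted 2^-(total length). All bit counts are invented placeholders.
hypotheses = {
    # small world, one copy of you, cheap to locate
    "single_you_world":     {"model_bits": 400, "locator_bits": 20},
    # huge world running many varied simulations: the model itself may be simpler,
    # but picking *your* observations out of it can take many more bits
    "simulation_zoo_world": {"model_bits": 350, "locator_bits": 120},
}

weights = {name: 2.0 ** -(h["model_bits"] + h["locator_bits"])
           for name, h in hypotheses.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(name, w / total)
# With these placeholder numbers the single-you world gets nearly all the weight,
# despite its larger model, because its locator is so much shorter.
```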
I’m not arguing that Bostrom is bad for a philosopher. I am outlining how in philosophy you just make all sorts of assumptions that you don’t notice are wild guesses, and how the field is basically built on false precision. I.e. you assume that the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous, which is just a wild guess. Connect together half a dozen implicit wild guesses with a likelihood of correctness of an overly generous 1 in 100 each, and we’re speaking of a probability of correctness in the range of 10^-12. Philosophy is generally like this.
I believe this falls under Nick Szabo’s complaint about false precision.
Also, a paper by Steven Weinberg on the usefulness of philosophy.
It seems to me that funding philosophical works in this field may actually be actively harmful, due to establishment of such false precision and prejudices. It’s like funding ‘embracing bias, imprecision, and making your mind up before checking where the mathematics will lead you’.
If even Newton looks this foolish, what hope can the rest of us have? No, the suggestion ‘would this make me look foolish in 100 years?’ does us no good in practice.
That’s the whole point: a very low probability of being right. There’s a crucial difference: the methods Newton employed managed to achieve a non-zero (and not negligible) truth-finding rate. So he made something that does not look silly. Even with this, most of his stuff was quite seriously wrong.
you assume that the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous, which is just a wild guess. Connect together half a dozen implicit wild guesses with a likelihood of correctness of an overly generous 1 in 100 each
Do you think it’s “generous” to assign only 99% probability to the negation of “the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous”, where “you” is interpreted to include all your observations? That seems like insane overconfidence in a view that goes haywire in simple finite discrete cases.
Typical philosophy: tear down strawman alternatives to prove a wild guess. (Why a strawman? Because it could also depend on something else entirely, like the positions of the copies.)
Also, the 99% confidence is not in dependence on non-yous (and yous, but nothing else); the 99% confidence is that the wild guess of independence from everything else, and dependence on the count of yous, is wrong. Also, consider two computer circuits right next to each other, running identical yous, separated by a thin layer of dielectric. Remove the dielectric: one copy with thicker wires. Conclusion: it may depend on the thickness of a copy’s wires, or maybe on the speed of the copy.
Hell, the probability of being in a specific copy may just as well be entirely undefined until a copy figures out which copy it is, and then depend solely on how it was figured out.
Let’s suppose that the probability of being sampled out of a model is the sum of 2^-l over all codes that pluck you out of the model, as in Solomonoff induction. That may well depend on the presence or absence of stone dummies (provided those break some simple method of locating you). It will definitely depend on your position. Go show it broken.
edit: actually, this alternative distribution for observers (and observer-moments) based on a Solomonoff-type prior has been proposed here before by Wei_Dai, and has also been mentioned by Marcus Hutter. I’m not at all impressed by Nick Bostrom, that’s the point, or by philosophy for that matter. The conclusions of philosophers, given the relative uselessness of philosophy compared to science, ought to be taken as very low-grade evidence.
Thank you for linking to Hutter’s talk, what an astounding mind. What a small world it is, I remember being impressed by him when I sat through his courses back at grad school, little knowing how much of my future perspective on map-building would eventually depend on his and his colleagues’ school of thought.
That presentation should be mandatory reading. In all Everett branches.
What is the reasonable probability you think I should assign to the proposition, by some bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are most adept at seeing? The prior here is so laughably low that you can hardly find a study so flawed it wouldn’t be a vastly better explanation for SI’s behavior than its mission statement taken at face value, even if we do not take SI’s prior record into account.
So you’re just engaged in reference class tennis. (‘No, you’re wrong because the right reference class is magicians!’)
The reference class is not up for grabs. If you want a narrower reference class, you need to substantiate why it should be so narrow.
edit: Actually, sorry, that comes across as unnecessarily harsh. But do you recognize that SI genuinely has a huge credibility problem?
Donations to SI only make sense if we assume SI has an extremely rare ability to secure survival against the technological risks. Low priors for anything extremely rare are a tautology, not an opinion. The lack of other alternatives is evidence against SI’s cause.
What is the reasonable probability you think I should assign to the proposition, by some bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are most adept at seeing? The prior here is so laughably low that you can hardly find a study so flawed it wouldn’t be a vastly better explanation for SI’s behavior than its mission statement taken at face value, even if we do not take SI’s prior record into account.
What is this, the second coming of C.S. Lewis and his trilemma? SI must either be completely right and demi-gods who will save us all or they must be deluded fools who suffer from some psychological bias—can you really think of no intermediates between ‘saviors of humanity’ and ‘deluded fools who cannot possibly do any good’, which might apply?
I just wanted to point out that invoking DK is an incredible abuse of psychological research and does not reflect well on either you or Dmytry, and now you want me to justify SI entirely...
The lack of other alternatives is evidence against SI’s cause.
Alternatives would also be evidence against donating, since what makes you think they are the best one out of all the alternatives? Curious how either way, one should not donate!
What is this, the second coming of C.S. Lewis and his trilemma? SI must either be completely right and demi-gods who will save us all or they must be deluded fools who suffer from some psychological bias—can you really think of no intermediates between ‘saviors of humanity’ and ‘deluded fools who cannot possibly do any good’, which might apply?
No, it comes out of what SI members claim about themselves and their methods—better than science, we are more rational, etc etc etc. That really drives down the probability of anything in the middle between the claimed excellence and the incompetence compatible with the fact of making those claims (you need sufficient incompetence to claim extreme competence). If they didn’t want that sort of dichotomy they should have kept their extreme arrogance from surfacing. (Or alternatively they wanted this dichotomy, to drive some people into fallacies from politeness)
Alternatives would also be evidence against donating, since what makes you think they are the best one out of all the alternatives? Curious how either way, one should not donate!
Do you have a disagreement besides fairly stupid rhetoric? The lack of alternatives is genuinely evidence against SI’s cause, whereas the presence of alternatives would genuinely make it unlikely that any one of them is necessary. Yep, it’s very curious, and very inconvenient for you. The logic is sometimes impeccably against what you like. Without some seriously solid evidence in favour of SI, it is a Pascalian wager, as the chance of SI making a difference is small.
Do you have a disagreement besides fairly stupid rhetoric? The lack of alternatives is genuinely evidence against SI’s cause, whereas the presence of alternatives would genuinely make it unlikely that any one of them is necessary. Yep, it’s very curious, and very inconvenient for you. The logic is sometimes impeccably against what you like. Without some seriously solid evidence in favour of SI, it is a Pascalian wager, as the chance of SI making a difference is small.
I’ll rephrase: your argument from alternatives is as much bullshit as invoking Dunning-Kruger. Both an argument and its opposite cannot lead to the same conclusion unless the argument is completely irrelevant to the conclusion. If alternatives matter at all, there must be some number of alternatives which reflects better on SI than the other numbers do.
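(For the record, the underlying point is just the law of total probability; assuming the evidence has nonzero probability either way, a hypothesis cannot be pushed down by both the evidence and its negation:)

```latex
% If P(H \mid E) < P(H) and P(H \mid \neg E) < P(H), with 0 < P(E) < 1, then
\begin{align*}
P(H) &= P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E) \\
     &< P(H)\,P(E) + P(H)\,P(\neg E) = P(H),
\end{align*}
% a contradiction. So if "no alternatives" is evidence against SI's cause, then
% "alternatives exist" must be evidence for it (or the observation is simply irrelevant).
```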
Both an argument and its opposite cannot lead to the same conclusion unless the argument is completely irrelevant to the conclusion.
It’s not an argument and its opposite. One of the assumptions in either argument is the ‘opposite’; that could make the distinction between those two assumptions irrelevant, but the arguments themselves remain very relevant.
I take as the other alternatives everyone who could have worked on AI risk but didn’t, because I consider not working on AI risk now to be an alternative. Some other people take as the other alternatives the people working on precisely the kind of AI risk reduction that SI works on. In that case the absence of alternatives (under this meaning of ‘alternatives’) is evidence against SI’s cause, i.e. against the idea that one should work on such AI risk reduction now. There should be no way to change the meanings of the words (against the same world) and arrive at a different conclusion; that only happens if you are exercising rationalization and rhetoric. In electromagnetism, if you change the right-hand rule to a left-hand rule, every conclusion stays the same; in reasoning, if you wiggle what counts as ‘alternatives’, that should not change the conclusion either.
This concludes our discussion. Pseudologic derived from formal maxims and employing the method of collision (like in this case, colliding ‘assumption’ with ‘argument’) is too annoying.
I hope you don’t mind if I don’t reply to you any further until it’s clear whether you’re a Dmytry sockpuppet.
If an account isn’t actually Dmytry but instead just someone who thinks the same way that Dmytry does, it seems we can just treat them the same way anyhow. After all, ten people who act like Dmytry seem just as bad as Dmytry with ten accounts.
So the choice then would be whether to give the potential sockpuppet the benefit of the doubt and allow treatment of them to asymptotically approach the treatment of known Dmytry accounts, to the extent that and for as long as they make Dmytry-like posts and for as long as it looks like the comments could be anomalies. I.e. there is the expectation of either a regression to the mean or that an actual new user will be capable of learning from feedback.
If it is assumed that the accounts are sockpuppets then they immediately get treated without the benefit of doubt and with the additional loading penalty given to sockpuppets for being sockpuppets.
And also similar issues with English as a second language. I agree it’s Dmytry, but not a sockpuppet. He didn’t go to great lengths to hide that he was private_messaging. The new JaneQ account posted some posts (as opposed to comments), thanks to its positive karma balance. I figure that Dmytry wanted to make a post, so created a new account without huge negative karma (and thus be able to post). I.e. I think he’s just trying to circumvent the karma system, not deceive people.
I think he’s just trying to circumvent the karma system
“Just”.
People are welcome to abandon an account when they realize they have irrevocably destroyed their reputation and wish to start again and try not being an asshat. They are not welcome to use multiple accounts to subvert the karma system.
I agree it’s Dmytry, but not a sockpuppet.
Please review the context. You will notice that gwern is arguing with two ‘people’ in this discussion. Both of them, by your own prediction, are Dmytry. Using multiple accounts to support each other in a single argument is exactly what ‘sockpuppetry’ is all about. To put it mildly: I don’t like it.
Yeah, I’ve been considering that theory for a while myself, as they share a few salient characteristics, but I’ve been unable to work out to my satisfaction what evidence would make it clear one way or the other. I’d be interested in your thoughts on the subject.
Ah well, those can’t be the singularitarians he’s talking about then. He doesn’t name any names, leaving it to Anonymous to do so, then responds by saying “I wasn’t going to name names, but...” and then continuing not to name names. I predict a no true Scotsman path of retreat if you take your argument to him.
It’s not clear to me how approaching your response with an assumption of bad faith will convince him or his readers of the correctness of your position. Let us know how it works out for you.
I’m not assuming bad faith, just observing a lack of specifics about who he is talking about. But I’m not intending to make any response there, not being as informed as, say, ciphergoth on the SI’s position.
I would have preferred that he use my (even more passive-aggressive) approach, which is to say, “I’m not going to name any names[1]”, and then have a footnote saying “[1] A ‘name’ is an identifier used to reference a proper noun. An example of a name might be ‘Singularity Institute’.”
Get it? You’re not “naming names”, you’re just giving an example of a name in the exact neighborhood of the accusation! Tee hee!
(Of course, it’s even better if you actually make the accusation directly, but that’s obviously not an option here.)
This is a terrible misrepresentation. SI does not argue for donations on these grounds; Eliezer and other SI staff have explicitly rejected such Pascalian reasons, but instead argued that the risks that they wish to avert are quite probable.
Then it constitutes a serious PR problem.
Or is at least a symptom of bad PR.
Really, last time I checked Eliezer was refusing to name either a probability or a time scale.
I’m not seeing how you get from “doesn’t state an explicit probability or timescale publicly” to “argues that SI should be supported on Pascalian grounds”.
It looked like just a response to you saying “instead argued that the risks that they wish to avert are quite probable.”
In Pascal’s original wager the ‘risk’ is 50%, due to our natural tendency to see two alternatives, once presented in A vs B form, as closer to even odds when information is absent. That’s what the wager is about: screwing up probabilities in the absence of knowledge of how to calculate them, and arguing that something is quite probable when it’s not quite probable.
You are grossly misinformed.
Er. We’re talking about Pascal’s wager, right? The one published in Pensees? The one which explicitly invokes infinities, where it doesn’t matter if the odds are 1 to 1 or 1,000,000,000 to 1, the argument still goes through?
The point is that it is still a Pascal’s wager even if you have mis-estimated probabilities and argued that it is actually likely that God exists.
In case of SI, even if we assume that risk exists it is still the case that one is to donate to group of people whom, in all likelihood, are entirely incapable of affecting the risk in any way what so ever (and are only offering risk reduction due to their incompetence combined with Dunning-Kruger effect. It never happened in the history that the first people to take money for cure would be anything but either self deluded or confidence tricksters)
You realize DK is a narrow effect which only obtains in certain conditions, is still controversial, and invoking it just makes you look like you’ll grab at anything at all, no matter how dubious, in order to attack SI, right? (About on the same level as ‘Hitler was an atheist!’)
Seriously. In no area of research, medicine, engineering, or whatever has the first group to tackle a problem ever succeeded? Such a world would be far poorer than the one we actually live in, and still stuck in the Dark Ages. I realize this may be a hard concept, but sometimes the first person to tackle a problem succeeds! In fact, sometimes multiple people tackling the problem all succeed simultaneously! (This is very common; it’s called multiple discovery.)
Not every problem is as hard as fusion; or, to put it another way, most hard problems are made of other, easier problems. If your hyperbolic statement were true, no progress would ever be made.
The Dunning-Kruger effect is likely the product of some general deficiency in the meta-reasoning faculty, leading both to failures of reasoning itself and to failures in evaluating that reasoning; it is extremely relevant to people who proclaim themselves more rational, more moral, and so on than anyone else, yet do not seem to accomplish better than mediocre performance at fairly trivial but quantifiable things.
Ghmm. He said first people to take money, not first people to tackle.
The first people to explain the universe (and take some contributions for it) produced something of negative value; nearly all of medicine until the last couple hundred years was not merely ineffective but actively harmful; and so on.
If you look at very narrow definitions, then of course the first to tackle building a nuclear bomb succeeded; but the first to tackle the general problem of weapons of mass destruction were various shamans sending curses. If saving people from AI is an easy problem, then we’ll survive without SI; if it’s a hard problem, SI at any rate doesn’t start with a letter from Einstein to the government, it starts with a person with no quantifiable accomplishments cleverly employing himself. As far as I am concerned, there’s literally no case for donations here; the donations happen through a sort of decision noise, similar to how NASA spent millions on various antigravity devices, power companies spent millions on putting electrons into hydrogen orbitals below the ground state (see Mills’ hydrinos), and millions were invested in Steorn’s magnetic engine.
Speaking of yourself in the third person?
Dmytry, you are abusing sockpuppet accounts. Your use of private_messaging was questionable, but at least you declared it as an alias early on. Right now you are using a sockpuppet with the intent to deceive.
“Ghmm” is a hapax legomenon used solely by private_messaging/Dmytry and JaneQ.
Ah, nice. I’ve been looking for a phrase/word with this meaning.
“Ghmm” was the phrase that made me certain. Before that I found “edit: i.e.” and “etc etc etc”, but these were not quite as unique. (PB page)
You know, I’ve been wondering about that for a while now, but it never occurred to me to look for hapaxes (nor had I been aware of the phrase). I have just learned a new technique, for which I have learned a cool new word, which helps solve a problem I actually had. If I endorsed upvoting multiple times, I would upvote you multiple times; as it is, you’ll have to settle for an upvote and my gratitude.
It’s just one of the little-known advantages to reading critical analysis of classical or Biblical literature! Although I’d be hard-pressed to name a second advantage.
The embarrassing thing, now that you mention it, is that I’m acquainted with this technique in that context (identifying source texts and common authors and so forth) but it still didn’t occur to me to apply it here, despite it being the same problem even at a surface level.
(sigh) Corrupted hardware sucks. It ain’t the things I don’t know that irritate me. It’s not even the things I do know that just ain’t so. It’s the things I know, that are so, and that somehow don’t present themselves to be reasoned with when I need them.
That seems unlikely. Leading both?
Mediocrity is sufficient to push them entirely out of the DK gap; your thinking DK applies is just another example of what I mean by these being fragile, easily over-interpreted results.
(Besides the blatant misapplication, please keep in mind that even if DK had been verified by meta-analysis of dozens of laboratory studies, which it has not, that would still give only a roughly 75% chance that the effect applies outside the lab.)
Without specifics, one cannot argue against that.
So you’re just engaged in reference class tennis. (‘No, you’re wrong because the right reference class is magicians!’)
Seems straightforward to me: Eliezer’s unwarranted self-importance resulted in him not pursuing an education, or for that matter proper self-education, and simultaneously in believing he’s awesome and selling existential-risk reduction that nobody else would sell. edit: The alternative explanation is a level of resistance to self-deception so high that the process of self-education transcended the need to seek objective feedback on one’s progress (which one gets if one e.g. tries to prove mathematical theorems, since there an unintelligent process of proof-checking can validate one’s powers of reasoning).
Did it ever occur to you that one has to actually do something incompatible with the broad reference class to get into a much, much smaller reference class? E.g. you are in the reference class ‘people’, not the reference class ‘people with IQ>=150’, unless you take an IQ test or some other test with a very low false-positive rate. Likewise, the reference class is ‘people with grand promises’ until you actually do something that moves you into the microscopic subclass of ‘people with grand promises who deliver’.
Suppose one were to grant that for Eliezer. Out of curiosity, I would be interested in hearing how Nick Bostrom & FHI are similarly deluded and in the reference class of magicians.
Speculation about sufficiently advanced future technologies is indistinguishable from magical thinking.
Unless there is scientific method in what you’re doing, and unless you’re producing something testable and testing it, you are certainly not in the reference class of scientists. Unless there is plenty of rigour, you are not in the reference class of people using mathematical methods (even if you have formulas in your papers). That is not up for grabs here either. Maybe the reference class is philosophers; if you wish, philosophers with PhDs.
As they do tend to honestly make actual arguments, rather than just trying to manipulate, for profit, the people who already agree with the core ideas, one can examine the actual argumentation. Which is not very good. E.g. his simulation argument looks watertight at first glance but really relies on assumptions about physics (suppose MWI is correct; now the counting patently doesn’t work for probabilities), and it does not account for the potentially enormous number of simulated beings who can easily tell their reality is simulated because the simulator cuts corners. A typical example of philosophy: making arguments that seem true merely for lack of alternative propositions to made-up assertions. You only spend time formalizing such stuff if you can’t see that you are building false precision, making far too many assumptions for the results to be meaningful in any way (and in a field where intuition is unlikely to work, too). If you don’t notice that you are making a lot of extremely shaky assumptions while you are making them, you will be susceptible to generating false precision, precisely as per Nick Szabo’s article.
Their musings about the singularity are in precise agreement with the hypothesis that at the current point in time it is early enough that the only people ‘working’ on this are those who for some reason fail to see when they aren’t making progress. I’m pretty sure this whole movement will look very silly in 100 years: there will be dangers they did not see, and there won’t be the dangers they focused on.
What is the correct counting for MWI, exactly?
I think you are misunderstanding the SA, which is surprising since it’s formally pretty simple.
The SA is a trilemma; finding evidence that strongly supports one leg of the trilemma is not a problem with the trilemma itself. It’s just a reason for you to bite a particular bullet: “the SA says ‘either X or Y or Z’, and here our reality looks like a cut-rate simulation, so I guess Z was right after all!”
So the trilemma remains ‘watertight’ even if the specific paper, in enumerating reasons to believe X (or Y, or Z), fails to cover some favorite bit of reasoning of yours. The reasoning still fits inside the trilemma framework and is not an argument against the framework itself.
(My own impression is that your cut-rate suggestion wouldn’t go very far, since it’s not clear, to me anyway, what a cut-rate simulation would look like, or whether our own universe is not cut-rate. One could validly argue that our failure to find good clear evidence that we’re in a simulation is evidence against being in a simulation, but quantifying how much evidence this would be is even harder. And given how many crude simulations we run for science & business & pleasure, and the Fermi paradox, it seems especially unlikely that this point is strong enough to move someone from biting the we’re-in-a-simulation bullet to biting one of the other bullets.)
Everyone looks silly from 100 years on. That’s not a useful point to make.
MWI: we don’t know what it is that works, but we can tell when something doesn’t work. Probabilities don’t seem to work out if you just count distinct observers. Plus, the number of distinct observers grows very rapidly with time, so you get an extreme case of the doomsday paradox. If you aren’t just counting distinct observers but count copies twice, then your probabilities could just as well depend on e.g. the thickness of the wires in the computer, not just the raw number of simulated realities.
Furthermore, and more significantly, under MWI it is not even clear what the first two statements could mean.
We are discussing Nick Bostrom, and I take http://en.wikipedia.org/wiki/Nick_Bostrom to be at least somewhat representative of his contribution to the simulation argument.
The trilemma as stated is:
1. No civilization will reach a level of technological maturity capable of producing simulated realities.
2. No civilization reaching aforementioned technological status will produce a significant number of simulated realities, for any of a number of reasons, such as diversion of computational processing power for other tasks, ethical considerations of holding entities captive in simulated realities, etc.
3. Any entities with our general set of experiences are almost certainly living in a simulation.
I assumed that the last statement is to be taken as ‘we should expect to be in a sim if the first two conditions are false, given our general set of experiences’, by the assumption of at least rudimentary relevance of this trilemma.
In that case there is a fourth possibility, with probability overwhelmingly higher than that of this entire argument: that the wild guess (that there would ever be a good reason to believe we should be among the most numerous, with the same weight for the real thing and the simulation, or the same weight for different types of simulator) is simply not spot on. This is also corroborated by our not being among those in weird sims of any kind (we’d detect a god speaking to us every day).
Furthermore, the distinction between a perfect simulation and reality strikes me as nonsensical. Until there is a measurement showing that we are in a simulation, we may most sensibly assume we are in both (here merely drawing inspiration from the sort of intuitions we might have if we believed in MWI). And the probability of a measurement showing that we are in a simulation has an exceptionally good chance of being a much more complicated matter than assumed.
edit: To clarify, my point is that even putting this sort of stuff into words or equations is a great example of the false precision that Nick Szabo complains about. Too many assumptions have to be made, without noticing that they are being made, for the statements to have any meaning at all.
Those who aren’t grossly wrong (Newton, for example) don’t look as silly as the kind of silly I am speaking of.
Then we can pardon Bostrom for not taking them into account.
Wikipedia is pretty bad on philosophy (the SEP is much better), and in this case, there’s no reason not to read Bostrom’s original paper and the correction: he writes clearly, and they are readily available on his website.
What? Could you write this more clearly, I have no idea what you’re trying to say.
Newton is actually a great example; if you don’t choose to ignore the areas which make him look bad, he looks like an incredible fool in many respects. His constant pursuit of alchemy even then was the object of derision, and while we no longer would hang or exile him for his bizarre theology & eschatology, we (even most Christians) would regard them as hilarious. Then there are his priority disputes...
If even Newton looks this foolish, what hope can the rest of us have? No, the suggestion ‘would this make me look foolish in 100 years?’ does us no good in practice.
When pondering the possibilities of where we live, given the lack of a grand unified theory ‘of everything’, you can’t assume your physical intuitions hold true; in fact, you should assume the opposite. MWI is just an example of how a philosophical argument that looks entirely unassailable admits defeat from possible physics, in a way in which the mathematics that philosophy mimics does not. That means the validity of the argument requires the validity of the intuitions, which are very unlikely to be valid in any grand sense. There’s also a historical example: a lot of philosophers assumed Euclidean geometry was the only logically possible kind of geometry, without even noticing that they were making such an assumption, up until mathematicians came up with an alternative.
You assume that the probability of being in either group depends on the number of yous within that group (rather than on something entirely different), in order to do anthropic reasoning beyond the tautological ‘we can’t observe universes in which we can’t be alive’. In my opinion this is a wild guess over unknowns; totally false precision.
I came up with a clearer example of how something totally different may actually make a lot more sense: Solomonoff induction on codes that model various universes. A model of the universe outputting the data that matches your internal self-perception must not only contain you, but must also include the code that locates you within that model, so that the output begins with your sense data. It is then clear that the code that picks one of the yous out of a model full of all sorts of simulations may easily be longer than the code that picks you out of a world where there is just one you but the number of various others is much smaller.
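Here is a toy sketch of what I mean; the ‘program lengths’ below are numbers I made up purely for illustration, not real complexities of anything:

```python
# Toy illustration of weighting an observer by the sum of 2^-l over the lengths l
# of programs that locate that observer inside a world-model (Solomonoff-style).
# Every number here is invented for illustration; nothing is a real Kolmogorov
# complexity of anything.

def observer_weight(locator_lengths):
    """Sum of 2^-l over the lengths of the programs that pick the observer out of the model."""
    return sum(2.0 ** -length for length in locator_lengths)

# Hypothetical world A: one copy of you, few other observers, so a short locator suffices.
weight_world_a = observer_weight([20])

# Hypothetical world B: a model stuffed with simulations; each program that finds
# one of the many copies of you also has to encode which copy, so every locator is longer.
weight_world_b = observer_weight([35 + k for k in range(1000)])

print(weight_world_a, weight_world_b)
# With these made-up lengths, world A dominates despite containing far fewer copies
# of you: the raw count of copies is not the only thing such a prior can depend on.
```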
I’m not arguing that Bostrom is bad for a philosopher. I am outlining how, in philosophy, you just make all sorts of assumptions without noticing that they are wild guesses, and how the field is basically built on false precision. I.e. you assume that the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous, which is just a wild guess. Connect together half a dozen implicit wild guesses, each with an overly generous 1-in-100 likelihood of being correct, and we’re speaking of a probability of correctness in the range of 10^-12. Philosophy is generally like this.
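(Spelling out that arithmetic, and treating the half-dozen guesses as independent, which is of course one more assumption:)

$$\left(10^{-2}\right)^{6} = 10^{-12}.$$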
I believe this falls under Nick Szabo’s complaint about false precision.
Also, a paper by Steven Weinberg on the usefulness of philosophy.
It seems to me that funding philosophical work in this field may actually be actively harmful, due to the establishment of such false precision and prejudices. It’s like funding ‘embracing bias and imprecision, and making your mind up before checking where the mathematics will lead you’.
That’s the whole point: a very low probability of being right. There’s a crucial difference: the methods Newton employed managed to achieve a non-zero (and not negligible) truth-finding rate, so he made something that does not look silly. Even so, most of his stuff was quite seriously wrong.
Do you think it’s “generous” to assign only 99% probability to the negation of “the probability of being within some huge set full of yous and non-yous is independent of the number of non-yous”, where “you” is interpreted to include all your observations? That seems like insane overconfidence in a view that goes haywire in simple finite discrete cases.
Typical philosophy: tearing down strawman alternatives to prove a wild guess. (Why strawman: because the probability could also depend on something else entirely, like the positions of the copies.)
Also, the 99% confidence is not in dependence on non-yous (and yous, but nothing else); the 99% confidence is that the wild guess (independence from everything else, plus dependence on the count of yous) is wrong. Also, consider two computer circuits right next to each other, running an identical you, separated by a thin layer of dielectric. Remove the dielectric: one copy, with thicker wires. Conclusion: the probability may depend on the thickness of a copy’s wires, or maybe on the speed of the copy.
Hell, the probability of being in a specific copy may just as well be entirely undefined until a copy figures out which copy it is, and then depend solely on how that was figured out.
Let’s suppose that the probability of being sampled out of a model is the sum of 2^-l over all codes that pluck you out of the model, as in Solomonoff induction. It may well depend on the presence or absence of stone dummies (provided those break some simple method of locating you). It will definitely depend on your position. Go show it broken.
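In symbols, with ad hoc notation of my own (l(p) is the length of a locator code p, and M is the model), the proposal is roughly:

$$P(\text{being you in } M)\ \propto\ \sum_{p\,:\,p(M)\,=\,\text{your sense data}} 2^{-l(p)}.$$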
edit: actually, this alternative distribution for observers (and observer-moments) based on a Solomonoff-type prior has been proposed here before by Wei_Dai, and has also been mentioned by Marcus Hutter. I’m not at all impressed by Nick Bostrom, that’s the point, or by philosophy for that matter. The conclusions of philosophers, given the relative uselessness of philosophy compared to science, ought to be taken as very low-grade evidence.
Thank you for linking to Hutter’s talk, what an astounding mind. What a small world it is, I remember being impressed by him when I sat through his courses back at grad school, little knowing how much of my future perspective on map-building would eventually depend on his and his colleagues’ school of thought.
That presentation should be mandatory reading. In all Everett branches.
What is the reasonable probability you think I should assign to the proposition, put forward by some bunch of guys (with at most some accomplishments in the highly non-gradable field of philosophy) led by a person with no formal education, no prior job experience, and no quantifiable accomplishments, that they should be given money to hire more people to develop their ideas on how to save the world from a danger they are most adept at seeing? The prior here is so laughably low that you can hardly find a study so flawed it wouldn’t be a vastly better explanation for SI’s behavior than its mission statement taken at face value, even before we take SI’s prior record into account.
The reference class is not up for grabs. If you want a narrower reference class, you need to substantiate why it should be so narrow.
edit: Actually, sorry if that comes across as unnecessarily harsh. But do you recognize that SI genuinely has a huge credibility problem?
Donations to SI only make sense if we assume SI has an extremely rare ability to improve our survival odds against the technological risks. Low priors for anything extremely rare are a tautology, not an opinion. The lack of other alternatives is evidence against SI’s cause.
What is this, the second coming of C.S. Lewis and his trilemma? SI must either be completely right and demi-gods who will save us all or they must be deluded fools who suffer from some psychological bias—can you really think of no intermediates between ‘saviors of humanity’ and ‘deluded fools who cannot possibly do any good’, which might apply?
I just wanted to point out that invoking DK is an incredible abuse of psychological research and does not reflect well on either you or Dymtry, and now you want me to justify SI entirely...
Alternatives would also be evidence against donating, since what makes you think SI is the best out of all the alternatives? Curious how, either way, one should not donate!
No, it comes out of what SI members claim about themselves and their methods: better than science, we are more rational, etc etc etc. That really drives down the probability of anything in the middle between the claimed excellence and the incompetence compatible with the fact of making those claims (you need sufficient incompetence to claim extreme competence). If they didn’t want that sort of dichotomy, they should have kept their extreme arrogance from surfacing. (Or, alternatively, they wanted this dichotomy, to drive some people into fallacies from politeness.)
Do you have a disagreement besides fairly stupid rhetoric? The lack of alternatives is genuinely evidence against SI’s cause, whereas the presence of alternatives would genuinely make it unlikely that any particular one of them is necessary. Yep, it’s very curious, and very inconvenient for you; the logic is sometimes impeccably against what you like. Without some seriously solid evidence in favour of SI, it is a Pascalian wager, as the chance of SI making a difference is small.
I’ll rephrase: your argument from alternatives is as much bullshit as invoking Dunning-Kruger. An argument and its opposite cannot both lead to the same conclusion unless the argument is completely irrelevant to that conclusion. If alternatives matter at all, there must be some number of alternatives which reflects better on SI than the other numbers.
It’s not an argument and its opposite. One of the assumptions in either argument is ‘opposite’; that could make the distinction between those two assumptions irrelevant, but the arguments themselves remain very relevant.
I take as the alternatives everyone who could have worked on AI risk but didn’t, because I consider not working on AI risk now to be an alternative. Some other people take as the alternatives the people working on precisely the kind of AI-risk reduction that SI works on, in which case the absence of alternatives, under this meaning of ‘alternatives’, is evidence against SI’s cause, that is, against the idea that one should work on such AI-risk reduction now. There should be no way you can change the meanings of the words, holding the world fixed, and arrive at a different conclusion; that only happens if you are exercising rationalization and rhetoric. In electromagnetism, if you change the right-hand rule to the left-hand rule, every conclusion stays the same; in reasoning, if you wiggle what counts as ‘alternatives’, that should not change the conclusion either.
This concludes our discussion. Pseudologic derived from formal maxims and employing the method of collision (like in this case, colliding ‘assumption’ with ‘argument’) is too annoying.
I hope you don’t mind if I don’t reply to you any further until it’s clear whether you’re a Dmytry sockpuppet.
If an account isn’t actually Dmytry but instead just someone who thinks the same way that Dmytry does, it seems we can just treat them the same way anyhow. After all, ten people who act like Dmytry seem just as bad as Dmytry with ten accounts.
They could just think the same way and be borrowing vocabulary and ideas without actually being as bad as him.
So the choice then would be whether to give the potential sockpuppet the benefit of the doubt and allow treatment of them to asymptotically approach the treatment of known Dmytry accounts, to the extent that, and for as long as, they make Dmytry-like posts and it looks like the comments could be anomalies. I.e. there is the expectation either of a regression to the mean or that an actual new user will be capable of learning from feedback.
If it is assumed that the accounts are sockpuppets, then they immediately get treated without the benefit of the doubt, and with the additional penalty loaded onto sockpuppets for being sockpuppets.
My comment on Dunning-Kruger effect is the second highest ranked comment in my post history or so.
Also: this thread is too weird.
And also similar issues with English as a second language. I agree it’s Dmytry, but not a sockpuppet: he didn’t go to great lengths to hide that he was private_messaging. The new JaneQ account made some posts (as opposed to comments), thanks to its positive karma balance. I figure Dmytry wanted to make a post, so he created a new account without a huge negative karma balance (and thus was able to post). I.e. I think he’s just trying to circumvent the karma system, not to deceive people.
“Just”.
People are welcome to abandon an account when they realize they have irrevocably destroyed their reputation and wish to start again and try not being an asshat. They are not welcome to use multiple accounts to subvert the karma system.
Please review the context. You will notice that gwern is arguing with two ‘people’ in this discussion. Both of them, by your own prediction, are Dmytry. Using multiple accounts to support each other in a single argument is exactly what ‘sockpuppetry’ is all about. To put it mildly: I don’t like it.
You’re right, I missed that.
Yeah, I’ve been considering that theory for a while myself, as they share a few salient characteristics, but I’ve been unable to work out to my satisfaction what evidence would make it clear one way or the other. I’d be interested in your thoughts on the subject.
Let’s avoid inflationary use of Pascal’s wager.
Ah well, those can’t be the singularitarians he’s talking about then. He doesn’t name any names, leaving it to Anonymous to do so, then responds by saying “I wasn’t going to name names, but...” and then continuing not to name names. I predict a no true Scotsman path of retreat if you take your argument to him.
It’s not clear to me how approaching your response with an assumption of bad faith will convince him or his readers of the correctness of your position. Let us know how it works out for you.
I’m not assuming bad faith, just observing a lack of specifics about who he is talking about. But I’m not intending to make any response there, not being as informed as, say, ciphergoth on the SI’s position.
I would have preferred that he use my (even more passive-aggressive) approach, which is to say, “I’m not going to name any names[1]”, and then have a footnote saying “[1] A ‘name’ is an identifier used to reference a proper noun. An example of a name might be ‘Singularity Institute’.”
Get it? You’re not “naming names”, you’re just giving an example of a name in the exact neighborhood of the accusation! Tee hee!
(Of course, it’s even better if you actually make the accusation directly, but that’s obviously not an option here.)