An example here is the treatment and use of MWI (a.k.a. the “many-worlds interpretation”) and the conclusions, arguments, and further estimations based on it. No doubt MWI is the only consistent non-magical interpretation of quantum mechanics. But that’s all it is: an interpretation. A logically consistent deduction. Or should I rather call it an induction, since the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that’s beside the point. The problem here is that such conclusions are, I believe, widely considered weak evidence on which to base further speculations and estimations.
Reading the QM sequence (someone link) will show you that, to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.
What I’m trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponential, evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not on firm ground.
Actually, now that I read this paragraph, it sounds like you think that “exponential”, “evolving” AI is an unsupported premise, rather than “AI go FOOM” being the conclusion of a lot of other disjunctive lines of reasoning. That explains a lot about the tone of this post. And if you’re calling it “exponential” or “evolving”, which are both things the reasoning would specifically deny (it’s supposed to be faster-than-exponential and have nothing to do with natural selection), then you probably haven’t read the supporting arguments. Read the FOOM debate.
Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who has figured all this out? The only person who’s aware of something that might shatter the utility of the universe, if not the multiverse? Why is it that people like Vernor Vinge, Charles Stross, or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI?
After reading enough sequences you’ll pick up enough of a general sense of what it means to treat a thesis analytically, analyze it modularly, and regard every detail as burdensome, that you’ll understand why people here would mention Bostrom or Hanson instead. The sort of thinking where you take things apart into pieces and analyze each piece is very rare, and anyone who doesn’t do it isn’t treated by us as a commensurable voice with those who do. Also, someone link an explanation of pluralistic ignorance and bystander apathy.
I’m talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all those claims for no particular reason. Rather they tell me that there are too many open questions to worry about the possibilities depicted on this site and by the SIAI rather than other near-term risks that might very well wipe us out.
An argument which makes sense emotionally (ambiguity aversion, someone link to hyperbolic discounting, link to scope insensitivity for the concept of warm glow) but not analytically (the expected utility intervals are huge, research often has long lead times).
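To make the “analytically” side concrete, here is a minimal sketch with purely hypothetical numbers (none of them are anyone’s actual estimates): even a small probability attached to an astronomically large outcome can dominate a certain near-term payoff in expectation, which is why the expected utility interval is so huge.

```python
# Minimal sketch with purely hypothetical numbers -- not anyone's actual estimates.
# The point: a small probability of an astronomically large outcome can still
# dominate a certain near-term payoff in expectation.

p_donation_matters = 1e-6   # hypothetical chance a donation changes the outcome at all
value_at_stake = 1e15       # hypothetical value (arbitrary utility units) if it does
near_term_payoff = 1e3      # hypothetical certain value of spending the money elsewhere

ev_donation = p_donation_matters * value_at_stake  # 1e9 in the same arbitrary units
ev_near_term = near_term_payoff                    # 1e3

print(ev_donation, ev_near_term, ev_donation > ev_near_term)
```

The specific numbers are placeholders; the qualitative point is how wide the interval is once large stakes enter the calculation.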
I believe that hard-SF authors certainly know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have picked Greg Egan. That’s beside the point though; it’s not just Stross or Egan but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths, or do they maybe somehow teach but not use their own methods of reasoning and decision making?
Good reasoning is very rare, and it only takes a single mistake to derail. “Teach but not use” is extremely common. You might as well ask “Why aren’t there other sites with the same sort of content as LW?” Read enough, and either you’ll pick up a visceral sense of the quality of reasoning being higher than anything you’ve ever seen before, or you’ll be able to follow the object-level arguments well enough that you don’t worry about other sources casually contradicting them based on shallower examinations, or, well, you won’t.
What do you expect me to do? Just believe Eliezer Yudkowsky? Like I believed so much in the past which made sense but turned out to be wrong? And besides, my psychic condition wouldn’t allow me to devote all my resources to the SIAI, or even a substantial amount of my income. The thought makes me reluctant to give anything at all.
Start out with a recurring Paypal donation that doesn’t hurt, let it fade into the background, consider doing more after the first stream no longer takes a psychic effort, don’t try to make any commitment now or think about it now in order to avoid straining your willpower.
Maybe after a few years of study I’ll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I’d have some fun.
I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.
I haven’t done the work to understand MWI yet, but if this FAQ is accurate, almost nobody likes the Copenhagen interpretation (observers are SPECIAL) and a supermajority of “cosmologists and quantum field theorists” think MWI is true.
Since MWI seems to have no practical impact on my decision making, this is good enough for me. Also, Feynman likes it :)
Thanks for taking the time to give a direct answer. I enjoyed reading this, and these replies will likely serve as useful comments when people ask similar questions in the future.
Quantum Mechanics Sequence
Pluralistic Ignorance
Bystander Apathy
Scope Insensitivity
No bystander apathy here!
The relevant fallacy in ‘Aristotelian’ logic is probably false dilemma, though there are a few others in the neighborhood.
Probably black-and-white thinking.