I did the survey.
I felt that I had to leave blank some of the questions that ask for a probability number, because no answer that complies with the instructions would be right. For instance, I consider the “Many Worlds” hypothesis to be effectively meaningless, since while it does describe a set of plausible alleged facts, there is, as far as I know, no possible experiment that could falsify it. (“Supernatural” is also effectively meaningless, but for a different reason: vagueness. “Magic”, to me, describes only situations where Clarke’s Third Law applies. And so forth.)
I would like to participate in a deeper discussion of the idea of the Singularity, but don’t know if that’s welcome on LW. I want to attack the idea on several levels: (1) the definition of it, which may be too vague to be falsifiable; (2) the definition of intelligence—I don’t think we’re talking about a mere chess-playing computer, but it’s not clear to me whether Minsky’s criteria are sufficient; (3) if those first two points are somehow nailed down, then I’m not at all sure that a machine intelligence is desirable, and certainly I’d hesitate to connect one to hardware with enough abilities that the revolution in “I, Robot” becomes possible; and (4) if such a change does happen, I would prefer, and I think most people would insist, that it happen relatively slowly to give everyone then alive time to cope with the change, thus making it not really a singularity in the mathematical sense.
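To make the mathematical point in (4) concrete, here is a minimal illustration; the hyperbolic growth law is assumed purely for the sake of the example, not taken from any particular singularity model:

\[
\frac{dx}{dt} = x^2, \quad x(0) = x_0 > 0 \quad\Longrightarrow\quad x(t) = \frac{x_0}{1 - x_0 t},
\]

which diverges in finite time, at \(t = 1/x_0\). That finite-time blowup is what a singularity means mathematically; merely exponential growth, \(dx/dt = kx\), stays finite for all \(t\), so a gradual transition would not be a singularity in this sense.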
(I do like the transhumanist notion that humans should feel free to modify our own hardware individually, but I don’t see that as necessarily connected with a Singularity, and I don’t use the jargon of transhumanism for the same reason I avoid the jargon of anarchism when talking politics—it scares people needlessly.)
I left both MIRI questions blank because I don’t know who or what MIRI is.
Re. The Great Stagnation: This theory asserts that we are in an economic stall, if you will, because of a lack of innovation, and is set against the assertion of a “Great Divergence” in which rising income inequality and globalization are to blame for the stall. I didn’t answer because I consider both views to be baloney—we are in an economic stall because of unnecessary and crony-driven overregulation, much of it done in the name of the misguided green and “social justice” movements.
I didn’t do the finger length questions; not sure what “the bottom crease” is, or maybe I don’t have them. (Do you mean the crease at the base of the fingers, or one farther down on the hand?)
Re. feminism, I answered based on what I believe the current use of the term is, which is not at all like the definition on Wikipedia. Wikipedia calls it more or less pro-equality and I support that, but the current usage is more like “social justice” and that whole concept is complete hooey.
You should be able to find a lot of info about the Singularity (and proposed ways to influence its outcome) in MIRI publications and LW posts. If you want to have further discussions about the Singularity you can comment below the relevant LW posts.
It’s supposed to refer to the crease at the base of the fingers.
Why was I downvoted? Was that from you, jdgalt? Were you hoping to have the Singularity discussion here instead of below another post? If so, that wasn’t clear to me from your above comment, since you were asking whether it was welcome on LW, and you seemed to be going off on a tangent (particularly with your latter two points). Also, you didn’t seem to have much background knowledge regarding intelligence explosion and friendly/unfriendly AI, so I thought you would find it helpful for me to point you toward some relevant sources that might answer your questions and provide more general information on the topic. Of course, if you’re not interested in general information, I’d be willing to address your specific questions.
Sorry, I’m not trying to be confrontational; I just want to understand what I did wrong so that I can improve the quality of my comments and clear up any misunderstandings.
It seems that I was the one who downvoted you, but now I don’t remember why. I’ve retracted it for now, since I don’t see anything wrong with the comment. May have just been a clicking error.
Thank you.
I agree with this position (that any such change should happen slowly enough for people to cope, point 4 above), and it was apparently controversial on the LW-TelAviv mailing list.
You really ought to back that up.
I’m not sure what I could post here that would back that up: it requires some economics knowledge. I can refer you to good economics blogs such as Marginal Revolution and Cafe Hayek, or to Mises’ Human Action.
It was MR that sent me here to LW in the first place.
Austrianism and “economics knowledge” do not go together. Science is built on empiricism, not on deliberately ignoring data because your ideology tells you there can be no empirical examination of human beings.
I’m sorry, but on what basis do you make such sweeping claims about schools of economic thought?
On the meaninglessness of MWI, you may find this post useful. It cleared up a lot of points for me.
It didn’t do much for me :-(
If there’s no way for me to figure out whether or not there’s a chocolate cake inside the Sun, then I might as well assume there’s no cake, because this makes the math easier. I see MWI vs. no MWI the same way, but apparently that’s the wrong answer...
What about the question of whether something not in your light cone still exists?
Also, assuming something to make the math easier does not mean that it is meaningless. Calculating it may have little utility if your utility function only counts things that physically affect you, though you would still need general rules for assigning priors to things that can only be tested in the long term but that influence decisions made in the short term.
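A minimal sketch of why an observationally inert hypothesis can drop out of decision-making (my framing of the point above, nothing more):

\[
EU(a) = P(H)\,U(a \mid H) + \bigl(1 - P(H)\bigr)\,U(a \mid \neg H).
\]

If the hypothesis \(H\) (cake in the Sun) has no consequences your utility function registers, then \(U(a \mid H) = U(a \mid \neg H)\) for every action \(a\), and \(P(H)\) cancels out of \(EU(a)\) entirely: no choice depends on it. The caveat is exactly the one above: if \(H\) does influence some short-term decision through long-term consequences, the conditional utilities differ and the prior \(P(H)\) matters again.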
I think I see what you are trying to say, but I don’t think the Boltzmann Cake Theory is comparable to Many Worlds.
In the Boltzmann Cake case, it may be impossible to physically test the theory (though I don’t conclusively assume so; there could well be some very subtle effect on the Sun’s output that would facilitate such a test), but the question it raises is still one of objective fact.
But the truth or falsity of the Many Worlds Theory can only exist in a reference frame which spans the entire conceptual space in which the many worlds would have to coexist. And I don’t believe such a frame can exist. The very fabric of logic itself requires a space-time in which to exist; without one (or extending beyond one) its very postulates become open to doubt.