That’s odd and catches me completely off guard. I wouldn’t expect someone who seems to be deeply inside the hive to both cognize my stance as well as you have and judge that my heretofore unstated arguments might be worth hearing. Your submission history reflects what I assume: that you are on the outer edges of the hive despite an apparently deep investment.
With the forewarning that my ideas may well be hard to rid yourself of, and that you might lack the communication skills to adequately convey them to your peers, are you willing to accept the consequences of being rejected by the immune system? You’re risking becoming a “carrier” of the ideas here.
Why don’t you just post them explicitly? As long as they don’t involve modeling a vengeful far-future AI, everyone will be fine. Plus, then you can actually test to see whether they will be rejected.
Why are you convinced I haven’t posted them explicitly? Or otherwise tested the reactions of LessWrongers to my ideas? Are you under the impression that they were going to be recognized as worth thinking about and that they would be brought to your personal attention?
Let’s say I actually possess ideas with future light cones on the order of strong AI. Do you earnestly expect me to honestly send that signal and bring a ton of attention to myself? In a world of fools who want nothing more than to believe in divinity? (Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including, “Works in mysterious ways that we can’t hope to fathom.”)
I have every reason not to share my thoughts and every reason to play coy and try to get LessWrong thinking for itself. I’m getting pretty damn close to jumping ship and watching the aftermath here as it is.
I’m just trying to encourage you to make your contributions moderately interesting. I don’t really care how special you think you are.
Wow, what an interesting perspective. Never heard that before.
See, that’s the kind of stance I can appreciate. Straight to the point without any wasted energy. That’s not the majority response LessWrong gives, though. If people really wanted me to post about this, as the upvotes on the posts urging me to do so would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?
...Or is the average voter simply not cognizant enough to realize this...?
Worst effect of having sub-zero karma? Having to wait ten minutes between comments.
Not sure if sarcasm or...
Sarcasm. We get the “oh this is just like theism!” position articulated here every ten months or so. Those of us who have been here a while are kind of bored with it. (Yes, yes, yes, no doubt that simply demonstrates our inadequate levels of self-awareness and metacognition.)
What, and you just ignore it?
No, I suppose you’ll need a fuller description to see why the similarity is relevant.
LessWrong is sci-fi. Check what’s popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
These concepts straight out of sci-fi have next to zero basis. Who is to say there even are concepts that the human mind simply can’t grasp? I can’t visualize in n-dimensional space, but I can certainly understand the concept. Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be? Are generation ships feasible? Is there some way to warp space to go fast enough that you don’t need an entire ecosystem on board? If complex information-processing nanites aren’t feasible, is reanimation? These concepts aren’t new; they’ve been around for ages. It’s Magic 2.0.
If it’s not about evidence, what is it about? I’m not denying any of these possibilities, but aside from being fun ideas, we are nowhere near proving them legitimate. It’s not something people are believing in because “it only makes sense.” It’s fantasy at its base, and if it turns out to be halfway possible, great. What if it doesn’t? Is there going to be some point in the future where LessWrong lets go of these childish ideas of simulated worlds and supertechnological abilities? 100 years from now, if we don’t have AI and utility fog, is LessWrong going to give up these ideas? No. Because that just means that we’re closer to finally realizing the technology! Grow up already. This stuff isn’t reasonable, it’s just plausible, and our predictions are nothing more than mere predictions. LessWrong believes this stuff because LessWrong wants to believe in this stuff. At this moment in time, it is pure fiction.
If it’s not rationa—No, you’ve stopped following along by now. It’s not enough to point out that the ideas are pure fiction that humanity has dreamed about for ages. I can’t make an argument within the context that it’s irrational because you’ve heard it all before. What, do you just ignore it? Do you have an actual counter-point? Do you just shrug it off because “it’s obvious” and you don’t like the implications?
Seriously. Grow up. If there’s a reason for me to think LessWrong isn’t filled with children who like to believe in Magic 2.0, I’m certainly not seeing it.
It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
The human mind is finite, and there are infinitely many possible concepts. If you’re interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate.
Drexler wrote a PhD thesis which probably answers this. For discussion on LessWrong, see Is Molecular Nanotechnology “Scientific”? and How probable is Molecular Nanotech?
Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we’d need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I’d love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I’m not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don’t have the technology already. The answer is that actually translating our ideas into physical reality is non-trivial, and by direct consequence, potentially non-viable.
I need backing on both of these points. As far as I know, there isn’t enough verified neuroscience to determine whether our brains are conceptually limited in any way, primarily because we don’t actually know how abstract mental concepts map onto physical neurons. And even setting aside that (contrary to memetic citation) the adult brain does grow new neurons and repair itself, the point stands: even if the number of neurons is finite, the number of potential connections between them is astronomical. We simply don’t know the maximum conceptual complexity of the human brain.
As far as there being infinitely many concepts, “flying car” isn’t terribly more complicated than “car” and “flying.” Even if something in the far future is given a name other than “car,” we can still grasp the concept of “transportation device,” paired with any number of accessory concepts like “cup holder,” “flies,” “transforms,” “teleports,” and so on. Maybe it’s closer to a “suit” than anything we would currently call a “car”; some sort of “jetpack” or other. I’d need an expansion on “concept” before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I’m aware of indicates that things like conceptual language are incomputable or give rise to paradoxes or some other such problem that would make “infinite” simply inapplicable/nonsensical.
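To make the two quantitative questions in this exchange concrete, here is a minimal back-of-envelope sketch. It assumes rough textbook figures (about 8.6 × 10^10 neurons, with connectivity idealized as each ordered pair of neurons either linked or not) and, for the countability question, a deliberately narrow reading of “concept” as anything expressible as a finite string over a finite alphabet; neither assumption comes from the thread itself.

```latex
% Back-of-envelope sketch for the two claims above.
% Assumptions (not from the thread): n ~ 8.6e10 neurons, each ordered pair
% of neurons either connected or not; a "concept" is anything expressible
% as a finite string over a finite alphabet Sigma.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% 1. Potential connectivity is astronomical even with finitely many neurons.
% With each of the n(n-1) ordered pairs either wired or not, the number of
% distinct wiring patterns is:
\[
  2^{\,n(n-1)} \approx 2^{\,7.4 \times 10^{21}} \approx 10^{\,2.2 \times 10^{21}}
  \qquad \text{for } n \approx 8.6 \times 10^{10}.
\]
% Finite, but far beyond anything physically enumerable.

% 2. Countable or uncountable concept-space?
% If every communicable concept is a finite string over a finite alphabet
% \Sigma, then concept-space injects into
\[
  \Sigma^{*} \;=\; \bigcup_{k=0}^{\infty} \Sigma^{k},
\]
% a countable union of finite sets, hence countably infinite (\aleph_0).
% An uncountable concept-space would require concepts that are not
% finitely expressible at all.

\end{document}
```

On that narrow reading, concept-space is at most countably infinite, and the dispute reduces to whether a bounded system can distinguish more than finitely many of those countably many strings, not to whether the space itself is infinite.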
(nods) IOW, it merely demonstrates our inadequate levels of self-awareness and meta-cognition.
This doesn’t actually counter my argument, for two main reasons:
1. That wasn’t my argument.
2. That doesn’t counter anything.
Please don’t bother replying to me unless you’re going to actually explain something. Anything else is useless and you know it. I want to know how you justify to yourself that LessWrong is anything but childish. If you’re not willing to explain that, I’m not interested.
I don’t.
I often have conversations here that interest me, which is all the justification I need for continuing to have conversations here. If I stopped finding them interesting, I would stop spending time here.
Perhaps those conversations are childish; if so, it follows that I am interested in childish conversations. Perhaps it follows that I myself am childish. That doesn’t seem true to me, but presumably if it is, my opinions on the matter aren’t worth much.
All of that would certainly be a low-status admission, but denying it or pretending otherwise wouldn’t change the fact if it’s true. It seems more productive to pursue what interests me without worrying too much about how childish it is or isn’t, let alone worrying about demonstrating to others that I or LW meet some maturity threshold.
Few places online appreciate drama-queening, you know.
Hypothesis: the above was deliberate downvote-bait.
I’m willing to take the risk. PM or public comment as you prefer.
I would prefer public comment, or to be exposed to the information as well.
How specifically can you be surprised to hear “be specific” on LessWrong? (Because that’s more or less what Nancy said.) If nothing else, this suggests that your model of LessWrong is seriously wrong.
Giving specific examples of “LessWrong is unable to discuss X, Y, Z” is so much preferable to saying “you know… LessWrong is a hivemind… there are things you can’t think about...” without giving any specific examples.
How specifically? Easy. Because LessWrong is highly dismissive, and because I’ve been heavily signalling that I don’t have any actual arguments or criticisms. I do, obviously, but I’ve been signalling that that’s just a bluff on my part, up to and including this sentence. Nobody’s supposed to read this and think, “You know, he might actually have something that he’s not sharing.” Frankly, I’m surprised that, with all the attention this article got, I haven’t been downvoted a hell of a lot more. I’m not sure where I messed up that LessWrong isn’t hammering me and is actually bothering to ask for specifics, but you’re right; it doesn’t fit the pattern I’ve seen prior to this thread.
I’m not yet sure where the limits of LessWrong’s patience lie, but I’ve come too far to stop trying to figure that out now.
I do not represent Less Wrong, but you have crossed a limit with me. The magic moment came when I realized that BaconServ means spambot. Spammers are the people I most love to hate. I respond to their provocations with a genuine desire to find them and torture them to death. If you were any more obnoxious, I wouldn’t even be telling you this, I would just be trying to find out who you are.
So wake the fuck up. We are all real people with lives, stop wasting our time. Try to keep the words “I”, “Less Wrong”, and “signalling” out of your next two hundred comments.
ETA: This angry comment was written while under pressure and without a study of BaconServ’s full posting history, and should not be interpreted as a lucid assessment.