Imagine LessWrong started with an obsessive focus on the dangers of time-travel.
Because the writers are persuasive, there are all kinds of posts, filled with references, that are indeed very convincing on the idea that time-travel is ridiculously dangerous, will wipe out all human life, and must be stopped by every means available.
So some new quantum entanglement experiment would be treated with a kind of horror. People would breathlessly “update their horizon” as if this mattered at all. Physicists completing certain problems or working in certain areas would be mentioned by name, and some people would try to reach out to convince them of how dangerous time-travel, and their own work, is.
Meanwhile, to someone not taken in by very persuasive writing, vast holes are blindingly obvious. When those vast holes are discussed… well, they’re not discussed. They get nil traction, are ignored, aren’t treated with any seriousness.
Examples of magical thinking (they’re going to find unobtainium and that’ll be it, they’ll have a working time-machine within five years) are rife but rarely challenged.
I view a lot of LessWrong like this.
I’ll provide two examples.
1. AI will improve itself very quickly, becoming the most intelligent being that can exist and then will have the power to wipe humans out.
2. AI will be able to make massive technological jumps; here come nanites, bye humans.
For 1 - we don’t have any examples of this in nature. We have evolution over enormous timelines, which has eventually produced intelligence in humans and varying degrees of it in other species. We don’t have any strong examples of computers improving code which in turn improves code which in turn improves code. ChatGPT, for all the amazing things it can do—okay, so here’s the source code for WinZip, make compression better. I do agree “this slow thing but done faster” is possible, but the claim that self-improvement can exist at all rests on extraordinarily weak ground. Just because learning exists does not mean fundamental architecture upgrades can be made self-recursively.
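To make the distinction concrete, here is a minimal toy sketch (Python, with made-up names) of the weak kind of “code improving code” that clearly does exist: a program that tunes a single knob of its own compression routine by trial and error. It never touches its own architecture, which is the stronger claim in question.

```python
import random

# Toy illustration (hypothetical names): a program "improving itself" only in the
# weak sense of tuning a numeric knob it already has. It never rewrites its own
# architecture.

def compress_cost(data: bytes, run_threshold: int) -> int:
    """Cost of a crude run-length encoding with a tunable threshold."""
    cost, i = 0, 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        # Runs at or above the threshold are stored as (byte, count); shorter runs stay raw.
        cost += 2 if run >= run_threshold else run
        i += run
    return cost

def self_tune(data: bytes, steps: int = 200) -> int:
    """Hill-climb the threshold parameter: keep a random tweak only if it helps."""
    best = 4
    best_cost = compress_cost(data, best)
    for _ in range(steps):
        candidate = max(2, best + random.choice([-1, 1]))
        cost = compress_cost(data, candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

if __name__ == "__main__":
    sample = b"aaaabbbbbbbbccddddddddddddeeef" * 50
    print("tuned threshold:", self_tune(sample))
```

A loop like this “improves itself” only along an axis it was already handed; that is the sort of thing learning already covers, not a self-recursive architecture upgrade.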
For 2 - AI seems to always be given near godlike magical powers. It will be able to “hack” any computer system. Oh, so it worked out how to break all cryptography? It will be able to take over manufacturing to make things to kill people? How exactly? It’ll be able to work up a virus to kill all humans and then hire some lab to make it… are we really sure about this?
I wrote about the “reality of the real world” recently. So many technologies and processes aren’t written down. They’re stored in meat minds, not in patents, and embodied in plant equipment and vast, intricate supply chains. Just trying to take over Taiwan’s chip manufacturing would be near impossible because they’re so far out on the cutting edge that they jealously guard their processes.
I love sci-fi, but there are more than a few posts here that are closer to sci-fi fan fiction than to actual real problems.
The risk of humans using ChatGPT and so on to distort narratives, destroy opponents, screw with political processes and so on seems vastly more deadly and serious than the risk that an AI will self-improve and kill us all.
Going back to the idea of a LessWrong obsessed with time-travel—what would you think of such a place? It would have all the predictions, and persuasive posts, and people very dedicated to it… and they could all just be wrong.
For what it’s worth, I strongly support the premise that anything possible in nature is possible for humans to replicate with technology. X-rays exist, we learn how to make and use them. Fusion exists, we will learn how to make fusion. Intelligence/sentience/sapience exists—we will learn how to do this. But I rarely see anyone touch on the idea of “what if we only make something as smart as us?”
For 1 - we don’t have any examples of this in nature.
We don’t have any examples of steam engines, supersonic aircraft or transistors in nature either. Saying that something can’t happen because it hasn’t evolved in nature is an extraordinarily poor argument.
We do have examples of these things in nature, in degrees. Like flowers turning to the sun because they contain light-sensing cells. Thus, it exists in nature and we eventually replicate it.
A steam engine is just energy transfer and use, and that exists in nature. So does flying fast.
Something not in nature (as far as we can tell) is teleportation. Living inside a star.
I don’t mean specific narrow examples in nature. I mean the broader idea.
So I can see intelligence evolving over enormous time-frames, and learning exists, so I do concur we can speed up learning and replicate it… but the underlying idea of a being modifying itself? Nowhere in nature. No examples anywhere on any level.
Any form of learning is a being modifying itself. How else would learning occur?

You have no control down on the cellular level over your body. No deliberate conscious control. No being does. This is what I mean by “does not exist in nature”. Like teleportation.
If I do weight training, my muscles get bigger and stronger. If I take a painkiller, a toothache is reduced in severity. A vaccination gives me better resistance to some disease. All of these are myself modifying myself.
Everything you have written on this subject seems to be based on superficial appearances and analogies, with no contact with the deep structure of things.
You have no atomic level control over that. You can’t grow a cell at will or kill one or release a hormone. This is what I’m referring to. No being that exists has this level of control. We all operate far above the physical reality of our bodies.
But we suggest an AI will have atomic control. Or that code control is the same as control.
Total control would be you sitting there directing cells to grow or die or change at will.
No AI will be there modifying the circuitry it runs on down at the atomic level.
Quick very off the cuff mod note: I haven’t actually looked into the details of this thread and don’t have time today, but skimming it it looks like it’s maybe spiralling into a Demon Thread and it might be good for people to slow down and think more about what their goals are.
(If everyone involved is actually just having fun hashing an idea out, sorry for my barging in)
Your argument is fundamentally broken, because nature only contains things that happen to biologically evolve, so something first has to be the result of a specific algorithm (evolution) and also the result of a random roll of the dice (the random part of it). Even if there were no self-modifying beings in nature (humans do self-modify) and no self-modifying AI, it would still be prima facie possible for one to exist, because all it means is for the being to turn its optimization power on itself (this is prima facie possible, since the being is part of the environment).
So instead of trying to think of an argument about why something that already exists is impossible, you should’ve simply considered the general principle.
No being has cellular-level control. It can’t direct brain cells to grow or hormones to release, etc. This is what I mean by “it does not exist in nature”. The kind of self-modification people claim AI will have does not exist.
Teleportation doesn’t exist so we shouldn’t make arguments where teleportation is part of it.
No being has cellular level control. Can’t direct brain cells to grow or hormones to release etc.
Humans can already do that, albeit indirectly. Once again, you’re “explaining” why something that already exists is impossible.
It’s sufficient for a self-modifying superhuman AI that it can do that indirectly, and self-modification of source code is even easier than manipulation at the level of individual molecules.
1) True, we don’t have any examples of this in nature. Would we expect them?
Let’s say that to improve something, it is necessary and sufficient to understand it and have some means to modify it. There are plenty of examples; most of the complicated ones involve humans understanding some technology and designing a better version.
At the moment, the only minds able to understand complicated things are humans, and we haven’t got much human self improvement because neuroscience is hard.
I think it is fairly clear that there is, in practice, a large gap between humans and the theoretical/physical limits to intelligence. Evidence of this includes neuron signals traveling at around a millionth of light speed, most of the heuristics-and-biases literature, and humans just sucking at arithmetic.
AIs working on AI research is a positive feedback loop, and probably quite a strong one. It seems that, when a new positive feedback loop is introduced, the rate of progress should speed up, not slow down.
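As a toy model of what that feedback-loop claim amounts to (the numbers are purely illustrative), compare progress that adds a fixed amount of capability per year with progress whose rate scales with current capability:

```python
# Toy comparison (illustrative numbers only): constant research output vs. a
# positive feedback loop where better AI speeds up further AI research.

def linear_progress(years: int, rate: float = 1.0) -> float:
    capability = 1.0
    for _ in range(years):
        capability += rate                 # progress independent of current capability
    return capability

def feedback_progress(years: int, gain: float = 0.5) -> float:
    capability = 1.0
    for _ in range(years):
        capability += gain * capability    # progress proportional to current capability
    return capability

for y in (5, 10, 20):
    print(y, round(linear_progress(y), 1), round(feedback_progress(y), 1))
```

Whether real AI research behaves anything like the second loop is exactly the disputed point; the sketch only shows why “a new positive feedback loop” and “the rate of progress speeds up” go together.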
2) You attribute magical chess-game-winning powers to Stockfish. But how in particular would it win? Would it use its pawns, advance a knight? The answer is that I don’t know which move in chess is best. And I don’t know what Stockfish will do. But these two probability distributions are strongly correlated, in the sense that I am confident Stockfish will make one of the best moves.
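To make the Stockfish point concrete: you can hand the move choice to the engine without being able to predict it yourself. A minimal sketch, assuming the python-chess library and a Stockfish binary on your PATH:

```python
import chess
import chess.engine

# Ask Stockfish for a move without knowing ourselves what the best move is.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board()                       # starting position
result = engine.play(board, chess.engine.Limit(time=0.5))
print("Stockfish plays:", result.move)
engine.quit()
```

I cannot say in advance which move gets printed, but I can be confident it will be a strong one; that is the correlation being pointed at.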
I don’t know what an ASI will do, and I don’t know where the vulnerabilities are, but again, I think these are correlated. If an SQL injection would work best, the AI will use an SQL injection. If a buffer overflow works better, the AI will use that.
There is an idea here that modern software is complicated enough that most of it is riddled with vulnerabilities. This is a picture backed up by the existence of hacks like Stuxnet, where a big team of humans put a lot of resources and brainpower into hacking a particular “highly secure” target and succeeded.
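For readers who haven’t seen the SQL injection mentioned above, here is a minimal sketch (hypothetical table and input) of the kind of hole that string-built queries open up, and the standard fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the query is assembled by string formatting, so the quote in the
# input closes the literal and the OR clause matches every row.
vulnerable = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())        # returns all users

# Safe: a parameterized query treats the input as data, not SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # returns nothing
```

The claim in the thread isn’t that an AI needs magic, only that this class of mistake is endemic in real codebases.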
I mean, it might be that P=NP and the AI finds a quick way to factor the products of large primes. Or it might be that the AI gets really good at phishing instead.
Some of the “AI has a magic ability to hack everything” is worst case thinking. We don’t want security to rest on the assumption that the AI can’t hack a particular system.
It’ll be able to work up a virus to kill all humans and then hire some lab to make it… are we really sure about this?
(Naturally, if the AI is hiring a lab to make its kill-all-humans virus, it will have done its homework. It sets up a webpage claiming to be a small biotech startup. Claims that the virus is a prototype cancer cure. Writes plausible-looking papers talking about a similar substance reducing cancer in rats. …)
I am not confident that it will be able to do that. But there are all sorts of things it could try, from strangelets to a really convincing argument for why we should all kill ourselves. And I expect the AI to be really good at working out which approach would work.
The power of reason is that it is easier to write convincing rational arguments for true things than for false things. I don’t think there is a similarly convincing case for the dangers of time-travel. (I mean, I wouldn’t be surprised if there were some hypno-video that convinced me the earth was flat, but that’s beside the point, because such a video wouldn’t be anything like a rational argument.)
But I rarely see anyone touch on the idea of “what if we only make something as smart as us?”
But why would intelligence reach human level and then halt there? There’s no reason to think there’s some kind of barrier or upper limit at that exact point.
Even in the weird case where that were true, aren’t computers going to carry on getting faster? Just running a human-level AI on a very powerful computer would be a way of creating a human scientist that can think at 1000x speed, create duplicates of itself, and modify its own brain. That’s already a superintelligence, isn’t it?
The assumption there is that the faster the hardware underneath, the faster the sentience running on it will be. But this isn’t supported by evidence. We haven’t produced a sentient AI to know whether this is true or not.
For all we know, there may be an upper limit to “thinking” based on neural propagation of information. To understand and integrate a concept requires change, and that change may move slowly across the mind and underlying hardware.
Humans need sleep, for example, to help us learn and retain information.
As for self modification—we don’t have atomic level control over the meat we run on. A program or model doesn’t have atomic level control over its hardware. It can’t move an atom at will in its underlying circuitry to speed up processing for example. This level of control does not exist in nature in any way.
We don’t know so many things. For example, what if consciousness requires meat? That it is physically impossible on anything other than meat? We just assume it’s possible using metal and silicon.
A helpful way of thinking about 2 is imagining something less intelligent than humans trying to predict how humans will overpower it.
You could imagine a gorilla thinking “there’s no way a human could overpower us. I would just punch it if it came into my territory.”
The actual way a human would overpower it is literally impossible for the gorilla to understand (invent writing, build a global economy, invent chemistry, build a tranquilizer dart gun...)
The AI in the AI takeover scenario is that jump of intelligence and creativity above us. There’s literally no way a puny human brain could predict what tactics it would use. I’d imagine it almost definitely involves inventing new branches of science.
I’d suggest there may be an upper bound to intelligence, because intelligence is bound by time and any AI lives in time like us. It can’t gather information from the environment any faster. It cannot automatically gather all the right information. It cannot know what it does not know.
The system of information, brain propagation, cellular change runs at a certain speed for us. We cannot know if it is even possible to run faster.
One of the magical-thinking criticisms I have of AI is that it suddenly becomes virtually omniscient. Is that AI observing mold cultures and about to discover penicillin? Is it doing some extremely narrow gut-bacteria experiment to reveal the source of some disease? No it’s not. Because there are infinite experiments to run. It cannot know what it does not know. Some things require Petri dishes and long periods of time in the physical world, and a level of observation the AI may not possess.
Yes, physical constraints do impose an upper bound. However, I would be shocked if human-level intelligence were anywhere close to that upper bound. The James Webb Space Telescope has an upper bound on the level of detail it can see based on things like available photons and diffraction but it’s way beyond what we can detect with the naked human eye.
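A rough back-of-the-envelope version of that comparison, using the Rayleigh criterion θ ≈ 1.22 λ/D with approximate numbers (a ~5 mm pupil at 550 nm versus JWST’s 6.5 m mirror at 2 µm):

```python
import math

def rayleigh_limit(wavelength_m: float, aperture_m: float) -> float:
    """Approximate diffraction-limited angular resolution in arcseconds."""
    radians = 1.22 * wavelength_m / aperture_m
    return math.degrees(radians) * 3600

eye = rayleigh_limit(550e-9, 5e-3)     # human pupil, visible light
jwst = rayleigh_limit(2e-6, 6.5)       # JWST primary mirror, near-infrared
print(f"eye ~ {eye:.1f} arcsec, JWST ~ {jwst:.3f} arcsec, ratio ~ {eye / jwst:.0f}x")
```

Even this crude estimate puts the telescope hundreds of times beyond the eye’s diffraction limit, although both obey the same physical bound; “there is an upper limit” doesn’t mean “we are near it”.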
This is approximately my experience of this place.
That, and the apparent runaway cult generation machine that seems to have started.
Seriously, it is apparent that over the last few years the mental health of people involved with this space has collapsed and started producing multiple outright cults. People should stay out of this fundamentally broken epistemic environment. I come closer to expecting a Heaven’s Gate event every week when I learn about more utter insanity.
I agree. When you look up criticism of LessWrong you find plenty of very clear, pointed, and largely correct criticisms.
I used time-travel as my example because I didn’t want to upset people but really any in-group/out-group forum holding some wild ideas would have sufficed. This isn’t at Flat Earther levels yet but it’s easy to see the similarities.
There’s the unspoken things you must not say otherwise you’ll be pummeled, ignored or fought. Blatantly obvious vast holes are routinely ignored. A downvote mechanism works to push comments down.
Talking about these problems just invites the people caught up in them to try to draw you in with the same flawed arguments.
Saying “hey, take three big steps back from the picture and look again” doesn’t get anywhere.
Some of the posts I’ve seen on here are some sort of weird doom cosplay. A person being too scared to criticize Bing’s ChatGPT? Seriously? That can’t be real. It reminds me of the play-along posts I’ve seen in antivaxxer communities, in a way.
The idea of “hey, maybe you’re just totally wrong” isn’t super useful for moving anything, but it seems obvious that fan fiction about nanites and other super-techs that exist only in stories could probably be banned, and that this would improve things a lot.
But beyond that, I’m not certain this place can be saved or eventually be useful. Setting up a place proclaiming it’s about rationality is interesting and can be good but it also implicitly states that those who don’t share your view are irrational, and wrong.
As the group-think develops, any voice not in line is pushed out in all the ways a voice can be pushed out, and there’s never a make-or-break moment where people stand up and state outright that certain topics/claims are no longer permitted (like nanites killing us all).
The OP may be a canary, making a comment, but none of the responses here produced a solution or even a path forward.
I’d suggest one: you can’t write “nanite” until we make nanites. Let’s start with that.
If you link me to 1-3 criticisms which you think are clear, pointed, and largely correct, I’ll go give them a skim at least. I’m curious. You are under no obligation to do this but if you do I’ll appreciate it.
Google lesswrong criticism and you’ll find them easily enough.

...I have, and haven’t found anything good. (I’m ignoring the criticisms hosted on LessWrong itself, which presumably don’t count?) That’s why I asked for specific links. Now it sounds like you don’t actually have anything in mind that you think would stand up to minimal scrutiny.
The RationalWiki article does make some good points about LW having to reinvent the wheel sometimes due to ignorance or disparagement of the philosophical literature. As criticisms go this is extremely minor though… I say similar things about the complaints about Yudkowsky’s views on quantum physics and consciousness.
Do you have a specific criticism? I tried that search, and the first result goes right back to LessWrong itself; you could just link the same article. The second criticism is on the LessWrong subreddit. The third is RationalWiki, where apparently some thought experiment called Roko’s Basilisk got out of hand 13 years ago.
Most of the other criticisms are “it looks like a cult”, which is a perfectly fair take, and it arguably is a cult that believes in things that happen to be more true than the beliefs of most humans. Or “a lack of application” for rationality, which was also true before machine learning.