Belief in the elect. Transhumanists feel that they alone are rational enough to see the truth, to the point that some of them believe that it is better that others are not told the truth.
Incorrect. Transhumanism and rationalism are different issues. Some people self-identify as rational but not transhumanist. And outside Less Wrong, most transhumanists do not consider themselves to be rationalist in any useful sense. If you simply mean that transhumanists think they’ve recognized something as important that others have not, then any movement thinks that, whether religious or not religious.
Belief in God. Transhumanists believe that they will make a Friendly AI that will take the place of God.
Again, confusing Less Wrong attitudes about AI with general transhumanism. But even then, making a deity would be very different than having a pre-existing deity (although I see why someone of LDS background would see less distinction there). But those who want to make a Friendly AI don’t intend to worship it or the like. Moreover, many people here who don’t think that an AI is likely to foom think that it is important to figure out how to make Friendly AI because non-Friendly non-fooming AI can still do a lot of damage.
Belief in Resurrection. Transhumanists believe in Cryogenics such that when they die they will be frozen and eventually resurrected.
This is only a superficial comparison, and many people who are into cryonics (note, not cryogenics- they are different things) don’t self-identify as transhumanist. Moreover, this is very different because many of the people, even the proponents like Robin Hanson, estimate low chances of cryonics actually working. Could one imagine a major religion saying that “yeah, so if you do what we want, maybe, in a few centuries you might come back if people in that day and age get around to it?”
Belief in Immortality. Transhumanists believe in life extension to the point that eventually their bodies will be able to live indefinitely long.
There are certainly some transhumanists who think that the technology will soon be at that level. But that's not at all the same as it applying to current people. This isn't belief in immortality. This is belief that if our technology improves at a fast enough rate, then we will reach actuarial escape velocity.
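To make the "actuarial escape velocity" point concrete, here is a minimal formalization (my own notation, not anything from the original comments): let L(t) be a person's expected age at death as estimated with the medical technology available at calendar time t, and let a(t) be that person's age. Escape velocity is the condition

\frac{d}{dt}\bigl(L(t) - a(t)\bigr) > 0, \quad\text{equivalently}\quad \frac{dL}{dt} > 1,

i.e. remaining life expectancy grows even as the person ages, so the expected date of death keeps receding. It is a claim about the rate of technological progress, not a claim that anyone alive today is immortal.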
Belief in Eternal Life. Transhumanists believe that eventually uploads will be possible such that even in the case of an unfortunate accident they will be able to survive.
Well, insofar as our current understanding of the human brain is correct, and one self-identifies with all functionally equivalent copies, then they are correct.
Belief in Theosis/Exaltation. Transhumanists believe that it may become possible to become one with their AI/God
Same issues with the issue of comparing the AI to a deity as earlier.
Belief that everyone that disagrees with them is a heretic.
Really? I don’t buy into almost any of the standard transhumanist claims. I’ve got a karma over 4000. I’ve made a lot of posts criticizing cryonics, the probability that AI will go foom, and Bayesianism, and almost all of them have gotten voted up. Indeed, I just made another post about AI not going foom and it is getting voted up as we speak and is now at +4 for, frankly, not doing much at all other than raising an issue with no details. I presume from your remarks that Less Wrong is the environment we are talking about. So if so, heretics apparently are treated very well here.
What would it mean in this context for someone to be a heretic? One common idea is that heretics will suffer or not do as well in a messianic age. Well, I haven't gotten to discussing your notion of the Singularity as a messianic age (which is #9 on your list), but let's say for now that that is a good analogy. Are heretics going to suffer? No. At worst, if cryonics works, then maybe some of the less damaged people who signed up for cryonics will still be around. Those who didn't sign up will, yes, get oblivion. That last part has nothing to do with transhumanism unless transhumanism now means anyone who doesn't believe in an afterlife. That's obviously too broad. Moreover, someone who signs up for cryonics but doesn't think Friendly AI is worthwhile, or doesn't meet any of your other criteria, would be just as saved in that scenario. Now, another popular way of dealing with heretics is to kick them out of a community. That doesn't seem to happen either. I can't see any way in which anyone is being treated functionally as a heretic.
Aumann’s Agreement theorem has serious problems in it, yet it is used in arguments as though it were true.
Um, Aumann's agreement theorem is a theorem. It can't have problems. You can argue that people are misapplying it, and I'd be inclined to agree.
This in spite of the fact that they may have additional information that is influencing their priors in important ways.
Yes, but that’s exactly what Aumann says would be the expected problem among rational agents, lack of shared information, possibly some combination of information on both sides. There are, IMO, deep problems with how people apply Aumann’s theorem here, but this isn’t one of them.
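Since the dispute here is over the theorem versus its application, it may help to state the result precisely (this is the standard statement, paraphrased, not a quote from the thread): if two agents share a common prior P, and their posterior probabilities for an event E given their respective private information, q_1 = P(E \mid I_1) and q_2 = P(E \mid I_2), are common knowledge between them, then

q_1 = q_2.

The hypotheses do the heavy lifting: a genuinely common prior and genuine common knowledge of the posteriors rarely hold between human disputants, which is why misapplication, rather than the theorem itself, is the live issue.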
Sacred Symbols. At least those that are signed up for Cryonics have symbols that they wear and use to spark interest in their beliefs or debates about their beliefs.
This is stretching. They need those symbols to signal to medical professionals that they are signed up. And as someone who isn’t signed up for cryonics but is considering it, let me just note that if I see a cryonics symbol on someone I know that they are in fact likely to have interesting ideas, because they’ve gone and done something wildly outside the mainstream.
Belief in the coming of a messiah—the singularity.
So this is possibly the best argument. Among some transhumanists there is an extremely optimistic attitude about the Singularity and an attitude about it that can only be described as evangelical. There’s been earlier discussion about this issue here, e.g. this. But that seems to be a small fraction of transhumanists as a whole, generally those of the Kurzweil disposition. But those people don’t meet much of what you list, not caring generally about Friendly AI or Aumann’s agreement theorem issues. Indeed, most of the people here who take the Singularity seriously consider it likely to be a very bad thing for humanity.
Holy Script—The sequences.
The sequences are a good set of essays that do a good job summarizing a lot of known scientific knowledge, some philosophical arguments, and a little bit of fun besides. For a set of holy scriptures they have a surprisingly large number of people telling Eliezer when he's just wrong. Moreover, to fit with your earlier analogy, this would require these scriptures to have been inspired by what, the supreme AI that doesn't exist yet? That doesn't seem to work. Note also that many people here disagree with major points of the Sequences. MWI and Eliezer's take on metaethics are the two biggest ones. The person with the third-highest karma on the site, User:Alicorn, isn't even a utilitarian, which amounts to a not at all small rejection of most of Eliezer's ethical attitudes.
I don’t think that deep science or philosophy is required here, just not trying to shove groups into standard categories when they don’t apply.
Nitpick: I get the idea a lot of people here are not utilitarians; what's noteworthy about Alicorn is that she's not a consequentialist.

Could you explain the distinction? Are you using the word utilitarian to refer to a classical 'maximise pleasure minimise pain' utilitarian?
I think that Sniffnoy is distinguishing between someone who pushes people onto train tracks and someone who does not. Alicorn is unwilling to push people onto train tracks.

It amuses me to no end that "Alicorn is unwilling to push people onto train tracks" is presented as a summary of how I go against consensus here.

In contrast, people on the tracks are not amused.
I thought that both utilitarians and consequentialists would push someone onto train tracks. Since he drew a distinction between the two I was wondering what it was.
I thought that both utilitarians and consequentialists would push someone onto train tracks. Since he drew a distinction between the two I was wondering what it was.
Yes, they will. Consequentialists are a superset of utilitarians. Not all consequentialists are utilitarians. For example, one could be a consequentialist who will choose torture over dust specks, but a utilitarian will not.

So what exactly is the difference?
Consequentialism decides between actions only by reducing them to expected outcomes (or probability distributions over outcomes), and comparing those outcomes. Utilitarianism is consequentialist, but with additional structure to how it compares outcomes. In particular, utilitarians combine uncertain outcomes by weighting them linearly with weights proportional to probability. Additionally, many (but not all) utilitarians also subdivide their utility functions by agent, specifying that individuals’ preferences are to be quantified and linearly combined.
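A compact way to write the distinction described above (my notation, not anyone's official definition): a consequentialist evaluates an action a only through the outcomes it induces, and in the expected-utility form this is

EU(a) = \sum_o P(o \mid a)\, U(o),

where different consequentialists may disagree arbitrarily about the outcome-ranking U. The extra utilitarian structure is a constraint on U itself, for example

U(o) = \sum_i w_i\, u_i(o),

with the sum running over persons i, the weights typically equal (or replaced by an average), and u_i measuring person i's happiness or preference satisfaction depending on the flavour of utilitarianism.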
Hm, that’s not how I was breaking it down. We really haven’t standardized on terminology here, have we? That would be a useful thing.
Here’s how I was using the terms:
Consequentialist—Morality/shouldness refers to a preference ordering over the set of possible universe-histories, not e.g. a set of social rules; consequences must always be considered in determining what is right. The ends (considered in totality, obviously) justify the means, or as Eliezer put it, “Shouldness flows backwards”. This description may be a bit too inclusive, but it still excludes people who say “You don’t push people onto the train tracks no matter what.”
Unnamed category—agents whose preferences are described by utility functions. This needn't have much to do with morality at all, really, but an agent that was both rational and truly consequentialist would necessarily fall into this category, as otherwise it would (if I'm not mistaken) be vulnerable to Dutch bookings (see the quick money-pump sketch after this list).
Utilitarian—Someone who not only uses a utility function, but uses one that assigns some number to each person P (possibilities include some measure of “net happiness”, or something based on P’s utility function… even though raw numbers from utility functions are meaningless...) and then computes utility from combining these numbers in some way that treats all people symmetrically (e.g. summing or averaging.)
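For the "Dutch booking" point in the second item above, here is a toy money-pump sketch (entirely illustrative; the goods, the one-unit price, and the trading loop are all made up) showing how an agent with cyclic preferences, which no utility function can represent, leaks money while ending up exactly where it started:

    # Toy money pump: the agent strictly prefers A over B, B over C, and C over A,
    # and will pay 1 unit to swap to anything it prefers over its current holding.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

    holding, money = "C", 0
    for offered in ["B", "A", "C", "B", "A", "C"]:
        if (offered, holding) in prefers:  # the agent prefers the offer, so it pays to trade
            holding, money = offered, money - 1

    print(holding, money)  # prints: C -6  (back where it started, six units poorer)

The standard von Neumann-Morgenstern-style result is that preferences which avoid this kind of exploitation (plus a few other axioms) are exactly the ones representable by a utility function, which is the sense in which a rational consequentialist lands in that middle category.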
Consequentialists still have to have an underlying "feel" for morality outside of consequentialism. That is, they need to have some preference ordering that is not itself consequentialist in nature, be it social Darwinism, extreme nationalism, or whatever other grouping it may be.
Utilitarianism is a subset of consequentialism that gives as its preference ordering the overall happiness of society.
Yes, consequentialism is a criterion a system can satisfy, not a system in and of itself. Your definition of utilitarianism is too narrow, though, in that it seems to only include “classical utilitarianism”, and not e.g. preference utilitarianism.
I’m not sure that the distinctions are precise. As I understand it, a utilitarian assigns everyone a utility and then just takes the sum, and sees how to maximize that. A consequentialist might not calculate their end goals in that way even as they are willing to take any relevant actions to move the world into a state they consider better.
1 - I can give you that one. Still disturbing to read, in multiple posts, people advocating lying rather than telling the truth to people who aren't in the "in" crowd.
2, 6 - how familiar are you with Gnosticism?
4 - that is just quibbling with how immortality is defined. Same with 3.
8 - yes, it was stretching from anyone else's perspective except the LDS one, so I will give you it.
Singularity seriously consider it likely to be a very bad thing for humanity
and
these scriptures to have been inspired by what, the supreme AI that doesn’t exist yet?
Again, how familiar are you with gnosticism?
7,10- Interesting points. That isn’t what I see happening on the main sequences but it does fit with the scoring of some of my comments. These are also the only two answers that actually address the argument to show that it is not a solidified religion, at least not on this site.
4 - I am set up for cryonics, and I do not 'believe' in 'eternal life'. 1) Even if cryonics works, I am very unlikely to live forever; forever is a long time. I would probably live a long time though, and that sounds better than not. 2) I don't think cryonics is very likely to work, but it has a high payoff and it's not absurdly expensive, so it makes sense even if it has a (say) 5% chance of working. (A back-of-the-envelope version of this reasoning is sketched after point 9 below.)
9 - ‘believing’ in the singularity (I hate that word, but I am using it to refer to the time when an intelligence explosion (rapidly self improving intelligence) takes place) is more like Native Americans believing in European settlers after Columbus arrived. Big changes are coming. They might be good or they might be bad, but they’re coming, and you’d like to try to make them good.
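A back-of-the-envelope restatement of the cryonics reasoning in point 4 above (mine, not the commenter's): the decision rule is just p \cdot V > C, where p is the probability cryonics works, V the value placed on the potential extra life, and C the total cost. With the 5% figure used above, the expected payoff is 0.05 \cdot V, so signing up comes out ahead whenever V > 20 \cdot C, i.e. whenever you value the outcome at more than about twenty times the cost. Nothing in that calculation requires believing cryonics will work, only that p is not negligible.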
This is an example of not being aware that your beliefs are beliefs. Where is the evidence of an intelligence explosion? Where is the AI that can pass a Turing test, let alone a strong AI doing so? Are you aware that AI is like fusion in that it is always some 20 years in the future and has been for much more than 20 years?
Still disturbing to read, in multiple posts, people advocating lying rather than telling the truth to people who aren't in the "in" crowd.
What are you referring to?
8 - yes, it was stretching from anyone else's perspective except the LDS one, so I will give you it.
LDS aren't the only religion that has ritual items of clothing (compare for example the tzitzit worn by many Orthodox Jews), so that's one of the less stretched examples in that respect. But it is still silly since people carry specific things for medical reasons all the time. Thus, people who have allergies to common medications will sometimes have armbands to let doctors know.
Again, how familiar are you with gnosticism?
Somewhat, but I fail to see the relevance. Can you expand?
These are also the only two answers that actually address the argument to show that it is not a solidified religion, at least not on this site.
Huh? Among other problems, see the remark that you are confusing transhumanism with rationalism. You are taking a large set of different ideas, some of which don't normally even go together and putting them all together as an attempt to make something which (vaguely) resembles a religion.
Knowing About Biases Can Hurt People, many of the comments in Crises of Faith, there are other places but those are the two examples I have down for this.
Can you expand?
The tenets of Gnosticism are that the God of the Bible is evil (this is constantly being brought up in the mysterious answers sequence) and is here to ensnare us, who are the true gods. (Also, that the material world is evil (see uploads).) To free ourselves from this state we must create a god (or ourselves become god), not necessarily that we are to worship this god. Now comes the boot-strapping problem: where does this knowledge come from and how do we create this god? The answer is that we have always had it and must merely rediscover it using those that have seen the way as our guides. It is really just a confusion of the temple ceremony (if one is LDS and believes the temple ceremonies were had anciently) with the standard twists.
Hopefully that clears up how I see a connection and don’t see your objections on those points as being arguments against it being a religion.
don’t normally even go together and putting them all together as an attempt to make something which (vaguely) resembles a religion.
Um, really? The primary point there doesn't seem to be about whether only the in-crowd should know about things but whether knowing about cognitive biases is safe for anyone. But maybe that's just me.
many of the comments in Crises of Faith
But that doesn’t fit with your claim that the truth is being reserved for some elect in-group. No one (from my quick reading of the thread) is talking about just telling things to people who are already members of some super-secret transhumanist rationalist amalgam, just that there are some people out there who aren’t going to respond productively to being told certain things. So the claimed in-group includes a large number of people who haven’t ever heard of Less Wrong.
Regarding the issues with gnosticism, that’s not an accurate summary of gnosticism in that gnostics thought that there was a God superior to the Demiurge. This seems pretty different from a view which has no deity at all and a plan to construct something that if you squint at it in a funny way might sort of resemble a deity if certain results occur. And no one, absolutely no one, would claim that we somehow have to rediscover pre-existing knowledge to be more rational to help make Friendly AI. Indeed, Eliezer has repeatedly emphasized that an important part of science and actual productivity is to realize that there aren’t any ancient sources of hidden knowledge, and that a logician today is really much more worth listening to than anything in Aristotle. (See, e.g. his comments here).
don’t normally even go together and putting them all together as an attempt to make something which (vaguely) resembles a religion.
They are all here on this site.
So the argument is that they are all mentioned here and that some people subscribe to some or all of them? Sure, and some people here like the Red Sox. And a lot of others think that studying pure math is fun. And we have a sizable fraction that thinks that everyone should know how to use Spivak pronouns. The presence of certain classes of opinions in some fraction of a population doesn't necessarily tell you very much.
gnostics thought that there was a God superior to the Demiurge
Only some of them thought this; others thought that they were the superior gods and had been tricked by the demiurge into giving up their positions of power. Still others thought that there was no superior god but that they needed to make or become the superior god.
ancient sources of hidden knowledge,
Preexisting is not the same as ancient. The gnostics themselves had to have come up with the ideas of gnosticism without someone previously having come up with that exact idea before. Eliezer has claimed to have seen the need for a friendly AI that goes foom, hence for this thing he is a guide. He claims to have come to this conclusion logically, and the rationality that allows for the use of logic was preexisting within himself. No one else previously needs to have thought of the need for friendly AI (with the notable exception of science fiction writers...).
everyone should know how to use Spivak pronouns.
I am fine with using one for undetermined gender. I am not going to start using Spivak pronouns.
The presence of certain classes of opinions in some fraction of a population doesn’t necessarily tell you very much.
I am not arguing that everyone is part of this particular religious group. I believe that almost all forms of transhumanism are religious, but not all of them follow the same patterns.
Yes, OK. But it seems clear that the most common form didn't. So right now, we're cherry-picking specific parts of a vague transhumanist cluster and then attaching them to specific cherry-picked gnostic beliefs. Do you see why that might not be persuasive?
Preexisting is not the same as ancient.
Granted. But the point still holds. None of what Eliezer has said claims to have anything to do with deep pre-existing sources of knowledge.
everyone should know how to use Spivak pronouns.
I am fine with using one for undetermined gender. I am not going to start using Spivak pronouns.
Missing my point. I don’t care what pronouns you use. The point is that there are a lot of non-standard beliefs that are common here and some standard beliefs too. You can’t just pick the specific set that you think most resembles a religion and act like those are the relevant dominant beliefs. Or rather you can, but it isn’t very productive if you are trying to answer some question of the form “Does the cluster of beliefs common among users at Less Wrong resemble what is generally called a religion?”
Does the cluster of beliefs common among users at Less Wrong resemble what is generally called a religion?
Given that Confucianism is not a religion, then as currently constituted neither Less Wrong nor transhumanism is in general a religion. Although, the Singularity2045 people are one if it is more than one guy being very enthusiastic about things.
Unless you want to argue that Less Wrong is a religion, that isn’t helpful. These are still separate things. As it happens most people here endorse both of them—this isn’t a coincidence, the transhumanism is a result of the rationalism—but in general they are still separate. In other places too you seem to have conflated transhumanism in general or rationalism in general with Less Wrong in particular; do you think most transhumanists have ever heard of Eliezer’s sequences? Nor do they necessarily expect a technological singularity or that cryonics will work.
Hm—now that I look at your original comment, you actually wrote
Transhumanism, at least as expressed on this site, is as far as I can tell a religion.
But from there on out you've just referred to "transhumanism" rather than this website. Please stop doing this; use the proper terms for what you're referring to so people actually know what you're talking about.

Would you rather I use "Less Wrong"ism?

If that's what you mean, yes.
Having defined that I was restricting the comments to Transhumanism as expressed on this site, I saw no reason to continue restating the issue. Also, many, but not all, of the points brought up are not restricted to transhumanism as found on this site, as has been established within this discussion, but to transhumanism in general.
The problem is that using one term to mean something closely related is bound to cause confusion even if you explicitly redefine it at the outset. I would really advise against doing that. And you weren’t even entirely explicit about it; you wrote “Transhumanism, at least as expressed on this site, is as far as I can tell a religion” and then used “transhumanism” thereafter. It would have been clearer had you written something like “Transhumanism, at least as expressed on this site (which hereafter I will just refer to as transhumanism)”, but even then I would advise against it for the reason above. (Especially because people often join these discussions in the middle.)
Added the parenthesis. It is too late in the discussion to go through and relabel everything in my opinion. Doing so would also require more of an explanation of what was being talked about.
OK, thanks, that’s at least a little helpful. I’ll disappear from this discussion now.
EDIT: To clarify, this is because really I think this whole discussion is pointless and as TheOtherDave and Vladimir Nesov have pointed out, for reasons that should be clear if you’ve read Eliezer’s 37 ways that words can be wrong. :)
I have read 37 ways that words can be wrong. Your argument is that I violated #20; #20 I feel is a valid point, so I attempted to fix it. Not all of the points on that list are valid in my opinion. For instance, it is necessary to agree on a definition of what a religion is for my argument to make sense, and I have attempted to present that argument in detail in my response to TheOtherDave. I have also attempted to show how this discussion is not pointless, but if you disagree with me then we will have to agree to disagree, which I can do because I don't accept Eliezer as an authority figure.

For what it is worth, there may be issues with #9 and #11 also.
Yes; #11 was what I considered to be the problem here. I wasn't thinking about "defying common usage without a reason" as something where the problem was nonobvious; though he happens to have written about it, referring to the sequences for that would be a cannon-to-kill-a-mosquito sort of thing and didn't even occur to me.

It might be closer to a Pseudoreligion.

I thought I already answered the issue with 11?
Which doesn't seem to be a term you've defined at all.

Some of them. I'm confused with some of your apparent answers to that so I'm not completely sure. It may be a failing on my part.
Which is why I haven’t brought it up before, I would say go look it up but then I would be violating a few more of the items on that list. It is also much harder to point to an example and say this is a pseudoreligion, but not a religion and not just some other form of association (at least it is harder for me).
Yes, I’ve seen the term before. The reason I’m asking you to define it if you are going to use it is because like the term “religion” it has different meanings in different contexts when different people are using it (although as far as I can tell most people use it to mean “recent religion that I don’t like” in a way similar to how some people use the term “cult”.) So without expanding out precisely what you mean it isn’t a helpful term.
As far as I can tell a pseudoreligion is a religion that hasn’t coalesced as of yet into a distinct set of shared beliefs. That is how I would use the term.
However, this contradicts many of the ways that it is used generally, which seem to match your view of how people use the term. I am therefore not sure that it is a helpful term given the common usage and the connotations of that usage. Cult is similarly a difficult word; it is useful in a technical sense to denote the worship of something but commonly has a very different meaning.