Hi everyone, I’m The Articulator. (No ‘The’ in my username because I dislike using underscores in place of spaces)
I found LessWrong originally through RationalWiki, and more recently through Iceman’s excellent pony-fic about AI and transhumanism, Friendship is Optimal.
I’ve started reading the Sequences, and made some decent progress, though we’ll see how long I maintain my current rate.
I’ll be attending University this fall for Electrical Engineering, with a desire to focus in electronics.
Prior to LW, I’d had a year’s worth of Philosophy and Ethics classes, and done a decent amount of derivation and introspection.
As a result, I’ve started forming a philosophical position, made up of a mishmash of formally learnt and self-derived concepts. I would be very grateful if anyone would take the time to analyze, and if possible, pick apart what I’ve come up with. After all, it’s only a belief worth holding if it stands up to rigorous debate.
(If this is the wrong place to do this, I apologize—it seemed slightly presumptuous to imply that my comment thread would be large enough to warrant a separate discussion article.)
I apologize in advance for a possible lack of precise terminology for already existing concepts. As I’ve said, my ideas are partially self-derived, and without knowing the name of an idea, it’s hard to check whether it already exists. If you do spot such gaps in my knowledge, I would be grateful if you’d point them out. Though I understand correct terminology is nice, I’d appreciate it if you could judge my ideas regardless of how many fancy words I use to describe them.
My thought process so far:
P: Naturalism is the only standard by which we can understand the world
P: One cannot derive ethical statements or imperatives from Naturalism, as, like all good science, it is only descriptive in nature
IC: We cannot derive ethical statements
IC: There is no intrinsic value
C: Nihilism is correct
However, assuming nihilism is correct, why don’t I just kill myself now? That’s down to the evolutionary instincts that need me alive to reproduce. Well, why not overcome those and kill myself? But now, we’re in a difficult situation – why, if nothing matters, am I so desperate to kill myself?
Nihilism is the total negation of intrinsic and definitive value in anything. It’s like sticking a coefficient of zero onto all of your utility calculations. However, that includes the bad as well as the good: there is just as little reason to do bad things as to do good ones.
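To make the zero-coefficient picture concrete, here is a toy sketch; the actions, numbers, and the effort tie-breaker are invented purely for illustration:

```python
# A toy model of the "coefficient of zero" idea (all values invented).
actions = {
    "carry on as usual": {"utility": 5.0,   "effort": 0.0},
    "wirehead":          {"utility": 9.0,   "effort": 7.0},
    "kill myself":       {"utility": -10.0, "effort": 8.0},
}

NIHILISM_COEFFICIENT = 0.0  # "there is no intrinsic value"

def preference(name: str) -> float:
    # The value term vanishes under the zero coefficient; only the
    # cost of effort remains to rank the options.
    a = actions[name]
    return NIHILISM_COEFFICIENT * a["utility"] - a["effort"]

print(max(actions, key=preference))  # -> carry on as usual
```

With every utility zeroed out, all options tie on value, so the least-effort option wins by default. That is the ‘why not kill myself, why not wirehead’ answer in miniature.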
My eventual realization came as a result of analyzing the level, or order, of concepts. Firstly, we have the lowest order, instinct, of which we are only partially conscious. Then, we have a middle order of conscious thought, wherein we utilize our sapience to optimize our instinctual aims. Finally, we have the first of a series of high-order thought processes devoted to analyzing our thoughts. It struck me that only this order and above is concerned with my newfound existential crisis. When I allow my rationality to slip a bit, within a few minutes I stop caring, and start eating or taking out my testosterone on small, defenseless computer images. Essentially, it is only the meta-order processes which directly suffer as a result of nihilism, as they are the ones that have to deal with its results and implications.
Nihilism supposedly expects you to give up attempting to change things or apply ethics, since those are seen as meaningful concepts. The way I see it, though, Nihilism is simply the state of ‘going with the flow’, colloquially speaking. And that’s intentionally vague. Consider: if your middle-order processes don’t care that you just realized nothing matters, what’ll happen? They’ll just keep doing what they’ve always done.
In other words, since humans compartmentalize, going with the flow is synonymous with turning off your meta-level thought processes as a goal-oriented drive and operating purely on middle-level processes and below. For a Naturalist, that corresponds with Utilitarianism.
Now, that’s not to say “turn off your meta-level cognition”, because otherwise, what am I doing here? What I’m doing right now is optimizing utility, because I enjoy LessWrong and the types of discussions people have here. I bother to optimize utility despite being a nihilist because it is easier, and less work, meta-level-wise, to give in to my middle-level desires than to fight them.
My definition of Nihilism now comes down to passively maintaining the status quo, or more aptly, not attempting to change it. Why not wirehead? Because that state is no more desirable in a world with zero utility, yet takes effort to reach. It’s climbing a gradient which we can comfortably sit at the bottom of instead.
I fear I haven’t done the best job of explaining concisely, and I believe my original, purely mental formulations were more elegant, so that’s a lesson learned about writing everything down. However, I hope some of you can see flaws in this argument that I can’t, because at the moment it explains just about everything I can think of, in one way or another.
Thank you all in advance for any help given,
The Articulator (It’s kind of an ironic choice of name, present ineptitude considered.)
Okay, whoa, hey. I clearly and repeatedly explained my lack of total understanding of LW conventions. I’m not sure what about this provoked a downvote, but I would appreciate a bit more to go on. If this is about my noobishness, well, this is the Welcome Thread. Great job on the welcoming, by the way, anonymous downvoter. At the very least offer constructive criticism.
Edit: Troll? Really?
Edit, Edit: Thank you, whoever deleted the negative karma!
I wouldn’t take downvotes to heart, if I were you, unless like, a whole bunch of people all downvote you. A downvote’s not terribly meaningful by itself.
Welcome to Less Wrong, by the way.
Now, I didn’t downvote you, but here’s some criticism, hopefully constructive. I didn’t read most of your post, from where you start discussing your philosophy (maybe I will later, but right now it’s a bit tl;dr). In general, though, taking what you’ve learned and attempting to construct a coherent philosophical position out of it is usually a poor idea. You’re likely to end up with a bunch of nonsense supported by a tower of reasoning detached from anything concrete. Read more first. Anyway, having a single “this is my philosophy” is really not necessary… pretty much ever. Figure out what your questions are, what you’re confused about, and why; approach those things one at a time, without an eye toward unifying everything or integrating everything into a coherent whole, and see what happens.
Also: read the Sequences, they are pretty much concentrated awesome and will help with like, 90% of all confusion.
Okay, noted. It’s just that from what I’ve seen so far, a post with a net downvote is generally pretty horrible. I admit I took some offense at the implication. I’ll try not to let it bother me unless N is high enough that the problem is clearly me.
Thanks. :)
Thank you for taking the time to give constructive criticism.
I will attempt to make it more coherent and more concise, assuming I keep any of it.
I appreciate I am likely too inexperienced to come up with anything that impressive, but I was hoping to use this as a method of understanding which parts of my cognitive function were not behaving rationally, so as to improve.
I will absolutely continue to read, but with the utmost respect to Eliezer, I have yet to come across anything in the Sequences which did more than codify or verbalize beliefs I’d already held. At that point, two and a half sequences in, I felt it was unlikely that the enlightenment value would spike in such a way as to render my previously held views obsolete.
I’ll bear your objections in mind, but I fear I won’t let go of this theory unless somebody points out specifically why it is wrong, as opposed to objecting to my methods. Not that I’m putting any onus on you or anyone else to do so.
As I said, I am reading them, but have found them mostly about how to think as opposed to what to think so far, though I daresay that is intentional in the ordering.
Thanks again for your help and kindness. :)
I appreciate I am likely too inexperienced to come up with anything that impressive,
It’s not even that (ok, it’s probably at least a little of that). Some of the most worthless and nonsensical philosophy has come from professional philosophers (guys with Famous Names, who get chapters in History of Philosophy textbooks) who’ve constructed massive edifices of blather without any connection to anything in the world. EDIT: See e.g. this quote.
with the utmost respect to Eliezer, I have yet to come across anything in the Sequences which did more than codify or verbalize beliefs I’d already held.
You’ve got it right. One of the points Eliezer sometimes makes is that true things, even novel true things, shouldn’t sound surprising. Surprising and counterintuitive is what you get when you want to sound deep and wise. When you say true things, what you get is “Oh, well… yeah. Sure. I pretty much knew that.” Also, the Sequences contain a lot of excellent distillation and coherent, accessible presentation of things that you would otherwise have to construct from a hundred philosophy books.
As for enlightenment that makes your previous views obsolete… in my case, at least, that happened slowly, as I digested things I read here and in other places, and spent time (over a long period) thinking about various things. Others may have different experiences.
As I said, I am reading them, but have found them mostly about how to think as opposed to what to think so far, though I daresay that is intentional in the ordering.
Yeah, one of the themes in Less Wrong material, I’ve found, is that how to think is more important than what to think (if for no other reason than that once you know how to think, thinking the right things follows naturally).
Oh, I know. I start crying inside every time I learn about Kant.
Well, I’ll take what you’ve said on board. Thanks for the help!
Welcome to LW!
There is a metaethics sequence, in which this post asks what you would do if morality didn’t exist. That may be a good place to start looking, but I wouldn’t be too discouraged if you don’t find it terribly useful (Eliezer and others see it as less communicative than he wanted it to be).
The point I would focus on is that there’s a difference between an ethical system that would compel any possible mind to follow it, and an ethical system in harmony with you and those around you. Figure out what you can get from ethics, and then judge the ethics you try by their results. Worry more about developing a system that reliably makes small, positive changes than about developing a system that is perfectly correct. As it is said, a complex system that works is invariably found to have evolved from a simple system that worked.
Thanks!
Thanks for that link. I probably should have read that sequence, I’ll admit, but what is interesting is that, despite my not having read it previously, the majority of comments reflect what I stated above, albeit my formulation explains it slightly more cognitively than ‘because I want to’. (Though that is an essential premise in my argument.)
Though this is probably unfortunately irrational on my part, seeing my predictions confirmed by a decently sized sample only suggests to me that I’m on to something, at least so far as articulating something I have not seen previously formalized.
It seems like my largest problem here is that I absolutely failed to be concise, and added in unnecessary intermediate conclusions.
I think of this less as an ethical system in itself than as a justification and rationalization of my position on Nihilism and its compatibility with Utilitarianism, which, coincidentally, seems to be the same as that of most people on LW.
I know this’ll probably be just as flawed as the last attempt, but I’ve summarized my core argument into a much shorter series of premises and conclusions. Would you mind looking through them and telling me what you feel is invalid or is likely to be improved upon by prolonged exposure to LW?
P: Naturalism is the only standard by which we can understand the world
P: One cannot derive ethical statements or imperatives from Naturalism, as, like all good science, it is only descriptive in nature
IC: We cannot derive ethical statements
IC: There is no intrinsic value
C: Nihilism is correct
P: Ethical statements are by definition prescriptive
P: Nihilism offers a total lack of ethical statements
IC: Nihilism offers no prescriptive statements
P: Prescriptive statements are like forces, in that they modify behavior (consider Newton’s First Law; see the sketch after this argument)
IC: No prescriptive statements means no modification of behavior
C: Nihilism does not modify behavior, ethically speaking
P: Humans naturally or instinctively act according to a system very close to Utilitarianism
P: Deviation from this system takes effort
IC: Without further input or behavioral modification, most intellectual individuals will follow a Utilitarian system
IC: To act contrary to Utilitarianism requires effort
P: Nihilism does not modify behavior or encourage ethical effort
C: Nihilism implies Utilitarianism (or a general ethical system akin to it that is the default of the person in question)
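To spell out the Newton’s First Law analogy from the premise above, here is the intended parallel in my own notation, purely as an illustration:

```latex
% Newton's First Law: with no net force, velocity does not change.
\sum F = 0 \;\Rightarrow\; \frac{dv}{dt} = 0
% The claimed analogue: with no prescriptive statements, behavior does not
% change from its default.
P = \varnothing \;\Rightarrow\; \Delta(\mathrm{behavior}) = 0
```

On this reading, Nihilism removes every ethical ‘force’, so whatever moral motion was already present (the near-Utilitarian default of the last premise block) simply continues.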
I apologize if trying again like this is too much to ask for.
P: Humans naturally or instinctively act according to a system very close to Utilitarianism
Were this true, the utilitarian answers to common moral thought experiments would be seen as intuitive. Instead, we find that a minority of people endorse the utilitarian answers, and they are more likely to endorse those answers the more they rely on abstract thought rather than intuition. It seems that most people are intuitive deontologists.
I think of this less as an ethical system in itself than as a justification and rationalization of my position on Nihilism and its compatibility with Utilitarianism, which, coincidentally, seems to be the same as that of most people on LW.
I don’t think “nihilist” is an interesting term, because it smuggles in implications that I do not think are useful (like “why don’t you just kill yourself, then?”). I think “moral anti-realist” is better, but not by much. The practical advice I would give: do not seek to use ethics as a foundation, because there is nothing to anchor it on. The parts of your mind are connected to each other, and it makes sense to develop them as a collection. If there is no intrinsic value, then let us look for extrinsic value.
Firstly, thank you for replying and spending the time to discuss this with me.
P: Humans naturally or instinctively act according to a system very close to Utilitarianism
Were this true, the utilitarian answers to common moral thought experiments would be seen as intuitive. Instead, we find that a minority of people endorse the utilitarian answers, and they are more likely to endorse those answers the more they rely on abstract thought rather than intuition. It seems that most people are intuitive deontologists.
I admit I made a bit of a leap here, which may not be justified. I was careful to specify ‘very close’, as I realize it is obviously not an exact copy. I would argue that most people do attempt to follow Bentham’s original formulation of seeking pleasure and avoiding pain instinctively, as that is where he derived his theory from. I would argue that though people may implement a deontological system for assigning moral responsibility, they are ultimately using Utilitarian principles as the model for their instinctive morality that describes whether an action is good or bad, much the same as Rule Utilitarianism does. I don’t think I can overstate the importance of the fact that Bentham derived the idea of Utilitarianism from a human perspective.
I don’t think “nihilist” is an interesting term, because it smuggles in implications that I do not think are useful (like “why don’t you just kill yourself, then?”).
In the longer formulation, I tackled this exact question, pointing out that it is more effort to overcome your survival instincts than to follow them, and thus an illogical attempt to change things which don’t matter.
I like ‘nihilist’ as a term as it is immediately recognizable, short, punchy, and someone with a basic grasp of Latin or maybe even English should be able to derive a rough meaning. It also sounds better. :P
The practical advice I would give: do not seek to use ethics as a foundation, because there is nothing to anchor it on.
Well, as it currently stands, I’m happy with the logical progression necessary to reach my current understanding, and more importantly, it has given me a tremendous sense of inner peace. I don’t think it limits my mental progression as such, since I arrived at these conclusions through rational means, and would give them up if confronted with sufficient logic contrary to my understanding.
If there is no intrinsic value, then let us look for extrinsic value.
Would you mind elaborating on looking for extrinsic value? Is that like the Existentialist viewpoint?
Specifically, they seem to be talking about something similar to Error Theory.
Well, I just looked it up, and I’d agree with it, though I do use it more as an intermediate conclusion than an actual end point.
I don’t know what you mean by that, but I resolved my weird ethical quasi-nihilism through a combination of studying Metaethics and reading Luke’s metaethical sequence, so you might want to do that as well, if only for the terminology.
Sorry, what I meant was that while I am using something similar to Error Theory, I was also going beyond that and using it as a premise in other arguments. All I meant was that it wasn’t the entirety of my argument.
I certainly plan on reading those, but thanks for the advice. Hopefully I’ll be up to date with terminology by the end of the summer.