I’m a moderately long-term lurker (a couple of years), and in the last ~6 months I’ve had much more free time after dropping all my online projects to travel around Asia. As a result I’ve ended up reading a lot of LW and having a huge amount of time to think. I really like understanding things, and a lot of the parts of how the interesting bits of reality work which were tricky for many years are making much more sense. This is pretty awesome and I can’t wait to get back home and talk to people about it (maybe visit some LW meetups so there are people with less inferential distance to talk to).
I’ve got a few ideas which seem relevant and possibly interesting/useful/new to LWers, but I’m hesitant to post without some feedback because it’s more than a little intimidating, especially since most of the posts I’d like to make seem like they should go in Main, not Discussion. I’d like someone to bounce ideas off and look over my posts so I know I’m not just going over old ground, skipping ahead too fast without explaining each step properly, or making silly mistakes.
An outline of the posts I’d like to write:
Fuzzy Pattern Theory of Identity—Like all concepts in conceptspace, “me”-ness is non-binary and blurry, with the central example being me right now, close examples being past and future selves or mes in alternate branches, more distant examples including other humans, and really distant examples including a dog. Proximity to “me” in thingspace seems most usefully defined as “how similar is this configuration of matter to me in the ways I attribute importance to”, with examples to test your intuitions about this (e.g. a version of you without a brain is physically very like you, but you probably find them much harder to identify with than a sibling, or perhaps even your pet cat). Possibly some stuff about the evolutionary usefulness of identity, and how proximity to “me now” can be used as a fairly good measure of how much to follow a being’s preferences, though that may come later.
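As a very rough sketch of the kind of definition I have in mind (every feature, weight, and example value here is invented purely for illustration, not a serious proposal):

```python
# Toy model: degree of "me"-ness as weighted similarity between feature
# vectors in thingspace. The weights encode "the ways I attribute
# importance to": memories and values matter far more than raw physical
# resemblance. All features and weights below are invented.

def similarity(a, b, weights):
    """Weighted fraction of features shared between two configurations."""
    total = sum(weights.values())
    shared = sum(w for f, w in weights.items() if a.get(f) == b.get(f))
    return shared / total

weights = {"memories": 10, "values": 8, "body": 2, "species": 1}

me_now    = {"memories": "mine", "values": "mine", "body": "mine", "species": "human"}
me_in_10y = {"memories": "mostly mine", "values": "mine", "body": "mine", "species": "human"}
brainless = {"memories": None, "values": None, "body": "mine", "species": "human"}
# Suppose, for the sake of the toy, that you share core values with a sibling:
sibling   = {"memories": "theirs", "values": "mine", "body": "theirs", "species": "human"}

for name, x in [("me in 10 years", me_in_10y),
                ("brainless copy", brainless),
                ("sibling", sibling)]:
    print(name, round(similarity(me_now, x, weights), 2))
```

On these made-up weights the brainless copy scores below the sibling despite being physically almost identical to you, which is the intuition the example in the text is meant to pump.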
The Layers of Evolution - Briefly going through the layers of evolution: single-strand RNA can replicate, but two strands which generate each other are much more efficient, and more complex webs of chemical reactions are more efficient still, though “selfish” molecules could hijack accessible energy from the web to multiply without contributing. Cells separate different webs of self-replicating chemical reactions, giving some protection from rogue molecules at the cost of maintaining a cell wall and slowing molecule-level evolution. Multicellular organisms can be more efficient reproducers in certain ways due to cell specialization and the ability to act on a larger scale, but suffer from individual cells taking more than their fair share and growing to harm the organism. They counteract this by making all cells share the same DNA and by not sharing cells, so an organism with a cancerous growth will die but the cancer will not spread. Tribes of humans work effectively because division of labour and the ability to accumulate larger stores of food make us more efficient in groups than as individuals, but they make us vulnerable to selfish individuals taking more than their share at a cost to the group. Then some more specific parallels between levels of evolution: how each layer specifically acts to prevent the layer below it from evolving towards “selfish” behaviour, why this happens (co-operation is a great strategy when it works), and why this is difficult (evolution is innately selfish and will lead individuals to exploit the group if they can).
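A toy public-goods calculation illustrates the exploit each layer has to suppress (all numbers invented for illustration):

```python
# One round of a public-goods game: each cooperator pays cost c into a
# pool that is multiplied by r and shared equally among all members.
# A "selfish" member who contributes nothing still collects a full
# share, so it out-earns every cooperator by exactly c -- the exploit
# each layer of evolution must act to suppress. Numbers are invented.

def payoffs(n_cooperators, n_defectors, c=1.0, r=3.0):
    """Return (cooperator payoff, defector payoff) for one round."""
    n = n_cooperators + n_defectors
    pool = n_cooperators * c * r
    share = pool / n
    return share - c, share

coop, defect = payoffs(n_cooperators=9, n_defectors=1)
print(f"cooperator earns {coop:.2f}, defector earns {defect:.2f}")
```

The defector always comes out ahead of the cooperators within the group, even though a group of pure cooperators out-produces a group of pure defectors, which is why enforcement from the layer above is needed.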
Morality and Maths - Most of the features of morality which occur reliably across many different cultures indicate that it’s a method of enforcing co-operation between members of a group, with parallels to the lower levels of evolution. Examples with explanation (avoid harming others, share fairly, help those in need, reproductive co-operation), and the limits of each. Other common trends (morals often don’t apply outside one’s own tribe/family, don’t generally override self-preservation, punishing non-punishers, unusual in-group-specific traditions, how modern globalized society reacts). An argument that it is okay, and kind of awesome, that morality emerges from statistically advantageous strategies evolution ends up following; and since conflict at a given level is inherently unstable, while relatively stable co-operation is definitely possible at lower levels and widely agreed to be beneficial, general co-operation may be the eventual rest state for humanity (though likely not perfect co-operation, since resources are needed to check that everyone is playing fair and to discourage those who are not).
Chairman Yang’s quote, “Extend the self of body outward to the self of group and the self of humanity”, and how each level of evolution (including morality) can be seen as partially extending the “self”, the priorities of the individual evolving unit, to include a larger group in order to gain the co-operation of that group.
Fuzzy Identity and Decision Theory—If it makes sense to talk about how “me” something is as a slightly blurry, non-binary property, this has interesting implications for decision theory. For example, it can help explain hyperbolic discounting (far-future me is less me than near-future me, so has a smaller weight), and working with your future selves/being worked with by your past selves also has slight parallels with the expanding-self-for-more-co-operation idea. An analysis of how treating identity as non-binary affects each decision theory, how many of the disjunctions between how a decision theory tells us to act in a situation and what our intuition directs us towards arise from the DTs treating identity as binary, how TDT can be seen as a partial implementation of a Fuzzy Identity Decision Theory, and the effects of fully embracing fuzzy identity (I think it can solve counterfactual mugging while giving sane answers to every other problem I’ve considered so far, but I have not formalized it and could be missing something).
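A minimal sketch of the hyperbolic-discounting reading, using the standard 1/(1 + kt) hyperbolic discount curve reinterpreted as an identity weight (the constant k is an invented illustrative value, not a fitted one):

```python
# Sketch: hyperbolic discounting read as fading identity. The weight
# 1/(1 + k*t) is the standard hyperbolic discount curve; here it is
# reinterpreted as "how 'me' my self at delay t is". k = 0.3 is an
# invented constant for illustration only.

def identity_weight(t_years, k=0.3):
    """Degree to which my self at delay t counts as 'me' (1.0 = fully me)."""
    return 1.0 / (1.0 + k * t_years)

def subjective_value(reward, t_years, k=0.3):
    """Value of a delayed reward, weighted by how 'me' its recipient is."""
    return reward * identity_weight(t_years, k)

# $100 now vs $150 in three years: the larger reward goes to a
# less-me future self, so it can lose to the smaller immediate one.
print(subjective_value(100, 0))
print(subjective_value(150, 3))
```

Under these made-up parameters the immediate $100 beats the delayed $150, which is the preference-reversal pattern hyperbolic discounting is usually invoked to explain.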
The above posts seem like they could be a mini-sequence, since they all depend fairly heavily on each other and share a theme. Not sure of the title though; my best current candidate is “Identity, Morality, Decision Theory”, which seems like it could be improved.
The Strange Loop of Consciousness—Treating consciousness as a strange loop, the mind sensing its current state, including its sense of its current state, has reduced my confusion on the issue dramatically, though not completely. Some speculation on reasons why evolution could have produced key features of consciousness, labelled as potential just-so stories. I wrote half a post on this a few months ago, but abandoned it mostly because I was worried my point would not come across properly and it would get a bad reaction. Probably best to start from scratch if I get here again.
A Guided Tour of LessWrong—A post talking through the main areas of LW content and important ideas, with plenty of links to key posts/places to start exploring and very brief summaries of key points. There’s a lot of really interesting content which is buried pretty deeply in the archives or in long lists of links; it’d be nice to point users at a bunch of nodes from one place.
There are a bunch of other rough ideas I have for posts, but the above (plus something about logical first movers, which I’m waiting for more feedback on from the decision theory Google group before posting) are the things I think I could potentially write a decent post on soon. Rough future ideas include raising curious and intelligent children (needs research plus talking to people with experience), improving the LW wiki (I’ve founded and run a couple of wikis and know my way around MediaWiki; the LW wiki has significant room for improvement), and post(s) explaining my opinions on common LW positions (AI takeoff, cryonics, etc.).
So, is anyone interested in helping out? I’ve got a lot more detailed reasoning in my head than above, so if you’ve got specific questions about justifications for some part, it’s probably best to hold them until I’ve made a draft for that post, since I’m likely to address them there anyway. Pointing me at posts which cover things I’m talking about would be good though; I may have missed them and would prefer not to duplicate effort if something’s already been said. I’m thinking I’ll probably write on Google Docs and give read/comment access to anyone who wants it.
I guess you just have to try it.
Make one article. Make it a standalone article about one topic. (Not an introduction to a planned long series of articles; just write the first article of the series. Not just the first half of an article, to be continued later; instead choose a narrower topic for the first article. As a general rule: links to already-written articles are okay, but links to not-yet-existing articles are bad, especially if those not-yet-existing articles are used as an excuse for why the existing articles don’t have a conclusion.)
Put the article in Discussion; if it is successful and upvoted, someone will move it to Main. Later perhaps, once two of your articles have been moved, put the third one directly in Main.
The topics seem interesting, but it’s not just what topic you write about, it’s also how you write it. For example, “The Layers of Evolution”: I can imagine it written both very well and very badly. For example, whether you will only speak generally or give specific examples, and whether those examples will be correct or incorrect. (As a historical warning, read “The Tragedy of Group Selectionism” for an example of something that seemed like it would make sense but in the end failed. There is a difference between imagining a mechanism and having a proof that it exists.)
If you have a lot of topics, perhaps you should start with the one where you feel most experienced.
The Fuzzy Pattern Theory of Identity could reasonably be created as a stand-alone post, and probably The Layers of Evolution too. The Guided Tour and The Strange Loop of Consciousness as well, though I’d rather have a few easier ones done before I attempt those. The other posts rely on one or both of the previous ones.
Glad they seem interesting to you :). And yes, Layers of Evolution is the one I feel could go wrong most easily (though Morality and Maths may be the hardest one to explain my point clearly in). It’s partly meant as a counterpoint to Eliezer’s post you linked, actually: even though altruistic group selection is clearly nonsense when you look at how evolution works, selfish group selection seems to exist under some specific but realistic conditions (at minimum, the transition from single-celled to multicellular life requires cells to act for the good of other cells, and social insects have also evolved co-operation). When individuals can be forced to bear significant reproductive losses for harming the group selfishly, selfishly harming the group is no longer an advantage. The cost of punishing an individual for harming your group is much smaller than the cost of passing up chances to help yourself at the group’s expense, so punishment is more plausibly evolvable, but it still requires specific conditions. I do need to find specific examples to cite, as well as general points, and do some more research before I’ll be ready to write that one.
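As a toy version of that payoff comparison (the gain, fine, detection probability, and enforcement cost below are all invented for illustration):

```python
# Toy check of the claim above: punishing cheats is cheap relative to
# what a cheat stands to gain, so punishment can make defection
# unprofitable while costing each enforcer very little. All parameters
# are invented.

def defector_net(gain, fine, p_caught):
    """Expected payoff of cheating under a punishment regime."""
    return gain - p_caught * fine

def punisher_cost(enforce_cost, p_caught):
    """Expected cost a group member pays to punish one cheat."""
    return p_caught * enforce_cost

gain, fine, p_caught, enforce_cost = 5.0, 10.0, 0.8, 0.5

print("cheating nets:", defector_net(gain, fine, p_caught))      # negative => deterred
print("punishing costs:", punisher_cost(enforce_cost, p_caught)) # small next to the gain
```

With these made-up numbers cheating has negative expected value while enforcement costs a small fraction of the cheat’s potential gain, which is the asymmetry that makes punishment the more plausibly evolvable side of the arrangement.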
I… would still feel a lot more comfortable about posting something which at least one other person had looked over and thought about, at least for my first post. I’ve started writing several LW posts before, and the main reason I’ve not posted them is worry about a negative reaction due to some silly mistake. Most of my ideas follow non-trivial chains of reasoning, and without much feedback I’m afraid of having ended up in outer Mongolia. Posting to Discussion would help a bit, but does not make me entirely comfortable. How about if I write something up on Google Docs, post a link here, and then, if there’s not much notice in a few days, use Discussion for initial feedback?
How about if I write something up on Google Docs, post a link here, and then, if there’s not much notice in a few days, use Discussion for initial feedback?
I think that would remove a substantial portion of your potential readers. Just suck it up and post something rough in Discussion, even if it feels uncomfortable.
For example: the piece that starts with “even though altruistic group selection is clearly nonsense” and runs to the end of the paragraph might be expanded just a little and posted stand-alone in an open thread. Gather reactions. Create an expanded post that addresses those reactions. Post it to Discussion. Rinse and repeat.
I think the better course of action is to just post your ideas first in the Discussion section, let the feedback pour in, and then, based on what you receive, craft posts for the Main section. After all, that’s what the Discussion section is for. This way, you’ll get a lot of perspectives at once, without waiting for the help of a single individual.
Hm, you’re suggesting making one rough post in Discussion, then using feedback to make a second post in Main? I can see how that’s often useful advice, but I think I’d prefer to try to justify things thoroughly from the start, so I’d find it hard to avoid making a Main-length post straight away. Revising the post based on feedback from Discussion before moving it seems like a good idea though.
Well, you have three layers on LW: a post in an open thread, a Discussion post, and a Main post (which might get promoted). Within these you can pretty much refine any idea you want without losing too much karma (some you will lose, mind you; it’s almost unavoidable), so you can reject an idea quickly or polish it until it shines enough for the Main section.