I don’t understand why this post and some of Dmytry’s comments are downvoted so hard. The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.
My personal impression has been that emotions are a result of a hidden subconscious logical chain, and can be affected by consciously following this chain, thus reducing this apparent complexity to something simple. The experiences of others here seem to agree, from Eliezer’s admission that he has developed a knack for “switching off arbitrary minor emotions” to Alicorn’s “polyhacking”.
It is not such a big leap to suggest that our snap moral judgments likewise result from complex, or at least hidden, subconscious reasoning.
I can’t speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant. That said, I certainly agree that our emotions and moral judgments are the result of reasoning (for a properly broad understanding of “reasoning”, though I’d be more inclined to say “algorithms” to avoid misleading connotations) of which we’re unaware. And, yes, overtly recapitulating that covert reasoning frequently gives us influence over those judgments. Similar things are true of social behavior when someone articulates the underlying social algorithms that are ordinarily left covert.
Sorry about that; it was a bit of leakage from how adversarial the interactions here about AI issues tend to be, in the sense that the ambiguity—unavoidable in human language—of anything that disagrees with the prevailing opinion here is resolved in favour of the interpretation that makes the least amount of sense. AI is, definitely, a very scary risk. Scariness doesn’t result in the most reasonable processing. I do not claim to be immune to this.
I agree that some level of ambiguity is unavoidable, especially on initial exchange. Given iterated exchange, I usually find that ambiguity can be reduced to negligible levels, but sometimes that fails. I agree that some folks here have the habit you describe, of interpreting other people’s comments uncharitably. This is not unique to AI issues; the same occurs from time to time with respect to decision theory, moral philosophy, theology, and various other things. I don’t find it as common here as you describe it as being, either with respect to AI risks or anything else. Perhaps it’s more common here than I think but I attend to the exceptions disproportionately; perhaps it’s less common here than you think but you attend to it disproportionately; perhaps we actually perceive it as equally common but you choose to describe it as the general case for rhetorical reasons; perhaps your notion of “the interpretation that makes the least amount of sense” is not what I would consider an uncharitable interpretation; perhaps something else is going on. I agree that fear tends to inhibit reasonable processing.
Well, I think it is the case that fear is a mind-killer to some extent. Fear rapidly assigns a truth value to a proposition using a heuristic. That is necessary for survival. Unfortunately, this value makes a very bad prior.
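To make the “bad prior” point concrete, here is a minimal sketch, assuming a simple Bayesian update on a binary danger proposition; the specific numbers are made up for illustration:

```python
# Made-up illustration: a fear heuristic that snaps to P(danger) = 0.99
# produces a bad prior, because it then takes many pieces of contrary
# evidence to walk the posterior back down.

def update(prior, likelihood_ratio):
    """One Bayesian update on a binary proposition.

    likelihood_ratio = P(evidence | danger) / P(evidence | not danger)
    """
    odds = prior / (1.0 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

fear_prior = 0.99  # snap truth value assigned by the fear heuristic
calm_prior = 0.50  # an agnostic starting point

p_fear, p_calm = fear_prior, calm_prior
for step in range(1, 6):
    # Each observation is 4:1 evidence against danger.
    p_fear = update(p_fear, 0.25)
    p_calm = update(p_calm, 0.25)
    print(f"after {step} observations: fear prior -> {p_fear:.3f}, "
          f"calm prior -> {p_calm:.3f}")
```

After five such observations the agnostic starting point ends up near zero, while the fear-seeded prior still assigns roughly 9% to danger: the heuristic’s snap verdict dominates the evidence for a long time.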
Yup, that’s one mechanism whereby fear tends to inhibit reasonable processing.
Excellent use of fogging in this conversation, Dave.
Seconding TheOtherDave’s thanks. I stumbled on this technique a couple of days ago; it’s nice to know that it has a name.
Upvoted back to zero for teaching me a new word.
ambiguity—unavoidable in human language—of anything that disagrees with the prevailing opinion here is resolved in favour of the interpretation that makes the least amount of sense

Ambiguity should be resolved by figuring out the intended meaning, irrespective of the intended meaning’s merits, which should be discussed separately from the procedure of ambiguity resolution.
I don’t understand why this post and some of Dmytry’s comments are downvoted so hard.

I’m going with the position that the post got the votes that it deserved. It’s not very good thinking, and Dmytry goes out of his way to convey arrogance and condescension as he posts. It doesn’t help that, rather than simply being uninformed of prior work, he explicitly and belligerently defies it—that changes a response of sympathy with his efforts and ‘points for trying’ into an expectation that he say stuff that makes sense. Of course that is going to get downvoted.
The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.

It isn’t self-contradictory, just the other two.
Seriously, complexity maximisation and “This also aligns with what ever it is that the evolution has been maximizing on the path leading up to H. Sapiens.” That is crazy and obviously false.
It is not such a big leap to suggest that our snap moral judgments likewise result from complex, or at least hidden, subconscious reasoning.

Of course that is true! But that isn’t what the post says. There is a world of difference between “our values are complex” and “we value complexity”.
Netting a zero average, though I guess pointing that out is not a very good thing for votes.
I don’t understand what you are trying to convey.
I understood it to mean that comments about karma tend to get downvoted.
Because someone’s going through mass-downvoting. Note that your defence got downvoted by them too. When someone gets a downvote for posting the actual answer to the question, there’s little going on but blue-green politics with respect to local tropes.
This is often the first explanation proposed, but is wrong most of the time. Charity, context, etc. etc.
Because someone’s going through mass-downvoting. Note that your defence got downvoted by them too.

Not only that, his defense got downvoted by me before the post itself did and with greater intent to influence.
there’s little going on but blue-green politics with respect to local tropes

It doesn’t take local tropes to prompt disagreement here. Not thinking that human values can be attributed to valuing complexity is hardly a weird and unique-to-LessWrong position. In fact Eliezer-values (in Fun Theory) are, if anything, closer to what this post advocates than what can be expected in the mainstream.
edit: oh wait, you are speaking of shminux. I was thinking of the answer to a question.
Actually I had 2 upvotes on that answer before it got to −1. I think I’m just going to bail out, because for that same post about the Rubik’s cube I could have gotten a lot of ‘thanks man’ replies on e.g. a programming contest forum, or the like, if there had been a Rubik’s cube discussion like this. edit: or wait, it was at −1, then at +2, then at −1
Also, on the evolution part of it: it is the case that evolution is a crappy hill climber (and mostly makes better bacteria), but you can look at the human lineage and reward something that has been increasing along that line, to avoid wasting too much time on bacteria. E.g. by making agents play some sort of games of wit against each other, where bacteria won’t get a free pass.
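A minimal sketch of that idea, assuming a toy genome and a stand-in head-to-head game; every scoring rule and parameter here is made up purely for illustration, and the only point is that fitness is earned against rivals rather than by raw replication:

```python
import random

GENOME_LEN = 16
POP_SIZE = 50

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

def game_of_wits(a, b):
    # Stand-in for any competitive task: best-of-GENOME_LEN
    # rock-paper-scissors, each agent's moves derived from its genome.
    # A passive "bacterium" earns nothing for merely existing.
    moves_a = [int(g * 3) % 3 for g in a]
    moves_b = [int(g * 3) % 3 for g in b]
    wins_a = sum((ma - mb) % 3 == 1 for ma, mb in zip(moves_a, moves_b))
    wins_b = sum((mb - ma) % 3 == 1 for ma, mb in zip(moves_a, moves_b))
    return 1 if wins_a > wins_b else 0

def fitness(genome, population):
    # Score is wins against sampled rivals, not copies made.
    rivals = random.sample(population, 10)
    return sum(game_of_wits(genome, r) for r in rivals)

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    ranked = sorted(population, key=lambda g: fitness(g, population),
                    reverse=True)
    survivors = ranked[: POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring
```

The particular game is a placeholder; what changes the selection pressure is that an agent only scores by beating other agents, which is one way to keep a hill climber from settling for the bacterial strategy of fast, mindless copying.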
Consistent downvotes can be considered a signal sent by the voter consensus that they would prefer that you either bail or change your behavior. Unfortunately the behavior change in question here amounts to adopting a lower-status role (i.e. more willing to read and understand the words of others, less inclined to insult and dismiss others out of hand, more likely to change your mind about things when things are explained to you). I don’t expect or presume that others will willingly adopt a lower-status role—even when doing so would increase their status in the medium term. I must accept that they will do what they wish to do, and continue to downvote and oppose the behaviors that I would see discouraged.
because for that same post about the Rubik’s cube I could have gotten a lot of ‘thanks man’ replies on e.g. a programming contest forum, or the like, if there had been a Rubik’s cube discussion like this. edit: or wait, it was at −1, then at +2, then at −1

It is quite possible—in fact my model puts it as highly likely—that your current style of social interaction would result in far greater social success at other locations. LessWrong communication norms are rather unique in certain regards.
You guys are very willing to insult me personally, but I am rather trying not to get personal (albeit it is rather difficult at times). That doesn’t mean I don’t say things that members of the community may take personally; still, in the last couple of days I’ve noticed that borderline personal insults here are tolerated far more than I’d consider normal, while any stabs at the community (or shared values) are not, and disagreements tend to be taken more personally than is normal in technical discourse.
I don’t understand why this post and some of Dmytry’s comments are downvoted so hard. The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.

Because of the meme complex generated by a bright philosopher guy (Eliezer) doing a depth-first search for solutions to an engineering problem (FAI) and blogging whatever rationalizations he had for discarding anything relevant to alternative approaches.