OK, I believe we have more than enough information to consider him identified now:
Dmytry
private_messaging
JaneQ
Comment
Shrink
All_work_and_no_play
Those are the currently known sockpuppets of Dmytry. This one warrants no further benefit of the doubt. It is a known troll wilfully abusing the system. To put it mildly, this is something I would prefer not to see encouraged.
I agree. Dmytry was OK; private_messaging was borderline but he did admit to it and I’m loath to support the banning of a critical person who stays above the level of profanity and does occasionally make good points; JaneQ was unacceptable, but starting Comment after JaneQ was found out is even more unacceptable. Especially when none of the accounts were banned in the first place! (Were this Wikipedia, I don’t think anyone would have any doubts about how to deal with an editor abusing multiple socks.)
private_messaging was borderline but he did admit to it
Absolutely, and he also stopped using Dmytry. My sockpuppet aversion doesn’t necessarily have a problem with abandoning one identity (for reasons such as the identity being humiliated) and working to establish a new one. Private_messaging earned a “Do Not Feed!” tag itself through consistent trolling, but that’s a whole different issue from sockpuppet abuse.
JaneQ was unacceptable
And even used in the same argument as his other account, with them supporting each other!
Private_messaging earned a “Do Not Feed!” tag itself through consistent trolling
What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides at least thought-provoking insights with an acceptable ratio?
If I were to try to construct some repertoire model of him (e.g. signalling intellectual superiority by contradicting the alphas, which seems like a standard contrarian mindset), it might be a good match. But frankly, why care? His points should stand or fall on their own merit, regardless of why he chose to make them.
He raised some excellent points regarding e.g. Solomonoff induction that I’ve yet to see answered (e.g. accepting simple models with assumed noise over complex models with assumed less noise, given the enormously punishing discounting for length, which may only work out in theoretical complexity-class calculations and in Monte Carlo approximations with a trivial solution), and while this is a CS-dominated audience, additional math proficiency should be highly sought after—especially for contrarians, since it makes their criticisms that much more valuable.
Is he a consistent fountain of wisdom? No. Is anyone?
I will not defend sockpuppet abuse here, though; that’s a different issue, and one where I can get behind the objections. Don’t take this comment personally: the sentiment was spawned back when he had just 2 known accounts but was already met with high levels of “do not feed!”, and your comment just now seemed as good a place as any to voice it.
Can you link to the original post or comment? Your restatement of whatever he wrote is not making much sense to me.
Well, there is definitely some sort of a Will Newsome-like projection technique going on, i.e. his comments—those that are on topic—are sometimes sufficiently opaque that the insight is generated by the reader filling in the gaps meaningfully.
The example I used was somewhat implicit in this comment:
You end up modelling a crackpot scientist with this. Pick simplest theory that doesn’t fit the data, then distrust the data virtually no matter how much evidence is collected, and explain it as people conspiring, that’s what the AI will do. Gets even worse when you are unable to determine minimum length for either theory (which you are proven unable).
The universal prior discount for length is so severe (a description just 20 bits longer = 2^20 discounting, and what can you even say with 20 bits?) that this quote from Shane Legg’s paper comes as little surprise:
“However it is clear that only the shortest program for will have much affect (sp) on [the universal prior].”
If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.
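To put rough numbers on that length penalty, here is a minimal Python sketch; the program lengths are invented purely for illustration and stand in for whatever the true minimal descriptions would be:

```python
# The program lengths below are invented, purely to put numbers on the 2^-length weighting.
simple_model_bits = 50_000    # short "crackpot" model that blames mismatches on sensor error
complex_model_bits = 50_020   # a correct model that is just 20 bits longer

gap_bits = complex_model_bits - simple_model_bits
prior_odds = 2.0 ** gap_bits  # ratio of the universal-prior weights 2^-length
print(f"Prior odds favouring the shorter program: 2^{gap_bits} ~= {prior_odds:,.0f} : 1")
# 2^20 ~= 1,048,576 : 1. The longer program needs roughly a million-to-one likelihood
# advantage on the data just to pull even, and the gap doubles with every additional bit.
```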
How well does that argument hold up to challenges? I’m not sure, I haven’t thought AIXI sufficiently through when taking into account the map-territory divide. But it sure is worthy of further consideration, which it did not get.
Here are some other comments that come to mind: This comment of his, which I interpreted to essentially refer to what I explained in my answering comment.
There’s a variation of that point in this comment, third paragraph.
He also linked to this marvelous presentation by Marcus Hutter in another comment, which (the presentation) unfortunately did not get the attention it clearly deserves.
There are comments I don’t quite understand on first reading, but which clearly go into the actual meat of the topic, which is a good direction.
My perspective is this: As long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude or his interspersed troll comments. That which can be killed by truth should be; this aphorism still holds true for me when “truth” is replaced with “meaningful argument”. Those arguments deserve answers, not ignoring, regardless of their source.
If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.
It looks to me like you’re reading your own interpretation into what he wrote, because the sentence he wrote before “You end up with” was
they are not uniquely determined and your c can be kilobits long, meaning, one hypothesis can be given prior >2^1000 larger than another, or vice versa, depending to choice of the language.
which is clearly talking about another issue. I can give my views on both if you’re interested.
On the issue private_messaging raises, I think it’s a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don’t trust that too much.
On the issue you raised, a hypothesis of “simple model + random errors” must still match the past history perfectly to not be discarded, and the exact errors would have to be part of the hypothesis (i.e., program) and therefore count towards its length.
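For scale on the “quickly update away even a factor of 2^1000” point, here is a back-of-the-envelope sketch in log-odds; the assumption that each observation favours the longer-but-correct hypothesis two to one is invented for illustration, not a property of Solomonoff induction:

```python
# Everything in log2 (bits) to keep the numbers manageable.
prior_gap_bits = 1000          # hypothesis B starts out 2^1000 times less probable than A
evidence_bits_per_obs = 1.0    # assumed: each observation favours B by a factor of 2

observations_to_break_even = prior_gap_bits / evidence_bits_per_obs
print(f"Observations needed before B overtakes A: about {observations_to_break_even:,.0f}")
# ~1,000 observations: tiny next to the stream of sense data a real agent would see,
# which is the sense in which even a 2^1000 prior penalty gets updated away quickly,
# provided the observations actually discriminate between the two hypotheses.
```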
My perspective is this: As long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude or his interspersed troll comments. That which can be killed by truth should be; this aphorism still holds true for me when “truth” is replaced with “meaningful argument”. Those arguments deserve answers, not ignoring, regardless of their source.
I defended private_messaging/Dmytry before for similar reasons, but the problem is that it’s often not fun to argue with him. I do engage with him sometimes if I think I can draw out some additional insights or get him to clarify something, but now I tend not to respond just to correct something that I think is wrong.
On the issue private_messaging raises, I think it’s a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don’t trust that too much.
Are you picturing an AI that has a simulated multiverse from the big bang up inside a single universe, and then just uses camera sense data to very rapidly pick the right universe? Well yes, that will dispose of a 2^1000 prior very easily. Something that is instead e.g. modelling humans using a minimum amount of guessing without knowing what’s inside their heads, and which can’t really run any reductionist simulations at the level of quarks to predict its camera data, can have real trouble getting right the fine details of its grand unified theory of everything, and will most closely approximate a crackpot scientist. Furthermore, having to include a non-reductionist model of humans, it may even end up religious (feeding stuff into the human mind model to build its theory of everything by intelligent design).
How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within an AI, which seems to me like a very conservative bound) is a highly complicated open problem. edit: and the very strong intuition I have is that you can’t just dismiss this sort of stuff out of hand. So many ways it can fail. So few ways it can work great. And no rigour whatsoever in the speculations here.
How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within an AI, which seems to me like a very conservative bound) is a highly complicated open problem.
I certainly don’t disagree when you put it like that, but I think the convention around here is when we say “SI/AIXI will do X” we are usually referring to the theoretical (uncomputable) construct, not predicting that an actual future AI inspired by SI/AIXI will do X (in part because we do recognize the difficulty of this latter problem). The reason for saying “SI/AIXI will do X” may for example be to point out how even a simple theoretical model can behave in potentially dangerous ways that its designer didn’t expect, or just to better understand what it might mean to be ideally rational.
Solomonoff induction never ignores observations.
One liners, eh?
It’s not so much ignoring observations as testing models that allow for your sense data to be subject to both Gaussian noise and systematic errors, i.e. explaining part of the observations as sensory fuzziness.
In such a case, an overly simple model that posits e.g. some systematic error in its sensors may have an advantage over an actually correct albeit more complex model, due to the way the length penalty of the universal prior rapidly accumulates.
Imagine AIXI coming to the conclusion that the string it is watching is in fact partly output by a random string generator that intermittently takes over. If the competing (but potentially correct) model that works without such a random string generator needs just a megabit more space to specify, do the math.
I’ll still have to think upon it further. It’s just not something to be dismissed out of hand, and it’s just one of several highly relevant tangents (since it pertains to real-world applicability; if it’s a design byproduct it might well translate to any Monte Carlo or assorted formulations). It might well turn out to be a non-issue.
Does AIXI admit the possibility of random string generators? IIRC it only allows deterministic programs, so if it sees patterns a simple model can’t match, then it’s forced to update the model with “but there are exceptions: bit N is 1, and bit N+1 is 1, and bit N+2 is 0… etc” to account for the error. In other words, the size of the “simple model” then grows to be the size of the deterministic part plus the size of the error-correction part. And in that case, even a megabyte of additional complexity in a model would stop effectively ruling out the complex model as soon as more than a couple of megabytes of simple-model-incompatible data had been seen.
Nesov is right.
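The “error-correction part grows with the data” argument, and the earlier “do the math” invitation, can be put as a simple length comparison. A sketch: the model lengths and the error rate are invented, and charging only one patch bit per mismatched bit is an optimistic assumption for the simple model:

```python
def total_description_length(model_bits: int, mismatched_bits: int) -> int:
    """Length of 'deterministic model + explicit corrections for every bit it gets wrong',
    charging (optimistically) one bit per correction."""
    return model_bits + mismatched_bits

SIMPLE_MODEL = 10_000                 # hypothetical short "crackpot" model
COMPLEX_MODEL = 10_000 + 1_000_000    # a correct model a megabit longer (cf. the comment above)
ERROR_RATE = 0.10                     # assumed fraction of bits the simple model gets wrong

for data_bits_seen in (1_000_000, 5_000_000, 20_000_000):
    simple_total = total_description_length(SIMPLE_MODEL, int(ERROR_RATE * data_bits_seen))
    complex_total = total_description_length(COMPLEX_MODEL, 0)  # the correct model needs no patches
    winner = "simple + patches" if simple_total < complex_total else "complex (correct) model"
    print(f"{data_bits_seen:>10,} bits seen -> shortest program: {winner}")
# Once the patch list outgrows the megabit gap, the 2^-1,000,000 prior penalty on the
# complex model is cancelled and it becomes the shortest consistent program after all.
```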
IANAE, but doesn’t AIXI work based on prediction instead of explanation? An algorithm that attempts to “explain away” sense data will be unable to predict the next sequence of the AI’s input, and will be discarded.
If your agent operates in an environment such that your sense data contains errors, or such that the world that spawns that sense data isn’t deterministic, at least not on a level that your sense data can pick up—both of which cannot be avoided—then perfect predictability is out of the question anyway.
The problem then shifts to “how much error or fuzziness of the sense data or the underlying world is allowed”, at which point there’s a trade-off between “short and enormously more preferred model that predicts more errors/fuzziness” versus “longer and enormously less preferred model that predicts less errors/fuzziness”.
This is, as far as I know, not an often-discussed topic, at least not around here, probably because people haven’t yet hooked up any computable version of AIXI with sensors that are relevantly imperfect and that are probing a truly probabilistic environment. Those concerns do not really apply to learning PAC-Man.
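One way to make that trade-off concrete is a two-part, MDL-style score: total bits = model length + cost of encoding the residual “noise” the model leaves unexplained. A sketch with invented numbers; the residual rates below simply stand in for “how much fuzziness the model predicts”:

```python
import math

def two_part_codelength(model_bits: int, data_bits: int, residual_error_rate: float) -> float:
    """Model length plus the cost of encoding the bits the model gets wrong,
    at the binary entropy of its residual error rate (an MDL-style score)."""
    q = residual_error_rate
    noise_bits_per_obs = -(q * math.log2(q) + (1 - q) * math.log2(1 - q))
    return model_bits + data_bits * noise_bits_per_obs

N = 1_000_000  # bits of sense data; invented
short_noisy  = two_part_codelength(model_bits=10_000,  data_bits=N, residual_error_rate=0.10)
long_precise = two_part_codelength(model_bits=200_000, data_bits=N, residual_error_rate=0.01)
print(f"short model, 10% residual noise: {short_noisy:,.0f} bits")
print(f"longer model, 1% residual noise: {long_precise:,.0f} bits")
# ~469,000 + 10,000 vs ~81,000 + 200,000 bits: with this much data the longer, more
# precise model wins, because 10% noise costs ~0.47 bits per observation against ~0.08
# for 1% noise. With less data, or a smaller gap in noise levels, the short noisy model
# wins instead, which is exactly the trade-off described above.
```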
The fallacy of gray.
An uncharitable reading; notice the “consistent”, and the reference to an acceptable (implied) signal-to-noise ratio in the very first sentence.
Also, this may be biased, but I value relevant comments on algorithmic information theory particularly highly, and they are a rare enough commodity. We probably agree on that at least.
What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides at least thought-provoking insights with an acceptable ratio?
Exactly. I often lament that the word ‘troll’ contains motive as part of the meaning. I often try to avoid the word and convey “Account to which Do Not Feed needs to be applied” without making any assertion about motive. Those are hard to prove.
As far as I’m concerned, if it smells like a troll, has amazing regenerative powers, creates a second self when attacked and a limb is lost, and goes around damaging things I care about, then it can be treated like a troll. I care very little whether it is trying to rampage around and destroy things—I just want to stop it.
I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses (if I think you guys are Scientology 2.0, I want to see if I can falsify that, ok?), though at some point I was really curious to see what you would do about two accounts in the same place talking in the exact same style; that was entirely unscientific, sorry about this.
Furthermore, the comments were predominantly rated at >0, and not through socks rating each other up (I would want to see if the first-vote effect is strong, but that would require far too much data). Sorry if there is any sort of disruption to anything.
I actually have significantly more respect for you guys now, with regard to considering the commentary, and consequently your non-cultness. I needed a way to test hypotheses, and that utterly requires some degree of statistical independence. I do still honestly think this FAI idea is pretty damn misguided (and potentially dangerous to boot), but I am allowing it much more benefit of the doubt.
edit: actually, can you reset the email of Dmytry to dmytryl at gmail ? I may want to post an article sometime in the future (I will try to offer a balanced overview as I see it, and it will have plus points as well. Seriously.).
Also, on Eliezer: I really hate his style but like his honesty, and it’s a very mixed feeling all around. I mean, it’s atrocious to just go ahead and say that whoever didn’t get my MWI stuff is stupid; that’s the sort of stuff that evaporates out a LOT of people, and if you e.g. make some mistakes, you risk evaporating the meticulous people. On the other hand, if that’s what he feels, that’s what he feels; to conceal it is evil.
I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses
So presumably we can expect a post soon explaining the background & procedure, giving data and perhaps predictions or hash precommitments, with an analysis of the results; all of which will also demonstrate that this is not a post hoc excuse.
edit: actually, can you reset the email of Dmytry to dmytryl at gmail ?
I can’t, no. I’d guess you’d have to ask someone at Trike, and I don’t know if they’d be willing to help you out...
Well, basically I did expect much more negative ratings, and then I’d just stop posting on those accounts. I couldn’t actually set up a proper study without a zillion socks, and that’d be serious abuse. I am currently quite sure you guys are not an Eliezer cult. You might be a bit of an idea cult, but not terribly much. edit: Also, as you guys are not an Eliezer cult, and as he actually IS pretty damn good at talking people into silly stuff, that is also some evidence he’s not building a cult.
re: email address, doesn’t matter too much.
edit: Anyhow, I hope you do consider the content of the comments to be of benefit; actually, I think you do. E.g. in my comment against the idea of overcoming some biases, I finally nailed what bugs me so much about the ‘overcomingbias’ title and the carried-over cached concept of overcoming them.
edit: do you want me to delete all socks? No problem either way.
One more:
http://lesswrong.com/user/All_work_and_no_play/
Agree; that’s either Dmytry or someone deliberately imitating him.
And here’s one more (judging by content, style, and similar linguistic issues): Shrink. Also posting in the same discussions as private_messaging.
It certainly does sound like him, although I didn’t notice any of his most obvious tells like ghmm or obsession with complexity of updating Bayesian networks.
“For the risk estimate per se”
“The rationality and intelligence are not precisely same thing.”
“To clarify, the justice is not about the beliefs held by the person.”
“The honesty is elusive matter, ”
Characteristic misuse of “the.”
“You can choose any place better than average—physicsforums, gamedev.net, stackexchange, arstechnica observatory,”
Favorite forums from other accounts.
Ah yes, I forgot Dmytry had tried discussing LW on the Ars forums (and claiming we endorsed terrorism, etc.; he got shut down pretty well by the other users). Yeah, how likely is it that they would both like the Ars forums...
It certainly does sound like him, although I didn’t notice any of his most obvious tells like ghmm or obsession with complexity of updating Bayesian networks.
He did open by criticising many worlds, and in subsequent posts had an anti-LW and anti-SIAI chip on his shoulder that couldn’t plausibly have developed in the time the account had existed.
Well spotted. I hadn’t even noticed the Shrink account existing, much less identified it by the content. Looking at the comment history I agree it seems overwhelmingly likely.
Huh, I didn’t see this whole conversation before. Will update appropriately.