Private_messaging earned a “Do Not Feed!” tag itself through consistent trolling
What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides at least thought-provoking insights with an acceptable ratio?
If I were to try to construct some repertoire model of him (e.g. signalling intellectual superiority by contradicting the alphas, which seems like a standard contrarian mindset), it might be a good match. But frankly: why care? His points should stand or fall on their own merit, regardless of why he chose to make them.
He raised some excellent points regarding e.g. Solomonoff induction that I’ve yet to see answered (e.g. accepting simple models with assumed noise over complex models with assumed less noise, given the enormously punishing discounting for length, which may only work out in theoretical complexity-class calculations and in Monte Carlo approximations with a trivial solution), and while this is a CS-dominated audience, additional math proficiency should be highly sought after—especially for contrarians, since it makes their criticisms that much more valuable.
Is he a consistent fountain of wisdom? No. Is anyone?
I will not defend the sockpuppet abuse here, though; that’s a different issue, and one criticism I can get behind. Don’t take this comment personally; the sentiment was spawned back when he had just 2 known accounts but was already met with high levels of “do not feed!”, and your comment just now seemed as good a place as any to voice it.
Can you link to the original post or comment? Your restatement of whatever he wrote is not making much sense to me.
Well, there is definitely some sort of Will Newsome-like projection technique going on, i.e. his comments—those that are on topic—are sometimes sufficiently opaque that the insight is generated by the reader filling in the gaps meaningfully.
The example I used was somewhat implicit in this comment:
You end up modelling a crackpot scientist with this. Pick simplest theory that doesn’t fit the data, then distrust the data virtually no matter how much evidence is collected, and explain it as people conspiring, that’s what the AI will do. Gets even worse when you are unable to determine minimum length for either theory (which you are proven unable).
The universal prior’s discount for length is so severe (a description just 20 bits longer means a 2^20 discount, and what can you even say with 20 bits?) that this quote from Shane Legg’s paper comes as little surprise:
“However it is clear that only the shortest program for will have much affect (sp) on [the universal prior].”
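To put that severity in numbers (a back-of-the-envelope illustration using the standard 2^-length weighting and treating each hypothesis as a single program; this is my gloss, not something from the paper):

```latex
% Weight assigned to a program p of length \ell(p) under the universal prior:
%   w(p) \propto 2^{-\ell(p)}
% Relative weight of two programs whose lengths differ by k bits:
\[
  \frac{w(p_{\mathrm{long}})}{w(p_{\mathrm{short}})}
  = \frac{2^{-(\ell + k)}}{2^{-\ell}}
  = 2^{-k},
  \qquad 2^{-20} \approx 10^{-6}.
\]
% A hypothesis a mere 20 bits longer therefore starts out roughly a million
% times less probable, before any data has been seen.
```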
If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.
How well does that argument hold up to challenges? I’m not sure; I haven’t thought AIXI through sufficiently with the map-territory divide taken into account. But it sure is worthy of further consideration, which it did not get.
Here are some other comments that come to mind: this comment of his, which I interpreted as essentially referring to what I explained in my answering comment.
There’s a variation of that point in this comment, third paragraph.
He also linked to this marvelous presentation by Marcus Hutter in another comment; the presentation unfortunately did not get the attention it clearly deserves.
There are comments I don’t quite understand on first reading, but which clearly go into the actual meat of the topic, which is a good direction.
My perspective is this: as long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude, or his interspersed troll comments. That which can be killed by truth should be; this aphorism still holds true for me when substituting “meaningful argument” for “truth”. Those deserve answers, not to be ignored, regardless of their source.
If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.
It looks to me like you’re reading your own interpretation into what he wrote, because the sentence he wrote before “You end up with” was
they are not uniquely determined and your c can be kilobits long, meaning, one hypothesis can be given prior >2^1000 larger than another, or vice versa, depending to choice of the language.
which is clearly talking about another issue. I can give my views on both if you’re interested.
On the issue private_messaging raises, I think it’s a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place: the hypothetical AI could quickly update away even a factor of 2^1000 once it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas, so I don’t trust that too much.
On the issue you raised, a hypothesis of “simple model + random errors” must still match the past history perfectly to not be discarded, and the exact errors would have to be part of the hypothesis (i.e., program) and therefore count towards its length.
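A toy length-accounting sketch of that point (illustrative Python with made-up bit counts; nothing here resembles actual Solomonoff induction, it just shows how an explicit error list counts toward a hypothesis’ total length):

```python
# Two deterministic hypotheses for the same observed history.
# A hypothesis must reproduce the history exactly, so a "simple model plus
# noise" hypothesis has to spell out every mismatching bit explicitly,
# and that correction list is part of the program.

def total_length(model_bits: int, num_errors: int, bits_per_error: int = 20) -> int:
    """Length in bits of 'model + literal list of corrections'.

    bits_per_error is an assumed cost for encoding one correction
    (position and value); the exact figure doesn't matter for the point.
    """
    return model_bits + num_errors * bits_per_error

simple_plus_errors = total_length(model_bits=1_000, num_errors=5_000)    # 101,000 bits
complex_exact      = total_length(model_bits=50_000, num_errors=0)       #  50,000 bits

# Under a 2^-length prior the shorter total description dominates: the
# "simple" hypothesis loses as soon as its error list outgrows the extra
# complexity of the exact model.
print(simple_plus_errors, complex_exact)
print("complex model favoured" if complex_exact < simple_plus_errors
      else "simple+errors favoured")
```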
My perspective is this: as long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude, or his interspersed troll comments. That which can be killed by truth should be; this aphorism still holds true for me when substituting “meaningful argument” for “truth”. Those deserve answers, not to be ignored, regardless of their source.
I defended private_messaging/Dmytry before for similar reasons, but the problem is that it’s often not fun to argue with him. I do engage with him sometimes if I think I can draw out some additional insights or get him to clarify something, but now I tend not to respond just to correct something that I think is wrong.
On the issue private_messaging raises, I think it’s a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place: the hypothetical AI could quickly update away even a factor of 2^1000 once it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas, so I don’t trust that too much.
Are you picturing an AI that has simulated the multiverse from the big bang up inside a single universe, and then just uses camera sense data to very rapidly pick the right universe? Well, yes, that would dispose of a 2^1000 prior very easily. Something that is instead e.g. modelling humans with a minimum amount of guessing, without knowing what’s inside their heads, and which can’t really run any reductionist simulations at the level of quarks to predict its camera data, can have real trouble getting the fine details of its grand unified theory of everything right, and would most closely approximate a crackpot scientist. Furthermore, having to include a non-reductionist model of humans, it may even end up religious (feeding stuff into its human-mind model to build its theory of everything by intelligent design).
How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now from occurring within an AI, which seems to me like a very conservative bound) is a highly complicated open problem. edit: and the very strong intuition I have is that you can’t just dismiss this sort of stuff out of hand. So many ways it can fail. So few ways it can work great. And no rigour whatsoever in the speculations here.
How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now from occurring within an AI, which seems to me like a very conservative bound) is a highly complicated open problem.
I certainly don’t disagree when you put it like that, but I think the convention around here is that when we say “SI/AIXI will do X” we are usually referring to the theoretical (uncomputable) construct, not predicting that an actual future AI inspired by SI/AIXI will do X (in part because we do recognize the difficulty of this latter problem). The reason for saying “SI/AIXI will do X” may for example be to point out how even a simple theoretical model can behave in potentially dangerous ways that its designer didn’t expect, or just to better understand what it might mean to be ideally rational.
Solomonoff induction never ignores observations.
One-liners, eh?
It’s not so much ignoring observations as testing models that allow for your sense data to be subject to both Gaussian noise and systematic errors, i.e. explaining part of the observations as sensory fuzziness.
In such a case, an overly simple model that posits e.g. some systematic error in its sensors may have an advantage over an actually correct albeit more complex model, due to how rapidly the length penalty of the Universal Prior accumulates.
Imagine AIXI coming to the conclusion that the string it is watching is in fact partly output by a random string generator that intermittently takes over. If the competing (but potentially correct) model that works without such a random string generator needs just a megabit more space to specify, do the math.
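Doing that math explicitly (my own arithmetic, taking the megabit at face value and using the same 2^-length weighting as above):

```latex
% Prior odds against a hypothesis that is one megabit (10^6 bits) longer:
\[
  \frac{2^{-(\ell + 10^{6})}}{2^{-\ell}} \;=\; 2^{-10^{6}} \;\approx\; 10^{-301030}.
\]
% By Bayes, the longer (possibly correct) model only overtakes the shorter
% one after accumulating a likelihood advantage of more than 2^{10^6} on the
% observed data, that is, more than a million bits of evidence in its favour.
```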
I’ll still have to think on it further. It’s just not something to be dismissed out of hand, and it’s just one of several highly relevant tangents (since it pertains to real-world applicability; if it’s a design byproduct, it might well translate to any Monte Carlo or assorted formulations). It might well turn out to be a non-issue.
Does AIXI admit the possibility of random string generators? IIRC it only allows deterministic programs, so if it sees patterns a simple model can’t match, then it’s forced to update the model with “but there are exceptions: bit N is 1, and bit N+1 is 1, and bit N+2 is 0… etc” to account for the error. In other words, the size of the “simple model” then grows to be the size of the deterministic part plus the size of the error correction part. And in that case, even a megabyte of additional complexity in a model would stop effectively ruling out that complex model just as soon as more than a couple megabytes of simple-model-incompatible data had been seen.
Nesov is right.
IANAE, but doesn’t AIXI work based on prediction instead of explanation? An algorithm that attempts to “explain away” sense data will be unable to predict the next sequence of the AI’s input, and will be discarded.
If your agent operates in an environment such that your sense data contains errors, or such that the world that spawns that sense data isn’t deterministic, at least not on a level your sense data can pick up—both of which cannot be avoided—then perfect predictability is out of the question anyway.
The problem then shifts to “how much error or fuzziness of the sense data or the underlying world is allowed”, at which point there’s a trade-off between a “short and enormously more preferred model that predicts more errors/fuzziness” and a “longer and enormously less preferred model that predicts fewer errors/less fuzziness”.
This is, as far as I know, not an often-discussed topic, at least not around here, probably because people haven’t yet hooked up any computable version of AIXI with sensors that are relevantly imperfect and that are probing a truly probabilistic environment. Those concerns do not really apply to learning PAC-Man.
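A crude way to picture that trade-off is an MDL-style two-part code, sketched below in Python. This is a caricature with made-up numbers, not Solomonoff induction proper (which, per the comment above, only allows deterministic programs): total description length = model length + bits spent encoding the model’s prediction errors.

```python
import math

def binary_entropy(p: float) -> float:
    """Bits needed per observation to encode errors occurring at rate p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def two_part_codelength(model_bits: int, error_rate: float, n_obs: int) -> float:
    """Total description length: the model itself plus its encoded prediction errors."""
    return model_bits + n_obs * binary_entropy(error_rate)

# A short model that blames 10% of the data on "sensor fuzziness" versus a
# much longer model that mispredicts only 0.1% of it.
for n_obs in (10_000, 1_000_000, 100_000_000):
    short_noisy = two_part_codelength(model_bits=1_000,     error_rate=0.10,  n_obs=n_obs)
    long_exact  = two_part_codelength(model_bits=1_000_000, error_rate=0.001, n_obs=n_obs)
    winner = "short+noisy" if short_noisy < long_exact else "long+exact"
    print(f"n={n_obs:>11,}  short+noisy={short_noisy:>13,.0f}  long+exact={long_exact:>13,.0f}  -> {winner}")
```

In this caricature the short noisy model really is preferred at small sample sizes, and only loses once the cost of its mis-modelled observations outgrows the length penalty on the exact model.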
The fallacy of gray.
An uncharitable reading; notice the “consistent”, and the reference to an acceptable ratio of (implied) signal to noise in the very first sentence.
Also, this may be biased, but I value relevant comments on algorithmic information theory particularly highly, and they are a rare enough commodity. We probably agree on that at least.
What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides at least thought-provoking insights with an acceptable ratio?
Exactly. I often lament that the word ‘troll’ contains motive as part of its meaning. I try to avoid the word and instead convey “Account to which Do Not Feed needs to be applied” without making any assertion about motive. Motives are hard to prove.
As far as I’m concerned, if it smells like a troll, has amazing regenerative powers, creates a second self when attacked and loses a limb, and goes around damaging things I care about, then it can be treated like a troll. I care very little whether it is trying to rampage around and destroy things—I just want to stop it.