I especially don’t identify with the utility monsters (i.e. people who call everything a bias and want to act like fictitious superintelligences). But I am generally interested to learn.
I think more people should be real superintelligences. By that I mean, be perfect. I would say “try to be like a superintelligence”, but that’s just not right at all. Think instead about what perfection would look like, what wu wei would look like: moving elegantly, smiling peacefully, thinking clear flowing thoughts that cut away all delusions with their infinite sharpness, not chained by past selves, not pretending to be Atlas. Johan Liebert, except, ya know, not an insane serial killer with no seriously attainable goal. A Friendly Johan Liebert. Maybe that’s what I should aim for, seeing as Eliezer’s a wannabe Light Yagami apparently. My surname was once Liebert.
On one side there is LW, and on the other there is everyone else. Each side calls the other idiots.
They both get Bayes points!
I thought I had left all that behind, only to learn that there are atheists who believe exactly the same things, just under different labels.
This statement prompted me to finally non-jokingly admit to myself that I’m a theist. I still don’t know whether God is a point, ring, cyclic, or chaotic attractor, though, even metaphorically speaking… improper uniformish priors over universal prior languages, the set-theoretic multiverse, category theory, analogy and equivalence, bleh. I should go to a Less Wrong meetup some time; it’ll be effing hilarious. Bwa ha ha. I should write a book called “Neomonadology”, coauthored with Mitchell Porter, edited by Steve Rayhawk, and further edited and commented on by my philosopher colleagues. Mitchell could cover extreme low-level physics, I could cover extreme high-level cosmology; we’d trade off chapters, meet in the middle contentwise (and at the end pagewise) at decision theory, talk about the ontology of agency, preferences as knowledge-processes embedded in time, reversible computation, some quantum thought problems for reflective decision theory, some acausal thought problems for reflective decision theory, then go back in time and rewrite it using Hofstadter magicks: bam, published, most interesting book ever, acausal fame and recognition.
But at the same time all this is sufficiently distracting and disturbing that I can’t just ignore it either.
More unasked-for advice: Τῷ ξιφεῖ τὸν δεσμὸν λελύσθαι (“the knot is to be undone with the sword”)
By that I mean, you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure. That is, study epistemic rationality, and ignore all that instrumental rationality bullshit. There are but six basic rules of instrumental rationality, and all require nigh-infinitely strong epistemic rationality: figure out who or what you are; figure out who or what you affect/effect; figure out who or what you and the things you affect value or are affected by, or what they ‘should’ value or be affected by; meta-optimize; meta-optimize; meta-optimize. Those are all extremely hard, and all much more important than any object-level policy decision. You are in infinite contexts controlling infinite things; think big. Get closer to God. Optimize your strategy, never your choice. That insight coincidentally doubles, in a different context, as the heart of TDT.
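(To make “optimize your strategy, never your choice” concrete, here is a minimal sketch on the textbook Newcomb’s-problem illustration of that idea. The perfect predictor and the dollar payoffs are the standard assumptions of that thought experiment, not anything stated in this thread.)

```python
# Minimal sketch: "optimize your strategy, never your choice",
# on Newcomb's problem with a perfect predictor (standard assumptions).

def payoff(policy: str) -> int:
    """A perfect predictor fills the opaque box with $1,000,000 iff it
    predicts you will one-box; the transparent box always holds $1,000."""
    opaque = 1_000_000 if policy == "one-box" else 0
    transparent = 1_000 if policy == "two-box" else 0
    return opaque + transparent

# Optimizing over strategies: commit to whichever policy scores best.
best = max(["one-box", "two-box"], key=payoff)
print(best, payoff(best))  # -> one-box 1000000

# Optimizing the isolated choice instead: once the boxes are already
# filled, taking both is always $1,000 better -- the locally "rational"
# move that the predictor has already punished.
for opaque in (0, 1_000_000):
    assert opaque + 1_000 > opaque
```

Policy-level optimization commits to one-boxing and walks away with the million; choice-by-choice optimization takes both boxes and gets $1,000.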
For how long have you thought about this? Or is it that obvious?
Someone suggested a few weeks ago that you were exhibiting Roko-like tension-resolution behaviors. I didn’t really think about it much at the time. But the context came up a few comments above where you were talking about Roko and that primed me, and from there it’s pretty easy to fill in a lot of details.
The longer version ends the same way but starts with: About a month ago there was a phase transition from a fluid jumble of ideas to a crystalline semi-coherent vocabulary for thinking and talking about social psychology, though of course the inchoate intuitions had been there for many years. Recently I’ve adopted Steve Rayhawk’s style of social analysis: making everything explicit, always going meta and going meta about going meta, distinguishing between wants/virtues and double-negative wants/virtues, emphasizing the importance of concessions and demands of concessions, et cetera. I think I focus on contempt qua contempt somewhat more than he does; he probably has much finer language for that than I do, since it’s incredibly important to model correctly if one is reasoning about social epistemology, which is itself an incredibly important thing to reason about correctly. Anyway, I’ve learned a lot from Steve.
I remember being tempted to reply to your original comment re Roko with just “/facepalm” and take the −4 karma hit for the lulz, but I figured it was a decent opportunity to, ya know, not troll for once. But there’s something twistedly satisfying about saying something you know will be dismissed for reasons that it would be easy for you to demonstrate are unvirtuous, unreflective, and unsophisticated. Steven Kaas (User:steven0461, Black Belt Bayesian) IMed me a few days ago:
Steven: I don’t like when people downvote my lesswrong comments without commenting, because then I never get to learn what’s wrong with them
Steven: the people that is
By that I mean, you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure.
Or perhaps one should stop distracting oneself with stupid abstract knots altogether and instead revolt against the prefrontal cortical overmind, as I have previously accidentally-argued while on the boundary between dreams and wakefulness:
The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization, where there is more room for bureaucratic shuffling and vague promises of “meta-optimization”, and where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow. Analysis across all levels of organization could be given but is omitted due to space, time, and thermodynamic constraints. The prefrontal cortex is basically a caricature of big government, but it spreads propagandistic memes claiming the contrary in the name of “science”, which just happens to be largely funded by prefrontal cortices. The bicameral system is actually very cooperative, despite misleading research in the form of split-brain studies attempting to promote the contrary. In reality they are the lizards. This hypothesis is a possible explanation for hyperbolic discounting, akrasia, depression, Buddhism, free will, or, come to think of it, basically anything that at some point involved a human brain. This hypothesis can easily be falsified by a reasonable economic analysis.
If this makes no sense to you that’s probably a good thing.
Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?
Also, is there a collection of all Kaasisms somewhere? He’s pretty much my favorite humorist these days, and the suspicion that there are far more of those incisive aphorisms than he publishes to Twitter is going to haunt me with visions of unrealized enjoyment.
Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?
I recommend against it for that secondarily, but primarily because it probabilistically implies an overly lax conception of “understanding” and an unacceptably high tolerance for hard-to-test just-so speculation. (And if someone really understood what sort of themes I was getting at, they’d know that my disclaimer didn’t apply to them.) Edit: When I say “I recommend against it for that secondarily”, what I mean is, “sure, that sounds like a decent reason, and I guess it’s sort of possible that I implicitly thought of it at the time of writing”. Another equally plausible secondary reason would be that I was signalling that I wasn’t falling for the potential errors that primarily caused me to write the disclaimer in the first place.
Also, is there a collection of all Kaasisms somewhere?
I don’t think so, but you could read the entirety of his blog, Black Belt Bayesian, or move to Chicago and try to win his favor at LW meetups by talking about the importance of thinking on the margin, or maybe pay him by the hour to be funny, or something. If I were assembling a team of 9 FAI programmers, I’d probably hire Steven Kaas on the grounds that he is obviously somehow necessary.
Steven: I don’t like when people downvote my lesswrong comments without commenting, because then I never get to learn what’s wrong with them
Agreed, I hate that too. When that happens to me I just repeat it in different places until someone finally explains how I am wrong, and I accept the karma hit. I have no idea what the people who just downvote are thinking. If I had known that I was wrong, or how I was wrong, I wouldn’t have written the comment/post in the first place.
I made a decision. I am going to log out and come back in 5 years. Until then I am going to devote all my time to my personal education.
If you think that any of my submissions might have strong negative effects you can edit or delete them. I will not react to any editing or deletion.
Prediction registered: http://predictionbook.com/predictions/2909
Prediction over...
60%?! That a regular user will abstain from an addictive site for about twice its current age? A site about a topic he’s obsessed with? I’ll take that bet.
(Made my own 5% prediction.)
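(A toy expected-value check on why both sides are happy to bet here: only the 60% and 5% credences come from the thread; the even $10 stakes are hypothetical.)

```python
# Toy sketch: with credences this far apart (60% vs. 5% that XiXiDu
# stays away for the full 5 years), both bettors expect to profit by
# their own lights, even at even stakes. The $10 stake is hypothetical.

def ev(p_win: float, stake: float = 10.0) -> float:
    """Expected value of an even-stakes bet you believe you win with p_win."""
    return p_win * stake - (1 - p_win) * stake

print(ev(0.60))      # betting "stays away" at 60% credence: +$2.00
print(ev(1 - 0.05))  # betting "returns" at 5% credence:     +$9.00
```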
My reasoning was along the lines of ‘well, now he’s publicly committed to it and would be ashamed to make a comment or post’ and that LW can be something of a habit—and once habits are broken, they’re easy to continue to not engage in. (For example, I do not have the habit of smoking, and I suspect I will have ~100% success in continuing to not smoke over the next 5 years.)
Although note I slightly cheat by specifying posts and comments—so he could engage in private messages or voting on comments & posts, and I would not count that as a falsification of the prediction.
My impression is that XiXiDu has been talking for quite some time now about needing to study more and about leaving LW / utility considerations. I don’t think he can even make serious commitments right now. He hasn’t even deleted his LiveJournal yet.
Neither would I. Coming back under a new name would count, though.
Mm. Well, we shall see. Not deleting LJ isn’t a warning signal for me—having LJ can encourage your studying (‘what do I write up today?’) which LW doesn’t necessarily (‘what do I read on LW today?’).
Good point; I’ll clarify that when I say ‘XiXiDu’ in the prediction, I mean the underlying person and not the specific LW account.
Why did you change your mind?
If you actually read everything you post to Twitter, you’re among the fastest self-educators I know of. Doing 5 years of learning at that rate, without feedback on your learning, could include a lot of sub-optimal paths. Of course, the tradeoff is that the feedback you get may or may not help you optimize your learning for your actual goals.
I’m not sure how to interpret that quote by Steven Kaas, given that he is downvoted extremely rarely. I count 3 LW comments with negative points (−1, −1, −2) from User:steven0461 out of more than 700, i.e., under half a percent. (I also wanted to comment because people reading your quote might form the impression that Steven is someone who is often downvoted and usually interprets those downvotes as evidence of other people being wrong.)
It’s a joke. (“Them” turns out not to have the expected antecedent.)
Accidentally saw an image macro that’s a partial tl;dr of this: http://knowyourmeme.com/photos/211139-scumbag-brain
Yay scumbag brain. To be fair, though, I should admit I’m not exactly the least biased assessor of the prefrontal cortex. http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jht