“We need whiteboards.”
“I’m trying paleo.”
“I might write rationalist fanfiction of that.”
“That’s just an applause light.” (“That’s just a semantic stopsign.” “That’s just the teacher’s password.”)
“POLITICS IS THE MINDKILLER”
“If keeping my current job has higher expected utility than founding a startup, I wish to believe that keeping my current job has higher expected utility than founding a startup...”
“I think he’s just being metacontrarian.”
“Arguments are soldiers!”
“Not every change is an improvement, but every improvement is a change.”
“There are no ontologically basic mental entities!”
“I’m an aspiring rationalist.”
“Fun Theory!”
“The map is not the territory.”
“Let’s beware evaporative cooling, here.”
“It’s a sunk cost! Abandon it!”
“ERROR: POSTULATION OF GROUP SELECTION DETECTED”
“If you measure it and reward the measurement going up, you’ll get what you measure, not what you want.”
“Azathoth!”
“Death is bad.”
This is too much fuuuuuuuun
“She’s just signaling virtue.”
“Money is the unit of caring.”
“One-box!”
“Beliefs should constrain anticipations.”
“Existential risk...”
“I’ll cooperate if and only if the other person will cooperate if and only if I cooperate.”
“I’m going to update on that.”
“Tsuyoku naritai!”
“My utility function includes a term for the fulfillment of your utility function.”
“Yeah, it’s objective, but it’s subjectively objective.”
“I am a thousand shards of desire.”
“Whoa, there’s an inferential gap here that one of us is failing to bridge.”
“My coherent extrapolated volition says...”
“Humans aren’t agents.” (“I’m trying to be more agenty.” “Humans don’t really have goals.”)
“Wait, wait, this is turning into an argument about definitions.”
“Look, just rejecting religion and astrology doesn’t make someone rational.”
“No, no, you shouldn’t implement Really Extreme Altruism. Unless the alternative is doing it without, anyway...”
“I’ll be the Gatekeeper, you be the AI.”
“That’s Near, this is Far.”
“Don’t fall into bottom-line thinking like that.”
I think I’m done. If I think of any more I’ll add them to this comment instead of making a new one.
“How do you operationalize that?”
“‘Snow is white’ is true if and only if snow is white.”
“If I may generalize from one example here...”
“I’m suffering from halo effect.”
“Warning: Dark Arts.”
“Okay, but in the Least Convenient Possible World...”
“We want to raise the sanity waterline.”
“You’ve fallen prey to the illusion of transparency.”
“Bought some warm fuzzies today.”
“What does the outside view say?”
“So the idea is that we make all scientific knowledge a sacred and closely guarded secret, so it will be treated with the reverence it deserves!”
“How could you test that belief?”
RATIONALISTS SAY ALL THE THINGS!
Solomonoff prior gives you 50%, that’s pretty cool! :D
I hope someone will use Alicorn’s (and other) quotes to make a good Eliza-bot. This could be an interesting AI challenge—write a bot that will get positive karma on LW! If there are more bots, the bot with highest nonzero karma wins.
As a start, I copied all Alicorn’s lines into a Markov text synthesizer. Some of the best results were:
I burst out laughing while reading this, so of course my officemates wanted to know what was so funny.
I cannot remember the last time the gulf of inferential distances was so very very wide.
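For anyone who wants to play along at home, the underlying trick is just a word-level Markov chain, which fits in a few lines of Python. Here is a minimal sketch of the general technique (not the particular synthesizer used above; the function names and the tiny corpus are illustrative stand-ins):

```python
import random
from collections import defaultdict

def build_chain(lines, order=1):
    """Map each `order`-word state to the words seen to follow it."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, max_words=30):
    """Random-walk the chain from a random state until it dead-ends."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(max_words):
        followers = chain.get(state)
        if not followers:
            break
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Stand-in corpus; feed it all of Alicorn's lines instead.
quotes = ["The map is not the territory.",
          "We want to raise the sanity waterline.",
          "The map is not the unit of caring."]
print(babble(build_chain(quotes)))
```

Order 1 mangles the source most thoroughly; raising `order` makes the output track the original lines more and more closely.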
Here’s a few, courtesy of applying JWZ’s dadadodo to all the lines in the thread so far:
What does the best textbook on corrupted hardware. Dark Arts; Escher, Bach?
How could you credibly pre commit to see you as a compartmentalized belief?
I’m trying to be a cult.
Have super powers.
You’ve fallen prey to be condescending.
My current job has higher expected utility than you imagine; but in the sanity waterline.
Everyone is Far.
No idea is reliable? Have a lot of caring.
Conceptspace is the future.
Mysteriousness is a cult.
I’m going to be with the AI. I know the universe future.
Look, just generalize from the territory.
Everyone is bigger than you in Cute Puppies.
Emacs’ M-x dissociated-press yields babble, but with some interesting words in it: “knowledgeneralize”, “metacontrammer”, “contrationalist”, “choosequences”, “the universal priory”, “statististimate”, “fanfused”, “condescendista”, “frobability”, “dissolomonoff”, “optimprovement”, “estimagine”, “cooperaterline”, “pattern matchology”. The only sensible sentence it’s come up with is “I’m running on condescending”.
I visualized that being said simultaneously with the middle-finger gesture.
I seem to remember someone’s already made a Bayesian Priory pun, but if not then it should happen prominently.
EDIT: here
Wrong. Electronic old men are.
To give an idea of what these look like raw, here’s a paragraph of dadadodo:
What does the universe with ice cream trees. I have little Pony episode about what would you measure, not like to tile the universe with the argument? That’s just signaling virtue: Death is bad. That’s just a startup. I have little XML tags on corrupted hardware. Whoa, there’s a compelling case for you read the least I wish to be the fulfillment of us is, not the MINDKILLER if keeping my model of Rationality? Tsuyoku naritai! So after all over the bad; result, you’re running on that.
And here’s a similar-sized chunk of M-x dissociated-press:
You shou have now is white’ is true if an you imaginew Methods of in this riggerse tiled in paperclips. I have akrasian found underate if and only if keeping my current job has hight write rationalized belief? I cause can have you regenerate that say ‘moral’ It will bach? What die. You shods of Rationalith a good cause coherent extrapolass? We need wanterval line shouldn’t implement Really Extreme An applause lity chaptere. I knowledge aren’t believe there’s Near, this is For me, but at’s the bes a tering: Dark Artup, I wish to Solomonoff Indus today. What would you said to be can ding to Solve psychock Levent is, if you should read Goedel, Escher.
Of these, I rather like:
I have little XML tags on corrupted hardware. shods of Rationalith extrapolass
The blended-words effect seems to give M-x dissociated-press a sort of Finnegans Wake atmosphere which dadadodo doesn’t have.
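That atmosphere has a mechanical explanation: dissociated-press works at the character level, jumping between positions in the buffer that share a short overlap, so it can walk into one word and out of another (“metacontrammer” looks like “metacontrarian” spliced into “programmer”). A character-level Markov chain gives much the same blending; here is a rough sketch, with an illustrative stand-in corpus and a made-up function name:

```python
import random

def dissociate(text, order=3, length=120):
    """Character-level Markov babble: blends words that share substrings."""
    # Map every `order`-character window to the characters that follow it.
    chain = {}
    for i in range(len(text) - order):
        chain.setdefault(text[i:i + order], []).append(text[i + order])
    state = text[:order]
    out = state
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break
        out += random.choice(followers)
        state = out[-order:]  # only the trailing window matters
    return out

# Words sharing fragments like "contra" and "ra" invite blends.
corpus = "metacontrarian programmer rationalist contrarian condescending"
print(dissociate(corpus))
```

Dissociated-press proper copies whole runs of the buffer and jumps wherever two runs overlap, but the output has the same word-blending quality.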
From another generator:
“I’m going to solve metaethics.” “I’m going, you’re going to found the Society for infanticide.”
“‘Snow is white’ is failing to solve psychology.”
“Wait, wait, ‘this is white’ is a more technical explanation?”
“My utility function includes a semantic stopsign.”
“If keeping my current job has little XML tags on it that say the Least Convenient Possible World...”
“Sure, I’d take over the sanity waterline.”
“I’ll be the symbol with ice cream trees.”
“So after we take over the alternative universe that is the Least Convenient Possible World...”
“I want to tile the sanity waterline with the unit of a thing.”
Not totally IT, but I tried it on Eliezer’s “The 5-Second Level”. Highlights include:
I won’t socially kill you
Hope to reflect on consequentialist grounds
Say, what a vanilla ice cream, and not-indignation, and from green?
Associate to persuade anyone of how you were making the dreadful personal habit displays itself in a concrete example.
Rather you can’t bear the 5-second level?
To develop methods of teaching rationality skills, you need more practice to get lost in verbal mazes; we will tend to have our feet on the other person.
Be sufficiently averse to the fire department and see if that suggests anything.
I do believe it suggests libertarianism. But I can’t be sure, as I can’t simply “be sufficiently averse” any more than I can force myself to believe something.
Still, that one seems to be a fairly reasonable sentence. If I were to learn only that one of these had been used in an LW article (by coincidence, not by a direct causal link), I would guess it was either that one or “I won’t socially kill you”.
I would be amazed if Scott Alexander has not used “I won’t socially kill you” at some point. Certainly he’s used some phrase along the lines of “people who won’t socially kill me”.
...and in fact, I checked and the original article has basically the meaning I would have expected: “knowing that even if you make a mistake, it won’t socially kill you.” That particular phrase was pretty much lifted, just with the object changed.
the bot with the highest nonzero karma wins
I’m taking bets: how long after the bots start maximizing karma until the forum is tiled with
/~\
| _ |
|| ||
|| ||
|| ||
|| ||
| |
`\/′
If we had signatures on LW, this would be mine.
Surely you mean Eliezer-bot.
Should it be made, it will of course be known as Elieza.
But in any case I think you need to keep in mind that a blank map does not correspond to a blank territory.
I initially read the parent in a straightforward way, but then I noticed it is also a meta-joke.
Usually. It could.
What is your prior? (For Eliezer being empty.)
Hopefully they’d keep improving.
“My utility function includes a term for the fulfillment of your utility function.”
Awww… :)