This advice is probably good in most social contexts—but I really appreciate the rationalist norm of taking deception very seriously. I resolve this class of conflict by being much more apprehensive than usual about casually misleading my rationalist-adjacent friends and business associates.
See also: Physics Envy.
From Daniel Ingram’s Mastering the Core Teachings of the Buddha (slatestarcodex review):
Immediately after a physical sensation arises and passes is a discrete pulse of reality that is the mental knowing of that physical sensation, here referred to as “mental consciousness” (as contrasted with the problematic concept of “awareness” in Part Five). By physical sensations I mean the five senses of seeing, hearing, smelling, tasting, and touching, and I guess you could add some proprioceptive, other extended sensate abilities and perhaps a few others, but for traditional purposes, let’s stick to these five. This habit of creating a mental impression following any of the physical sensations is the standard way the mind operates on phenomena that are no longer actually there, even mental sensations such as seemingly auditory thoughts, that is, mental talk (our inner “voice”), intentions, and mental images. It is like an echo, a resonance. The mind forms a general impression of the object, and that is what we can think about, remember, and process. Then there may be a thought or an image that arises and passes, and then, if the mind is stable, another physical pulse.
Or a decision market!
Advertisements on Lesswrong (like lsusr’s now-deleted “Want To Hire Me?” post) are good, because they let the users of this site conduct mutually-beneficial trade.
I disagree with Ben Pace in the sibling comment; advertisements should be top-level posts, because any other kind of post won’t get many eyeballs on it. If users don’t find the advertised proposition useful, or if the post is deceptive or annoying, then they should simply downvote the ad.
you can get these principles in other ways
I got them via cultural immersion. I just lurked here for several months while my brain adapted to how the people here think. Lurk moar!
The Sequences Highlights on YouTube
I noticed this happening with goose.ai’s API as well, using the gpt-neox model, which suggests that the cause of the nondeterminism isn’t unique to OpenAI’s setup.
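For concreteness, here is a minimal sketch of the kind of check I have in mind: send the same prompt repeatedly at temperature 0 and see whether the completions differ. The `complete` function is a hypothetical stand-in for whichever completions client you use (OpenAI, goose.ai, or otherwise); it is not either provider's actual interface.

```python
# Minimal nondeterminism check: same prompt, temperature 0, compare outputs.
# `complete` is a hypothetical placeholder; wire it to your own API client.

def complete(prompt: str, temperature: float = 0.0) -> str:
    """Return a single completion for `prompt`. Replace with a real API call."""
    raise NotImplementedError("plug in your completions client here")


def is_deterministic(prompt: str, trials: int = 5) -> bool:
    """True iff repeated temperature-0 calls all return identical text."""
    outputs = {complete(prompt, temperature=0.0) for _ in range(trials)}
    return len(outputs) == 1

# Usage (after wiring up `complete`):
#   is_deterministic("The capital of France is")  # False indicates nondeterminism
```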
The SAT switched from a 2400-point scale back to a 1600-point scale in 2016.
Lesswrong is a garden of memes, and the upvote button is a watering can.
This post is unlisted and is still awaiting moderation. Users’ first posts need to go through moderation.
Is it a bug that I can see this post? I got alerted because it was tagged “GPT”.
Timelines. USG could unilaterally slow AI progress. (Use your imagination.)
Even if only a single person’s values are extrapolated, I think things would still be basically fine. While power corrupts, it takes time to do so. Value lock-in at the moment of creation of the AI prevents it from tracking (what would be the) power-warped values of its creator.
My best guess is that there are useful things for 500 MLEs to work on, but publicly specifying these things is a bad move.
Agree, but LLM + RL is still preferable to MuZero-style AGI.
I’m not so sure! Some of my best work was done from the ages of 15-16. (I am currently 19.)
Here’s an idea for a decision procedure (a rough code sketch follows the list):
1. Narrow it down to a shortlist of 2-20 video ideas that you like
2. For each video, create a conditional prediction market on Manifold with the resolution criterion “if made, would this video get over X views/likes/hours of watch-time”, for some constant threshold X
3. Make the video the market likes the most
4. Resolve the appropriate market
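Here is a rough Python sketch of that procedure, just to pin down the moving parts. The helper functions (`create_conditional_market`, `market_probability`, `resolve_market`) are hypothetical stand-ins for whatever Manifold client you use; their names and signatures are my assumptions, not Manifold’s actual API, and X = 100,000 views is an arbitrary example threshold.

```python
# Rough sketch of the decision procedure above. All three helper functions are
# hypothetical placeholders: wire them to your Manifold client of choice.
# VIEW_THRESHOLD is the constant X from step 2 (arbitrary example value).

from dataclasses import dataclass

VIEW_THRESHOLD = 100_000


@dataclass
class VideoIdea:
    title: str
    market_id: str | None = None


def create_conditional_market(idea: VideoIdea) -> str:
    """Create a binary market 'If made, will <title> get over VIEW_THRESHOLD views?'
    and return its id. (Hypothetical helper; replace with a real API call.)"""
    raise NotImplementedError


def market_probability(market_id: str) -> float:
    """Current market probability of clearing the threshold. (Hypothetical helper.)"""
    raise NotImplementedError


def resolve_market(market_id: str, outcome: str) -> None:
    """Resolve the market 'YES', 'NO', or 'CANCEL' (N/A). (Hypothetical helper.)"""
    raise NotImplementedError


def pick_video(shortlist: list[VideoIdea]) -> VideoIdea:
    """Steps 2-3: open one conditional market per idea, make the one the market likes most."""
    for idea in shortlist:
        idea.market_id = create_conditional_market(idea)
    return max(shortlist, key=lambda idea: market_probability(idea.market_id))


def settle(shortlist: list[VideoIdea], made: VideoIdea, cleared_threshold: bool) -> None:
    """Step 4: resolve the made video's market on its actual numbers; N/A the rest."""
    for idea in shortlist:
        if idea is made:
            resolve_market(idea.market_id, "YES" if cleared_threshold else "NO")
        else:
            resolve_market(idea.market_id, "CANCEL")
```

One design choice worth noting: the markets for the videos you don’t make get resolved N/A rather than NO, since their conditions were never triggered; that’s the standard way to keep conditional markets honest.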
[copying the reply here because I don’t like looking at the facebook popup]
(I usually do agree with Scott Alexander on almost everything, so it’s only when he says something I particularly disagree with that I ever bother to broadcast it. Don’t let that selection bias give you a misleading picture of our degree of general agreement. #long)
I think Scott Alexander is wrong that we should regret our collective failure to invest early in cryptocurrency. This is very low on my list of things to kick ourselves about. I do not consider it one of my life’s regrets, a place where I could have done better.
Sure, Clippy posted to LW in 2011 about Bitcoin back when Bitcoins were $1 apiece, and gwern gave an argument for why Bitcoin had a 0.5% probability of going to $5,000 and that this made it a good investment to run your GPU to mine Bitcoins, and Wei Dai commented that in this case you could just buy Bitcoin directly. I don’t remember reading that post, it wasn’t promoted, and gwern’s comment was only upvoted by 3 points; but it happened. I do think I heard about Bitcoin again on LW later, so I won’t press the point.
I do not consider our failure to buy in as a failure of group or individual rationality.
A very obvious reply is that of efficient markets. There were lots and lots of people in the world who wanted money, who specialized in getting more money, who invested a lot of character points in doing that. Some of them knew about cryptocurrency, even. Almost all of them did the same thing we did and stayed out of Bitcoin—by the time even 0.1% of them had bought in, the price had thereby gone higher. At worst we are no less capable than 99.9% of those specialists.
Now, it is sometimes possible to do better than all of the professionals, under the kinds of circumstances that I talk about in Inadequate Equilibria. But when the professionals can act unilaterally and only need to invest a couple of hundred bucks to grab the low-hanging fruit, those are not by default favorable conditions for beating them.
Could all the specialists have a blind spot en masse that you see through? Could your individual breadth of knowledge and depth of sanity top their best efforts even when they’re not gummed up in systemic glue? Well, I think I can sometimes pull off that kind of hat trick. But it’s not some kind of enormous surprise when you don’t. It’s not the kind of thing that means you should have a crisis of faith in yourself and your skills.
To put it another way, the principle “Rationalists should win” does not translate into “Bounded rationalists should expect to routinely win harder than prediction markets and kick themselves when they don’t.”
You want to be careful about what excuses you give yourself. You don’t want to update in the opposite direction of experience, you don’t want to reassure yourself so hard that you anti-learn when somebody else does better. But some other people’s successes should give you only a tiny delta toward trying to imitate them. Financial markets are just about the worst place to think that you ought to learn from your mistake in having not bought something.
Back when Bitcoin was gaining a little steam for the first time, enough that nerds were starting to hear about it, I said to myself back then that it wasn’t my job to think about cryptocurrency. Or about clever financial investment in general. I thought that actually winning there would take a lot of character points I didn’t have to spare, if I could win at all. I thought that it was my job to go off and solve AI alignment, that I was doing my part for Earth using my own comparative advantage; and that if there was really some low-hanging investment fruit somewhere, somebody else needed to go off and investigate it and then donate to MIRI later if it worked out.
I think that this pattern of thought in general may have been a kind of wishful thinking, a kind of argument from consequences, which I do regret. In general, there isn’t anyone else doing their part, and I wish I’d understood that earlier to a greater degree. But that pattern of thought didn’t actually fail with respect to cryptocurrency. In 2017, around half of MIRI’s funding came from cryptocurrency donations. That part more or less worked.
More generally, I worry Scott Alexander may be succumbing to hindsight bias here. I say this with hesitation, because Scott has his own skillz; but I think Scott might be looking back and seeing a linear path of reasoning where in fact there would have been a garden of forking paths.
Or as I put it to myself when I felt briefly tempted to regret: “Gosh, I sure do wish that I’d advised everyone to buy in at Bitcoin at $1, hold it at $10, keep holding it at $100, sell at $800 right before the Mt. Gox crash, invest the proceeds in Ethereum, then hold Ethereum until it rose to $1000.”
The idea of “rationality” is that we can talk about general, abstract algorithms of cognition which tend to produce better or worse results. If there’s no general thinking pattern that produces a systematically better result, you were perfectly rational. If there’s no thinking pattern a human can realistically adopt that produces a better result, you were about as sane as a human gets. We don’t say, “Gosh, I sure do wish I’d bought the Mega Millions ticket for 01-04-14-17-40-04* yesterday.” We don’t say, “Rationalists should win the lottery.”
What thought pattern would have generated the right answer here, without generating a lot of wrong answers in other places if you had to execute it without benefit of hindsight?
Have less faith in the market professionals? Michael Arc née Vassar would be the go-to example of somebody who would have told you, at that time, that Eliezer-2011 vastly overestimates the competence of the rest of the world. He didn’t invest in cryptocurrency, and then hold until the right time, so far as I know.
Be more Pascal’s Muggable? But then you’d have earlier invested in three or four other supposed 5,000x returns, lost out on them, gotten discouraged, and just shaken your head at Bitcoin by the time it came around. There’s no such thing as advising your past self to only be Pascal’s Muggable for Bitcoin, to grant enough faith to just that one opportunity for 5,000x gains, and not pay equal amounts of money into any of the other supposed opportunities for 5,000x gains that you encountered.
I don’t see a simple, generally valid cognitive strategy that I could have executed to buy Bitcoin and hold it, without making a lot of other mistakes.
Not only am I not kicking myself, I’d worry about somebody trying to learn lessons from this. Before Scott Alexander wrote that post, I think I said somewhere—possibly Facebook?—that if you can’t manage to not regret having not bought Bitcoin even though you knew about it when it was $1, you shouldn’t ever buy anything except index funds because you are not psychologically suited to survive investing.
Though I find it amusing to be sure that the LessWrong forum had a real-life Pascal’s Mugging that paid out—of which the real lesson is of course that you ought to be careful as to what you imagine has a tiny probability.
EDIT: This is NOT me saying that anyone who did buy in early was irrational. That would be “updating in the wrong direction” indeed! More like, a bounded rationalist should not expect to win at everything at once; and looking back and thinking you ought to have gotten all the fruits that look easy in hindsight can lead to distorted thought patterns.
This comment reminded me: I get a lot of value from Twitter DMs and groupchats. More value than I get from the actual feed, in fact, which—according to my revealed preferences—is worth multiple hours per day. Groupchats on LessWrong have promise.