No, I think it’s a fair question. Show me a non-trivial project coded end-to-end by an AI agent, and I’ll believe these claims.
PoignardAzur
Off-topic, but what the heck is “The Tyranny of the Marginal Spice Jar”?
(according to claude)
I wish people would stop saying this. We shouldn’t normalize relying on AI to have opinions for us. These days they can even link their sources! Just look at the sources.
I mean I guess the alternative is that people use Claude without checking and just don’t mention it, so I guess I don’t have a solution. But at least it would be considered embarrassing in that scenario. We should stay aware that there are better practices that don’t require much more effort.
Likewise, Ev put in some innate drives related to novelty and aesthetics, with the idea that people would wind up exploring their local environment. Very sensible! But Ev would probably be surprised that her design is now leading to people “exploring” open-world video game environments while cooped up inside.
I think it’s not obvious that Ev’s design is failing to work as intended here.
Video games are a form of training. Some games can get pretty wireheady (Cookie Clicker), but many of the most popular, most discussed, most played games are games that exercise some parts of your brain in useful ways. The best-selling game of all time is Minecraft.
Moreover, I wouldn’t be surprised if people who played Breath of the Wild were statistically more likely to go hiking afterward.
First, I could have just gone and talked to Oliver earlier.
I’d say you’re underrating that option.
Part of it is domain-specific: a lot of developers start off with very little agency, and get “try to do the thing yourself before asking the teacher how to do the thing” drilled into them; it’s easy to overcorrect to “never ask for help”. Learning to ask for help faster is a valuable senior developer skill.
On a more general level, “asking for help faster” is a disgustingly common answer to the question “how could I have found the solution sooner?”. Life isn’t an exam, you don’t get points off for talking to people. (And using ChatGPT, StackOverflow, etc.)
I recommend Thinking Physics and Baba is You as sources of puzzles to start grinding on this
Mhh. Interesting. I haven’t played Baba Is You in a while. It has enough puzzles that I think you could actually practice some skills with it.
I might try your routine, though I’m a bit skeptical of it.
Yeah, it seems a little weird to me that the post includes Eliezer’s claim uncritically. “I totally train myself to improve on the spot, all the time” seems like a bold claim for someone who’s admitted to having unusually low willpower?
If they are good at explaining or you are good at interviewing, you can learn the details about what you’d do differently if you yourself made the update, and ask yourself whether the update actually makes sense.
I would be extremely surprised if this didn’t help at all.
I wouldn’t be very surprised. One, it seems consistent with what the world looks like.
Two, I suspect for the kind of wisdom / tacit knowledge you want, you need to register the information in types of memory that are never activated by verbal discussion or visceral imagining, by design.
Otherwise, yeah, I agree that it’s worth posting ideas even if you’re not sure of them, and I do appreciate the epistemic warning at the top of the post.
To be clear, I think you should treat this as bogus until you have evidence better than what you listed.
You’re trying to do a thing where, historically, a lot of people have had clever ideas that they were sincerely persuaded were groundbreaking, and have been able to find examples of their grand theory working, even though it didn’t amount to anything. So you should treat your own sincere enthusiasm and your own “subjectively it feels like it’s working” vibes with suspicion. You should actively be looking for ways to falsify your theory, which I’m not seeing anywhere in your post.
Again, I note that you haven’t tried the chess thing.
I don’t really know what to tell you. My mindset basically boils down to “epistemic learned helplessness”, I guess?
It’s like, if you see a dozen different inventors try to elaborate ways to go to the moon based on Aristotelian physics, and you know the last dozen attempts failed, you’re going to expect them to fail as well, even if you don’t have the tools to articulate why. The precise answer is “because you guys haven’t invented Newtonian physics and you don’t know what you’re doing”, but the only answer you can give is “Your proposal for how to get to the moon uses a lot of very convincing words but the last twelve attempts used a lot of very convincing words too, and you’re not giving evidence of useful work that you did and these guys didn’t (or at least, not work that’s meaningfully different from the work these guys did and the guys before them didn’t).”
And overall, the general posture of your article gives me a lot of “Aristotelian rocket” vibes. The scattershot approach of making many claims, supporting them with a collage of stories, having a skill tree where you need ~15 (fifteen!) skills to supposedly unlock the final skill, strikes me as the kind of model you build when you’re trying to build redundancy into your claims because you’re not extremely confident in any one part. In other words, too many epicycles.
I especially notice that the one empirical experiment you ran, trying to invent tacit knowledge transfer with George in one hour, seems to have failed a lot more than it succeeded, and you basically didn’t update on that. The start of the post says:
“I somehow believe in my heart that it is more tractable to spend an hour trying to invent Tacit Soulful Knowledge Transfer via talking, than me spending 40-120 hours practicing chess. Also, Tacit Soulful Transfer seems way bigger-if-true than the Competitive Deliberate Practice thing. Also if it doesn’t work I can still go do the chess thing later.”
The end says:
That all sounds vaguely profound, but one thing our conversation wrapped up without is “what actions would I do differently, in the worlds where I had truly integrated all of that?”
We didn’t actually get to that part.
And yet (correct me if I’m wrong) you didn’t go do the chess thing.
Here are my current guesses:
No! Don’t!
I’m actually angry at you there. Imagine me saying rude things.
You can’t say “I didn’t get far enough to learn the actual lesson, but here’s the lesson I think I would have learned”! Emotionally honest people don’t do that! You don’t get to say “Well this is speculative, buuuuuut”! No “but”! Everything after the “but” is basically Yudkowsky’s proverbial bottom line.
if you (PoignardAzur) set about to say “okay, is there a wisdom I want to listen to, or convey to a particular person? How would I do that?”
Through failure. My big theory of learning is that you learn through trying to do something, and failing.
So if you want to teach someone something, you set up a frame for them where they try to do it, and fail, and then you iterate from there. You can surround them with other people who are trying the same thing so they can compare notes, do post-mortems with them, etc (that’s how my software engineering school worked), but in every case the secret ingredient is failure.
This post is very evocative, it touches on a lot of very relatable anxieties and hopes and “things most rationalists are frustrated they can’t do better” type of stuff.
But its “useful or actionable content to personal anecdote” ratio seems very low, extremely low for a post that made the curated list. To me it reads as a collection of mini-insights, but I don’t really see any unifying vision to them, anything giving better handles on why pedagogy fails or why people fail to learn from other people’s wisdom.
It’s too bad, because the list of examples you give at the start is fairly compelling. It just doesn’t feel like the rest of the article delivers.
Not necessarily.
I think any method that calculates the value/utility of your wealth as a timeless function of the amount will be pretty disconnected from how people behave in practice. It doesn’t account for people making plans and having to scrap them because an accident cost them their savings, for instance.
(But then again, I’m not an economist, maybe there are timeless frameworks that account for that.)
Something about the article felt off to me, and “should I buy insurance if interest rates are zero” is a good intuition pump for why.
Yes, I think you should still buy insurance. The reason I’d come up with, peace of mind aside, is that losing a lot of money at once is worse than losing a little money over time, because when you lose a lot of money your options are more limited. You have less flexibility, less ability to capitalize on opportunities, less ability to withstand other catastrophes, etc.
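This isn’t quite the flexibility argument above, but the standard risk-aversion framing points the same way, and it’s easy to sketch. With a concave utility function (log utility here; all numbers made up), a certain small premium can beat an uninsured gamble even when the premium exceeds the expected loss, and none of it depends on interest rates:

```python
from math import log

wealth = 100_000
loss = 50_000      # size of the rare catastrophe
prob = 0.01        # chance it happens this period
premium = 600      # note: more than the expected loss of 500

# Expected log-utility without insurance: small chance of a big hit.
eu_uninsured = prob * log(wealth - loss) + (1 - prob) * log(wealth)

# With insurance: a certain, small reduction in wealth.
eu_insured = log(wealth - premium)

# Concavity makes the certain small loss preferable to the gamble.
print(eu_insured > eu_uninsured)  # True with these numbers
```

The premium being above the expected loss is the insurer’s margin; the buyer still comes out ahead in utility terms because log utility penalizes large one-off losses disproportionately.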
It also shows up on the “Curated” RSS feed, which is much easier to follow than the other article feeds.
This raises an interesting point. There’s no reason it should do that, but if you give it to an unprepared group of competitive boardgame players, that is how they would behave.
There is a pretty simple reason: these games are zero-sum.
Players in engine-building games don’t optimize for “the best society” within the diegesis of the game, especially since these games almost never have any mechanics keeping track of things like population happiness.
Given that, there’s no incentive to create a coal rationing tribunal by default. If every player benefits equally from the coal, then no player benefits from the tribunal. (Unless the players agree that keeping the coal longer makes the game more fun, or some players have a long-term strategy that relies on coal and want to defend it, or something like that.)
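The “no player benefits” point can be made concrete with a toy payoff sketch (all payoffs are hypothetical): in a purely competitive game only relative standing decides the winner, so an institution that raises everyone’s payoff by the same amount changes no one’s ranking, and thus gives no player a reason to build it.

```python
def ranking(payoffs):
    """Return player names sorted from best to worst payoff."""
    return sorted(payoffs, key=payoffs.get, reverse=True)

# Hypothetical end-of-game scores without a coal rationing tribunal.
base = {"Alice": 12, "Bob": 9, "Carol": 7}

# The tribunal preserves the shared coal supply: every player gains equally.
with_tribunal = {name: score + 5 for name, score in base.items()}

print(ranking(base))           # ['Alice', 'Bob', 'Carol']
print(ranking(with_tribunal))  # ['Alice', 'Bob', 'Carol'] -- unchanged
```

The exception clauses above map onto breaking this symmetry: if the coal benefits players unequally, or players care about something other than winning, the tribunal can suddenly be worth someone’s effort.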
Yeah, bad habits are a bitch.
One of the ways by which these kinds of strategies get implemented is that the psyche develops a sense of extreme discomfort around acting in the “wrong” way, with successful execution of that strategy then blocking that sense of discomfort. For example, the thought of not apologizing when you thought someone might be upset at you might feel excruciatingly uncomfortable, with that discomfort subsiding once you did apologize.
Interesting. I’ve had friends who had this “really needs to apologize when they think they might have upset me” thing, and something I noticed is that when they don’t over-apologize, they feel the need to point that out too.
I never thought too deeply about it, but reading you, I’m thinking maybe their internal experience was “I just felt really uncomfortable for a moment and I still overcame my discomfort, I’m proud of that, I should tell him about it”.
My default assumption for any story that ends with “And this is why our ingroup is smarter than everyone else and people outside won’t recognize our genius” is that the story is self-serving nonsense, and this article isn’t giving me any reason to think otherwise.
A “userbase with probably-high intelligence, community norms about statistics and avoiding predictable stupidity” describes a lot of academia. And academia has a higher barrier to entry than “taking the time to write some blog articles”. The average LessWrong user doesn’t need to run an RCT before airing their latest pet theory for the community, so why would it be so much better at selectively spreading true memes than academia is?
I would need a much more thorough gears-level model of memetic spread of ideas, one with actual falsifiable claims (you know, like when people do science) before I could accept the idea of LessWrong as some kind of genius incubator.
I don’t find that satisfying. Anyone can point out that a perimeter is “reasonably hard” to breach by pointing at a high wall topped with barbed wire, and naive observers will absolutely agree that the wall sure is very high and sure is made of reinforced concrete.
The perimeter is still trivially easy to breach if, say, the front desk is susceptible to social engineering tactics.
Claiming that an architecture is even reasonably secure still requires looking at it with an attacker’s mindset. If you just look at the parts of the security you like, you can make a very convincing-sounding case that still misses glaring flaws. I’m not saying that’s definitely what this article does, but it sure is giving me this vibe.
I find that in these situations, doing anything agentic at all can break you out of the spiral. Sports is a good example: you can just do 5 pushups and tell yourself “That’s enough for today, tomorrow I’ll get back to my full routine”.
One aspect of this I’m curious about is the role of propaganda, and especially Russian-bot-style propaganda.
Under the belief cascade model, the goal may not be to make arguments that persuade people, so much as it is to occupy the space, to create a shared reality of “Everyone who comments under this Youtube video agrees that X”. That shared reality discourages people from posting contrary opinions, and creates the appearance of unanimity.
I wonder if sociologists have ever tried to test how susceptible propaganda is to cascade dynamics.
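A toy simulation of the “occupy the space” dynamic might look something like this (every parameter here is made up for illustration): agents with mixed private opinions arrive at a comment thread one by one, and each stays silent if the visible comments look overwhelmingly against them. A handful of seeded bot comments is then enough to lock the thread into apparent unanimity.

```python
import random

def simulate_thread(n_agents=200, seeded_bots=0, seed=0):
    """Toy cascade model: returns the fraction of visible comments saying "X".

    Agents arrive in order with 50/50 private opinions. Each posts only if
    the existing thread isn't overwhelmingly against their view; otherwise
    they self-censor. `seeded_bots` pro-"X" comments are posted up front.
    """
    rng = random.Random(seed)
    visible = ["X"] * seeded_bots  # bot comments already in the thread
    for _ in range(n_agents):
        opinion = "X" if rng.random() < 0.5 else "not-X"
        against = sum(1 for c in visible if c != opinion)
        # Self-censorship rule: stay silent once the thread has a few
        # comments and more than 80% of them disagree with you.
        if len(visible) >= 5 and against / len(visible) > 0.8:
            continue
        visible.append(opinion)
    return sum(1 for c in visible if c == "X") / len(visible)

print(simulate_thread(seeded_bots=0))   # mixed thread: both views visible
print(simulate_thread(seeded_bots=10))  # seeded thread locks at unanimous "X"
```

With ten seeds, every dissenter faces a 100%-against thread and stays silent, so the visible thread never stops looking unanimous, which is exactly the “shared reality” effect: the bots never persuaded anyone, they just changed what posting felt safe.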