LessWrong Team
I have signed no contracts or agreements whose existence I cannot mention.
I’m just wondering if we were ever sufficiently positively justified to anticipate a good future, or if we were just uncertain about the future and then projected our hopes and dreams onto this uncertainty, regardless of how realistic that was.
I think that’s a very reasonable question to be asking. My answer is I think it was justified, but not obvious.
My understanding is it wasn’t taken for granted that we had a way to get more progress with simply more compute until the deep learning revolution, and even then people updated on specific additional data points for transformers, and even now people sometimes say “we’ve hit a wall!”
Maybe with more time, the US system would have time to collapse and be replaced with something fresh and equal to the challenges. To the extent the US was founded and set in motion by a small group of capable, motivated people, it seems not crazy to think a small-to-large group of such people could enact effective plans within a few decades.
So gotta keep in mind that probabilities are in your head (if I flip a coin, it’s already tails or heads in reality, but your credence should still be 50-50). I think it can be the case that we were always doomed even if we weren’t yet justified in believing that.
Alternatively, it feels like this pushes up against philosophies of determinism and free will. The whole “well, the algorithm is a written program and it’ll choose what it chooses deterministically,” but also, from the inside, there are choices.
I think a reason to have been uncertain before and update more now is just that timelines seem short. I used to have more hope because I thought we had a lot more time to solve both technical and coordination problems, and then there was the DL/transformers surprise. You make a good case and maybe 50 years more wouldn’t make a difference, but I don’t know, I wouldn’t have as high p-doom if we had that long.
But since the number is subjective, living your life like you know you are right is certainly wrong.
I don’t think this makes sense. Suppose you have a 90% subjective belief that a vial of tasty fluid is lethal poison; you’re going to act in accordance with that belief. Now if other people think differently from you, and you think they might be right, maybe you adjust your final subjective probability to something else, but at the end of the day it’s yours. That it’s subjective doesn’t rule out it being pretty extreme.
If what you mean is you can’t be that confident given disagreement, I dunno, I wish I could have that much faith in people.
Was a true trender-bender
Frick. Happened to me already.
“Serendipity” is a term I’ve seen used for this; possibly it was Venkatesh Rao.
Curated. The wiki pages collected here, despite being written in 2015-2017, remain excellent resources on concepts and arguments for key AI alignment ideas (both those still widely used and those lesser known). I found that even for concepts/arguments like the orthogonality thesis and corrigibility, I felt a gain in crispness from reading these pages. The concept of, e.g., epistemic and instrumental efficiency I didn’t have before, yet it feels useful in thinking about the rise of increasingly powerful AI.
Of course, there’s also non-AI content that got imported. The Bayes guide likely remains the best resource for building Bayes intuition, and the same goes for the extremely thorough guide on logarithms.
I think the guide should be 10x more prominent in this post.
You should see the option when you click on the triple dot menu (next to the Like button).
So the nice thing about karma is that if someone thinks a wikitag is worthy of attention for any reason (article, tagged posts, importance of concept), they’re able to upvote it and make it appear higher.
Much of the current karma comes from Ben Pace and me, who did a pass. Rationality Quotes didn’t strike me as a page I particularly wanted to boost up the list, but if you disagree with me you’re able to Like it.
In general, I don’t think having a lot of tagged posts should mean a wikitag gets ranked highly. It’s a consideration, but I like it flowing via people’s judgments about whether or not to upvote it.
The categorization is an interesting question. Indeed currently only admins can do it and that perhaps requires more thought.
Interesting. Doesn’t replicate for me. What phone are you using?
It’s a compass rose, thematic with the Map and Territory metaphor for rationality/truthseeking.
The real question is why does NATO have our logo.
Curated! I like this post for the object-level interestingness of the cited papers, but also for pulling in some interesting models from elsewhere and generally reminding us that this is something we can do.
In times of yore, LessWrong venerated the neglected virtue of scholarship. And well, sometimes it feels like it’s still neglected. It’s tough because indeed many domains have a lot of low-quality work, especially outside of the hard sciences, but I’d wager on there being a fair amount worth reading, and I appreciate Buck pointing at a domain where that seems to be the case.
Was there the text of the post in the email or just a link to it?
Curated. I was reluctant to curate this post because I found myself bouncing off it somewhat due to length – I guess in pedagogy there’s a tradeoff between explaining at length (you convey enough info but lose people) and keeping it brief (people read it but don’t get enough). Based on a private convo, Raemon thinks the length is warranted.
I’m curating because I do think this kind of project is valuable. Every day it feels easier to lose our minds entirely to AI, and I think it’s important to remember we can think better or worse, and we should be trying to do the former.
I have mixed feelings about Raemon’s project overall. Parts of it feel good, something feels missing (I think I’m partial to John Wentworth’s claim elsewhere that you need a bunch of technical study in the recipe), but I expect the stuff Raemon is developing to be helpful to have engaged with for anyone who wants to get better at thinking.
For me, System 2 explicitly can’t justify being quite that confident, maybe 90-95%, but emotionally 9:1 odds feels very much like “that’s what’s happening.”
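(A quick conversion, just to make the odds-probability correspondence explicit, using the standard odds-to-probability relation:

$$p = \frac{9}{9+1} = 0.9$$

so 9:1 odds sit at the low end of the 90-95% range above.)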