LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon
I was briefly disoriented but it seemed fairly obviously Bumbling Henchmen Duo.
I definitely do not stand by this as either explicit Lightcone policy or my own considered opinion, but, I feel like a bunch of forces on the internet nudge everyone towards the same generic site designs (mobile-first, darkmode ready, etc), and while I agree there is a cost, I do feel actively sad about the tradeoff in the other direction.
(like, there are a lot of websites that don’t have a proper darkmode. And I… just… turn down the brightness if it’s a big deal, which it usually isn’t? I don’t really like most websites turning dark at night. And again, if you set the setting once on LessWrong it should mostly be stable, and I don’t really buy that there are that many people who lose the setting?)
I think the baseline site is pretty fine in darkmode, it’s just that whenever we do artsy illustration stuff it’s only really as-an-afterthought ported to darkmode. So, I think we have at least some preference for people’s first experience of it to be on lightmode so that you at least for-a-bit get a sense of what the aesthetic is meant to be.
(the part where it keeps reverting whenever you lose localstorage does sound annoying, sorry about that)
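(To illustrate why it reverts: a minimal sketch of the usual pattern, assuming a single "theme" key in browser storage. The names here are illustrative, not LessWrong’s actual implementation.)

```typescript
// Minimal interface matching the localStorage methods we use,
// so the sketch doesn't depend on browser DOM types.
interface ThemeStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Read the saved theme, falling back to light when nothing is stored.
function getTheme(store: ThemeStore): "light" | "dark" {
  return store.getItem("theme") === "dark" ? "dark" : "light";
}

// Save the user's choice. If localStorage is later wiped, getTheme
// silently falls back to light -- the reverting behavior described above.
function setTheme(store: ThemeStore, theme: "light" | "dark"): void {
  store.setItem("theme", theme);
}
```

(In a browser you’d pass `window.localStorage`; syncing the preference to a logged-in account server-side is the usual way to survive storage loss.)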
I don’t know how considered this is, but, we’ve built up a lot of the site aesthetic around the light mode (aiming for a “watercolor on paper” vibe) and it is fairly hard to get it to work well on dark mode as well.
(I’m interested in reading this but the lack of line-breaks makes it pretty hard)
Mod note: this is an edge case on frontpaging (which mostly goes off of “is this timeless? would someone still care about this in 4-10 years?”). I think probably analysis of this bill will still be useful to read in the future, but, “a particular bill happening this year” is usually not frontpage.
I might separately criticize shortform video and twitter (sure, they definitely have benefits, I just think they also have major costs, and if we can alleviate the costs we should. This doesn’t have to mean banning shortform and twitter).
But, I think that’s (mostly) a different topic than the OP.
The question here is not “is it good that you can post on twitter?”, it’s “is it good that you can post on the version of twitter that was brought into being by most people using small screens?” (Or, more accurately: is it good that we’re in the world where small-screen twitter is a dominant force shaping humanity, as opposed to an ecosystem where a less-small-screen-oriented social media app is dominant?)
If someone were interested I’d probably be happy to make a version of lesswrong.com/moderation that was more optimized for this.
I think this framing was somewhat new to me and a useful explanation in contrast/in-addition-to The Schelling Choice is “Rabbit”, not “Stag”
Part of the generator was “I’ve seen a demo of apple airpods basically working for this right now” (it’s not, like, 100% silent, you have to speak at a whisper, but, it seemed fine for a room with some background noise)
I think one has to admit that smartphones with limited-attention-space are the revealed modal preference of consumers. It’s not at all clear that this is an inadequate equilibrium to shift, so much as a thing that many consumers actively want.
I do totally agree, this is what the people want. I do concretely say “yep, and the people are wrong”. But, I think the solution is not “ban cell phones” or similar, it’s “can we invent a technology that gives people the thing they want out of smartphones but with fewer bad side effects?”
I doubt it’ll ever be mostly voice interface—there is no current solution to use voice in public without bothering others. It will very likely be hybrid/multi-modal, with different sets of modality for different users/contexts.
Oh ye of little faith about how fast technology is about to change. (I think it’s already pretty easy to do almost-subvocalized messages. I guess this conversation is sort of predicated on it being pre-uploads and maybe pre-ubiquitous neuralink-ish things)
Every now and then I’m like “smart phones are killing America / the world, what can I do about that?”.
Where I mean: “Ubiquitous smart phones mean most people are interacting with websites in a fairly short-attention-space, less info-dense-centric way. Not only that, but because websites must have a good mobile version, you probably want your website to be mobile-first or at least heavily mobile-optimized, and that means it’s hard to build features that only really work when users have a large amount of screen space.”
I’d like some technological solution that solves the problems smartphones solve but somehow changes the default equilibria here, and that has a chance at global adoption.
I guess the answer these days is “prepare for the switch to a fully LLM voice-controlled Star Trek / Her world where you are mostly talking to it” (maybe with a side-option of AR goggles, but I’m less optimistic about that).
I think the default way those play out will be very attention-economy-oriented, and I’m wondering if there’s a way to get ahead of that and build something deeply good that might actually sell well.
This post seems like a good time to relink Critch’s LLM chatbots have ~half of the kinds of “consciousness” that humans believe in. Humans should avoid going crazy about that.
I don’t agree with everything in the post, but, the general point is that humans don’t reliably mean the same thing by “consciousness” (Critch claims this is actually quite common rather than a rare edge case, and that there are 17 somewhat different things people turn out to mean). So, be careful while having this argument.
I suspect this post is more focused on “able to introspect” and “have a self model” which is different from “have subjective experiences”. (You might think those all come bundled together but they don’t have to)
I do not find this post very persuasive though, it looks more like standard maneuvering of LLMs into a position where they are roleplaying “an AI awakening” in basically the usual way that So You Think You’ve Awoken ChatGPT was written to counteract.
(I actually do think AIs have something like self-modeling and maybe some forms of introspection, I just don’t think the evidence in this post is very compelling about it. Or, it’s maybe compelling about self-modeling but not in a very interesting way)
(Quick mod note: we wouldn’t normally accept this sort of comment as a first comment from a new user, but, seems fine for there to be an exception for replies on this particular post)
I don’t think this argument is exactly true – we review and reject like 30 people a day from LW and accept 1-3, and most of the time we aren’t that optimistic about the 1-3, and it’s not that crazy that we switch to the world where we’re just actually pretty selective.
(I think you are nonetheless pointing at an important thing where, when you factor in a variety of goals / resources available, it probably makes more sense to think of LessWrong as a grayspace. Although I think Duncan also thinks, if it were trying on purpose to be a grayspace, there would be more active effort guiding people towards some particular way of thinking/conversing)
Also, Duncan’s written a fair amount about this both in blogposts and comment-back-and-forths, and I’m feeling a bit of sadness that this convo feels like by default it’s going to rehash the Duncan LW Concerns 101 conversation instead of saying something new.
Some recap:
The headline result was obviously going to happen, not an update for anyone paying attention.
I agree with this comment but am kinda surprised you are the one saying it. I realize this isn’t that strong an argument for “LLMs are actually good”, but it happening about-now as opposed to like 6 months later seems like more evidence for them eventually being able to reliably do novel intellectual work.
I roughly think that the previous post was more clearly on the “frontpage side” than this one, and this one is edge-casey. (I’m only one of the mods and we don’t all agree all the time, but for people modeling where the line is in mod-judgment-aggregate, uh, that’s my current take)
By the time we got here I feel like I’ve lost track of what the actual generating models of this disagreement were.
I guess I forgot that the whole reason people seemed to conflate “slow timelines” and “smooth takeoff” was that Paul seemed to believe in both.
My experience has varied with this over time – sometimes, body doubling is just free extra focused work time, and sometimes, it mostly seems to concentrate my focused work time into the beginning of the day and then I crash, but usually that is still preferable because concentration-of-focus lets me do tasks with more things-I-need-to-keep-track-of and serial steps.
I’ve updated the post title to “Buckle up bucko, and get ready for multiple hard cognitive steps”, because it’s what I expect I’ll usually want the link to be when I link to this (so it’s easier to tell at a glance what it means in the context I’m linking to it from). I am considering making slightly more use of “initially name a post something more fun and attention-getting while it’s on the home page, but change the name slightly to something more linkable”.