Open and Welcome Thread – August 2021
If it’s worth saying, but not worth its own post, here’s a place to put it.
If you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don’t want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
New member here.
I happen (outside of this community) to already be friends with Eric Raymond (who posts here upon occasion) and I’ve met (once) Scott Alexander. I expect to make some more new online (and hopefully F2F) friends.
I have spent most of my adult lifetime trying to think more clearly, more ‘rationally’. Hope being here helps!
Being able to comment on our posts before they’re published (i.e., on drafts) would be nice. Sometimes I want to add a note in a comment but can’t do that until the post is published.
[Edit: I was misunderstanding the parent comment, sorry, see reply.] I (and I think a lot of people) generally write, revise, and solicit comments in Google Docs, and then copy-paste to LessWrong at the end. Copy-paste from Google Docs into the LessWrong editor works great; it preserves the formatting almost perfectly. There are just a couple of little things that you need to do manually after copy-pasting.
This isn’t what I meant here. I meant being able to have a comment of our own on the post before it’s published, a bit like what YouTubers do.
oh, oops, sorry :-P
In that case, I agree, that’s a reasonable suggestion.
Our World in Data offers a free download of their big Covid-19 dataset. It’s got data on lots of things, including cases, deaths, and vaccines (full list of columns here), all broken down by country and date; i.e., each row corresponds to one (country, date) pair, with dates ranging from 2020-02-24 to 2021-08-20 for each country, with a step size of one day.
Is there any not-ultra-complicated way to demonstrate vaccine effectiveness from this dataset? I.e., is there any way to measure the effect such that you would be confident predicting its direction ahead of time? (E.g., something like: for date Z, plot all countries by x = % vaccinated and y = # of cases, and measure the correlation; but you can make it reasonably more complicated than this by controlling for a handful of variables or something.)
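To make the simple version concrete, here is roughly the calculation I have in mind. This is a minimal sketch, assuming the public OWID CSV download and column names taken from their codebook (e.g. people_fully_vaccinated_per_hundred and new_cases_smoothed_per_million); the exact names and the chosen date are my guesses, so check the column list first.

```python
# Minimal sketch of the cross-country snapshot idea above.
# Assumptions (mine, not guaranteed): the public OWID CSV URL below, and the
# columns people_fully_vaccinated_per_hundred, new_cases_smoothed_per_million,
# and continent, as listed in the OWID codebook.
import pandas as pd

URL = "https://covid.ourworldindata.org/data/owid-covid-data.csv"
df = pd.read_csv(URL, parse_dates=["date"])

# Take a single date Z and keep one row per country; rows with a missing
# `continent` are aggregates like "World" or "Europe", so drop them.
snapshot = df[(df["date"] == "2021-08-20") & df["continent"].notna()]
snapshot = snapshot.dropna(subset=["people_fully_vaccinated_per_hundred",
                                   "new_cases_smoothed_per_million"])

# Rank (Spearman) correlation is less sensitive to outliers than Pearson.
rho = snapshot["people_fully_vaccinated_per_hundred"].corr(
    snapshot["new_cases_smoothed_per_million"], method="spearman")
print(f"countries: {len(snapshot)}, Spearman rho: {rho:.2f}")
```

A slightly more careful variant could use deaths instead of cases (less dependent on testing rates) and control for a handful of covariates such as median_age or gdp_per_capita, which I believe are also columns in the dataset; but none of this turns the correlation into a clean causal estimate.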
What do you mean by “demonstrate vaccine effectiveness”? My instinct is that it’s going to be ~impossible to prove a causal result in a principled way just from this data. (This is different from how hard it will be to extract Bayesian evidence from the data.)
For intuition, consider the hypothesis that countries can (at some point after February 2020) unlock Blue Science, which decreases cases and deaths by a lot. If the time to develop and deploy Blue Science is sufficiently correlated with the time to develop and deploy vaccines (and the common component can’t be measured well), it won’t be possible to distinguish the causal effectiveness of vaccines from the causal effectiveness of Blue Science.
(A Bayesian would draw some update even from an uncontrolled correlation, so if you want the Bayesian answer, the real question is “how much of an update do you want to demonstrate (and assuming what prior)?”)
I mean something like “a result that would constitute a sizeable Bayesian update to a perfectly rational but uninformed agent”. Think of someone who has never heard much about those vaccine thingies going from 50/50 to 75/25, that range.
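For concreteness, here is the arithmetic behind that target range (my own framing of the numbers above): moving from 50/50 to 75/25 is moving from 1:1 odds to 3:1 odds, so the analysis would need to yield evidence with a likelihood ratio of roughly 3 under “vaccines work” versus “they don’t”:

$$
\frac{P(H\mid E)}{P(\neg H\mid E)}
= \frac{P(H)}{P(\neg H)}\cdot\frac{P(E\mid H)}{P(E\mid \neg H)}
= \frac{0.5}{0.5}\cdot 3 = 3
\quad\Rightarrow\quad
P(H\mid E)=\frac{3}{1+3}=0.75
$$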
Aren’t Open Threads made obsolete by Shortforms?
One advantage of an Open Thread over a Shortform is the periodic reset: an Open Thread may accumulate hundreds of comments (as used to happen in the past), but then a new one is created and the debate starts anew.
When some Shortforms get hundreds of comments, using them may become inconvenient. Maybe at some point a mechanism to reset a Shortform will be needed: basically, just create a new one and make the “new shortform” button write into it. This could happen automatically, e.g. when the total number of comments exceeds some predefined threshold.
But… returning to my meta point… I could also have written this in my Shortform. Is there any advantage to writing it here?
I doubt that the introduction posts that regularly get written in this thread would be written at all if the thread didn’t exist.
Shortform posts currently feel more like they are your personal space with your own norms, that someone could visit. The Open Thread feels more like a town plaza where you are acting on shared norms, where you expect things to get a bit more visibility, but also are generally acting in a more shared social and intellectual context.
It’s not obvious that this distinction justifies having both of them, but both features continue to get usage, and I personally often have a sense that a specific comment/post is more Shortform-shaped or more Open Thread-shaped.
Maybe a stupid question, but how do I access other people’s shortforms? This is the first time I’m hearing of them.
If you go to their profiles, you might see their “X’s shortform post”. Alternatively, go to www.lesswrong.com/shortform
Shortform posts show up on the frontpage in the recent discussion section, and can be visited from people’s profiles if they’ve created at least one shortform post. All shortform posts are listed as just one post in their post-list.
They are also visible in the All-Posts page.
You probably use your shortform more, so it might get more attention as a comment here.
I’m new here. I wanted to ask: are there any specific proposed regulations for AI governance? Or any other type of proposed solution?
Is there a good case for the usefulness (or uselessness) of brain-computer interfaces in AI alignment (à la Neuralink etc.)? I’ve searched around a bit, but there seems to be no write-up for the path to making AI go well using BCIs.
Edit: Post about this is up.
Maybe if we could give a human more (emulated) cortical columns without also making him insane in the process, we’d end up with a limited superintelligence who maybe isn’t completely Friendly, but also isn’t completely alien to human values. If we just start with the computer, all bets are off. He might still go insane later though. Arms race scenarios are still a concern. Reckless approaches might make hybrid intelligence sooner, but they’d also be less stable. The end result of most unfriendly AIs is that all the humans are dead. It takes a perverse kind of near-miss to get to the hellish, worse-than-death scenarios; an unFriendly AI that doesn’t just kill us. A crazy hybrid might be that.
If the smartest of humans could be made just a little smarter, maybe we could solve the alignment problem before AI goes FOOM. Otherwise, the next best approach seems to involve somehow getting the AI to solve the problem for us, without killing everyone (or worse) in the meantime. Of course, that’s only if they’re working on alignment, and not just improving AI.
If the Borg Collective becomes the next Facebook, then at least we’re not all dead. Unfortunately, an AI trying to FOOM on a pure machine substrate would still outcompete us poor meat brains.
Well, it might make it easier for someone to steal your credit card info if you’re wearing one of these headsets.
I don’t know of any writeup, and I do think it would be great for someone to make one. I’ve definitely discussed this for many hours with people over the years.
Some related tags that might cover some of the material in this space:
https://www.lesswrong.com/tag/neuromorphic-ai
https://www.lesswrong.com/tag/brain-computer-interfaces
But overall it doesn’t look like there are any posts that really cover the AI Alignment angle.