Steven K
steven0461
“Broekveg” should be “Broekweg”
partly as a result of other projects like the Existential Risk Persuasion Tournament (conducted by the Forecasting Research Institute), I now think of it as a data-point that “superforecasters as a whole generally come to lower numbers than I do on AI risk, even after engaging in some depth with the arguments.”
I participated in the Existential Risk Persuasion Tournament, and I disagree that most superforecasters in that tournament engaged in any depth with the arguments. I also disagree with the characterization “even after engaging in some depth with the arguments”—barely any arguing happened, at least in my subgroup. I think much less effort went into these estimates than it would be natural to assume based on how the tournament has been written about by EAs, journalists, and so on.
Thanks, yes, this is a helpful type of feedback. We’ll think about how to make that section clearer to readers without background knowledge. The site is aimed at all audiences, which means navigating tradeoffs between leaving gaps in the justification of claims, making the text too long, and not covering enough to serve as an overview. In this case, it does look like we could err on the side of adding a bit more text and a few links. Your point about the glossary sounds reasonable and I’ll pass it along. (I guess the tradeoff there is that people might see an unexplained term and not realize that an earlier instance of it had a glossary link.)
You’re right that it’s confusing, and we’ve been planning to change how collapsing and expanding works. I don’t think specifics have been decided on yet; I’ll pass your ideas along.
I don’t think there should be “random” tabs, unless you mean the ones that appear from the “show more questions” option at the bottom. In some cases, the content of child questions may not relate in an obvious way to the content of their parent question. Is that what you mean? If questions are appearing despite neither 1) being linked anywhere below “Related” in the doc corresponding to the question that was expanded, nor 2) being left over from a different question that was expanded earlier, then I think that’s a bug, and I’d be interested in an example.
Quoting from our Manifund application:
We have received around $46k from SHfHS and $54k from LTFF, both for running content writing fellowships. We have been offered a $75k speculation grant from Lightspeed Grants for an additional fellowship, and made a larger application to them for the dev team which has not been accepted. We have also recently made an application to Open Philanthropy.
EA Forum version (manually crossposting to make coauthorship work on both posts):
Stampy’s AI Safety Info soft launch
If there’s interest in finding a place for a few people to cowork on this in Berkeley, please let me know.
Thanks, I made a note on the doc for that entry and we’ll update it.
Traffic is pretty low currently, but we’ve been improving the site during the distillation fellowships and are hoping to make more of a real launch soon. And yes, people are working on a Stampy chatbot. (The current early prototype isn’t fine-tuned on Stampy’s Q&A; instead, it searches the alignment literature and passes relevant passages into a GPT context window.)
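For readers wondering what that looks like in practice, here is a minimal, hypothetical sketch of the general retrieve-then-prompt pattern, not the prototype’s actual code. The corpus, the word-overlap scoring, and the ask_language_model placeholder are all illustrative assumptions.

```python
# Hypothetical sketch of a retrieve-then-prompt chatbot: pick the passages
# most relevant to the question, then place them in the prompt ("context
# window") handed to a language model. Not the actual prototype's code.

def score(query: str, passage: str) -> int:
    """Toy relevance score: how many query words appear in the passage."""
    return sum(word in passage.lower() for word in set(query.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages that score highest against the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the retrieved passages and the question into one prompt."""
    return "Context:\n" + "\n\n".join(passages) + f"\n\nQuestion: {query}\nAnswer:"

def ask_language_model(prompt: str) -> str:
    """Placeholder for a call to a language model API."""
    return "(model response would go here)"

corpus = [
    "Instrumental convergence: many different goals imply similar subgoals.",
    "Reward hacking: optimizing a proxy measure can diverge from the intended goal.",
]
question = "What is reward hacking?"
print(ask_language_model(build_prompt(question, retrieve(question, corpus))))
```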
Yes, but we decided to reschedule it before making the announcement. Apologies to anyone who found the event in some other way and was planning on it being around the 11th; if Aug 25-27 doesn’t work for you, note that there’s still the option to participate early.
Since somebody was wondering if it’s still possible to participate without having signed up through alignmentjam.com:
Yes, people are definitely still welcome to participate today and tomorrow, and are invited to head over to Discord to get up to speed.
AISafety.info “How can I help?” FAQ
Announcing AISafety.info’s Write-a-thon (June 16-18) and Second Distillation Fellowship (July 3-October 2)
All AGI Safety questions welcome (especially basic ones) [May 2023]
Stampy’s AI Safety Info is a little like that in that it has 1) pre-written answers, 2) a chatbot under very active development, and 3) a link to a Discord with people who are often willing to explain things. But it could probably be more like that in some ways, e.g. if more people who were willing to explain things were habitually in the Discord.
Also, I plan to post the new monthly basic AI safety questions open thread today (edit: here), which is also a little like that.
I tried to answer this here.
Anonymous #7 asks:
I am familiar with the concept of a utility function, which assigns numbers to possible world states and considers larger numbers to be better. However, I am unsure how to apply this function in order to make decisions that take time into account. For example, we may be able to achieve a world with higher utility over a longer period of time, or a world with lower utility but in a shorter amount of time.
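One common way to formalize this (an illustrative framing on my part, not necessarily the one any particular answer uses) is to define utility over whole trajectories of world states rather than single states, for example as a discounted sum:

$$U(s_0, s_1, s_2, \dots) = \sum_{t=0}^{\infty} \gamma^{t}\, u(s_t), \qquad 0 < \gamma \le 1$$

Here $u(s_t)$ is the utility of the world state at time $t$ and $\gamma$ is a discount factor. Under this framing, “higher utility for a shorter time” and “lower utility for a longer time” are compared by summing over the whole trajectory, so which option comes out ahead depends on $\gamma$ and on how long each situation lasts.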
Anonymous #6 asks:
Why hasn’t an alien superintelligence within our light cone already killed us?
As I understand it, the Metaculus crowd forecast performs as well as it does (relative to individual predictors) in part because it gives greater weight to more recent predictions. If “superhuman” just means “superhumanly up-to-date on the news”, it’s less impressive for an AI to reach that level if it’s also up-to-date on the news when its predictions are collected. (But to be confident that this point applies, I’d have to know the details of the research better.)
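To illustrate the recency-weighting point (this is a guess at the general mechanism, not Metaculus’s actual aggregation algorithm), here is a toy aggregator in which a prediction’s weight halves for every fixed number of days of age:

```python
def recency_weighted_forecast(predictions, half_life_days=30.0):
    """Aggregate (days_ago, probability) pairs, halving each prediction's
    weight for every half_life_days of age. A toy model, not Metaculus's
    actual crowd forecast."""
    weights = [0.5 ** (days_ago / half_life_days) for days_ago, _ in predictions]
    probs = [p for _, p in predictions]
    return sum(w * p for w, p in zip(weights, probs)) / sum(weights)

# The 1-day-old prediction dominates the stale ones, pulling the aggregate up.
print(recency_weighted_forecast([(1, 0.9), (60, 0.2), (120, 0.2)]))  # ~0.73
```

The point is just that an aggregate like this stays “up-to-date” largely because its newest inputs dominate, which is why comparing it against an AI that also sees the latest news takes some care.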