Thanks, I’d be interested in @Matthew Barnett’s response.
A Qualitative Case for LTFF: Filling Critical Ecosystem Gaps
Interesting! I hadn’t considered that angle.
Agreed. I was trying to succinctly convey something that I think is underrated, so it unfortunately misses some nuances.
If the means/medians are higher, the tails are (usually) higher as well.
A Norm(μ=115, σ=15) distribution will have a much lower proportion of data points above 150 than a Norm(μ=130, σ=15) one. The same argument applies to other realistic distributions. So if all I know about fields A and B is that B has a much lower mean than A, by default I’d also assume B has a much lower 99th percentile than A, and a much lower percentage of people above some “genius” cutoff.
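A quick sanity check of that arithmetic (a minimal sketch using scipy; the 150 cutoff and the two normal distributions are just the ones from the example above):

```python
from scipy.stats import norm

# Proportion of each distribution above a "genius" cutoff of 150
cutoff = 150
p_a = norm(loc=130, scale=15).sf(cutoff)  # field A: mean 130, sd 15
p_b = norm(loc=115, scale=15).sf(cutoff)  # field B: mean 115, sd 15

print(f"P(X > {cutoff}) for N(130, 15): {p_a:.4f}")  # ~0.0912
print(f"P(X > {cutoff}) for N(115, 15): {p_b:.4f}")  # ~0.0098
print(f"ratio: {p_a / p_b:.1f}x")                    # ~9x
```

Under the normality assumption, a 15-point gap in means translates into roughly a 9x difference in the share of people above the cutoff.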
Again using the replication crisis as an example: you may have noticed the very wide (1 SD or more) average IQ gap between students in most fields that turned out to have terrible replication rates and students in most fields that turned out to have fine replication rates.
This is rather weak evidence for your claim (“memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers”), unless you additionally posit another mechanism like “fields with terrible replication rates have a higher standard deviation than fields without them” (why?).
Some people I know are much more pessimistic about the polls this cycle, due to herding. For example, nonresponse bias might just be massive for Trump voters (across demographic groups), so pollsters end up having to make a series of unprincipled choices with their thumbs on the scales.
There’s also a comic series with exactly this premise; unfortunately, it’s a major plot point, so revealing it would be a spoiler:
Yeah this was my first thought halfway through. Way too many specific coincidences to be anything else.
Constitutionally protected free speech; efforts opposing it were explicitly ruled unconstitutional.
God, LW standards sure are slipping. Eight years ago people would be geeking out about the game-theory implications, commitments, decision theory, alternative voting schemes, etc. These days, after the first two downvotes, it’s just all groupthink, partisan drivel, and people making shit up, apparently.
See Scott Aaronson and Julia Galef on “vote trading” in 2016: https://www.happyscribe.com/public/rationally-speaking-podcast/rationally-speaking-171-scott-aaronson-on-the-ethics-and-strategy-of-vote-trading
My guess is that we wouldn’t actually know with high confidence before (and likely even some time after) things-will-definitely-be-fine.
E.g. 3 months after safe ASI people might still be publishing their alignment takes.
There are also times when “foreign actors” (I assume by that term you mean actors interested in muddying the waters in general, not just literal foreign election interference) know that it’s impossible to push a conversation towards their preferred 1) A or 5) B, at least among informed/educated voices, so they try to muddy the waters and push things towards 3). Climate change[1] and covid vaccines are two examples that come to mind.
[1] Though the correct answer for climate change is closer to 2) than 1).
They were likely using techniques inferior to RLHF to implement ~Google corporate standards; not sure what you mean by “ethics-based.” Presumably they have different ethics than you (or LW) do, but intent alignment has always been about doing what the user/operator wants, not about solving ethics.
I’m not suggesting that the short argument should resolve those background assumptions; I’m suggesting that a good argument for people who don’t share those assumptions roughly entails understanding someone else’s assumptions well enough to speak their language and craft a persuasive and true argument on their terms.
Reverend Thomas Bayes didn’t strike me as a genius either, but of course the bar was a lot lower back then.
Norman Borlaug (father of the Green Revolution) didn’t come across as very smart to me. Reading his Wikipedia page, there didn’t seem to be notable early-childhood signs of genius, or anecdotes about how bright he was.
AI news so far this week:
1. Mira Murati (CTO) leaving OpenAI
2. OpenAI restructuring to be a full for-profit company (what?)
3. Ivanka Trump calls Leopold’s Situational Awareness article “excellent and important read”
4. More OpenAI leadership departing, unclear why.
4a. Apparently sama only learned about Mira’s departure the same day she announced it on Twitter? “Move fast” indeed!
4b. WSJ reports some internals of what went down at OpenAI after the Nov board kerfuffle.
5. California Federation of Labor Unions (2 million+ members) spoke out in favor of SB 1047.
Mild spoilers for a contemporary science-fiction book, but the second half was a major plot point in The Dark Forest, the sequel to Three-Body Problem.
Wow. So many of the sociological predictions came true, even though afaict the technical predictions are (thankfully) lagging behind.