OP did the work to collect these emails and put them into a post. When people do work for you, you shouldn’t punish them by giving them even more work.
I’ve only read a little bit of Martin Gardner, but he might be the Matt Levine of recreational math.
Many newspapers have a (well-earned) reputation for not technically lying.
Thank you, this information was useful for a project I’m working on.
I don’t think I understand what “learn to be visibly weird” means, or how it differs from not following social conventions simply because you misunderstand them.
I was recently looking into donating to CLTR and I’m curious why you’re excited about it. My sense was that little of its work was directly relevant to x-risk (for example, this report on disinformation is essentially useless for preventing x-risk AFAICT), and the work that was relevant seemed either not good or possibly counterproductive. For example, their report on “a pro-innovation approach to regulating AI” seemed bad to me on two counts:
There is a genuine tradeoff between accelerating AI-driven innovation and decreasing x-risk. So to the extent that this report’s recommendations support innovation, they increase x-risk, which makes this report net harmful.
The report’s recommendations are kind of vacuous, e.g. they recommend “reducing inefficiencies”, like yes, this is a fully generalizable good thing but it’s not actionable.
(So basically I think this report would be net negative if it weren’t vacuous, but because it is vacuous, it’s net neutral.)
This is the sense I get as someone who doesn’t know anything about policy and is just trying to get a sense of orgs’ work by reading their websites.
My perspective is that I’m much more optimistic about policy than about technical research, and I don’t really feel qualified to evaluate policy work, and LTFF makes almost no grants on policy. I looked around and I couldn’t find any grantmakers who focus on AI policy. And even if they existed, I don’t know that I could trust them (like I don’t think Open Phil is trustworthy on AI policy and I kind of buy Habryka’s arguments that their policy grants are net negative).
I’m in the process of looking through a bunch of AI policy orgs myself. I don’t think I can do a great job of evaluating them but I can at least tell that most policy orgs aren’t focusing on x-risk so I can scratch them off the list.
if you think the polling error in 2024 remains unpredictable / the underlying distribution is unbiased
Is there a good reason to think that, given that polls have recently under-reported Republican votes?
I don’t know how familiar you are with regular expressions, but you could do this with a two-pass regular expression search-and-replace. (I used Emacs regex format; your preferred editor might use a different format. Notably, in Emacs [ is a literal bracket but ( is a literal parenthesis, for some reason.)
replace “^(https://.*? )([[.*?]] )*” with “\1”
replace “[[(.*?)]]” with “\1”
This first deletes any tags that occur right after a hyperlink at the beginning of a line, then removes the brackets from any remaining tags.
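If it helps, here’s a rough sketch of the same two-pass cleanup as a small TypeScript function (the function name and sample input are made up for illustration; the patterns are just the two regexes above translated into JavaScript syntax):

```typescript
function stripTags(text: string): string {
  return (
    text
      // Pass 1: delete [[...]] tags that come right after a hyperlink at the
      // start of a line (the "m" flag makes ^ match at every line start).
      .replace(/^(https:\/\/.*? )(\[\[.*?\]\] )*/gm, "$1")
      // Pass 2: unwrap any remaining [[...]] tags, keeping their contents.
      .replace(/\[\[(.*?)\]\]/g, "$1")
  );
}

// Hypothetical example:
const sample = "https://example.com/post [[ai]] [[policy]] notes\nSee [[safety]] for details.";
console.log(stripTags(sample));
// https://example.com/post notes
// See safety for details.
```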
RE Shapley values, I was persuaded by this comment that they’re less useful than counterfactual value in at least some practical situations.
(2) have “truesight”, i.e. a literally superhuman ability to suss out the interlocutor’s character
Why do you believe this?
If your goal is to influence journalists to write better headlines, then it matters whether the journalist has the ability to take responsibility over headlines.
If your goal is to stop journalists from misrepresenting you, then it doesn’t actually matter whether the journalist has the ability to take responsibility; all that matters is whether they do take responsibility.
Often, you write something short that ends up being valuable. That doesn’t mean you should despair about your longer and harder work being less valuable. Like if you could spend 40 hours a week writing quick 5-hour posts that are as well-received as the one you wrote, that would be amazing, but I don’t think anyone can do that because the circumstances have to line up just right, and you can’t count on that happening. So you have to spend most of your time doing harder and predictably-less-impactful work.
(I just left some feedback for the mapping discussion post on the post itself.)
Some feedback:
IMO this project was a good use of your time ex ante.[1] Unclear if it will end up being actually useful but I think it’s good that you made it.
“A new process for mapping discussions” is kind of a boring title and IMO does not accurately reflect the content. It’s mapping beliefs more so than discussions. Titles are hard, but my first idea for a title would be “I made a website that shows a graph of what public figures believe about SB 1047”
I didn’t much care about the current content because it’s basically saying things I already knew (like, the people pessimistic about SB 1047 are all the usual suspects—Andrew Ng, Yann LeCun, a16z).
If I cared about AI safety but didn’t know anything about SB 1047, this site would have led me to believe that SB 1047 was good because all the AI safety people support it. But I already knew that AI safety people supported SB 1047.
In general, I don’t care that much about what various people believe. It’s unlikely that I would change my mind based on seeing a chart like the ones on this site.[2] Perhaps most LW readers are in the same boat. I think this is the sort of thing journalists and maybe public policy people care more about.
I have changed my mind based on opinion polls before. Specifically, I’ve changed my mind on scientific issues based on polls of scientists showing that they overwhelmingly support one side (e.g. I used to be anti-nuclear power until I learned that the expert consensus went the other way). The surveys on findingconsensus.ai are much smaller and less representative.
[1] At least that’s my gut feeling. I don’t know you personally but my impression from seeing you online is that you’re very talented and therefore your counterfactual activities would have also been valuable ex ante, so I can’t really say that this was the best use of your time. But I don’t think it was a bad use.
[2] Especially because almost all the people on the side I disagree with are people I have very little respect for, e.g. a16z.
This is a good and important point. I don’t have a strong opinion on whether you’re right, but one counterpoint: AI companies are already well-incentivized to figure out how to control AI, because (as Wei Dai said) controllable AI is more economically useful. It makes more sense for nonprofits / independent researchers to do work that AI companies wouldn’t do otherwise.
If Open Phil is unwilling to fund some/most of the best orgs, that makes earning to give look more compelling.
(There are some other big funders in AI safety like Jaan Tallinn, but I think all of them combined still have <10% as much money as Open Phil.)
I should add that I don’t want to dissuade people from criticizing me if I’m wrong. I don’t always handle criticism well, but it’s worth the cost to have accurate beliefs about important subjects. I knew I was gonna be anxious about this post but I accepted the cost because I thought there was a ~25% chance that it would be valuable to post.
A few people (e.g. Habryka, or previously Benquo and Jessicata) make it their thing to bring up concerns frequently.
My impression is that those people are paying a social cost for how willing they are to bring up perceived concerns, and I have a lot of respect for them because of that.
Thanks for the reply. When I wrote “Many people would have more useful things to say about this than I do”, you were one of the people I was thinking of.
AI Impacts wants to think about AI sentience and OP cannot fund orgs that do that kind of work
Related to this, I think GW/OP has always been too unwilling to fund weird causes, but it’s generally gotten better over time: originally recommending US charities over global poverty b/c global poverty was too weird, taking years to remove their recommendations for US charities that were ~100x less effective than their global poverty recs, then taking years to start funding animal welfare and x-risk, then still not funding weirder stuff like wild animal welfare and AI sentience. I’ve criticized them for this in the past but I liked that they were moving in the right direction. Now I get the sense that recently they’ve gotten worse on AI safety (and weird causes in general).
Do you think a 3-state dark mode selector is better than a 1-state selector (where “auto” is the only state)? My website is 1-state, on the assumption that auto will work for almost everyone and it lets me skip the UI clutter of having a lighting toggle that most people won’t use.
Also, I don’t know if the site has been updated, but it looks to me like turntrout.com’s two modes aren’t dark and light; they’re auto and light. When I set Firefox’s appearance to dark or auto, turntrout.com’s dark mode appears dark, but when I set Firefox to light, turntrout.com appears light. turntrout.com’s light mode appears to be light regardless of my Firefox setting.
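For reference, here’s a minimal sketch of the “auto”-only approach (an illustration, not turntrout.com’s or my site’s actual implementation). With plain CSS you’d just use an @media (prefers-color-scheme: dark) block; the TypeScript below is only needed if you want to react to the preference from script:

```typescript
// Follow the OS/browser appearance preference with no user-facing toggle.
const darkQuery = window.matchMedia("(prefers-color-scheme: dark)");

function applyTheme(prefersDark: boolean): void {
  // Hypothetical convention: the CSS keys its colors off [data-theme="dark"/"light"].
  document.documentElement.dataset.theme = prefersDark ? "dark" : "light";
}

// Apply once on load, then update if the user changes their system/browser setting.
applyTheme(darkQuery.matches);
darkQuery.addEventListener("change", (e) => applyTheme(e.matches));
```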