Some feedback:
IMO this project was a good use of your time ex ante.[1] Unclear if it will end up being actually useful but I think it’s good that you made it.
“A new process for mapping discussions” is kind of a boring title, and IMO it does not accurately reflect the content: it’s mapping beliefs more so than discussions. Titles are hard, but my first idea for a title would be “I made a website that shows a graph of what public figures believe about SB 1047”.
I didn’t much care about the current content because it’s basically saying things I already knew (like, the people pessimistic about SB 1047 are all the usual suspects—Andrew Ng, Yann LeCun, a16z).
If I cared about AI safety but didn’t know anything about SB 1047, this site would have led me to believe that SB 1047 was good because all the AI safety people support it. But I already knew that AI safety people supported SB 1047.
In general, I don’t care that much about what various people believe. It’s unlikely that I would change my mind based on seeing a chart like the ones on this site.[2] Perhaps most LW readers are in the same boat. I think this is the sort of thing journalists and maybe public policy people care more about.
I have changed my mind based on opinion polls before. Specifically, I’ve changed my mind on scientific issues based on polls of scientists showing that they overwhelmingly support one side (e.g. I used to be anti-nuclear power until I learned that the expert consensus went the other way). The surveys on findingconsensus.ai are much smaller and less representative.
[1] At least that’s my gut feeling. I don’t know you personally but my impression from seeing you online is that you’re very talented and therefore your counterfactual activities would have also been valuable ex ante, so I can’t really say that this was the best use of your time. But I don’t think it was a bad use.
[2] Especially because almost all the people on the side I disagree with are people I have very little respect for, eg a16z.
Thanks, appreciated.
Yeah, I feel the same way about being personally disinterested in the content. I am already perhaps-problematically overfocused on following opinions/arguments/news about AI. I am clearly not the target audience for something like this. Nor am I personally engaged in trying to educate/persuade people who know little enough about the issues that they would be the target audience.

So, while I certainly approve of the idea, I suspect that LessWrong readers are mostly not your target audience, and at least some are probably also in my boat of not even being all that interested in persuading/educating the people who would be the audience.
So my guess is that this explains the lack of an enthusiastic response. I don’t think that’s a sign that it’s a bad project, though!