Do you have a short write-up somewhere about what you want to do and why other people should help you?
I want to gather information about what people care about in AI risk. Other people should help me if they also want to gather information about what people care about in AI risk.
By “people” do you mean “LW people”? If you’re interested in what the world cares about, running polls on LW will tell you nothing useful.
Oh, you’ve not read the document I linked to in the post. I planned to try to get it posted on LW, the EA Forum, and subreddits associated with AI and AI risk.
I looked at that document. I still don’t see why you think you’ll be able to extract useful information out of a bunch of unqualified opinions (and a degree in psychology qualifies someone for AI risk discussions? Really?). And why is the EA Forum relevant to this?
I’m bound to get useful information, as I am only interested in what people think. If you are interested in existential risk reduction, why wouldn’t you be interested in what other people think? Surviving is a team sport.
Someone here recommended the EA Forum for existential risk discussion.
For the same reasons quantum physicists don’t ask the public which experiments they should run next.
Errrr… That really depends X-)
But a quantum research institute that is funded via donations might ask the public which of the many experiments it could run would attract funding. Then it can hire more researchers, answer more questions, build goodwill, etc.
Sure, he who pays the piper calls the tune :-) but I don’t know if that’s a good way to run science. However, if you want to go in that direction, shouldn’t your poll be addressed to potential (large) donors?
If you can get access to them, sure. Convincing smaller donors to donate to you is a good way of not being too dependent on the big ones, and it also lets you show a broad support base to the larger donors.