Does Eliezer assign lots of probability mass to a particular failure mode, or is his probability mass spread fairly evenly across many failure modes? His answer seems a bit overconfident to me for a question that hinges on the actions of squishy, unpredictable humans.
Sergey Brin, an apparently smart person who has actually met politicians (unlike anyone quoted here?), says the ones he has met are “invariably thoughtful, well-meaning people” whose main problem is that “90% of their effort seems to be focused on how to stick it to the other party”. So it could matter a lot how the issue ends up getting framed. Which issues does the government seem to deal with most intelligently, and how can we get FAI treated like those issues?
Nate Silver’s book *The Signal and the Noise* discusses the work of government weather forecasters, earthquake researchers, and disease researchers, and seems to give them generally positive reviews.
Some publicly funded universities do important and useful research.
My dad told me a story about a group of quants hired by the city of New York to build a model predicting which buildings needed visits from the city fire inspector, with impressive results. Here’s an article I found while trying to track down that anecdote. (Oh wait, maybe this is it?)
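For concreteness, here’s a minimal sketch of the general shape such a model might take: score each building’s fire risk and send inspectors to the highest-scoring buildings first. This is not NYC’s actual system; every feature, label, and number below is synthetic and hypothetical.

```python
# A minimal sketch (not NYC's actual system) of a fire-risk model that
# ranks buildings for inspection. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-building features: age in years, count of past code
# violations, number of units, and a 0/1 flag for a commercial ground floor.
X = np.column_stack([
    rng.uniform(0, 100, n),   # building age
    rng.poisson(2, n),        # past code violations
    rng.integers(1, 50, n),   # number of units
    rng.integers(0, 2, n),    # commercial ground floor
]).astype(float)

# Hypothetical labels: 1 if a fire incident occurred last year. In this
# synthetic world, true risk is driven by age and violations, plus noise.
y = (X[:, 0] / 100 + X[:, 1] / 4 + rng.normal(0, 0.4, n) > 1.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Rank buildings by predicted risk; inspect the top of the list first.
risk = model.predict_proba(X)[:, 1]
inspection_order = np.argsort(risk)[::-1]
print("First ten buildings to inspect:", inspection_order[:10])
```

The appeal of this kind of system for a city government is that even a crude ranking beats inspecting buildings in arbitrary order, so long as the features actually correlate with risk.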
I like the Bureau of Labor Statistics’ Occupational Outlook Handbook, but it’s hard to know how accurate it is.
Does anyone have more examples? Note that I’ve reframed the problem from “elites and AI” to “government and AI”; I’m not sure whether that’s a good reframing.
One guess: arrange for the part of the government concerned with AI to be a boring department staffed with PhDs in a technical subject for which working in that department is one of the best careers available to someone with that PhD (math?). IMO, the intelligence of government workers probably matters more than the fact that they are government workers, and there are factors we can influence that determine it. This strategy (a variant of “can’t beat ’em? join ’em”) would probably work better in a scenario where there is no “AI Sputnik moment”. (BTW, can we expect politicians to weigh the opinions of government experts over those of experts with private-sector or nonprofit jobs?)
Here’s an interesting government department. I wonder how hard it would be for a few of us Less Wrong users to get jobs there.