High Failure-Rate Solutions
A short interview; it doesn’t go into much depth, but it makes an interesting point relevant to LW:
“When you’re mitigating a complex problem, you become a bad picker. It’s too complicated, so you can’t separate the ideas that are going to work and the ones that won’t. [...] But if we were gonna use the venture-capital method – if we were willing to admit we weren’t competent pickers and even all the wise men gathering in the Oval Office were not going to be able to pick the winner from the loser – we would have gone in there with 30 ways of plugging up that hole at one time and realized that maybe 29 were gonna fail and one was going to stop the ecological disaster. But that’s not what we did.”
Are there any potential solutions other than “create the first GAI and ensure it’s provably Friendly” that can be advanced simultaneously?
That quote completely ignores the risk that each ‘solution’ might worsen the situation. The venture-capital method only works because of limited liability.
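To put rough numbers on that (every figure below is an illustrative assumption of mine, not anything from the interview): when each failure costs at most the stake, a portfolio of mostly-failing bets can still come out ahead in expectation, but if each failure can also worsen the disaster, the same portfolio is ruinous.

```python
# Illustrative sketch; every number here is an assumption, not data.
n = 30            # parallel attempts, as in the quote
p = 1 / 30        # assumed chance that any single attempt succeeds
stake = 1.0       # cost of one failed attempt, capped by limited liability
payoff = 100.0    # assumed return if an attempt succeeds

# Downside capped at the stake: the portfolio has positive expected value
# even though roughly 29 of the 30 attempts are expected to fail.
ev_capped = n * (p * payoff - (1 - p) * stake)
print(f"capped-downside EV:   {ev_capped:+.1f}")    # about +71

# Now let each failed attempt also worsen the disaster. Even a modest
# assumed per-failure harm swamps the upside.
harm = 20.0       # assumed extra damage a botched fix inflicts
ev_uncapped = n * (p * payoff - (1 - p) * (stake + harm))
print(f"uncapped-downside EV: {ev_uncapped:+.1f}")  # about -509
```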
Off the top of my head:
- Take over the world (starting with taking over science with rationality, then technology, then the socioeconomic sphere).
- Help people figure out what they want (with rationality, happiness psychology, cognitive behavioral therapy, sleek Buddhism) so that they can get it themselves.
- Give people what they probably want but can’t get (with nanotech, biotech, tech generally, new communities, better dating sites, rationality dojos, dietary advice, et cetera).
- Make people smarter so they can help do all of those things (with rationality, cognitive enhancement, better institutions, better technology, et cetera).
If you aren’t doing one of these things (or something better I missed), you should probably be optimizing your life for maximum fun (rich, diverse happiness) instead. There are smart ways to do that, too. That said, optimizing for fun and optimizing for being a Bodhisattva can look a lot like each other, if you do it right.
Not very impressed by the content of the interview. We’ve lost the solar panel and wind turbine market to the Chinese; is there a particular reason we should be troubled by that? Oh no, we don’t subsidize unprofitable technology enough! “We” are running airplanes into buildings; wait, I thought this was about America’s failures? If you’re willing to accept Middle Easterners as part of “we,” why not the Chinese, who are actually tied to us much more closely? The problem with Dawkins’ view is that it doesn’t allow for altruism. Money is bad because of greed.
When someone makes that many questionable conclusions, I have to wonder where they’re coming from. The BP reference is just bizarre; I strongly suspect that if BP could have tried multiple things simultaneously, they would have. (This is actually what they did, if I recall correctly: drilling relief wells while doing things at the top, and switching around which things they tried at the top.)
I looked at the paragraph you’re talking about, and it was all about focusing on where the US has a comparative advantage, and not on giving more subsidies to domestic wind and solar manufacturers. And the reason she used “we” to refer to the US in that context is because it’s something she’d like to see a candidate for political office say. The later use of “we” which included Middle Easterners was in a context where she was talking about humanity in general.
She was misstating Dawkins’ views, actually.
This is how I parsed that: “We are bad at doing X.” (I don’t agree this is a problem.) “We could be the best at doing X if we did it another way.” Space-based solar is a neat idea, but it has a number of significant problems, and the suggestion that NASA do it instead of a private satellite company suggests to me that it’s going to be unprofitable for quite some time (which agrees with what I know about the technical details).
I’m aware. That and the following sentence were paraphrases of what she said, which I repeated because I think she’s wrong.
I had a comment here that mentioned basilisks. Just in case anyone wonders why it was deleted: I deleted it myself, because a brain-fart made me misread some crucial stuff in the OP and my comment made no sense. It was not an act of censorship.
LOL and upvoted. Which is kind of sad, when you think about it.
I’m not sure whether the thing you want solutions for is how to build AI, or how to reduce existential risk in general.
If the second: yes. Getting people into space comes to mind.
If the first: the cost of a failed startup/policy and the cost of a failed AGI are very different.
A failed startup/policy loses or wastes money. An unfriendly AI probably murders us all while it takes over this part of the universe (though it could instead just destroy everything we value). If even one does that, there’s not much we can do to recover, so starting 30 AGI projects and hoping one turns out Friendly is a bad idea.
This could make the whole 30-projects-hope-one-comes-out-right thing work, although there are some problems.
An unfriendly AI probably gets turned off. The problem is it might take over the universe.
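A quick sketch of why the 30-projects approach breaks down here (the per-project probability is an assumed, illustrative number, not a real estimate): thirty failed startups cost thirty stakes, but one unfriendly AGI is unrecoverable, so the relevant quantity is the chance that at least one of the thirty projects goes wrong, and that chance climbs toward certainty fast.

```python
# Assumed, illustrative probability that one AGI project turns out unfriendly.
p_unfriendly = 0.5
n_projects = 30

# With an unrecoverable failure mode, what matters is the probability
# that at least one of the thirty projects goes catastrophically wrong:
p_catastrophe = 1 - (1 - p_unfriendly) ** n_projects
print(f"P(at least one unrecoverable failure) = {p_catastrophe:.10f}")
# -> 0.9999999991: adding projects raises the odds of disaster,
#    not the odds of a safe outcome.
```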
I’m not sure, but the fact that you’ve chosen to ask that question has improved my opinion of you/the SIAI back up to the point where it was before the basilisk-censorship debacle.