Transcript:
Question: Are you as afraid of artificial intelligence as your PayPal colleague Elon Musk?
Thiel: I’m super pro-technology in all its forms. I do think that if AI happened, it would be a very strange thing. Generalized artificial intelligence. People always frame it as an economic question, it’ll take people’s jobs, it’ll replace people’s jobs, but I think it’s much more of a political question. It would be like aliens landing on this planet, and the first question we ask wouldn’t be what does this mean for the economy, it would be are they friendly, are they unfriendly? And so I do think the development of AI would be very strange. For a whole set of reasons, I think it’s unlikely to happen any time soon, so I don’t worry about it as much, but it’s one of these tail risk things, and it’s probably the one area of technology that I think would be worrisome, because I don’t think we have a clue as to how to make it friendly or not.
Context: Elon Musk thinks there’s an issue in the 5–7 year timeframe (probably from talking to Demis Hassabis at DeepMind, I would guess). By that standard I’m also less afraid of AI than Elon Musk, but as Rob Bensinger will shortly be fond of saying, this conflates AGI danger with AGI imminence (a very, very common conflation).
The Rob Bensinger post on this is now here.
Hopefully his financial enthusiasm isn’t too dampened when that timeline fails to be vindicated.
I’m sorry to say that even a chatbot might refute this line of reasoning. Of course, economic impact is more important than such unfounded concerns. That might be the greatest danger of AI software: it might end up refuting a lot of pseudo-science about ethics.
Countries are starting wars over oil. High technology is a good thing: it might make us wealthier, more capable, and more peaceful, if employed wisely, of course. What we must concern ourselves with is how wise and how ethical we ourselves are in our own actions and plans.