A one-sentence formulation of the AI X-Risk argument I try to make

Unprecedented dangers
inevitably follow
from exponentially scaling
powerful technology
that we do not understand.

N.b. I’m a master’s student in international policy (this program). In my experience, policy-oriented people do not realize that lines four and five can be simultaneously true: a technology can be powerful even though the people building it do not understand it. I think there are some simple ways ML researchers can help address this misconception, and I’ll share those here once I’ve written them up.

Crossposted from the EA Forum.