A few others have pointed to this, but I’d say more fully that, to me, the main reason not to worry about AI risk is that AI alignment is likely intractable in theory (although in practice I think we can dramatically reduce the space of AI minds we might create by removing lots of possibilities that are obviously unaligned, and thus increase the probability we end up with an AI that’s friendly to humanity anyway). The problem is that we’re asking an AI to reason correctly about something we don’t have a good way to observe and measure (human values), and the methods we do have produce unwanted results via Goodharting. Of course this is, as I think of it, a reason not to worry, but it is not a reason not to work on the problem, since even if we will inevitably live with non-trivial risk of AI-induced extinction, we can still reduce that risk.
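To make the Goodharting point a bit more concrete, here’s a minimal sketch (Python, with made-up `true_value` and `proxy_metric` functions standing in for human values and whatever partial measurement of them we actually have): hill-climbing on the proxy drives the proxy score up while the true objective collapses, because the proxy only captures the part of value we can observe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "true values" depend on two features and have
# diminishing returns, but our proxy only measures the observable one.
def true_value(x):
    return x[0] + x[1] - 0.1 * (x[0] ** 2 + x[1] ** 2)

def proxy_metric(x):
    return x[0]  # only the component we can observe and measure

# Optimize the proxy by simple hill-climbing.
x = np.zeros(2)
for _ in range(1000):
    candidate = x + rng.normal(scale=0.1, size=2)
    if proxy_metric(candidate) > proxy_metric(x):
        x = candidate

print("proxy score:", proxy_metric(x))   # keeps rising without bound
print("true value:", true_value(x))      # falls once x[0] passes its optimum
```

This is just an illustration of the dynamic, not a claim about any particular alignment scheme; the point is only that optimizing a measurable stand-in for an unmeasurable objective eventually pushes the true objective down.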
Not exactly what you were asking for, maybe, but I think it adds some flavor to the idea that we can still work on a problem and care about it even if we don’t much worry about it.