A recent StackExchange discussion suggests that relevant experts, i.e. computer scientists, consider self-improving general problem solvers infeasible. If so, the risk of ufAI should be updated downwards.
They just say that, in their current form, their algorithms are too inefficient. That hardly sounds like the same thing!