There is probably a big difference between “doing something reliably well” and “doing something, hit and miss”.
I wonder if this will also be true for the first AGIs. Maybe they will greatly surpass humans in all areas, but only unreliably, occasionally doing something utterly stupid in any of them. And the probability of doing the stupid thing will slowly decrease, while the ability to do something superhumanly awesome will already be there.
That alone sounds scary, even ignoring all the usual worries about alignment.