Throwing out a theory as powerful and successful as relativity would require correspondingly strong evidence, and at this point the evidence doesn’t fall that way at all.
On the other hand, the bar for GAI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence down to the material and energy costs of a human (which breaks no laws of physics, unless one holds that the mind is immaterial) would cause massive social displacement requiring serious planning beforehand. I don’t think it is very likely that we’d see an AI that can laugh at EXPSPACE problems, but it only needs to be too smart to be easily controlled in order to mess everything up.