These models are well suited to estimating external risks, but there are also internal risks: if it were somehow possible to supply enough processing power to create a super-powerful AI, it might, for example, torture internal simulations in order to understand emotions.