A friendly singularity would likely produce an AI that in one second could think all the thoughts that would take a billion scientists a billion years to contemplate.
A little nit-picky, but: without a source, these figures imply a precision you don't back up. Are you really so confident that an AI of this level of intelligence will exist? I think your point would be stronger without the implied precision. Perhaps:
A friendly singularity would likely produce a superintelligence capable of mastering nanotechnology.
More generally, whenever the subject of AI comes up, I would recommend avoiding descriptions that sound like wish fulfillment or snake-oil promises, or any phrasing that raises scam or cult red flags.