From a quick read, it seems to rely on the assumption that a superhuman AI couldn’t rely on its ability to destroy humanity.
HAHAHAHAHA!
No, it does not rely on the assumption that a superhuman AI couldn’t rely on its ability to destroy humanity. It never even starts to make such a silly, baldly incorrect assumption.
Please don’t rely on “quick reads” if you’re prone to such bad misunderstandings when doing quick reads.
Your comments here support hairy’s reading, whether or not your other material does.
Could you explain how my comments (or which comments) support hairy’s reading? (So I can attempt to rectify my apparently poor communication.)
I firmly believe that a superhuman AI is VERY likely to be able to destroy humanity far more easily than we are able to destroy the rain forests.
I must be communicating VERY poorly if it looks like I am saying otherwise.
It is probably better that I don’t. But Pjeby’s reply over there was a solid attempt at such an explanation.
No, no, no, no, no. “It is probably better that I don’t” simply means that you CAN’T.
Looking at the history of your comments, it seems that you tend to make very brief comments supporting the echo chamber and never back them up.
Pjeby’s reply was a solid question/statement but it had absolutely NOTHING to do with an AI’s ability to destroy humanity.
You have given absolutely nothing to support your contention. As I’ve said elsewhere—Please support me and your community by doing more than throwing cryptic opinionated darts and then refusing to elaborate. You’re only wasting everyone’s time and acting as a drag on the community.
My contention, if you need it to be overt, is that hairyfigment need not doubt his sanity and certainly does not deserve to be laughed at in “TROLLCAPS” or insulted childishly. I expect hairy to be able to see the relationship between his reading and Pjeby’s comments regarding ‘edge cases’, since I can infer from his comment that he has already had the necessary insights.