Reading this, I wanted to scream: [CITATION NEEDED]!
I agree, but the people who are actively thinking about benevolence or otherwise are a proper subset of all “singularitarians”.
Not necessarily a proper subset. I consider a Singularity to be unlikely, but P(Bad Singularity Occurs | Singularity Occurs) seems high to me. I’m disturbed by the fact that the smartest Singularitarians seem to be people who agree with this assessment, and that our primary disagreement is just over P(Singularity). I doubt I’m a representative sample, though, so to a close approximation your assessment does seem correct.
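To spell out the structure of that disagreement (with purely illustrative numbers, not anyone’s actual estimates):

P(Bad Singularity Occurs) = P(Bad Singularity Occurs | Singularity Occurs) × P(Singularity Occurs)

e.g. 0.9 × 0.05 = 0.045, so even a low P(Singularity) paired with a high conditional probability still leaves an unconditional risk that is hardly negligible for an outcome this bad.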
Hmm, interesting how the last part does touch on AI risks but treats that as a counterargument against the Singularitarians. It seems the reporter didn’t realize that there’s a segment of Singularitarians who see absolutely eye-to-eye with the writer on that. And the Hitchhiker’s analogy is definitely an amusing one.
It’s funny that she wrote about AI risks in the manner she did. I was a bit miffed at first because she seemed to be misrepresenting, or maybe over-generalising, those at the summit, especially when AI risk is something that many of them take very seriously. Her portrayal of this obliviousness via her analogy was kinda annoying to me because of its inaccuracy; it certainly doesn’t represent a lot of this community.
However, by taking this kind of stance while still mentioning AI risk, could she be lending more validity to the possibility of the singularity?
I mean, it’s possible that a message of “these people are kinda weird and crazy, and even if they get their stuff to work it might not end well” gives the idea that the singularity has some kind of possibility of success, at least in comparison to a simple “these people are crazy, look at their crazy ideas and their weird ideals”.
I’m not sure if I’m making sense, but I’m saying that this lady is at least getting readers who agree with the general tone of the rest of her article to also consider the possibility of a singularity, or at least the AI risk posed in the last paragraph or so. This could grant these concepts more validity than the average Guardian reader would give them if they had read about them somewhere else.
I can see how one might think that, but it reads a bit differently to me. It read to me as closer to simply trying to end on an interesting note, or alternatively as an example of belief overkill/arguments-as-soldiers, where the author is trying to marshal as many arguments as possible that sound like they go against the entire Singularity idea.
I would say that either of those was probably her intention; perhaps I’m being optimistic in hoping that she might have accidentally said something that gives even the tiniest amount of validity to an idea that I feel more people should care about.
If one argues against a position one misunderstands, one might be arguing for the actual position.
There’s also some discussion of this in the link exchange thread.