What might work is linking to the views of someone who is not an AI doomer; he might feel more affinity for that. Scott Aaronson’s blog comes to mind, and Scott Alexander’s as well. One does not need any ML background to follow their reasoning.
Yes, I agree. I think it is important to remember that achieving AGI and doom are two separate events. Many people around here do draw a strong connection between them, but not everyone. I’m in the camp that we are 2 or 3 years away from AGI (it’s hard to see why GPT-4 doesn’t already qualify), but I don’t think that implies the imminent extinction of human beings. It is much easier to convince people of the first point because the evidence is already out there.