I notice that, as someone without domain-specific knowledge of this area, Paul’s article seems to fill my model of a reality-shaped hole better than Eliezer’s. This may just be an artifact of the specific language and detail that Paul provides and Eliezer does not; Eliezer may have specific things to say about all of these points and simply be choosing not to. Paul’s response at least makes it clear to me that people like me, without domain-specific knowledge, are prone to being pulled psychologically in various directions by the use of language, and should be very careful about making important life decisions based on concerns about AI safety without first educating themselves much further on the topic, especially since giving attention and funding to the issue at least has the capacity to cause harm.