I think it’s a very useful perspective; sadly, the commenters do not seem to engage with your main point (that the presentation of the topic is unpersuasive to an intelligent layperson), instead focusing on specific arguments.
There is, of course, no single presentation, but many presentations given by many people, targeting many different audiences. Could some of those presentations be improved? No doubt.
I agree that the question of how to communicate the problem effectively is difficult and largely unsolved. I disagree with some of the specific prescriptions (e.g. the call to falsely claim more-modest beliefs to make them more palatable for a certain audience), and the object-level arguments are either arguing against things that nobody[1] thinks are core problems[2] or are missing the point[3].
Approximately.
Wireheading may or may not end up being a problem, but it’s not the thing that kills us. Also, that entire section is sort of confused. Nobody thinks that an AI will deliberately change its own values to be easier to fulfill; goal stability implies the opposite.
Specific arguments about whether superintelligence will be able to exploit bugs in human cognition or create nanotech (which… I don’t see any arguments against here, except for the contention that nothing was ever invented by a smart person sitting in an armchair, even though of course an AI will not be limited in its ability to experiment in the real world if it needs to) are irrelevant. Broadly speaking, the reason we might expect to lose control to a superintelligent AI is that achieving outcomes in real life is not a game with an optimal solution the way tic-tac-toe is, and the idea that something more intelligent than us will do better at achieving its goals than other agents in the system should be your default prior, not something that needs to overcome a strong burden of proof.
It’s very strange to me that there isn’t a central, accessible “101” version of the argument given how much has been written.
I don’t think anyone should make false claims, and this is an uncharitable mischaracterization of what I wrote. I am telling you that, from the outside view, what LW/rationalism gets attention for is the “I am sure we are all going to die”, which I don’t think is a claim most of its members hold, and this repels the average person because it violates common sense.
The object-level responses you gave are so minimal and dismissive that I think they highlight the problem. “You’re missing the point, no one thinks that anymore.” Responses like this turn discussion into an inside-view-only affair. Your status as a LW admin sharpens this point.
Yeah, I probably should have explicitly clarified that I wasn’t going to be citing my sources there. I agree that the fact that it’s costly to do so is a real problem, but as Robert Miles points out, some of the difficulty here is insoluble.
There are several, in fact; but as I mentioned above, none of them will cover all the bases for all possible audiences (and the last one isn’t exactly short, either). Off the top of my head, here are a few:
An artificially structured argument for expecting AGI ruin
The alignment problem from a deep learning perspective
AGI safety from first principles: Introduction
The focus of the post is not on that point (at least not in terms of the quantity of written material devoted to it). I responded to the arguments made because they comprised most of the post, and I disagreed with them.
If the primary point of the post were “The presentation of AI x-risk ideas results in them being unconvincing to laypeople”, then I could see a reason to respond to that directly; but beyond this general notion, I don’t see anything in the post that expressly conveys why (other than troubles with argumentative rigor, and the best way I can think of to respond to those is by refuting said arguments).