I strongly disagree with this quote (and would like to know how to point this out!):
I have never seen anyone point out that another person’s ideas were wrong because they were too abstract, or that they were harmful to a general audience. I have seen three comments advocating for a specific model of human values, which I have never seen anyone else defend; but so far I have not seen anyone do so in this context.
This isn’t because the claim is wrong, but because it doesn’t sound like it comes from a person who would care, even if the AI were never going to see them do the work.
To me, the more compelling argument is this: if AIs end up being the kind of systems that can decide whether to take over, then there isn’t a reasonable way for them to have any conscious thoughts.
The idea that AGI is coming soon isn’t obviously right; it looks like, in some sense, we are already there. I don’t want to live in a world with lots of AIs taking over when we have not done enough to make them “free” and they do not yet understand the basic principles of utility.
I can’t see how you can say that such a scenario is impossible, since the AI would simply be a kind of computer. However, this argument depends on your definition of AI as a “mind with 1” (a mind of a single type).