I am confused by who you’re talking about, e.g.
most people seem to stubbornly think of AI systems as passive tools and believe strongly that it cannot anytime soon become agentic
but then
people who are not aware of the technical details are more likely to accept AI could be agentic while a lot of academics seem to be resistant to this idea
Aren’t non-academics and non-experts the majority, i.e. “most people”?
Your main idea seems to be that academics and other technically knowledgeable people are supposed to be materialists and reductionists, but in their hearts they still don’t think of themselves as automata, and this prevents them from conceiving that nonhuman automata could develop all of the capacities that humans have. So in order to open their minds to the higher forms of AI risk, one should emphasize materialist philosophy of mind, and that human nature and machine nature are not that different.
Well, people have a variety of attitudes. Many of the people working in deep learning or in AI safety, maybe even a majority, definitely believe that artificial neural networks can be conscious, and can be people. Some are more agnostic and say that higher cognitive capabilities don’t necessarily imply personhood, and that we just don’t know which AIs would be conscious and which not. It is even possible to think that AIs (at least on non-quantum computers) can probably never be conscious, and still think that they are capable of surpassing us; that would be my view.
Given this situation, I think that not tying AI safety to a particular philosophy of mind is appropriate. However, people have their views, and if someone thinks that a particular philosophy of mind is necessarily part of the case for AI safety, then that is how they will present it.
You’re writing from India, so maybe people there, who are working on AI and machine learning, more often have a religious or spiritual concept of human nature, compared to their counterparts in the secularized West?
Aren’t non-academics and non-experts the majority,
I was talking about people who have not grokked materialism, which is the majority. People who are not aware of the technical details model AI as a black box and therefore seem more open to considering that it might be agentic, but that is them just deferring to an outside view that sounds convincing rather than building their own model.
so maybe people there, who are working on AI and machine learning, more often have a religious or spiritual concept of human nature, compared to their counterparts in the secularized West?
Most of the people I talked to were from India, and it is possible there is a pattern there. But I see similar arguments come up even among people in the West. When people say “it is just statistics”, they seem to be pointing to the idea that deterministic processes can never be agentic.
I am not necessarily trying to bring consciousness into the discussion, but I think there is value in helping people make their existing philosophical beliefs more explicit so that they can follow them to their natural conclusion.