Will it be feasible in the next decade or so to actually do real research into how to make sure AI systems don’t instantiate anything with any non-negligible level of sentience?
If the question is addressed to me, I think no. It may be easier to build a dangerous AI capable of killing everyone within the next decade than to solve the nature of consciousness and thereby learn whether any AI actually has subjective experiences. Two years ago I was working on a research plan on the nature of consciousness, but I later mostly abandoned it, as I don't consider it an urgent question.