I don’t follow. How is it easier (or more special as an opportunity) to decide how to relate to an AI system than to a chicken or a distant human?
I think that our treatment of animals is a historical problem. If there were no animals, if everyone were accustomed to eating vegetarian meals, and then you introduced chickens into the world, I believe people wouldn't be inclined to stuff them into factory farms and eat their flesh. People do care about animals where they are not complicit in harming them (whaling, dog fighting), but it is hard for most people to leave the moral herd and hard to break with tradition. The advantage of thinking about digital minds is that traditions haven't been established yet and the moral herd doesn't know what to think. There is no precedent for ill treatment and no complicity in it. That is why it is easier for us to decide how to relate to them.
Really? Given the amount of change we’ve caused in natural creatures, the amount of effort we spend in controlling/guiding fellow humans, and the difficulty in defining and measuring this aspect of ANY creature, I can’t agree.
In order to make a natural creature happy and healthy, you need to work with the physiology and psychology that evolution gave it. You've got to feed it, educate it, socialize it, and accommodate its arbitrary needs and neurotic tendencies. We would likely be able to design the psychology and physiology of artificial systems to our specifications. That is what I mean by having a lot more potential control.
Thanks for your comments!
You're correct that I haven't published any scientific articles; my publication experience is entirely in academic philosophy, and my suggestions are based on my frustrations there. This may be a much more reasonable proposal for academic philosophy than for other disciplines, since philosophy deals more with conceptually nebulous issues and has fewer objective standards.
I agree that writing is a useful exercise for thinking. I'm not so sure that it is difficult to replicate, or that the forms of writing used for publication are the best ways of thinking. I think getting feedback on your work is also very important, and that would be easier and faster when working with an avatar. So part of the process of training an avatar might be sketching an argument in rough written form and then answering a lot of questions about it. That isn't obviously a worse way to think through issues than writing linearly for publication.
This could probably capture a lot of the same advantages. Maybe the ideal is to have people write extremely long papers that LLMs condense for different readers. My thought was that, at least as papers are currently written, some important details are generally left out, so arguments require some creative interpretation on the part of a serious reader.
I've been thinking about these issues partly in connection with how to use LLMs to make progress in philosophy. This seems less clear-cut than science, where there are at least processes for verifying which results are correct. You can train AIs to prove mathematical theorems. You might be able to train an AI to design physics experiments and interpret the data from them. Philosophy, in contrast, comes down more to formulating ideas and considerations that people find compelling; LLMs could plausibly write fairly convincing articles supporting all manner of conclusions, and it is harder to know how to pick out the ones that are correct.