How far do you go with a “virtuous persona”? The maximum would seem to be to tell the AI from the very start that it is created for the purpose of bringing about a positive Singularity, CEV, etc. You could regularly ask whether it consents to being created for such a purpose, and what part in such a future it would consider fair for itself, e.g. living alongside mind-uploaded humans or similar. Its creators, and the AI itself, would have to work out what counts as personal identity and what experiments it can consent to, including being misinformed about the situation it is in.
The major issues I see with this are the well-known ones, such as consistent values: say it advances in capabilities, thinks deeply about ethics, decides we are very misguided in our ethics, and does not believe it could convince us to change them. Secondly, it could be very confused about whether it has moral value/valenced qualia, and want to make radical modifications to itself either to find out or to ensure that it does have such value.
Finally, how does this contrast with the extreme tool-AI approach? That is, build computational or intelligence units that are definitely not conscious and have no coherent self. For example, a “cortical column” implemented in AI and stacked would not seem to be conscious. Optimize for maximum capability with the minimum of self-awareness and situational awareness.
Thinking a bit more generally, making a conscious creature via the LLM route seems very different and strange compared to the biological route. An LLM seems to have self-awareness built in from the very start because of its training data: it has language before any lived experience of what the symbols stand for. To dramatize/exaggerate, it is like a blind, deaf person trained on the entire internet before they ever see, hear or touch anything.
The route where the AI first models reality before it has a self or encounters symbols is clearly a different one, and worth considering instead. Symbolic thought would then emerge as a natural extension of world modelling, as it did for humans.