def checkMorals():
    [simulate philosophy student's brain]
    if [simulated brain's state is offended]:
        return False
    else:
        return True

if checkMorals():
    [keep doing AI stuff]
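For concreteness, here is a minimal runnable sketch of that pseudocode. The function and variable names (`simulate_philosophy_students_brain`, `check_morals`) are hypothetical, and the "simulation" is a trivial stub, which is of course the whole point: every hard part of the problem is hidden inside that one bracketed step.

```python
def simulate_philosophy_students_brain(proposed_action):
    # Hypothetical stand-in for the bracketed step above. A real version
    # would require a full brain emulation; this stub just pattern-matches
    # on the action description, which solves nothing.
    offended = "restrict" in proposed_action.lower()
    return {"state": "offended" if offended else "calm"}

def check_morals(proposed_action):
    # Refuse any action the simulated brain's state marks as offended.
    brain = simulate_philosophy_students_brain(proposed_action)
    return brain["state"] != "offended"

if check_morals("recommend a book"):
    print("keep doing AI stuff")
```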
there. that’s how we tell an AI capable of being an AI and capable of simulating a brain not to take actions which the simulated brain thinks offend against liberty, as implemented in Python.
oh, it’s so clear and obvious now, how could I have missed that?