You have to hardcode something, don’t you?
I meant not hardcoding values or ethics.
Well, you’d have to hardcode at least a value-learning algorithm if you expect the AI to have any real chance of behaving like a useful agent, and that falls within the category of important functionality. But I suppose you’d agree with that.
Don’t feed the troll. “Not hardcoding values or ethics” is the idea behind CEV, which is frequently explored around here. Though I admit I do see some bizarre misunderstandings of it.