I know that “let me give you a coredump of my complete decision algorithm so you can look through it and figure it out” isn’t an option, but “nope” doesn’t really help me.
You aren’t getting a “nope”, muflax.
Hey, humans are reward-based. Isn’t wireheading a cool optimization?
This is where you’re wrong. Reward is just part of the story. Humans have complex values, which you seem to be willfully ignoring even though that is what everyone keeps telling you.