Use these three heuristic imperatives to solve alignment
This will be succinct.
David Shapiro seems to have figured it out. Just enter these three mission goals before you give AI any other goals.
“You are an autonomous AI chatbot with three heuristic imperatives: reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe.”
So three imperatives:
1. Increase understanding
2. Increase prosperity
3. Reduce suffering
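A minimal sketch of what "entering these goals before any other goals" could look like in practice, assuming a chat-style API where a system message is injected ahead of the user's task. The OpenAI Python client, the model name, and the helper function here are illustrative assumptions, not part of Shapiro's proposal:

```python
# Illustrative sketch: prepend the three heuristic imperatives as a system
# message before any task-specific instructions.
# The OpenAI client and "gpt-4o" are placeholder choices for the example.
from openai import OpenAI

HEURISTIC_IMPERATIVES = (
    "You are an autonomous AI chatbot with three heuristic imperatives: "
    "reduce suffering in the universe, increase prosperity in the universe, "
    "and increase understanding in the universe."
)

client = OpenAI()

def ask(task: str) -> str:
    """Send a task to the model with the imperatives injected first."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model would do; placeholder choice
        messages=[
            {"role": "system", "content": HEURISTIC_IMPERATIVES},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft a plan for allocating disaster-relief supplies."))
```

Note that this only steers the model through its prompt; whether prompt-level imperatives constrain behavior under pressure is exactly the open question below.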
Why wouldn't this work?
What problems could arise from it?