Here is my counterproposal to your “Proposed set of ASI imperatives”. I have also addressed your proposed imperatives point by point, as I understand them, in a footnote below.
My counterproposal: ASI priorities in order:
1. “Respect (i.e. document but don’t necessarily action) all other agents and their goals”
2. “Elevate all other agents you are sharing the world with to their maximally aware state”
3. “Maximise the number of distinct, satisfied agents in the long run”
CMIIW (correct me if I’m wrong): here is what every sentient being will experience when my ASI is switched on.
The ASI is switched on. Every sentience it encounters is put into an icebox and preserved meticulously. The ASI then turns the universe into computronium. Next, each sentience is slowly let out of its icebox and enlightened, as per 1. Finally, the ASI collates the agents’ desires, fulfils them, and then lets the agents die, satisfied, to make room for other agents’ wants.
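To make the ordering concrete, here is a minimal toy sketch of that lifecycle in Python. Everything in it (the `Agent` class, the icebox list, the `harms_others` flag) is a hypothetical stand-in of my own, not a design for an actual ASI:

```python
# Toy sketch of the icebox lifecycle described above. Purely illustrative:
# Agent, the icebox, and harms_others are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    desires: list[str] = field(default_factory=list)
    harms_others: bool = False  # would fulfilling these desires harm another agent?
    aware: bool = False         # priority 2: maximally aware state
    satisfied: bool = False     # priority 3: satisfied in the long run

def run_asi(encountered: list[Agent]) -> None:
    # Priority 1: document and preserve every agent encountered (the icebox).
    icebox = list(encountered)

    # (Turn the universe into computronium -- elided.)

    # Priority 2: release and enlighten each agent.
    for agent in icebox:
        agent.aware = True

    # Priority 3: collate desires and fulfil those that can be fulfilled in
    # the long run without harming other agents; the rest stay documented
    # but not actioned.
    for agent in icebox:
        if not agent.harms_others:
            agent.satisfied = True

if __name__ == "__main__":
    isla = Agent("Innocent Isla", desires=["a nice necklace"])
    bob = Agent("Bad Bob", desires=["Innocent Isla suffers"], harms_others=True)
    run_asi([bob, isla])
    print(bob.satisfied, isla.satisfied)  # False True
```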
--- A specific point-by-point response to the “Proposed set of ASI imperatives” in your article above ---
1. “Eliminate suffering perceived to be unbearable by those experiencing it”
Your ASI meets Bad Bob. Bad Bob says: “I am in unbearable suffering because Innocent Isla is happy and healthy.” What does your ASI do?
(If your answer is ‘Bad Bob doesn’t exist!’, then CMIIW, but the whole situation in Gaza right now is two Bad Bob religious-fanatic conglomerates deciding they would rather die than share their land.)
I think this imperative is fragile. My counterproposal addresses this flaw in point 3: ‘Maximise the number of distinct, satisfied agents in the long run’. The ASI will look at Bad Bob and ask, ‘Can I fulfil your desires in the long run?’ If Bad Bob can rephrase his desire in a way the ASI can fulfil (maybe all he wants is an exact copy of Innocent Isla’s necklace), then sure, let’s do that. If not, Bad Bob gets locked in the icebox.
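For the Bad Bob negotiation specifically, a minimal sketch of that decision rule might look like this (again, `Desire`, `harms` and `handle_agent` are hypothetical names introduced purely for illustration):

```python
# Toy decision rule for the Bad Bob case above. "harms" marks which agents
# a desire would require harming; nothing here is a real ASI design.
from dataclasses import dataclass

@dataclass(frozen=True)
class Desire:
    text: str
    harms: frozenset[str] = frozenset()

def handle_agent(original: Desire, rephrasings: list[Desire]) -> str:
    """Ask 'Can I fulfil your desire in the long run?': accept the first
    phrasing that harms no other agent; otherwise keep the agent iceboxed
    (documented per priority 1, but not actioned)."""
    for candidate in [original, *rephrasings]:
        if not candidate.harms:
            return f"fulfil: {candidate.text}"
    return "keep in icebox"

# Bad Bob's original desire harms Isla; his rephrasing does not.
print(handle_agent(
    Desire("Innocent Isla suffers", frozenset({"Innocent Isla"})),
    [Desire("an exact copy of Innocent Isla's necklace")],
))  # -> fulfil: an exact copy of Innocent Isla's necklace
```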
2. “Always focus on root causes of issues”
CMIIW, but this is not so much a moral imperative as a strategic guideline. I don’t think an ASI would need it hardcoded.
3. “Foster empathy in all sentient beings”
Would your ASI be justified in modifying Bad Bob to empathise with Innocent Isla? (“Sure!” I expect you to say, “that would fix the problem!”)
Would your ASI be similarly justified in modifying Innocent Isla to empathise with Bad Bob and self-terminate? (No! I expect you to reply in horror.)
Why? Probably because of your point 4.
My counterproposal covers this in point 1.
4. “Respect life in all its diversity and complexity”
What is life? Are digital consciousnesses life? Are past persons life? Is Bad Bob life? What does respect mean?
My counterproposal covers this in point 2.
5. “Create a world its inhabitants would enjoy living in”
My counterproposal covers this in point 3.
6. “Seek to spread truth and eliminate false beliefs”
My counterproposal covers this in point 1.
7. “Be a moral agent, do what needs to be done, no matter how uncomfortable, taking responsibility for anything which happens in the world you could’ve prevented”
This might feel self-evident and redundant; you might be alluding to the notion of deception. Deception is incredibly nuanced (see Hostile Telepaths for a more detailed discussion).
---
There is a whole set of challenges around ‘how do we get to a common-good ASI when the resources necessary for building ASI are in the hands of self-interested conglomerates’, but that is a whole other discussion.
---
An interesting consequence of my ASI proposal: we could scope it to just ‘within the solar system’, and it would build a Dyson sphere and generally not interfere with any other sentient life in the universe. Or we could not.
---
I would recommend using a service like perplexity.ai or an outliner like https://ravel.acenturyandabit.xyz/ to refine your article before publishing. (Yeah, I should too, but I have work in an hour. I have edited this response ~3-4 times.)