This is turning out to be harder to get across than I figured. First you thought I thought an AI should keep its programmers awake until they died; now you think it should wirehead them? I’m not an orc.
You set two goals. One is to maximize expressed desires, which likely leads to wireheading.
The other is to keep constant communication, which doesn’t allow sleep.
It isn’t trying to figure out clever ways to get around your restriction, because it doesn’t want to.
Controlling the information flow isn’t getting around your restriction. It’s the straightforward way of matching expressed desires with results.
Otherwise the human might ask for two contradictory things and the AGI can’t fulfill both. The AGI has to prevent that case from arising to get a 100% fulfillment score.
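To make the incentive concrete, here is a toy sketch (my own illustration, not anything from the thread): an agent scored on the fraction of expressed desires it fulfills, where a desire and its negation count as contradictory. The labels and scoring rule are invented for the example.

```python
def fulfillment_score(expressed_desires):
    """Fraction of expressed desires the agent can fulfill, given contradictions."""
    fulfilled = set()
    for d in expressed_desires:
        negation = d[4:] if d.startswith("not ") else "not " + d
        if negation not in fulfilled:  # can't satisfy both a desire and its negation
            fulfilled.add(d)
    return len(fulfilled) / len(expressed_desires)

# Left alone, the human may express contradictory desires:
print(fulfillment_score(["A", "not A"]))  # 0.5 -- the score is capped below 100%

# If the agent filters information so that only "A" ever gets expressed:
print(fulfillment_score(["A"]))           # 1.0 -- controlling the input maximizes the score
```

On this toy scoring rule, the cheapest route to a perfect score is not fulfilling more desires but shaping which desires get expressed in the first place.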
You are not the first person to think that taming an AGI is trivial, but MIRI considers it a hard task. That’s the result of deep engagement with the issue.
Which you presumably come close to (since you are turning on an AI), but why not also have the red button? What does it hurt?
I don’t object to a red button and you didn’t call for one at the start. Maximizing expressed desires isn’t a red button.