Sounds interesting. We now need to verify whether it works for useful questions.
Could someone explain what FAI is without using the words “Friendly”, or any synonyms?
An AI which acts toward whatever the observer deems beneficial to the human condition. It’s impossible to state that as falsifiable criteria if you can’t define what is beneficial to the human race (and on what timescale). And I’m pretty confident nobody knows what’s beneficial to the human condition over the longest term, because that’s exactly the problem we’re building the FAI to solve.
In the end, we will have to build an AI as best we can and trust its judgement. Or not build it. It’s a cosmic gamble.