How can anyone sincerely want to build an AI that fulfills anything except their own current, personal volition?
For the same reason they voluntarily do anything else that doesn’t perfectly align with their own personal volition: because they understand that they can accomplish more of their own desires by joining a coalition and cooperating, even though that means working to fulfill other people’s desires to the same extent that they work to fulfill their own.
A mad scientist building an AI in his basement doesn’t have to compromise with anyone… until he has to go out and get funding, that is.
So he’ll get funding for one thing, and then secretly build something else. Or he’ll wait and in another 20 years the hardware will be cheap enough that he won’t need external funding. Or he’ll get funding from a rich individual, which would result in a compromise between a total of 2 people—not a great improvement.
Or, even more likely, some other team of researchers, more tolerant of compromise, will construct a FOOMing AGI first.