What if the friendly AI finds that our extrapolated volition is coherent and contains the value of 'self-determination', and concludes that it cannot meddle too much in our affairs? "Well, humankind, it looks like you don't want your destiny decided by a machine. My hands are tied. You'll have to save yourselves."
http://lesswrong.com/lw/xb/free_to_optimize/