Even if we build AI that doesn’t maximize a function, it won’t be competitive with AI that does, assuming present trends hold. Building weaker, safer AI doesn’t stop others from building stronger, less safe AI.
Why doesn’t your non-function-maximizing safe AI (which learns human values through human involvement) stop others from building stronger, less safe AIs? Seems to me that it probably could and probably should, and if it definitely could and definitely should then it definitely would! :)
Also: upvoted this post for being in the positing-negating format, which is fun and easy to read.