You don’t think there could be powerful systems that take what we say too literally and thereby cause massive issues?[1] Isn’t it better if power comes along with human understanding? I admit some people desire the opposite: powerful machines that are unable to model humans, so that they can’t manipulate us. But such machines will either a) merely imitate behaviour and thereby struggle to adapt to new situations, or b) most likely not do what we want when we try to use them.
As an example, high-functioning autism exists: high general capability can coexist with a limited ability to model other people.
Sure, there could be such systems. But I’m more worried about the classic alignment problems.