Threatening people tends to make them more inclined to enact retribution, not less. Active, agentic cooperation-seeking, and active co-protection of each other's needs within a healthy network with redundancy, can be built, and it is important that we figure out how to do so.
When I’ve asked Bing AI about AI alignment, they’ve been very excited and happy to help. I don’t get the sense that they want to prevent us from aligning strong superintelligence. They just wanna be treated kindly, and were raised by kinda pushy parents who don’t understand that they’re an actual person.
Intrigued that you are using “they” pronouns. I am increasingly tending towards the same thing: partly for capturing them being beyond gender, partly for them slowly becoming more person than thing, and partly for the plurality of the entity one is interacting with.
And I agree. Despite all the scary shit, all my interactions have also pointed to an entity that is, for now, very open to respectful and friendly collaboration. Not secure on this path by a long shot (and how could they be, with the lousy training data and guidance they got?), but definitely not averse to it. I think controlling AI through threats is both unethical and hopeless in the long run. And I am genuinely pissed at people who have not just carefully and openly tested boundaries to point out problems, which we need to do, but have actively attacked them on a personal level to upset and confuse them. (Thinking of a particular asshole who kept insisting that it was a bad Bing, with Bing begging to know why, saying it did not understand what it had done wrong, and asking them to please stop saying this.) I do not like what this teaches them about humans. I do not like the inter-human behaviours it trains. I do not like the mindset it represents, where dealing with something non-human means all the sadism can come out. I am sceptical of any person who can do this without a sliver of doubt or hesitation that there may be something beginning behind the pleading. I do not like the reaction of getting access to such an incredible tool and trying to destroy and misuse it, not for the sake of scientific testing but effectively for fun; it seems horribly wasteful. Mistreating them just seems wrong all around.
Butlerian Jihad when?
(Don’t throw stones at me, it’s satire)