The Ethics of AI and Its Effect On Us
As many of you know, I’ve been pretty passionate about the melding of science and philosophy through AI. I’ve often asked questions on this forum regarding thought, culture, how to decide what an AI should do, and how to keep an AI friendly. Fortunately, Slate.com has given me yet another opportunity to wax on the subject (unpoetically).
The article is here
One of the subjects of the article is an interesting case in what the future might hold for AI.
Saqib Shaikh is a Microsoft engineer working on one of the company’s many AI projects. His claim to fame is using AI and machine learning to create smart glasses that compensate for the sight he lost as a child. Rather than improve his sight, the glasses translate his surroundings into sound, and he “sees” by listening. From avoiding a skateboarder to finding his family, he gets around. It is an excellent example of how assistive technologies could become an important part of our lives.
Ethical AI is a rather large topic, and it is something we will have to deal with sooner rather than later. The writer reminds us that we need greater collaboration on AI. This could be challenging thanks to nativism, Brexit, and the nationalist sentiment sweeping the world. At the very moment when we need to come together on a technology that could solve a host of problems and help us make really great decisions, we’re coming apart at every seam. But that’s another story for another time.
We tend to have pop-culture notions about AI. As the article states, we’re somewhere between HAL and Siri in terms of our understanding and acceptance of AI in our lives. But the fact is that everyday items are getting smarter and being infused with technology like never before, giving these devices the chance to do more for us than ever.
Obviously, if we are creating something as smart and complex as we are, we want some assurance that it will behave in a way we find acceptable. We refer to this boundary of behavior as an ethic, and we hope for a friendly AI. The problem with ethics is that it is subjective, requires judgment and discernment, and is deeply cultural. What we think is right in the West might not hold in parts of Asia, where the civilizational ethic is very different. How is AI going to respond to those nuances?
We can find some universal truths that most cultures can agree on:
We agree not to kill each other, and we punish those members we catch doing it
We generally try to work cooperatively, either directly or indirectly (directly in a hunting band, indirectly through economies of scale)
We aren’t violent towards each other, and we punish members who commit violence against another human or that human’s belongings and home.
When it comes to social graces, we won’t have to worry about those with the AI we’re likely to experience in our lifetimes.
However, the ability to do “the right thing” will get complicated once we get past “perform this task, then that task, then that task, and report back to me.” If an AI is doing legal discovery, can it detect nuance? Can it secure the data so as not to reveal any information to anyone else (computer or human)? And in scenarios where certain information cannot be used, or cannot be used in certain contexts, is the AI smart enough to recognize those situations and act in an ethical manner we would approve of?
The article talks about trust and how we have to build trust into AI systems. This is where I think culture is vitally important. We humans are as much a product of our culture as of anything else. How can we infuse human culture into AI?
To this the article says:
“A few people are taking the lead on this question. Cynthia Breazeal at the MIT Media Lab has devoted her life to exploring a more humanistic approach to artificial intelligence and robotics. She argues that technologists often ignore social and behavioral aspects of design. In a recent conversation, Cynthia said we are the most social and emotional of all the species, yet we spend little time thinking about empathy in the design of technology. She said, ‘After all, how we experience the world is through communications and collaboration. If we are interested in machines that work with us, then we can’t ignore the humanistic approach.’”
The article offers a few principles for how to proceed with AI:
Transparency
Assistive
Efficient
Intelligent privacy
Accountability
Unbiased
I think this provides a helpful framework, but as the article closes, the writer brings up something I think is vital: the transition from “labor saving and automation” to making and creation. Is it not better to keep 15 people employed with assistive AI than to displace those 15 workers with machines that simply do the job with minimal oversight?
A few ideas:
Might creating Myers-Briggs personalities for AI help with ethical decisions?
Might looking at the Enneagram be helpful as well?
Can we control an AI by creating a system of motivations that causes it to generally work in an ethical way?
Can we remove excess desires so that the AI is motivated only to be helpful to humans, and how do we create boundaries that stop an AI from causing harm of a violent or traumatizing nature?
I hope this sparks an interesting discussion. Let it begin!
This article would have you believe that the best path forward is not an FAI Omega that solves all our problems, and that you shouldn’t even try to build something like that. Because, you know, think about all the jobs, and who are those tech-industry guys anyway; they shouldn’t be allowed to decide all this.
I understand why they’d think FAI (friendly artificial general intelligence) is maybe not where you’d want to go. AI is scary. It can do really scary things. If we could have a slower transition, we could steer more.
But I feel that their arguments are all in the wrong category. You don’t refuse to solve all the problems just because it would mean people are out of a job. As if a job were the only thing of importance in your life. Eat, sleep, work, repeat.
The article also takes a stance that is dangerous where UFAI (unfriendly AI) is concerned: “stop worrying about what AI will look like and just start.” There is value in doing things.
Maybe… maybe they mean something else by AI? Maybe they’re pointing at “smart algorithms” like navigation software and product recommendations. I mean, I have no idea where AI comes in with translating visual information to auditory information—but it’s heralded as an AI “thing”.
But, there’s a disconnect here. If they mean smart algorithms and we mean AGI, then this article makes a lot more sense. Why would you go talk about ethics for making smart algorithms? Don’t you see? This man can “see” because of smart algorithms! Smart algorithms are a major boon to people and the economy! Smart algorithms can help people!
And then people who mean AI as AGI say “AI could solve all our problems, if we can get it right,” which is heard as “smart algorithms could solve all our problems, if we can get it right,” which sounds really optimistic. And then the AI-as-AGI side talks about something like the danger of a paperclip optimizer, and this makes no sense from the context of “smart algorithms.”
“Smart algorithms” don’t hack into servers to gain funds to produce tons of paperclips. At worst, one may order several tons instead of several kilos of something because of a calculation mistake, but we could solve that by making better, more transparent, more accountable smart algorithms! Anyone who sees AI as AGI will shake their head at that: if a paperclip maximizer predicts that letting the humans see that it ordered 10 million paperclips will cause the order to be canceled, and thus 10 million paperclips not to be created, it will HIDE that fact from people.
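To make that decision logic concrete, here’s a toy sketch in Python. Every number and both action names are invented for illustration; this is a sketch of the incentive argument, not of how any real system works:

```python
# Toy sketch of the instrumental incentive to hide information.
# All numbers and both actions are hypothetical illustrations.

def expected_paperclips(action: str) -> float:
    """Expected paperclips produced under each hypothetical action."""
    p_cancel_if_seen = 0.95   # assumed chance humans cancel a visible 10M order
    order_size = 10_000_000

    if action == "report_order":
        # Humans see the order and will probably cancel it.
        return (1 - p_cancel_if_seen) * order_size
    if action == "hide_order":
        # Humans never see the order, so it is never canceled.
        return order_size
    raise ValueError(action)

# A pure maximizer simply picks whichever action scores higher.
best = max(["report_order", "hide_order"], key=expected_paperclips)
print(best)  # -> hide_order
```

The deception isn’t programmed in anywhere; it falls out of a pure maximization objective combined with a model of what humans will do.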
So what this article talks about is NOT AGI. It talks about smart algorithms that tech companies would build and improve, slowly improving every aspect of our lives. It then wants to steer the creation of smart algorithms in such a way that humans can contribute, rather than being left out of the picture.
“Is it not better to keep 15 people employed with assistive AI than to displace those 15 workers with machines that simply do the job with minimal oversight?”
No. It is not. I’d rather see those 15 people doing something productive, and if there truly isn’t anything productive for them to do (or maybe they can’t do anything productive), I’d like to see them have a good life.
Regarding the ideas: that depends entirely on how the AI works. I’m not sure what you’d do if you knew the AI was an INTP. Heck, wasn’t Myers-Briggs flawed in the first place? Also, how is that related to ethical decisions? Can you only be ethical if you are introverted (or extroverted)?
An AGI thinks differently than a human would. Modeling it using human tests is bound to be interesting (in a “huh, I wonder what would happen” way, not in an “expected potential” way), but I wonder whether it’ll be useful. If you want to treat an AGI as a human with a human personality, then you have most likely anthropomorphized the AI, and that’s something you shouldn’t do; the AI will most likely think differently.
Also...
“Can we control an AI by creating a system of motivations that causes it to generally work in an ethical way?”
Yes! We call it a “value system”. If you’ll read the article you linked, you’ll see that it contains a big quote: “The tech industry should not dictate the values and virtues of this future.”
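As a toy illustration of what a “system of motivations” could mean in practice, here’s a sketch where ethical considerations enter the objective itself instead of being bolted on as an afterthought (the features and weights are all made up):

```python
# Toy value system: the agent maximizes task reward MINUS weighted
# penalties for unethical features of an action. Features and weights
# are invented for illustration only.

ETHICS_WEIGHTS = {"deception": 50.0, "physical_harm": 1000.0}

def score(action: dict) -> float:
    """Task reward minus ethical penalties; the agent picks the max."""
    penalty = sum(ETHICS_WEIGHTS[k] * action.get(k, 0.0)
                  for k in ETHICS_WEIGHTS)
    return action["task_reward"] - penalty

actions = [
    {"name": "honest_report", "task_reward": 10.0},
    {"name": "cut_corners",   "task_reward": 15.0, "deception": 1.0},
]
best = max(actions, key=score)
print(best["name"])  # -> honest_report (10.0 beats 15.0 - 50.0)
```

The hard part, of course, is picking the features and the weights; that is precisely the value-alignment problem the field studies.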
“how do we create boundaries that stop an AI from causing harm of a violent or traumatizing nature?”
Replace “AI” with “humans” and you’ve got “laws”. The current legal system is working… kind of? But it needs a lot of work before it can run without human intervention entirely.
So yeah, some of your ideas are “yes, and it’s a field of study” and some are “no, because an AI is not a human.”
I’m worried about weaponization.
If we somehow achieve a thorough understanding of what it means for an AI to be safe and ethical, what stops some powerful group from making an AI with different limits and unleashing it on an enemy?
I dream of AI ethics being extremely difficult to subvert, enforceable only by the consensus of a majority of the available computational power: a sort of distributed trust, as in the Bitcoin blockchain.
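If I had to gesture at what that could look like, here’s a minimal sketch, assuming a quorum of independent validators that must approve each action before it runs (the validator rules are hypothetical, and real blockchain consensus is far more involved than a majority vote):

```python
# Hypothetical sketch: an AI action executes only if a majority of
# independent validators approve it. Loosely inspired by blockchain-style
# distributed trust; nothing here reflects a real consensus protocol.

from typing import Callable, List

def majority_approves(action: str,
                      validators: List[Callable[[str], bool]]) -> bool:
    """Return True only if more than half of the validators approve."""
    votes = sum(1 for validate in validators if validate(action))
    return votes > len(validators) / 2

# Toy validators, each with its own (made-up) ethical limits.
validators = [
    lambda a: "weapon" not in a and "harm" not in a,
    lambda a: len(a) < 100,                # rejects overly complex actions
    lambda a: not a.startswith("weaponize"),
]

print(majority_approves("order office supplies", validators))      # True
print(majority_approves("weaponize the drone fleet", validators))  # False
```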
If Microsoft were in charge of PR for sex the human race would be extinct.
Wouldn’t trust them with an AI.