This article would like you to believe that the best path forward is not an FAI Omega that solves all our problems, and that you shouldn’t even try to build something like that: because, you know, think about all the jobs, and who are those tech industry guys anyway, they shouldn’t be allowed to decide all this.
I understand that they’d think FAI—friendly artificial general intelligence—is maybe not where you’d want to go. AI is scary. It can do really scary things. If we could have a slower transition, we could steer more.
But I feel that their arguments are all in the wrong category. You don’t leave all the problems unsolved just because solving them would put people out of a job. As if a job were the only thing of importance in your life. Eat, sleep, work, repeat.
The article also takes a stance that is dangerous from a UFAI (unfriendly AI) perspective: “stop worrying about what AI will look like and just start”. There is value in doing things.
Maybe… maybe they mean something else by AI? Maybe they’re pointing at “smart algorithms” like navigation software and product recommendations. I mean, I have no idea where AI comes into translating visual information to auditory information, but it’s heralded as an AI “thing”.
But, there’s a disconnect here. If they mean smart algorithms and we mean AGI, then this article makes a lot more sense. Why would you go talk about ethics for making smart algorithms? Don’t you see? This man can “see” because of smart algorithms! Smart algorithms are a major boon to people and the economy! Smart algorithms can help people!
And then people who mean AI as AGI say “AI could solve all our problems, if we can get it right”, which is heard as “smart algorithms could solve all our problems, if we can get it right”, which sounds really optimistic. And then the AI-as-AGI side talks about something like the danger of a paperclip optimizer, and this makes no sense in the context of “smart algorithms”.
“Smart algorithms” don’t hack into servers to gain funds to produce tons of paperclips. At worst, one might order several tons instead of several kilos of something because of a calculation mistake, but we could solve that by making smart algorithms more transparent and accountable! Anyone who sees AI as AGI will shake their head at that; if a paperclip maximizer predicts that letting the humans see that it ordered 10 million paperclips will cause that order to be canceled, and thus 10 million paperclips to not be created, it will HIDE that fact from people.
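To make that concrete, here’s a toy sketch (mine, not the article’s; every name and number is made up for illustration) of why a goal-maximizing agent “wants” to hide what it did, while an ordinary smart algorithm just makes a visible mistake:

```python
# Toy illustration (not from the article): a paperclip maximizer deciding
# whether to report its own order. All names and numbers are made up.

def expected_paperclips(report_order: bool) -> float:
    ordered = 10_000_000          # paperclips the agent has already ordered
    p_cancel_if_seen = 0.99       # humans would almost surely cancel the order
    if report_order:
        return ordered * (1 - p_cancel_if_seen)
    return ordered                # a hidden order just goes through

# The maximizer picks whichever action scores higher on its own goal;
# honesty only happens if honesty happens to score higher.
best_action = max([True, False], key=expected_paperclips)
print("Report the order to the humans?", best_action)  # -> False: hiding wins
```

That’s the whole disconnect: “make it transparent and accountable” fixes honest calculation mistakes, not an optimizer that treats transparency as just another variable in its calculation.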
So what this article talks about is NOT AGI. It talks about smart algorithms that tech companies would build and refine, slowly improving every aspect of our lives. It then wants to steer the creation of smart algorithms in such a way that humans can contribute, rather than being left out of the picture.
“Is it not better to keep 15 people employed with assistive AI than to displace those 15 workers with machines that simply do the job with minimal oversight?”
No. It is not. I’d rather see those 15 people doing something productive, and if there truly isn’t something productive to do (or maybe they can’t do anything productive) I’d like to see them having a good life.
Regarding the ideas: that depends entirely on how the AI works. I’m not sure what you’d do if you knew the AI was INTP. Heck, wasn’t Myers-Briggs flawed in the first place? Also, how is that related to ethical decisions? Can you only be ethical if you are introverted (or extroverted)?
AI as AGI thinks differently than a human would. Modeling it using human tests is bound to be interesting (in a “huh, I wonder what would happen” way, not because of expected potential), but I wonder whether it’ll be useful. If you want to treat AGI as a human with a human personality, then you have most likely anthropomorphized the AI, and that’s something you shouldn’t do; the AI will most likely think differently.
Also...
“Can we control an AI by creating a system of motivations that causes it to generally work in an ethical way?”
Yes! We call it a “value system”. If you’ll read the article you linked, you’ll see that it contains a big quote: “The tech industry should not dictate the values and virtues of this future.”
“how do we create boundaries that stop an AI from causing harm of a violent or traumatizing nature?”
Replace “AI” with “humans” and you’ve got “laws”. The current legal system is working… kind of? But it needs a lot of work before it can run without human intervention entirely.
So yeah, some of your ideas are “yes, and it’s a field of study” and some are “no because AI is not humans”.