Yes, it’s easy enough to get ChatGPT to say things that are variously dumb, malicious, and silly. Though I haven’t played that game (much), I’m beginning to think that LLM Whac-A-Mole is a mug’s game.
So what? Any mind, or mind-like artifact (MLA), can be broken. That’s just how minds, and MLAs, are.
Meanwhile, I’ve been having lots of fun playing a cooperative game with it: give me a Girardian reading of Spielberg’s Jaws. I’m writing an article about that, which should appear in 3 Quarks Daily this coming Monday.
So, think about it. How do human minds work? We all have thoughts and desires that we don’t express to others, much less act on. ChatGPT is a rather “thin” creature: for it, to “think” something is to express it, and to express it is to do it.
And how do human minds get “aligned”? It’s a long process, one that never really ends, though it’s most intense during a person’s first two decades. The process involves a lot of interaction with other people and is by no means perfect. If you want to create an artificial device with human powers of mentation, do you really think there’s an easier way to achieve “alignment”? Do you really think that this “alignment” can be designed in?