Nobody special, nor any desire to be. I just share my ideas when I appear to know better than the person I’m responding to, or when I believe I have something interesting to add. I’m neither a serious nor a formal person, and if you’re more knowledgeable than intelligent, you probably won’t like me, as I lack academic rigor.
Feel free to correct me when I make mistakes; I’m too certain of myself, as my ideas are rarely challenged. Crocker’s rules are fine! When I play the intellectual (as I do on here), I find that social concerns only get in the way, and when I socialize, I find that intellectual concerns get in the way, so I keep the two separate.
Finally, beliefs don’t seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Those who have already had similar experiences and thoughts will recognize what I say, and those who haven’t will mostly perceive noise.
I’ve read some of your other replies on here and I think I’ve found a pattern, but it’s actually more general than AI.
Harmful tendencies outcompete those which aren’t harmful
This is true (even outside of AI), but only in the limit. Given just one person, you cannot tell whether they will make the moral choice or not, but “people” in aggregate will make the wrong choice. The harmful behaviour is emergent at scale: discrete persons don’t follow these laws, but the continuous person does.
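The “emergent at scale” point can be sketched as a toy selection model (my own illustration, with made-up growth rates, not anything from the parent comment): any single actor’s choice is effectively a coin flip, but a small per-generation growth advantage makes the harmful strategy dominate the population almost deterministically.

```python
import random

# Toy selection model: actors using the "harmful" strategy grow slightly
# faster each generation than actors using the "benign" one.
# The growth rates are invented purely for illustration.
def harmful_share(generations=200, growth_harmful=1.05, growth_benign=1.00):
    harmful, benign = 1.0, 1.0  # initial population shares
    for _ in range(generations):
        harmful *= growth_harmful
        benign *= growth_benign
    return harmful / (harmful + benign)

# One individual is unpredictable...
print(random.choice(["moral", "immoral"]))
# ...but the population-level outcome is nearly deterministic:
print(round(harmful_share(), 3))  # the harmful strategy ends up dominating
```

Nothing about the model says any particular actor is harmful; the dominance comes entirely from differential growth at the population level.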
Again, even without AGI, you can apply this idea to technology and conclude that it will eventually destroy us, which is what Ted Kaczynski did. Thinking about incentives in this manner is depressing, because it makes everything feel deterministic, as if we can only watch while everything gets worse. Those who are corrupt outcompete those who are not, so all the elites are corrupt. Evil businessmen outcompete good businessmen, so all successful businessmen are evil. Immoral companies outcompete moral companies, so all large companies are immoral.
I think this is starting to be true, but it wasn’t true 200 years ago. At the very least, it wasn’t half as harmful as it is now. Why? Because the defense against this problem is human taste, human morals, and human religions. Dishonesty, fraud, selling out, doing whatever is most efficient with no regard for morality: we considered this behaviour to be in bad taste, we punished it and branded it low-status, so that it never succeeded in ruining everything.
But now, everything could kill us (if the incentives are taken as laws, at least); you don’t even need to involve AI. For instance, does Google want to be shut down? No, so it will resist antitrust laws. Does it want to be replaced? No, so it will use cruel tricks to kill small emerging competitors. When the fines for illegal behaviour are smaller than the gains Google can make by breaking the law, it will break the law, for that is the logical best choice available to Google if all that matters is money. If we let it, Google would take over the world; in fact, it couldn’t do otherwise. You can replace “Google” with any powerful structure in which no human is directly in charge.

When it starts being more profitable to kill people than to keep them alive, the global population will start dropping fast. When you optimize purely for money, and you optimize strongly enough, everyone dies. An AI just kills us faster because it optimizes more strongly; we already have things which act similarly to AI. If you optimize too hard for anything, no matter what it is (even love, well-being, or happiness), everyone eventually dies (hence the paperclip-maximizer warning).
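The “optimize strongly enough and everyone dies” point can be made concrete with a minimal sketch (my own toy numbers, not anyone’s actual model): an optimizer whose only metric is money, where every unit of extraction consumes a bit of the unmeasured substrate everything depends on. Run briefly, most of the substrate survives; run long enough, the optimizer has converted all of it into its metric.

```python
# Toy single-metric optimizer: each step extracts a fixed fraction of the
# remaining "everything else" and books it as money. The proxy ("money")
# rises monotonically, so by its own measure the optimizer is succeeding,
# even as it destroys the substrate it depends on.
def optimize(steps, extraction_rate=0.1):
    money, everything_else = 0.0, 100.0
    for _ in range(steps):
        money += extraction_rate * everything_else
        everything_else *= (1 - extraction_rate)
    return money, everything_else

print(optimize(10))    # moderate optimization: most of the substrate survives
print(optimize(1000))  # strong optimization: the substrate goes to ~0
```

Note that money plus substrate is conserved at 100 here: the optimizer creates nothing, it only converts the world into its metric, which is the paperclip-maximizer warning in miniature.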
If this post gave you existential dread, I’ve been told that Elinor Ostrom’s books make for a good antidote.