I meant the noise pollution example in my essay to illustrate the Coase theorem, but I agree with you that property rights are not strong enough to solve AI risk. I agree that AI will open up new paths for solving all kinds of problems, including giving us solutions that could end up helping with alignment.
James_Miller
Quantum Immortality: A Perspective if AI Doomers are Probably Right
The big thing I used it for was asking it to find sentences it thinks it can improve, and then having it give me the improved sentences. I created this GPT to help with my writing: https://chat.openai.com/g/g-gahVWDJL5-iterative-text-improver
I agree with the analogy in your last paragraph, and this gives hope for governments slowing down AI development, if they have the will.
Adam Smith Meets AI Doomers
Cortés, AI Risk, and the Dynamics of Competing Conquerors
Germany should plant agents inside of Russia to sabotage Russian railroads at the start of the war. At the start of the war, Austria-Hungary should just engage in a holding action against Serbia and instead use almost all its forces to hold off the Russians. Germany should attack directly into France by making use of a massive surprise chemical weapons attack against static French defenses.
He wrote “unless your GPT conversator is able to produce significantly different outputs when listening the same words in a different tone, I think it would be fair to classify it as not really talking.” So if that is true and I’m horrible at picking up tone, so that it doesn’t impact my “outputs”, I’m not really talking.
I think you have defined me as not really talking as I am on the autism spectrum and have trouble telling emotions from tone. Funny, given that I make my living talking (I’m a professor at a liberal arts college). But this probably explains why I think my conversator can talk and you don’t.
You wrote “GPT4 cannot really hear, and it cannot really talk”. I used GPT builder to create Conversation. If you use it on a phone in voice mode it does, for me at least, seem like it can hear and talk, and isn’t that all that matters?
Most journalists trying to investigate this story would attempt to interview Annie Altman. The base rate (converted to whatever heuristic the journalist used) would be influenced by whether she agreed to the interview and, if she did, how she came across. The reference class wouldn’t just be “estranged family members making accusations against celebrity relatives”.
By “discredited” I didn’t mean receive bad but undeserved publicity. I meant operate in a way that would cause reasonable people to distrust you.
“I would like to note that this is my first post on LessWrong.” I find this troubling given the nature of this post. It would have been better if this post was made by someone with a long history of posting to LessWrong, or someone writing under a real name that could be traced to a real identity. As someone very concerned with AI existential risk, I greatly worry that the movement might be discredited. I am not accusing the author of this post of engaging in improper actions.
“they also could do things like run prediction markets on people researching S-risk, to forecast the odds that they end up going crazy ”
If this is a real concern, we should check whether fear of hell often drove people crazy.
I don’t think Austria-Hungary was in a prisoners’ dilemma, as it wanted a war so long as it would have German support. I think the prisoners’ dilemma (imperfectly) comes into play for Germany, Russia, and then France, given that Germany felt it needed to have Austria-Hungary as a long-term ally or risk getting crushed by France + Russia in some future war.
Cleaner, but less interesting, plus I have an entire Demon Games exercise we do on the first day of class. Yes, the defense buildup, but also everyone going to war even though everyone (with the exception of the Austro-Hungarians) thinks they are worse off going to war than keeping the peace that previously existed, while recognizing that if they don’t prepare for war, they will be worse off. Basically, if the Russians don’t mobilize they will be seen to have abandoned the Serbs, but if they do mobilize and the Germans don’t quickly move to attack France through Belgium, then Russia and France will have the opportunity (which they would probably take) to crush Germany.
I think the disagreement is that I think the traditional approach to the prisoners’ dilemma makes it more useful as a tool for understanding and teaching about the world. Any miscommunication is probably my fault for failing to sufficiently engage with your arguments, but it FEELS to me like you are either redefining rationality or creating a game that is not a prisoners’ dilemma. I would define the prisoners’ dilemma as a game in which both parties have a dominant strategy in which they take actions that harm the other player, yet both parties are better off if neither plays this dominant strategy than if both do, and I would define a dominant strategy as something a rational player always plays regardless of what he thinks the other player will do. I realize I am kind of cheating by trying to win through definitions.
I teach an undergraduate game theory course at Smith College. Many students start by thinking that rational people should cooperate in the prisoners’ dilemma. I think part of the value of game theory is in explaining why rational people would not cooperate, even knowing that everyone not cooperating makes them worse off. If you redefine rationality such that you should cooperate in the prisoners’ dilemma, I think you have removed much of the illuminating value of game theory. Here is a question I will be asking my game theory students in the first class:
Our city is at war with a rival city, with devastating consequences awaiting the loser. Just before our warriors leave for the decisive battle, the demon Moloch appears and says “sacrifice ten healthy, loved children and I will give +7 killing power (which is a lot) to your city’s troops and subtract 7 from the killing power of your enemy. And since I’m an honest demon, know that right now I am offering this same deal to your enemy.” Should our city accept Moloch’s offer?
I believe that under your definition of rationality this Moloch example loses its power to, for example, partly explain the causes of WWI.
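The Moloch exercise has the standard prisoners' dilemma structure, and that structure can be checked mechanically. Here is a minimal sketch in Python; the specific payoff numbers are my own illustrative assumptions (they are not in the exercise), chosen only so that sacrificing is a dominant strategy while mutual refusal still beats mutual sacrifice:

```python
# Hypothetical payoffs for the Moloch game, from our city's perspective.
# Keys are (our_choice, their_choice); values are (our_payoff, their_payoff).
PAYOFFS = {
    ("sacrifice", "sacrifice"): (-10, -10),   # even battle, children lost on both sides
    ("sacrifice", "refuse"):    (40, -100),   # +7 vs -7 killing power: we win decisively
    ("refuse",    "sacrifice"): (-100, 40),   # we lose decisively
    ("refuse",    "refuse"):    (0, 0),       # even battle, no children sacrificed
}

def is_dominant(strategy):
    """True if `strategy` gives us a strictly higher payoff than the
    alternative, no matter what the rival city chooses."""
    other = "refuse" if strategy == "sacrifice" else "sacrifice"
    return all(
        PAYOFFS[(strategy, opp)][0] > PAYOFFS[(other, opp)][0]
        for opp in ("sacrifice", "refuse")
    )

print(is_dominant("sacrifice"))  # True: each city sacrifices regardless of the other
print(is_dominant("refuse"))     # False
# Yet both refusing Pareto-dominates both sacrificing:
print(PAYOFFS[("refuse", "refuse")][0] > PAYOFFS[("sacrifice", "sacrifice")][0])  # True
```

The point of the sketch is the last two lines: a rational city plays its dominant strategy and sacrifices, even though both cities would be better off if neither did.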
A historical analogy might be the assassination of Bardiya, who was the king of Persia and the son of Cyrus the Great. Darius, who led the assassination, claimed that the man he killed was an impostor who used magic powers to resemble the son of Cyrus. As Darius became the next king of Persia, everyone was brute forced into accepting his narrative of the assassination.