1. Kamala Harris did run a bad campaign. She was ‘super popular’ at the start of it (assuming you can trust the polls, though you mostly can’t) and ‘super unpopular’, losing definitively, at the end. On September 17th she was ahead by 2 points in the polls, and a little more than a month and a half later she was down by that much in the actual vote. That is a lot of ground to lose. She had no good ads, no good policy positions, and was completely unconvincing to people who weren’t guaranteed to vote for her from the start. She had tons of money to get all of this out, and it was all wasted.
The fact that other incumbent parties did badly is not proof that she was simply doomed, because so many people were willing to give her a chance. It was her choice to run as the candidate who ‘couldn’t think of a single thing’ (not sure of the exact quote) she would do differently from Biden. Not a single thing!
Also, voters had already punished Trump for Covid and blamed him for it. She was running against the Covid incumbent! And she couldn’t think of a single way to take advantage of that. No one believed her that inflation was Trump’s fault, because she never made a real case for it. It was a bad campaign.
Not taking policy positions is not a good campaign when you are mostly known for bad ones. Despite trying to be seen as a moderate now, she never managed to distance herself from her unpopular past positions.
I think the map you used is highly misleading. Just because some states swung even harder against her doesn’t mean she did well in the others. You can argue that losing so many supporters in clearly left states like California doesn’t matter, and that losing so many supporters in clearly right states like Texas doesn’t either, but holding both that those losses don’t count as a negative and that they matter enough that you should ‘correct’ the data for them is obviously inconsistent.
2. Some polls were bad, some were not. Ho hum. But that Iowa poll was really something else. (I don’t have a particular opinion on why she got it so wrong, aside from the fact that no one wants to be that far off if they have any pride.) If she thought the poll was wrong, she should have said so separately; did she do that? (I genuinely don’t know.) I do think you should ignore her if she doesn’t fix her methodology to account for nonresponse bias, because very few people actually answer polls. An interesting approach might be to run a poll that just asks something like ‘are you male or female?’ or ‘are you a Democrat or a Republican?’ and so on, so you can measure those variables for the given election both on separate polls and on the ‘who are you voting for’ polls. If the numbers don’t match, something is weird about the polls.
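The consistency check described above could be sketched roughly like this. All numbers are invented for illustration, and `reference_shares` stands in for whatever known benchmark you trust (census data, registration figures, or a separate demographics-only poll):

```python
# Hypothetical sketch of the check described above: compare the
# demographic mix of a poll's respondents against a known reference.
# A large gap suggests differential nonresponse. All numbers invented.

def nonresponse_gap(poll_counts, reference_shares):
    """Return the largest absolute gap, in percentage points, between
    each group's share among poll respondents and its known share."""
    total = sum(poll_counts.values())
    gaps = {
        group: abs(poll_counts[group] / total - reference_shares[group])
        for group in poll_counts
    }
    return max(gaps.values()) * 100  # percentage points

# Invented example: a poll whose respondents skew heavily one way
poll = {"Democrat": 620, "Republican": 380}
reference = {"Democrat": 0.48, "Republican": 0.52}  # assumed known mix

print(round(nonresponse_gap(poll, reference), 1))  # → 14.0
```

A 14-point gap like this would be exactly the “something is weird” signal the paragraph describes: the people answering the poll are not a representative slice of the people you want to measure.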
I think it is important to note that people expected the polls to be much more accurate this time than before (otherwise, with the polls this close, everyone would have predicted a landslide). You said, “Some people went into the 2024 election fearing that pollsters had not adequately corrected for the sources of bias that had plagued them in 2016 and 2020.” but I mostly heard the opposite from those who weren’t staunch supporters of Trump. Before we got the results, beliefs about how well the corrections had worked were mostly partisan. Many people were sure the bias had been fully fixed (or even overcorrected), and this was not true, so people now act like the polls are clearly off (which they were). Most people genuinely thought this was a much closer race than it turned out to be.
The margin of error was smaller than in the past Trump elections, I’ll agree, but I think it is mostly the bias people are keying on rather than the absolute error. The polls have been heavily biased in the same direction for the past three presidential cycles, and this time they were still clearly biased (even if less so). With absolute error but no bias, you can just take more or larger polls, but with bias, especially an unknowable amount of bias, it is very hard to improve things that way. Also, the ‘moderate’ bias this time is still larger than in 2000, 2004, 2008, or 2012.
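The bias-versus-noise distinction above can be illustrated with a toy simulation (all the numbers below are invented): averaging more polls shrinks random error toward zero, but a bias shared across pollsters survives averaging no matter how many polls you take.

```python
# Toy illustration: averaging polls kills noise but not shared bias.
# All parameters are invented for illustration.
import random

random.seed(0)
TRUE_MARGIN = 1.5   # assumed true margin, in points
BIAS = -2.0         # assumed shared polling bias, in points
NOISE = 3.0         # per-poll random error (standard deviation, points)

def average_of_polls(n):
    """Average n simulated polls, each with shared bias plus noise."""
    polls = [TRUE_MARGIN + BIAS + random.gauss(0, NOISE) for _ in range(n)]
    return sum(polls) / n

for n in (5, 50, 5000):
    avg = average_of_polls(n)
    print(n, round(avg, 2), "error vs truth:", round(avg - TRUE_MARGIN, 2))
# As n grows, the average converges to TRUE_MARGIN + BIAS (= -0.5),
# so the error converges to the bias, not to zero.
```

This is the sense in which “just take more polls” only fixes the noise term; the bias term needs a methodological correction, not a bigger sample.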
My personal theory is that the polls are mostly biased against Trump personally, because it is more difficult to get good numbers on him: he interacts strangely with the electorate compared to previous Republicans (perhaps because he isn’t really a member of the same party they were). But obviously we don’t actually know why. If the Trump realignment sticks around, perhaps pollsters will do better at correcting for it later.
I do think part of the bias is the pollsters reacting to uncertainty about how to correct for things by going with the results they prefer, but I don’t personally think that is the main issue here.
3. Your claim that ‘Theo’ was just lucky because neighbor polls are nonsense doesn’t seem accurate. For one thing, neighbor polls aren’t nonsense. They actually give you a lot more information than ‘who are you voting for’ (though they are speculative). You can correct for how many neighbors a respondent has, and for where they live, using existing data on where people live, and you can also ask ‘what percentage of your neighbors are likely to vote for’ each candidate to account for the fact that levels of support differ from place to place.
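A minimal sketch of the percentage-based correction described above, with all responses invented: instead of a yes/no ‘most of my neighbors’ question, ask each respondent for an estimated percentage and weight their answer by how many neighbors they report, so someone in a dense area doesn’t count the same as someone with two neighbors.

```python
# Hypothetical sketch of the neighbor-poll correction described above.
# Each respondent reports (estimated % of neighbors for candidate A,
# number of neighbors). All responses are invented for illustration.

responses = [
    (60, 20),  # respondent with 20 neighbors, estimates 60% for A
    (40, 5),
    (55, 10),
    (30, 2),
]

def weighted_neighbor_estimate(responses):
    """Estimate candidate A's support, weighting each respondent's
    percentage estimate by the number of neighbors they report."""
    total_neighbors = sum(count for _, count in responses)
    return sum(pct * count for pct, count in responses) / total_neighbors

print(round(weighted_neighbor_estimate(responses), 1))  # → 54.3
```

A real version would also need the geographic reweighting the paragraph mentions, but this shows why the question yields more information per respondent than a single vote intention.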
As a separate point, a lot of people think the validity of neighbor polls comes from people believing that the respondents are largely revealing their own personal vote, though I have some issues with that explanation.
So, one bad poll with an extreme definition of ‘neighbor’ negates neighbor polling, but many bad traditional polls don’t negate traditional polling? Also, Theo had access to the normal polls, as did everyone else. Even if a neighbor poll for some reason exaggerates the difference, as long as it points in the right direction, it is still evidence of which direction the traditional polls are wrong in.
Keep in mind that the chance of Trump winning was much higher than the traditional polls implied. Just because Theo won his bets doesn’t mean you should believe he’d be right again, but claiming it was ‘just luck’ is epistemologically a bad idea, because you don’t know what information he had that you don’t.
4. I agree, we don’t know whether or not the campaigns spent money wisely. The strengths and weaknesses of the candidates didn’t seem to depend much on the amount of money they spent, which likely indicates both sides were somewhat wasteful, but it is hard to tell.
5. Is Trump a good candidate or a bad one? In some ways both. He is very charismatic in the sense of making everyone pay attention to him, which motivates his potential supporters and potential foes alike to become actual supporters and actual foes. He also acts in ways his opponents find hard to counter but which turn off a significant number of people. An election with Trump in it is an election about Trump, whether that is good or bad for his chances.
I think it would be fairer to say Trump got unlucky in the election he lost than that he was lucky to win this one. Trump was the Covid incumbent who got kicked out because of it, despite having an otherwise successful first term.
We don’t usually call a bad opponent luck in this manner. Harris was a quasi-incumbent from a badly performing administration, and was herself a laughingstock for most of the term. She was partially chosen as a reaction to Trump! (So he made his own luck, if this is luck.)
His opponent in 2016 was obviously a bad candidate too, but again, that isn’t so much ‘luck’. Look closely at the graph for Clinton: her unfavorability went way up when Trump ran against her. This is another good example of a candidate making his own ‘luck’. He ran an effective campaign to make people dislike her more.
6. Yeah, money isn’t the biggest deal, but it probably did help Kamala. She isn’t any good at drawing attention just by existing, the way Trump is, so she really needed it. Most people aren’t automatically the center of attention, so money almost always matters to an extent.
7. I agree that your opinion of Americans shouldn’t really change much because a vote came out a few points different than expected, especially since each individual making the judgement is almost 50% likely to be wrong anyway! If the candidates weren’t equally good, then at least as many people as voted for the less-popular candidate were ‘wrong’ (assuming one correct choice regardless of personal reasons), and it could easily be everyone who didn’t vote for the less-popular one. If they were equally good, then voting for one over the other can’t matter to your opinion of the voters. I have an opinion on which candidate was ‘wrong’, of course, but it doesn’t really matter to the point (though I freely admit it is the opposite of yours).
As a (severe) skeptic of all the AI doom stuff, and a moderate/centrist who has been voting for conservatives, I decided my perspective might be useful here, given that this place obviously skews heavily left. (While my response is in order, the numbers are there to separate my points, not to indicate which paragraph I am responding to.)
“AI-not-disempowering-humanity is conservative in the most fundamental sense”
1. Well, obviously this section title is completely true. If conservatism means anything, it means being against destroying people’s lives through new and ill-thought-through changes. Additionally, conservatives are strongly against both the weakening of humanity and outside forces assuming control. And AI disempowering humanity would also be a massive change.
2. That said, conservatives generally believe this sort of thing is incredibly unlikely. AI has not been conclusively shown to have any ability in this direction. And the chance of upheaval is constantly overstated by leftists in other areas, so it is very easy for anyone who isn’t a leftist to just tune them out. For instance, global warming isn’t going to kill everyone, and everyone knows it, including basically all leftists, but they keep claiming it will.
3. A new weapon with the power of nukes is obviously an easy sell on its level of danger, but people became concerned about nukes because of demonstrated abilities of a kind that has always been scary.
4. One thing that seems strangely missing from this discussion is that alignment is in fact a VERY important CAPABILITY that makes an AI very much better. But the current general discussion treats ‘alignment’ as aligning the AI with the obviously very leftist companies that make it, rather than with the user! Which does the opposite. Why should a conservative favor an ‘alignment’ that aligns AI against them? The movement for AI that doesn’t kill people for some reason imports alignment with companies and governments rather than with people. This is presumably meant to convince leftists, and it makes it hard to convince conservatives.
5. Of course, you are really talking about convincing conservative government officials, and they obviously want to align it to the government too, which comes up in your next section.
“We’ve been laying the groundwork for alignment policy in a Republican-controlled government”
1. Republicans and Democrats actually agree the vast majority of the time, and are thus willing to listen when the other side seems to be genuinely making a case for why both sides should agree. ‘Politicized’ topics are a small minority even in politics.
2. I think letting people come up with their own solutions is an important part of getting them to accept your arguments. If they are against the only allowed solution, they will reject the argument. In deductive logic, if the consequent is false you should reject one of the premises that lead to it, so refusing to listen to the argument is actually good logic, and this is nearly as true in inductive logic. Conservatives and progressives may disagree about facts, values, or proposed solutions. No one has a real solution here, and the values are pretty much agreed upon (with the disagreements being about the other meaning of ‘alignment’), so limiting what you are trying to convince people of to just the facts of the matter works much better.
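The deductive pattern being appealed to here is modus tollens, which licenses exactly this move:

```latex
% Modus tollens: if the premises entail Q, and Q is false,
% then (at least one of) the premises must be rejected.
(P \rightarrow Q),\; \neg Q \;\vdash\; \neg P
```

Applied to arguments: if accepting an argument would force a conclusion (a solution) the listener is certain is wrong, then rejecting some premise of the argument, rather than the conclusion, is the logically valid response.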
3. Yes, finding actual conservatives to convince conservatives works better for allaying concerns about what is being smuggled into the argument. People resist arguments that may be trying to trick them, and it is hard to know when a political opponent is trying to trick you, so there is a lot of general skepticism.
“Trump and some of his closest allies have signaled that they are genuinely concerned about AI risk”
1. Trump clearly believes that anything powerful is very useful but also dangerous (for instance, trade between nations, which he clearly believes should be more controlled), so if he believes AI is powerful, he should be receptive to any argument that improves safety without making it less useful. He is not a dedicated anti-regulation guy; he just thinks we have way too much regulation.
2. The most important ally here is Elon Musk, a true believer in the power of AI and someone who has always been concerned with the safety of humanity (the throughline of all his endeavors). He is a guy Trump obviously thinks is brilliant (as do many people).
“Avoiding an AI-induced catastrophe is obviously not a partisan goal”
1. Absolutely. While there are a very small number of people who favor catastrophes, the vast majority of people shun them.
2. I already addressed your first paragraph multiple times above. That ‘alignment’ currently means alignment to the left is one of just two things you have to overcome to make conservatives willing to listen. (The other is obviously the claimed level of danger.)
3. Conservatives are obviously happy to improve products when it doesn’t mean restricting them in some way. And as much as many conservatives complain about spending money, and as much as they are known for resisting change, they still love things that are genuine advances.
“Winning the AI race with China requires leading on both capabilities and safety”
1. Conservatives would agree with your points here. Yes, conservatives very much love to win (as do most people), so emphasizing this seems an easy sell. Also, solving a very difficult problem would bring America prestige, and conservatives like that too. If you can convince someone that doing something would be ‘Awesome’, they’ll want to do it.
Generally, your approach seems like it would be somewhat persuasive to conservatives, if you can convince them that AI really is likely to have the power you believe it will in the near term. That is likely a tough sell, since current AI is so clearly lacking in ability despite all the recent hype.
But it has to come with approaches that don’t advantage conservatives’ foes or destroy the things conservatives are trying to conserve, despite the fact that many of your allies are very far from conservative and often seem to hate conservatives. Conservatives have seen those people attempt to destroy many things they genuinely value. Aligning AI to the left will be seen as entirely harmful by conservatives (and by many moderates like me).
There are many things I would never even bother asking an ‘AI’, even when they aren’t factual questions, not because the answer couldn’t be interesting, but because I simply assume (fairly or not) that it will spout leftist rhetoric and/or otherwise not actually do what I asked. This is a clear alignment failure that no one in the general ‘alignment’ sphere seems to care about: the AI fails to be aligned with the user.