I think our collective HHS needs are less “clever policy ideas” and more “actively shoot ourselves in the foot slightly less often.”
AnthonyC
That’s a good point about public discussions. It’s not how I absorb information, but I can definitely see that.
I’m not sure where I’m proposing bureaucracy? The value is in making sure a conversation efficiently adds value for both parties: a very modest amount of groundwork beforehand avoids the friction of rehashing 101-level prerequisites, and maximizes the rate of insight in discussion.
I’m drawing in large part from personal experience. A significant part of my job is interviewing researchers, startup founders, investors, government officials, and assorted business people. Before I get on a call with these people, I look them (and their current and past employers, as needed) up on LinkedIn and Google Scholar and their own webpages. I briefly familiarize myself with what they’ve worked on and what they know and care about and how they think, as best I can anticipate, even if it’s only for 15 minutes. And then when I get into a conversation, I adapt. I’m picking their brain to try and learn, so I try to adapt to their communication style and translate between their worldview and my own. If I go in with an idea of what questions I want answered, and those turn out to not be the important questions, or this turns out to be the wrong person to discuss it with, I change direction. Not doing this often leaves everyone involved frustrated at having wasted their time.
Also, should I be thinking of this as a debate? Because that’s very different from a podcast or interview or discussion. These all have different goals. A podcast or interview is where I think the standard I am thinking of is most appropriate. If you want to have a deep discussion, it’s insufficient, and you need to do more prep work or you’ll never get into the meatiest parts of where you want to go. I do agree that if you’re having a (public-facing) debate where the goal is to win, then sure, this is not strictly necessary. The history of e.g. “debates” in politics, or between creationists and biologists, shows that clearly. I’m not sure I’d consider that “meaningful” debate, though. Meaningful debates happen by seriously engaging with the other side’s ideas, which requires understanding those ideas.
I can totally believe this. But, I also think that responsibly wearing the scientist hat entails prep work before engaging in a four-hour public discussion with a domain expert in a field. At minimum that includes skimming the titles and ideally the abstracts/outlines of their key writings. Maybe ask Claude to summarize the highlights for you. If he’d done that, he’d have figured out the answers to many of these questions on his own, or much faster during discussion. He’s too smart not to.
Otherwise, you’re not actually ready to have a meaningful scientific discussion with that person on that topic.
If I’m understanding you correctly, then I strongly disagree about what ethics and meta-ethics are for, as well as what “individual selfishness” means. The questions I care about flow from “What do I care about, and why?” and “How much do I think others should or will care about these things, and why?” Moral realism and amoral nihilism are far from the only options, and neither are ones I’m interested in accepting.
I’m not saying it improves decision making. I’m saying it’s an argument for improving our decision making in general, if mundane decisions we wouldn’t normally think are all that important have much larger and long-lasting consequences. Each mundane decision affects a large number of lives that parts of me will experience, in addition to the effects on others.
I don’t see #1 affecting decision making because it happens no matter what, and therefore shouldn’t differ based on our own choices or values. I guess you could argue it implies an absurdly high discount rate if you see the resulting branches as sufficiently separate from one another, but if the resulting worlds are ones I care about, then the measure dilution is just the default baseline I start from in my reasoning. Unless there is some way we can or could meaningfully increase the multiplication rate in some sets of branches but not others? I don’t think that’s likely with any methods or tech I can foresee.
#2 seems like an argument for improving ourselves to be more mindful in our choices to be more coherent on average, and #3 an argument for improving our average decision making. The main difference I can think of for how measure affects things is maybe in which features of the outcome distribution/probabilities among choices I care about.
It’s not my field of expertise, so I have only vague impressions of what is going on, and I certainly wouldn’t recommend anyone else use me as a source.
I’m not entirely sure how many of these I agree with, but I don’t really think any of them could be considered heretical or even all that uncommon as opinions on LW?
All but #2 seem to me to be pretty well represented ideas, even in the Sequences themselves (to the extent the ideas existed when the Sequences got written).
#2 seems to me to rely on the idea that the process of writing is central or otherwise critical to the process of learning about, and forming a take on, a topic. I have thought about this, and I think for some people it is true, but for me writing is often a process of translating an already-existing conceptual web into a linear approximation of itself. I’m not very good at writing in general, and having an LLM help me wordsmith concepts and workshop ideas as a dialogue partner is pretty helpful. I usually form my takes by reading and discussing and then thinking quietly, not so much during writing if I’m writing by myself. Say I read a bunch of things or have some conversations, take notes on these, write an outline of the ideas/structure I want to convey, and share the notes and outline with an LLM. I ask it to write a draft that it and I then work on collaboratively. How is that meaningfully worse than writing alone, or writing with a human partner? Unless you meant literally “Ask an LLM for an essay on a topic and publish it,” in which case yes, I agree.
It both is and isn’t an entry level question. On the one hand, your expectation matches the expectation LW was founded to shed light on, back when EY was writing The Sequences. On the other hand, it’s still a topic a lot of people disagree on and write about here and elsewhere.
There’s at least two interpretations of your question I can think of, with different answers, from my POV.
What I think you mean is, “Why do some people think ASI would share some resources with humans as a default or likely outcome?” I don’t think that and don’t agree with the arguments I’ve seen put forth for it.
But I don’t expect our future to be terrible, in the most likely case. Part of that is the chance of not getting ASI for one reason or another. But most of that is the chance that we will, by the time we need it, have developed an actually satisfying answer to “How do we get an ASI such that it shares resources with humans in a way we find to be a positive outcome?” None of us has that answer yet. But, somewhere out in mind design space are possible ASIs that value human flourishing in ways we would reflectively endorse and that would be good for us.
I think it is at least somewhat in line with your post and what @Seth Herd said in reply above.
Like, we talk about LLM hallucinations, but most humans still don’t really grok how unreliable things like eyewitness testimony are. And we know how poorly calibrated humans are about our own factual beliefs, or the success probability of our plans. I’ve also had cases where coworkers complain about low quality LLM outputs, and when I ask to review the transcripts, it turns out the LLM was right, and they were overconfidently dismissing its answer as nonsensical.
Or, we used to talk about math being hard for LLMs, but that disappeared almost as soon as we gave them access to code/calculators. I think most people interested in AI are overestimating how good most other people are at mental math.
It’s a good question. I’d also say limiting mid-game advertising might be a good idea. I’m not really a sports fan in general and don’t gamble, but a few months ago I went to a baseball game, and people advertising (I think it was DraftKings?) were walking up and down the aisles constantly throughout the game. It was annoying, distracting, and disconcerting.
Thanks for laying out the parts I wasn’t thinking about!
I agree. In which case, I think the concrete proposal of “We need to invest more resources in this” is even more important. That way, we can find out if it’s impossible soon enough to use it as justification to make people stop pretending they’ve got it under control.
Over time I am increasingly wondering how much these shortcomings on cognitive tasks are a matter of evaluators overestimating the capabilities of humans, while failing to provide AI systems with the level of guidance, training, feedback, and tools that a human would get.
His view is that this is no different from people buying Taylor Swift tickets. In general I am highly sympathetic to this argument. I am not looking to tell people how much to invest or what goods to consume.
Hmm. Now you have me wondering if I should be biting that bullet in the other direction. I do think Live Nation’s practices could qualify as predatory. I guess the difference is that Swift herself has asked fans not to buy at such high (and scalped) prices.
Edit to add: please ignore my last sentence. @ChristianKl reminded me we definitely know that would not be allowed.
Yes, but I don’t see what VRA provisions the cases I listed could violate? Unless you can show state level election discrimination. And the standard for VRA violation is apparently much higher than I think it should be, given the difficulty of stopping or reducing gerrymandering.
Section 2 of the 14th amendment might apply, but it’s at best unclear whether it means there have to be popular elections for choosing electors at all. At the time it passed there were still plenty of people around who remembered when most state legislatures chose electors directly.
TBH, with the current court I think it’s more likely such a case would get the VRA gutted. The constitution explicitly gives state legislatures authority to apportion electors.
Note: this is not the same as trying to overrule popular vote results after an election happens. That got talked about a lot in 2020, I agree it would not be allowed.
Realistic in what sense?
This proposal requires a majority vote by four state legislatures, and increases each state’s influence in presidential politics.
A constitutional amendment requires a 2⁄3 majority in both houses of Congress and ratification by the legislatures of 3⁄4 of the states, and reduces the influence of about 20 of the states that are currently overrepresented in the electoral college.
This is true. But I don’t think what we ideally need is cleverness, except to the extent that it takes a clever way of communicating with people to help them understand why the current policies produce bad incentives, and agree about changing them.