I agree with your first paragraph. I think the second is off-topic in a way that encourages readers, and possibly you yourself, to get mind-killed. Couldn’t you use a less controversial topic as an example? (Very nearly any topic is less controversial.) And did you really need to compound the problem by assigning motivations to other people whom you disagree with? That’s a really good way to start a flame war.
Dumbledore's Army
I spent eighteen months working for a quantitative hedge fund, so we were using financial data—that is, accounts, stock prices, things that are inherently numerical. (Not like, say, defining employee satisfaction.) And we got the data from dedicated financial data vendors, the majority from a single large company, which had already spent a lot of effort standardising it and making it usable. We still spent a lot of time on data cleaning.
The education system also tells students which topics they should care about and think about. Designing a curriculum is a task all by itself, and if done well it can be exceptionally helpful. (As far as I can tell, most universities don’t do it well, but there are probably exceptions.)
A student who has never heard of, say, a Nash equilibrium isn’t spontaneously going to Google it, but if it’s listed as a major topic in the game theory module of their economics course, then they will. And yes, it’s entirely plausible that, once students know what to Google for, they find YouTube or Wikipedia more helpful than their official lecture content. Telling people they need to Google for Nash equilibria is still a valuable function.
As Richard Kennaway said, there are no essences of words. In addition to the points others have already made, I would add: Alice learns what the university tells her to. She follows a curriculum that someone else sets. Bob chooses his own curriculum. He himself decides what he wants to learn. In practice, that indicates a huge difference in their relative personalities, and it probably means that they end up learning different things.
While it’s certainly possible that Bob will choose a curriculum similar to a standard university course, most autodidacts end up picking a wildly different one. Maybe the university’s standard chemistry course includes an introduction to medical drugs and biochemistry, and Bob already knows he doesn’t care about that, so he can skip it. Maybe the university’s standard course hardly mentions superconducting materials, but Bob unilaterally decides to read everything about them and make them his main focus of study.
The argument given relies on a potted history of the US. It doesn’t address the relative success of UK democracy—which even British constitutional scholars sometimes describe as an elective dictatorship, and which notoriously doesn’t give a veto to minorities. It doesn’t address the history of France, Germany, Italy, Canada, or any other large successful democracy, none of which use the US system and most of which aren’t even presidential.
If you want to make a point about US history, fine. If you want to talk about democracy, please try drawing from a sample size larger than one.
I second GeneSmith’s suggestion to ask readers for feedback. Be aware that this is something of an imposition and that you’re asking people to spend time and energy critiquing what is currently not great writing. If possible, offer to trade—find some other people with similar problems and offer to critique their writing. For fiction, you can do this on CritiqueCircle but I don’t know of an organised equivalent for non-fiction.
The other thing you can do is to iterate. When you write something, say to yourself that you are writing the first draft of X. Then go away and do something else, come back to your writing later, and ask how you can edit it to make it better. You already described problems like using too many long sentences. So edit your work to remove them. If possible, aim to edit the day after writing—it helps if you can sleep on it. If you have time constraints, at least go away and get a cup of coffee or something in order to separate writing time from editing time.
First, I just wanted to say that this is an important question and thank you for getting people to produce concrete suggestions.
Disclaimer: I’m not a computer scientist, so I’m approaching the question from the point of view of an economist. As such, I found it easier to come up with examples of bad regulation than of good regulation.
Some possible categories of bad regulation:
1. It misses the point.
Example: a regulation focused only on making sure that the AI can’t be made to say racist things, without doing anything to address extinction risk.
Example: a regulation that requires AI developers to employ ethics officers or risk managers or similar, without any requirement that they be effective. (Something similar to cyber-security today: the demand is that companies devote legible resources to addressing the problem, so they can’t be sued for negligence. The demand is not that the resources are used effectively to reduce societal risk.)
NB: I am implicitly assuming that a government which misses the point will pass bad regulation and then stop because they feel that they have now addressed ‘AI safety’. That is, passing bad legislation makes it less likely that good legislation is passed.
2. It creates bad incentives.
Example: from 2027, the government will cap maximum permissible compute for training at whatever maximum was used by that date. Companies are incentivised to race to do the biggest training runs they can before then.
Example: restrictions or taxes on compute apply to all AI companies unless they’re working on military or national security projects. Companies are incentivised to classify as much of their research as possible as military, meaning the research still goes ahead, but it’s now much harder for independent researchers to assess safety, because now it’s a military system with a security classification.
Example: the regulation makes AI developers liable for harms caused by AI but makes an exception for open-source projects. There is now a financial incentive to make models open-source.
3. It is intentionally accelerationist, without addressing safety.
Example: a government that wants to encourage a Silicon Valley-type cluster in its territory offers tax breaks for AI research over and above existing tax credits. Result: they are now paying people to go into capabilities research, so there is a lot more capabilities research.
Example: industrial policy, or supply-chain friendshoring, that results in a lot of new semiconductor fabs being built (this is an explicit aim of America’s CHIPS Act). The result is a global glut of chip capacity, and training AI ends up a lot cheaper than it would in a free market.
Although clown attacks may seem mundane on their own, they are a case study proving that powerful human thought steering technologies have probably already been invented, deployed, and tested at scale by AI companies, and are reasonably likely to end up being weaponized against the entire AI safety community at some point in the next 10 years.
I agree that clown attacks seem to be possible. I accept a reasonably high probability (c. 70%) that someone has already done this deliberately—the wilful denigration of the Covid lab leak seems like a good candidate, as you describe. But I don’t see evidence that deliberate clown attacks are widespread. And specifically, I don’t see evidence that these are being used by AI companies. (I suspect that most current uses are by governments.)
I think it’s fair to warn against the risk that clown attacks might be used against the AI-not-kill-everyone community, and that this might have already happened, but you need a lot more evidence before asserting that it has already happened. If anything, the opposite has occurred, as the CEOs of all major AI companies signed onto the declaration stating that AGI is a potential existential risk. I don’t have quantitative proof, but from reading a wide range of media across the last couple of years, I get the impression that the media and general public are increasingly persuaded that AGI is a real risk, and are mostly no longer deriding the AGI-concerned as being low-status crazy sci-fi people.
Do you still think your communication was better than the people who thought the line was being towed, and if so then what’s your evidence for that?
We are way off topic, but I am actually going to say yes. If someone understands that English uses standing-on-the-right-side-of-a-line as a standard image for obeying rules, then they are also going to understand variants of the same idea. For example, “crossing a line” means breaking rules/norms to a degree that will not be tolerated, as does “stepping out of line”. A person who doesn’t grok that these are all referring to the same basic metaphor of do-not-cross-line=rule is either not going to understand the other expressions or is going to have to rote-learn them all separately. (And even after rote-learning, they will get confused by less common variants, like “setting foot over the line”.) And a person who uses tow not toe the line has obviously not grokked the basic metaphor.
To recap:
- The original poster, johnswentworth, wrote a piece about people LARPing their jobs rather than attempting to build deeper understanding or models-with-gears.
- aysja added some discussion about people failing to notice that words have referents, as a further psychological exploration of the LARPing idea, and added tow/toe the line as a related phenomenon. They say “LARPing jobs is a bit eerie to me, too, in a similar way. It’s like people are towing the line instead of toeing it. Like they’re modeling what they’re “supposed” to be doing, or something, rather than doing it for reasons.”
- You asked for further clarification.
- I tried using null pointers as an alternative metaphor to get at the same concept.
No one is debating whether learning the etymology of words is important, and I’m not sure how you got hung up on that idea. Toe/tow the line is just an example of people failing to load the intended image/concept, while LARPing (and believing?) that they are in fact communicating in the same way as people who do.
Does that help?
Not sure I understand what you’re saying with the “toe the line” thing.
The initial metaphor was ‘toe the line’, meaning to obey the rules, often reluctantly. Imagine a do-not-cross line drawn on the ground and a person coming so close to the line that their toe touches it, without in fact crossing it. Substituting ‘tow the line’, which has a completely different literal meaning, shows that the person has failed to comprehend the metaphor and has simply adopted the view that this random phrase has this specific meaning.
I don’t think aysja adopts the view that it’s terrible to put idiomatic phrases whole into your dictionary. But a person who replaces a meaningful specific metaphor with a similar but meaningless one is in some sense making less meaningful communication. (Note that this also holds if the person has correctly retained the phrase as ‘toe the line’ but has failed to comprehend the metaphor.)
aysja calls this failing to notice that words have referents, and I think that gets at the nature of the problem. These words are meant to point at a specific image, and in some people’s minds they point at a null instead. It’s not a big deal in this specific example, but a) some people seem to have an awful lot of null pointers, and b) sometimes the words pointing at a null are actually important. For example, think of a scientist who can parrot that results should be ‘statistically significant’ but literally doesn’t understand the difference between doing one experiment and reporting the significance of the result, and doing 20 experiments and reporting only the one ‘significant’ result.
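To make the scientist example concrete, here is a minimal sketch of the arithmetic; the numbers are purely illustrative and assume independent experiments, a true null hypothesis, and a p < 0.05 threshold:

```python
# Why "run 20 experiments, report the one 'significant' result" is misleading.
# Illustrative arithmetic only: assumes independent experiments, a true null
# hypothesis, and a p < 0.05 significance threshold.

alpha = 0.05        # false-positive rate for a single experiment
n_experiments = 20

p_any_false_positive = 1 - (1 - alpha) ** n_experiments

print(f"One experiment:     {alpha:.0%} chance of a spurious 'significant' result")
print(f"Twenty experiments: {p_any_false_positive:.0%} chance that at least one looks 'significant'")
# Roughly 5% versus roughly 64%.
```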
NB: the link to the original blog on the Copenhagen Interpretation of Ethics is now broken and redirects to a shopping page.
Yes. But I think most of us would agree that coercively-breeding or -sterilising people is a lot worse than doing the same to animals. The point here is that intelligent parrots could be people who get treated like animals, because they would have the legal status of animals, which is obviously a very bad thing.
And if the breeding program resulted in gradual increases in intelligence with each generation, there would be no bright line where the parrots at t-minus-1 were still animals but the parrots at time t were obviously people. There would be no fire alarm to make the researchers switch over to treating them like people, getting informed consent etc. Human nature being what it is, I would expect the typical research project staff to keep rationalising why they could keep treating the parrots as animals long after the parrots had achieved sapience.
(There is separate non-trivial debate about what sapience is and where that line should be drawn and how you could tell if a given creature was sapient or not, but I’m not going down that rabbit hole right now.)
You ask whether we could breed intelligent parrots, without any explanation of why we would want to. In short, ‘because we can’ doesn’t mean we should. I’m not 100% against the idea, but anyone trying this seriously needs to think about questions like:
At what point do the parrots get legal rights? If a private effort succeeds in breeding intelligent parrots without government buy-in, it will in effect be creating sapient people who will be legally non-persons and property. There are a lot of ways for that to go wrong.
ETA: presumably the researchers will want to keep controlling the parrots’ reproduction, even as the parrots become more intelligent. What happens if the parrots have their own ideas about who to breed with? Or the rejected parrots don’t want to be sterilised? Will the parrot-breeders end up repeating some of the atrocities of the 20th century eugenics movement because they act like they’re breeding animals even once they are breeding people?
Is there a halfway state in which they breed semi-intelligent parrots that are smarter than normal parrots but not as smart as people? (This could also be the result of a failed project.) What happens to them? At what stage does an animal become intelligent enough that keeping it as a pet is wrong? What consequences will there be if you just release the semi-intelligent parrots into the wild?
What protections are there if the parrot-breeding project runs out of funds or otherwise fails? Will it end up doing the equivalent of releasing a bunch of small children or mentally handicapped people into the wild where they’re ill-equipped to survive, because young intelligent parrots don’t get the legal protections granted to human children?
If there was a really compelling reason to breed intelligent parrots, then these objections could be overcome. But I don’t get any sense from you of what that compelling reason is. “Somebody thinks it sounds cool” is a good reason to do a lot of things, but not when the consequences involve something as ethically complex as creating a sapient species.
Metaculus lets you write private questions. Once you have an account, it’s as simple as selecting ‘write a question’ from the menu bar and then setting the question to private rather than public via a droplist in the settings when you write it. You can resolve your own questions, i.e. mark them as yes/no or whatever, and then it’s easy to use Metaculus’ tools for examining your track record, including your Brier score.
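For anyone unfamiliar with the metric, here is a minimal sketch of how a Brier score works for binary questions; the forecasts and outcomes below are invented purely for illustration, and Metaculus does this calculation for you:

```python
# Brier score for binary (yes/no) questions: the mean squared difference
# between your forecast probability and the outcome (1 = yes, 0 = no).
# Lower is better; always guessing 50% scores 0.25.

def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical private questions and how they resolved.
forecasts = [0.8, 0.3, 0.6]   # probabilities you assigned
outcomes  = [1,   0,   0]     # actual resolutions
print(brier_score(forecasts, outcomes))  # ≈ 0.163
```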
@andeslodes, congratulations on a very good first post. You clearly explained your point of view, and went through the text of the proposed Act and the background of the relevant Senators in enough detail to understand why this is important new information. I was already taking the prospect of aliens somewhat-seriously, but I updated higher after this post.
I notice that Metaculus is at just a 1.1% probability of confirmed alien tech by 2030, which seems low.
I had not, and still don’t know about it. Can you post a link?
Thank you for taking the time to highlight this. I hope that some LessWrongers with suitable credentials will sign up and try to get a major government interested in x-risk.
I see a lot of people commenting here and in related posts on the likelihood of aliens deliberately screwing with us and/or how improbable it is that advanced aliens would have bad stealth technology or ships that crash. So I wanted to add a few other possible scenarios into the discussion:
- Earth is a safari park where people go to see the pristine native wildlife (us). Occasionally some idiot tourist gets too close and disturbs that wildlife despite the many very clear warnings telling them not to. (Anyone who has ever worked with human tourists will probably have some sympathy for this explanation.)
- Observation of Earth is baby’s first science project. High-schoolers and college undergrads can practice their anthropology/xenology/whatever on us. Yes, they’re supposed to keep their existence secret from the natives, otherwise it wouldn’t be good science, but they’re kids with cheap disposable drones and sometimes they screw up.
- There is a Galactic Federation with serious rules about not disturbing primitive civilisations (us), and only accredited scientists can go observe them, but even scientists get bored and drunk and sometimes do dumb stuff when they’re on a low-prestige project a long way from any supervisors.
Obviously, these are all human-inspired examples; aliens could have other motivations incomprehensible to us. (Imagine trying to explain a safari park to a stone-age hunter-gatherer. Then remember that aliens potentially have a comparable tech gap to us, plus non-human psychology.)
Some takeaways from the scenarios above:
Aliens aren’t necessarily monolithic. There may be rule-setting entities (whoever says ‘don’t go near the wildlife’) which are separate from rule-following entities, and rules about not bothering Earthlings may not be maximally enforced.
We shouldn’t assume we’re seeing their most advanced tech. Human beings can manufacture super-safe jet planes that approximately never crash. We still build cheap consumer drones that sometimes fall out of the sky. Saying “how come an ultra-advanced alien civilisation can’t build undetectable craft?” is like looking at a kid whose drone just crashed in a park and saying “you’d think humanity could build planes that stay in the air”. We can. We just don’t always do it, nor should we.
We shouldn’t assume they care that much. Whoever is in charge of enforcing the rules about “don’t bother earthlings” might be the equivalent of a bored bureaucrat whose budget just got cut and who gets paid the same whether or not they actually do their job. Or a schoolteacher who’s more interested in getting the kids to complete their project than in complying with every pettifogging rule that no one ever checks anyway.
I realise all the above makes it sound like I believe in aliens, so for the record, I think that Chinese drones or other mundane causes are the most likely explanations for American UAP reports, and that hoax/disinformation-op/mental-breakdown are the most likely explanations for the Grusch whistleblower claims. But I would put about a 10% probability on actual aliens, which I realise is a lot higher than most LessWrongers.
A parable to elucidate my disagreement with parts of Zvi’s conclusion:
Jenny is a teenager in a boarding school. She starts cutting herself using razors. The school principal bans razors. Now all the other kids can’t shave and have to grow beards (if male) and have hairy armpits. Jenny switches to self-harming with scissors. The school principal bans scissors. Now every time the students receive a package they have to tear it open with their bare hands, and anyone physically weak or disabled has to go begging for someone to help them. Jenny smashes a mirror into glass shards, and self-harms with those. The principal bans mirrors...
Any sane adult here would be screaming at the principal. “No! Stop banning stuff! Get Jenny some psychological treatment!”
Yes, I know the parable is not an exact match to sports betting, but it feels like a similar dynamic. There are some people with addictive personality disorder and they will harm themselves by using a product that the majority of the population can enjoy responsibly[1]. (Per a comment above, 39% of the US population bet online, of whom only a tiny fraction will have a gambling problem.) The product might be gambling, alcohol, online porn, cannabis, or something else. New such products may be invented in future. But, crucially, if one product is not available, then these people will very likely form an addiction to something else[2]. That is what ‘addictive personality disorder’ means.
Sometimes the authorities want to ban the product in order to protect addicts. But no one ever asks the question: how can we stop them from wanting to self-harm in the first place? Because just banning the current popular method of self-harming is not a solution if people will go on to addict themselves to something else instead[3]. I feel that the discourse has quietly assumed a fabricated option: if these people can’t gamble then they will be happy unharmed non-addicts.
I am a libertarian, and I have great sympathy for Maxwell Tabarrok’s arguments. But in this case, I think the whole debate is missing a very important question. We should stop worrying about whether [specific product] is net harmful and start asking how we can fix the root cause of the problem by getting effective treatment to people with addictive personality disorder. And yes, inventing effective treatment and rolling it out on a population scale is a much harder problem than just banning [target of latest moral panic]. But it’s the problem that society needs to solve.
[1] In my parable the principal bans ‘useful’ things, while authorities responding to addictive behaviour usually want to ban entertainment. That isn’t a crux for me. Entertainment is intrinsically valuable, and banning it costs utility—potentially large amounts of utility when multiplied across an entire population.
[2] Or more than one something else. People can be addicted to multiple things; I’m just eliding that for readability.
[3] Zvi quotes research saying that legalizing sports betting resulted in a 28% increase in bankruptcies, which suggests it might be more financially harmful than whatever other addictions people had before, but that’s about as much as we can say.