I had not, and still don’t know about it. Can you post a link?
Dumbledore's Army
Thank you for taking the time to highlight this. I hope that some LessWrongers with suitable credentials will sign up and try to get a major government interested in x-risk.
I see a lot of people, here and in related posts, commenting on the likelihood that aliens are deliberately screwing with us, and/or on how improbable it is that advanced aliens would have bad stealth technology or ships that crash. So I wanted to add a few other possible scenarios to the discussion:
- Earth is a safari park where people go to see the pristine native wildlife (us). Occasionally some idiot tourist gets too close and disturbs that wildlife despite the many very clear warnings telling them not to. (Anyone who has ever worked with human tourists will probably have some sympathy for this explanation.)
- Observation of Earth is baby’s first science project. High-schoolers and college undergrads can practice their anthropology/xenology/whatever on us. Yes, they’re supposed to keep their existence secret from the natives, otherwise it wouldn’t be good science, but they’re kids with cheap disposable drones and sometimes they screw up.
- There is a Galactic Federation with serious rules about not disturbing primitive civilisations (us), and only accredited scientists can go observe them, but even scientists get bored and drunk and sometimes do dumb stuff when they’re on a low-prestige project a long way from any supervisors.
Obviously, these are all human-inspired examples; aliens could have other motivations incomprehensible to us. (Imagine trying to explain a safari park to a stone-age hunter-gatherer. Then remember that aliens potentially have a comparable tech gap to us, plus non-human psychology.)
Some takeaways from the scenarios above:
- Aliens aren’t necessarily monolithic. There may be rule-setting entities (whoever says ‘don’t go near the wildlife’) that are separate from rule-following entities, and rules about not bothering Earthlings may not be perfectly enforced.
- We shouldn’t assume we’re seeing their most advanced tech. Human beings can manufacture super-safe jet planes that approximately never crash. We still build cheap consumer drones that sometimes fall out of the sky. Saying “how come an ultra-advanced alien civilisation can’t build undetectable craft?” is like looking at a kid whose drone just crashed in a park and saying “you’d think humanity could build planes that stay in the air”. We can. We just don’t always do it, nor should we.
- We shouldn’t assume they care that much. Whoever is in charge of enforcing the rules about “don’t bother Earthlings” might be the equivalent of a bored bureaucrat whose budget just got cut and who gets paid the same whether or not they actually do their job. Or a schoolteacher who’s more interested in getting the kids to complete their project than in complying with every pettifogging rule that no one ever checks anyway.
I realise all the above makes it sound like I believe in aliens, so for the record: I think Chinese drones or other mundane causes are the most likely explanations for American UAP reports, and hoax/disinformation-op/mental-breakdown are the most likely explanations for the Grusch whistleblower claims. But I would put about a 10% probability on actual aliens, which I realise is a lot higher than most LessWrongers would.
Possible, yes, but if all advanced civs are highly prioritising stealth, that implies some version of the Dark Forest theory, which is terrifying.
I can come up with a hypothesis about the behaviour of the sources: the drones you send to observe and explore a planet might be disposable. (E.g. we’ve left rovers behind on Mars because it’s not worth the effort to retrieve them from the gravity well.) Although if the even-wilder rumours about bio-alien corpses are true, that hypothesis fails too.
But the broader picture: that there are high-tech aliens out there whom we haven’t observed doing things like building Dyson spheres or tiling the universe with computronium? That they’re millions of years ahead of us and somehow neither progressed to mega-tech nor succumbed to an AI apocalypse? Or that they’re not millions of years ahead of us, and there’s some insane coincidence where two intelligent species emerged on different planets at the same time, yet no older civs have already grabbed their lightcone? I’m as boggled as you.
I’m kind of hoping this whole thing is a hoax or a deliberate disinformation operation or something, because I have absolutely no idea what to think about the alternative. But after the number of leaks about UAPs over the last few years, I put at least a 10% probability on literal alien spacecraft visiting our planet.
I don’t think the hyperloop matters one way or the other to your original argument (which I agree with). Someone can be a genius and still make mistakes and fail to achieve every single goal. (For another example, consider Isaac Newton, who (a) wasted a lot of time studying alchemy and still failed to transmute lead into gold, and (b) screwed up his day job at the Royal Mint so badly that England ended up with a de facto gold standard even though it was supposed to have both silver and gold currency. He’s still a world-historic genius for inventing calculus.)
OP discusses CFCs in the main post. But yes, that’s the most hopeful precedent. The problem is that CFCs could be replaced by alternatives that were reasonably profitable for the manufacturers, whereas AI can’t be.
The child labour example seems potentially hopeful for AI, given that fears of AI taking jobs are very real and salient, even if not everyone groks the existential risks. Possible takeaway: rationalists should be a lot more willing to amplify, encourage, and give resources to protectionist campaigns to ban AI from taking jobs, even though our real worry is x-risk, not jobs.
Related point: I notice that the human race has not banned gain-of-function research, even though it seems to carry high risks, theoretically even existential ones. I am trying to think of something that’s banned purely for posing existential risk, and I’m coming up blank.[^1]
Also related: are there religious people who could be persuaded to object to AI in the same way they object to, e.g., human gene editing? Can we persuade religious influencers that building AI is ‘playing God’ in some way? (Our very atheist community are probably the wrong people to reach out to the religious. Do we know any intermediaries who could be persuaded?)
Or to summarise: if we can’t get AGI banned/regulated for the right reasons (and we should keep trying), can we support or encourage those who want to ban AGI for the wrong reasons? Or at minimum, not stand in their way? (I don’t like advocating Dark Arts, but my p(doom) is high enough that I would encourage any peaceful effort to ban, restrict, or slow AI development, even if it means working with people I disagree with on practically everything else.)
[^1]: European quasi-bans on genetic modification of just about anything are one possibility. But those seem more like reflexive anti-corporatism, plus religious fear of playing God, plus a pre-existing precautionary attitude applied to food items.
One more question for your list: what industries have not been subject to this regulatory ratchet and why not?
I’m thinking of insecure software, although others may be able to come up with more examples. Right now, software vendors have no real incentive to ship secure code. If someone sells a connected fridge which any thirteen-year-old can recruit into their botnet, there’s no consequence for the vendor. If Microsoft ships code with bugs and Windows gets hacked worldwide, all they suffer is embarrassment[1]. And this situation has stayed stable since the invention of software. Even after high-publicity incidents like Heartbleed or NotPetya, the industry hasn’t suffered the usual regulatory response of ‘something must be done; this is something, so we’re going to do it’.
You don’t have to start with a pre-approval model. You could write a law requiring all software to be ‘reasonably secure’ and create a 1906-FDA-style policing agency that would punish insecure software after the fact. That would be the natural first step of the regulatory ratchet, and it seems feasible, but no one (in any country?) has done it, and I don’t know why.
We’ve also had demonstrations in principle of the ability to hack highly regulated objects like medical devices and cars, but still no ratchet, even in those already-regulated domains, and I don’t understand why not.
My best explanation for this non-regulation is that nothing bad enough has happened: high-profile safety incidents where people die are what usually start the ratchet. But we have had high-profile hacks, at least one of which took down hospitals and killed someone in a fairly direct way, and I don’t even remember any journalists pushing for regulation. I notice that I am confused.
Software is an example of an industry where the first step of the ratchet never happened. Are there any industries which got a first step, like a policing agency, and then didn’t ratchet further even after high-publicity disasters? Can we learn how to prevent or interrupt the regulatory ratchet?
[1] Bruce Schneier has been pushing a liability model as a possible solution for at least ten years, but nothing has changed.
There are already jurisdictions where prostitution is legal, including the Netherlands, the UK, and the US state of Nevada. (Not a complete list, just the first three I thought of off the top of my head.) None of them require people to prostitute themselves before they can access public benefits.
Likewise, there are countries, including the USA, where it’s legal to pay people for donating human eggs, and probably other body parts. So far as I know, no US state requires women to attempt that before accessing welfare, and the US welfare system is less generous than European ones.
Empirically, your concern seems to have no basis in fact.
Thanks, that’s a good example. I’ll think about it.
I think I overstated slightly. And I’m focusing on the rationale for taking away options as much as on the taking-away itself. I’d restate to something like: taking people’s options away for their own good, because you think they will make the wrong decisions for themselves, is almost always bad.
There’s a discussion further down the thread about arms-race dynamics, where options are taken away in order to solve a coordination problem, and I accept that this is sometimes a good idea. Note that the arms-race example recognises that everyone involved is behaving in a way that is individually rational. But the idea that politicians and regulators, living generally comfortable lives, know better than poor people what is good for them is something I really object to. It reminds me of the Victorian reply to the women’s rights movement: that male relatives should be able to control women’s lives because they could make better decisions than women would make for themselves. Ugh.
On the specific sex example: yes, it’s unpleasant to be in that situation; everyone agrees. The problem is that banning payment in sex forces people into situations they find even worse, like homelessness. I would prefer governments to solve these problems constructively, for example by building more housing, and I said so in a footnote to the main post. But in the meantime, we should stop banning poor people from doing the best they can to cope with the world that actually exists.
The game-theory example ignores the principal-agent problem. We are not talking about you rationally choosing to give up some of your options. We are talking about someone else, who is not well-aligned with you, taking away your options, generally without any input from you.
I’m also introverted and nerdy bordering on autistic, so I can’t make a claim that my experiences are different from yours in that sense. I think some of my perspective comes from growing up in developing countries and knowing what real poverty looks like, even though I haven’t experienced it myself. And some of my perspective is that I value my own personal autonomy very highly, so I oppose people who want to take autonomy away from others, and that feeling seems to be stronger than it is for most people.
This strikes me as a fully general argument against making any form of moral progress. Some examples:
An average guy in the 1950s notices that the main argument against permitting homosexuality seems to be “God disapproves of it”. But he doesn’t believe in God. Should he note that there is a strong cultural guardrail against “sexual deviancy” according to the local cultural definition, and oppose the gay rights movement anyway?
Does the answer to this question change by the 1990s when the cultural environment is shifting? Or by the 2020s? If so, is it right that the answer to an ethical question should change based on other people’s attitudes? (Obviously the answer to the pragmatic question of “how much trouble will I get into for speaking out?” changes, but that’s not what we’re debating.)
A mother from a culture where coercive arranged marriages are normal notices that the culturally endorsed reason for this practice is that young adults are immature and parents are better at understanding what is good for them, so parents should arrange marriages to secure their offspring’s happiness. She notices that many parents actually make marriage decisions based on what is economically best for the parents, and that even those trying to ensure the young person’s happiness often get it wrong. Should the mother think ‘this is a guardrail preventing the breakdown of family structure, or filial respect, or something important, so I will arrange marriages for my own sons and daughters anyway’?
NB: I’m trying to be clear that I’m talking about arranged marriage of adults, not child marriage, although that is also a practice endorsed by many cultures, which would presumably be able to name ‘guardrails’ that banning child marriage would cross.
I get that you said ‘respect’ rather than ‘obey’ guardrails, presumably for reasons like these. But without more discussion of when you ‘respect’ the guardrail yet bypass it anyway, this seems roughly equivalent to saying that there is always a very heavy burden of proof for changing moral norms, even where the existing norms seem to be hurting people (in the two examples above: gay people, and everyone who gets married to someone they don’t like).
“...an important thing to reiterate is that creating a world where people have good options is good, but banning a bad option isn’t the way to do it.” This is very well phrased and I strongly agree. In fact, I think you have managed to summarise my view better than I did myself!
But is the free tuberculosis treatment in India a consequence of the ban on kidney selling? Or of the fact that countries which reach a certain development level try to give their people at least some basic free healthcare? In a counterfactual where India had legalised kidney selling for the last twenty years, do you think it would not have free treatment for tuberculosis?
Just so you know, there are a lot of people disagreeing with me on this page, and you are the only one I have downvoted. I’m surprised that someone who has been on LessWrong as long as you would engage in such blatant strawmanning. Slavery? Really?
Agree, which makes it even more heinous that governments prevent people from doing it.
@andeslodes, congratulations on a very good first post. You clearly explained your point of view, and went through the text of the proposed Act and the backgrounds of the relevant Senators in enough detail to show why this is important new information. I was already taking the prospect of aliens somewhat seriously, but I updated higher after this post.
I notice that Metaculus puts just a 1.1% probability on confirmed alien tech by 2030, which seems low.