I think if I got asked randomly at an AI conference if I knew what AGI was I would probably say no, just to see what the questioner was going to tell me.
Saying “I have no intention to kill myself, and I suspect that I might be murdered” is not enough.
Frankly I do think this would work in many jurisdictions. It didn’t work for John McAfee because he had a history of crazy remarks, because it sounded like the sort of thing he’d do to save face/generate intrigue if he actually did plan on killing himself, and because he made no specific accusations. But if you really thought Sam Altman’s head of security was going to murder you, you’d probably change their personal risk calculus dramatically by saying that repeatedly on the internet. Just make sure you also contact police specifically with what you know, so that the threat is legible to them as an institution.
If someone wants to murder you, they can. If you ever walk outside, you can’t avoid being shot by a sniper.
If the person or people trying to murder you are omnicompetent, then it’s hard. If they’re regular people, then there are at least lots of temporary measures you can take that would make it more difficult. You can fly to a random state or country and check into a motel without telling anybody where you are. Or you could find a bunch of friends and stay in a basement somewhere. Mobsters used to call doing that sort of thing until a threat had receded “going to ground”.
Wearing a camera that streams to the cloud 24/7, so that your friends can publish the video in case of your death… seems a bit too much. (Also, it wouldn’t protect you against, e.g., being poisoned. But I don’t think that’s a typical way for whistleblowers to die.) Is there something simpler?
You could move to New York or London, and your every move outside of a private home or apartment will already be recorded. Then place a security camera in your house.
Tapping the sign:
Postdiction: Modern “cancel culture” was mostly a consequence of new communication systems (social media, etc.) rather than a consequence of “naturally” shifting attitudes or politics.
I have a draft that has wasted away for ages. I will probably post something this month though. Very busy with work.
The original comment you wrote appeared to be a response to “AI China hawks” like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise, and are arguing for an arms race based on that premise. I don’t think whether normies can feel the AGI is very relevant to their position, because one of their big goals is to make sure Xi is never in a position to run the world, and completing a Manhattan Project for AI would probably prevent that regardless (even if it kills us).
If you’re trying to argue instead that the Manhattan Project won’t happen, then I’m mostly ambivalent. But I’ll remark that that argument feels a lot shakier in 2024 than it did in 2020, now that Trump’s daughter is literally retweeting Leopold’s manifesto.
No, my problem with the hawks, as far as this criticism goes, is that they aren’t repeatedly and explicitly saying what they will do
One issue with “explicitly and repeatedly saying what they will do” is that it invites competition. Many of the things that China hawks might want to do would be outside the Overton window. As Eliezer describes in AGI Ruin:
The example I usually give is “burn all GPUs”. This is not what I think you’d actually want to do with a powerful AGI—the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align. However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there. So I picked an example where if anybody says “how dare you propose burning all GPUs?” I can say “Oh, well, I don’t actually advocate doing that; it’s just a mild overestimate for the rough power level of what you’d have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years.”
What does winning look like? What do you do next? How do you “bury the body”? You get AGI and you show it off publicly, Xi blows his stack as he realizes how badly he screwed up strategically and declares a national emergency and the CCP starts racing towards its own AGI in a year, and… then what? What do you do in this 1 year period, while you still enjoy AGI supremacy? You have millions of AGIs which can do… stuff. What is this stuff? Are you going to start massive weaponized hacking to subvert CCP AI programs as much as possible short of nuclear war? Lobby the UN to ban rival AGIs and approve US carrier group air strikes on the Chinese mainland? License it to the CCP to buy them off? Just… do nothing and enjoy 10%+ GDP growth for one year before the rival CCP AGIs all start getting deployed? Do you have any idea at all? If you don’t, what is the point of ‘winning the race’?
The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want. So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be to turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when it has had the opportunity.
Did you look into: https://longtermrisk.org/?
“Spy” is an ambiguous term, sometimes meaning “intelligence officer” and sometimes meaning “informant”. Most ‘spies’ in the “espionage-committing person” sense are untrained civilians who have chosen to pass information to officers of a foreign country, for varying reasons. So if you see someone acting suspiciously, an argument like “well surely a real spy would have been coached not to do that during spy school” is locally invalid.
Why hardware bugs in particular?
Well, that’s at least a completely different kind of regulatory failure than the one that was proposed on Twitter. But this is probably motivated reasoning on Microsoft’s part. Kernel access is only necessary for IDS because of Microsoft’s design choices. If Microsoft wanted, they could also have exported a user-mode API for IDS services, which is a project they are working on now. macOS already has this! And Microsoft would never ever have done as good a job on their own if they hadn’t faced competition from other companies, which is why everyone uses CrowdStrike in the first place.
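To make that concrete, here is a minimal sketch of what user-mode endpoint monitoring looks like with Apple’s Endpoint Security framework, the macOS API I have in mind. Treat it as an illustrative fragment under stated assumptions rather than production code: it assumes you hold the com.apple.developer.endpoint-security.client entitlement and run as root, and a real security product would subscribe to many more event types than just process execs.

```c
// Minimal sketch: watching process-exec events from user space on macOS via the
// Endpoint Security framework -- no kernel extension involved. Assumes the
// com.apple.developer.endpoint-security.client entitlement and root privileges.
// Build roughly like: clang es_watch.c -o es_watch -framework EndpointSecurity -lbsm
#include <EndpointSecurity/EndpointSecurity.h>
#include <bsm/libbsm.h>
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    es_client_t *client = NULL;

    // The handler block is invoked for every delivered message; here we just log execs.
    es_new_client_result_t res = es_new_client(&client,
        ^(es_client_t *c, const es_message_t *msg) {
            if (msg->event_type == ES_EVENT_TYPE_NOTIFY_EXEC) {
                const es_process_t *target = msg->event.exec.target;
                printf("exec: pid %d -> %.*s\n",
                       audit_token_to_pid(target->audit_token),
                       (int)target->executable->path.length,
                       target->executable->path.data);
            }
        });
    if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
        fprintf(stderr, "es_new_client failed (%d); check the entitlement\n", res);
        return EXIT_FAILURE;
    }

    // NOTIFY events are observe-only; the AUTH variants would let us block the exec instead.
    es_event_type_t events[] = { ES_EVENT_TYPE_NOTIFY_EXEC };
    if (es_subscribe(client, events, 1) != ES_RETURN_SUCCESS) {
        fprintf(stderr, "es_subscribe failed\n");
        return EXIT_FAILURE;
    }

    dispatch_main();  // park the main thread; events arrive on the framework's own queue
}
```

The relevant design point is that the event stream is delivered to an ordinary user-space process, so a bad update in the security product crashes that process rather than panicking the kernel.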
I have more than once noticed Gell-Mann amnesia (either in myself or others) about standard LessWrong takes on regulation. I think this community has a bias toward thinking regulations are stupider and responsible for more scarcity than they actually are. I would be skeptical of any particular story someone here tells you about how regulations are making things worse unless they can point to the specific rules involved.
For example: there is a persistent meme here and in the rat-blogosphere more generally that the FDA is what’s causing the food you make at home to be so much less expensive than the food you order out. But any person who has managed or owned a restaurant will tell you that the actual two biggest things making your hamburger expensive are labor and real estate, not complying with food service codes. People don’t spend as much money cooking at home because they’re getting both the kitchen and the labor for free (or at least paying for them in other ways), and this would remain true even if it were legal to sell that food you’re making on the street without a license.
Another example that’s more specific and in my particular trade: back in July, when the CrowdStrike bug happened, people were posting wild takes on Twitter and in my Signal group chats about how CrowdStrike is only used everywhere because government regulators subject you to copious extra red tape if you try to switch to something else.
I cannot for the life of me imagine what regulators people were talking about. First of all, a large portion of cybersecurity regulation, like SOC 2, is self-imposed by the industry; second, anyone who’s ever had to go through something unusual like ISO 27001 or FedRAMP knows that they do not give a rat’s ass what particular software vendor you use for anything. At most your accountant will ask if you use an endpoint defense product, and then require you to upload some sort of logfile regularly to make sure you’re using the product. Which is a different kind of regulatory failure, I suppose, but it’s not what caused the CrowdStrike bug.
As the name suggests, Leela Queen Odds is trained specifically to play without a queen, which is of course an absolutely bonkers disadvantage against 2k+ Elo players. One interesting wrinkle is the time constraint. AIs are better at fast chess (obviously), and apparently no one who’s tried has yet been able to beat it consistently at 3+0 (3 minutes with no increment).
Epstein was an amateur rapist, not a pro rapist. His cabal—the parts of it that are actually confirmed and not just speculated about baselessly—seems extremely limited in scope compared to the kinds of industrial conspiracies that people propose about child sex work. Most of Epstein’s victims only ever had sex with Epstein, and only one of them—Virginia Giuffre—ever appears to have publicly claimed being passed around to many of Epstein’s friends.
What I am referring to are claims of an underworld industry for exploiting children, the primary purpose of which is making money. For example, in Sound of Freedom, a large part of the plot hinges on the idea that there are professionals who literally bring kidnapped children from South America into the United States so that pedophiles here can have sex with them. I submit that this industry in particular does not exist, or at least would be a terrible way to make money on a risk-adjusted basis compared to drug dealing.
I think P(DOOM) is fairly high (maybe 60%) and working on AI research or accelerating AI race dynamics independently is one of the worst things you can do. I do not endorse improving the capabilities of frontier models and think humanity would benefit if you worked on other things instead.
That said, I hope Anthropic retains a market lead, ceteris paribus. I think there are a lot of ambiguous parts of the standard AI risk thesis, and that there’s a strong possibility we get reasonable-ish alignment with a few quick creative techniques at the finish, like faithful CoT. If that happens I expect it might be because Anthropic researchers decided to pull back and use their leverage to coordinate a pause. I also do not see what Anthropic could do on the research front at this point that would make race dynamics even worse than they already are, besides splitting up the company. I also do not want to live in a world entirely controlled by Sam Altman, and think that could be worse than death.
So one of the themes of the Sequences is that deliberate self-deception or thought censorship—deciding to prevent yourself from “knowing” or learning things you would otherwise learn—is almost always irrational. Reality is what it is, regardless of your state of mind, and at the end of the day whatever action you’re deciding to take—for example, not talking about dragons—you could also be doing if you knew the truth. So when you say:
But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.
It’s not a reason not to investigate. You could continue to avoid these consequences you speak of by not writing about dragons regardless of the results of your investigation. One possibility is that what you’re also avoiding is the guilt/discomfort that might come from knowing the truth and remaining silent. But through your decision not to investigate, the world is going to carry the burden of that silence either way.
Another theme of the Sequences is that self-deception, deliberate agnosticism, and motivated reasoning are a source of surprising amounts of human suffering. Richard explains one way it goes horribly wrong here. Whatever subject you’re talking about, I’m sure there are a lot of other people in your position who have chosen not to look into it for the same reasons. But if all of those people had looked into it, and squarely faced whatever conclusion resulted, you yourself might not be in the position of having to face a harmful taboo in the first place. So the form of information hiding you endorse in the post is self-perpetuating, and is part of what helps keep the taboo strong.
I think the entire point of rationalism is that you don’t do things like this.
I’m flabbergasted by this degree/kind of altruism. I respect you for it, but I literally cannot bring myself to care about the survival of “humanity” if it means the permanent impoverishment, enslavement or starvation of everybody I love. That future is simply not much better by my lights than everyone including the GPU-controllers meeting a similar fate. In fact I think my instincts are to hate that outcome more, because it’s unjust.
Slight correction: catastrophic job loss would destroy the ability of the non-landed, working public to participate in and extract value from the global economy. The global economy itself would be fine. I agree this is a natural conclusion; I guess people were hoping to get 10 or 15 more years out of their natural gifts.