I’m not sure most people would have a near-zero chance of getting anywhere.
If AGI researchers took physical security super seriously, I bet this would make malicious actors quite unlikely to succeed. But it doesn’t seem like they’re doing this right now, and I’m not sure they will start.
Theft, extortion, hacking, eavesdropping, and building botnets are things a normal person could do, so I don’t see why they wouldn’t have a fighting chance. I’ve been thinking about how someone could currently acquire private code from Google or some other current organization working on AI, and it sounds pretty plausible to me. I’m a little reluctant to go into details here due to informational hazards.
What do you think the difficulties are that would make most people have a near-zero chance of getting anywhere? Is it the difficulty of acquiring the code for the AGI? Or of getting a mass of hacked computers big enough to compete with AGI researchers? Both seem pretty possible to me for a dedicated individual.
Hi! Missed your reply for a few days. Sorry, I’m new here.
I’m not sure most people would have a near-zero chance of getting anywhere.
I think our disagreement may stem from our different starting points. I’m considering literally every person on the planet and saying that maybe 1% of them would act malevolently given AGI. So a sadistic version of me, say, would probably be in the 98th percentile of all sadists in terms of ability to obtain AGI (I know people working in AI, am two connections away from some really key actors, have a university education, have read Superintelligence, etc.), yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That’s what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world we even need to consider.
Theft, extortion, hacking, eavesdropping, and building botnets are things a normal person could do, so I don’t see why they wouldn’t have a fighting chance.
I disagree. Theft and extortion are the only two (sort of) easy ones on the list imo. Most people can’t hack or build botnets at all, and only certain people are in the right place to eavesdrop.
But OK, maybe this isn’t a real disagreement between us. My starting point is considering literally everybody on the planet, and I think you are only taking people into account who have a reasonable shot.
How many people on the planet do you think meet the following conditions?
Have a > 1% chance of obtaining AGI.
Have malevolent intent.
yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That’s what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world who we even need to consider
Could you explain how you came to this conclusion? What do you think your fundamental roadblock would be? Getting the code for AGI or beating everyone else to superintelligence?
How many people on the planet do you think meet the following conditions?
Have a > 1% chance of obtaining AGI.
Have malevolent intent.
It’s important to remember that there may be quite a few people who would act somewhat maliciously if they took control of AGI, but I bet the vast majority of these people would never even consider trying to take control of the world. I think trying to control AGI would just be far too much work and risk for the vast majority of people who want to cause suffering.
However, there still may be a few people who want to harm the world enough to justify trying. They would need to be extremely motivated to cause damage. It’s a big world, though, so I wouldn’t be surprised if there were a few people like this.
I think that a typical, highly motivated malicious actor would have much higher than 1% probability of succeeding. (If mainstream AI research starts taking security against malicious actors super seriously, the probability of the malicious actors’ success would be very low, but I’m not sure it will be taken seriously enough.)
I disagree. Theft and extortion are the only two (sort of) easy ones on the list imo. Most people can’t hack or build botnets at all, and only certain people are in the right place to eavesdrop.
A person might not know how to hack, build botnets, or eavesdrop, but they could learn. I think a motivated, reasonably capable individual would be able to become proficient in all those things. And they could potentially have decades to train before they would need to use those skills.
yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That’s what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world who we even need to consider
Could you explain how you came to this conclusion? What do you think your fundamental roadblock would be? Getting the code for AGI or beating everyone else to superintelligence?
My fundamental roadblock would be getting the code for AGI. My hacking skills are non-existent, and I wouldn’t be able to learn enough to be useful even in a couple of decades. I wouldn’t want to hire anybody to do the hacking for me, as I wouldn’t trust the hacker to hand over unlimited power once he got his hands on it. I don’t have any idea how to assemble an elite armed squad or anything like that either.
My best shot would be to somehow turn my connections into something useful. Let’s pretend I’m an acquaintance of Elon Musk’s PA (this is a total fabrication, but I don’t want to give any actual names, and this is the right ballpark). I’d need to somehow find a way to meet Elon Musk himself (1% chance), and then impress him enough that, over the years, I could become a trusted ally (0.5%). Then, I’d need Elon to be the first one to get AGI (2%) and then I’d need to turn my trusted position into an opportunity to betray him and get my hands on the most important invention ever (5%). So that’s 20 million to one, but I’ve only spent a couple of hours thinking about it. I could possibly shorten the odds to 10,000 to one if I really went all in on the idea.
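For what it’s worth, the step probabilities above do multiply out to the stated odds, since every step has to succeed for the plan to work. A minimal sketch (the step labels are my own shorthand, and this treats the steps as independent):

```python
# Compound probability of the four-step scenario above.
# Each step must succeed, so multiply the per-step probabilities.
steps = {
    "meet Musk": 0.01,             # 1%
    "become a trusted ally": 0.005,  # 0.5%
    "Musk gets AGI first": 0.02,   # 2%
    "successful betrayal": 0.05,   # 5%
}

p_success = 1.0
for p in steps.values():
    p_success *= p

print(p_success)             # roughly 5e-08
print(round(1 / p_success))  # roughly 20,000,000 -> "20 million to one"
```

The 10,000-to-one figure later in the paragraph would require each step to improve a lot, which is why it’s framed as an all-in best case.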
How would you do it?
However, there still may be a few people want to harm the world enough to justify trying. They would need to be extremely motivated to cause damage. It’s a big world, though, so I wouldn’t be surpized if there were a few people like this.
Here we agree. I think most of the danger will be concentrated in a few, highly competent individuals with malicious intent. They could be people close to the tech or people with enough power to get it via bribery, extortion, military force etc.