yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That’s what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world whom we even need to consider.
Could you explain how you came to this conclusion? What do you think your fundamental roadblock would be? Getting the code for AGI or beating everyone else to superintelligence?
How many people on the planet do you think meet the following conditions?
Have a >1% chance of obtaining AGI.
Have malevolent intent.
It’s important to remember that there may be quite a few people who would act somewhat maliciously if they took control of AGI, but I bet the vast majority of these people would never even consider trying to take control of the world. I think trying to control AGI would just be far too much work and risk for most people who want to cause suffering.
However, there still may be a few people who want to harm the world badly enough to justify trying. They would need to be extremely motivated to cause damage. It’s a big world, though, so I wouldn’t be surprised if there were a few people like this.
I think that a typical, highly motivated malicious actor would have a much higher than 1% probability of succeeding. (If mainstream AI research starts taking security against malicious actors super seriously, the probability of a malicious actor succeeding would be very low, but I’m not sure it will be taken seriously enough.)
I disagree. Theft and extortion are the only two (sort of) easy ones on the list imo. Most people can’t hack or build botnets at all, and only certain people are in the right place to eavesdrop.
A person might not know how to hack, build botnets, or eavesdrop, but they could learn. I think a motivated, reasonably capable individual would be able to become proficient in all of those things. And they could potentially have decades to train before they would need to use those skills.
yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That’s what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world whom we even need to consider.
Could you explain how you came to this conclusion? What do you think your fundamental roadblock would be? Getting the code for AGI or beating everyone else to superintelligence?
My fundamental roadblock would be getting the code for AGI. My hacking skills are non-existent, and I wouldn’t be able to learn enough to be useful even in a couple of decades. I wouldn’t want to hire anybody to do the hacking for me, as I wouldn’t trust the hacker to hand over that unlimited power once he got his hands on it. I don’t have any idea how to assemble an elite armed squad or anything like that either.
My best shot would be to somehow turn my connections into something useful. Let’s pretend I’m an acquaintance of Elon Musk’s PA (this is a total fabrication, but I don’t want to give any actual names, and it’s the right ballpark). I’d need to somehow find a way to meet Elon Musk himself (1% chance), then impress him enough that, over the years, I could become a trusted ally (0.5%). Then I’d need Elon to be the first one to get AGI (2%), and then I’d need to turn my trusted position into an opportunity to betray him and get my hands on the most important invention ever (5%). Multiply those together and that’s 20 million to one against, but I’ve only spent a couple of hours thinking about it. I could possibly shorten the odds to 10,000 to one if I really went all in on the idea.
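For anyone who wants to check that multiplication, here’s a quick sketch (the four probabilities are just my subjective guesses above, not measured values):

```python
# Sanity check on the compound odds above. All four numbers are
# subjective guesses from the comment, not measured probabilities.
p_meet = 0.01    # manage to meet Elon Musk himself
p_ally = 0.005   # become a trusted ally over the years
p_first = 0.02   # Elon is the first to get AGI
p_betray = 0.05  # turn the trusted position into control of AGI

p_total = p_meet * p_ally * p_first * p_betray
print(f"compound probability: {p_total:.1e}")      # 5.0e-08
print(f"odds against: {1 / p_total:,.0f} to one")  # 20,000,000 to one
```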
How would you do it?
However, there still may be a few people who want to harm the world badly enough to justify trying. They would need to be extremely motivated to cause damage. It’s a big world, though, so I wouldn’t be surprised if there were a few people like this.
Here we agree. I think most of the danger will be concentrated in a few highly competent individuals with malicious intent. They could be people close to the tech, or people with enough power to get it via bribery, extortion, military force, etc.