Q2 Agency
I also have a question about agency. Let’s say Bob invents an AGI in his garage one day. It even gets smarter the more it runs. When Bob goes to sleep at night he turns the computer off and his AI stops getting smarter. It doesn’t control its own power switch, it’s not managing Bob’s subnet for him. It doesn’t have internet access. I guess in a doomsday scenario Bob would have to have programmed in “root access” for his ever more intelligent software? Then it could eventually modify the operating system it’s running on? How does such a thing get to be in charge of anything? It would have to be people who put it in charge of stuff, and people who would vet its decisions.
So here’s a question: if we write software that can do simple things (make coffee in the morning, do my laundry), how many years will it be before I let it make stock trades for me? Some people would do that right away. Then the software gets confused and loses all their money for them. They call the broker (who says: “You did WHAT? Hahaha”).
So how do these scary kinds of AIs actually get control of their own power cord, much less their internet connection?
When you say “an AGI”, are you implying a program that has control of enough resources to guarantee its own survival?
This seems to boil down to the “AI in the box” problem. People are convinced that keeping an AI trapped is not possible. There is a tag you can look up (AI Boxing), or you can just read up here.