What are the limitations on the AI? If we’re specifying current technology, is the AI 25 megabytes or 25 petabytes? How fast is its connection to the internet? People love to talk about an AI “reading the internet” and suddenly having access to all of human knowledge, but the internet is big. Even at 1 GB/s, it would take the AI 2200 years to download the amount of data that was transferred over cell phones in 2007 alone.
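To make the arithmetic explicit, here’s a quick sanity check; the 1 GB/s rate and the 2200-year figure are the ones quoted above, and the data volume is just what those two numbers imply, not an independent traffic statistic:

```python
# Back-of-envelope check of the download-time claim above.
GB = 10**9                      # bytes
rate = 1 * GB                   # assumed link speed: 1 GB/s
seconds_per_year = 365.25 * 24 * 3600

implied_bytes = rate * 2200 * seconds_per_year
print(f"Implied data volume: {implied_bytes / 10**18:.0f} exabytes")
# -> roughly 69 exabytes; at 1 GB/s, every extra exabyte adds ~32 years.
```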
There are hard limits in the world that no amount of intelligence will save you from. I feel like on LW, superintelligence gets used as a fully general counterargument. The typical argument is “How can I know what something so much smarter than me is capable of?”, but that argument is bunk. An AI can’t count all the primes (there are infinitely many), it won’t come up with a general solution to the Navier-Stokes equations, and it won’t be able to take over the world by clever arguments on the Net.
I’m an outspoken critic of the “crack protein folding” example, and it seems absolutely ludicrous to me that less argument and evidence is given for the claim that the AI will “crack protein folding” than for the claim that “the AI will find a lab that does DNA synthesis.”
If you want people to play your game, tell us the rules. What limitations does the AI have? What’s its initial knowledge state? Information processing requires energy and produces entropy (Landauer’s principle puts a floor on this). There are fundamental limits to how much the AI can read or talk or learn. It can’t have a conversation with everyone in the world at the same time. What communication network would it use?
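To make “talking to everyone at once” concrete, here’s a minimal sketch of the outbound bandwidth alone, assuming text chat; every parameter below is an illustrative assumption, not a measurement:

```python
# Rough ceiling on holding simultaneous conversations with everyone.
people = 7_000_000_000          # world population, as in the text
bytes_per_msg = 200             # assumed: one short chat message
msgs_per_minute = 4             # assumed: a slow human conversation

outbound = people * bytes_per_msg * msgs_per_minute / 60  # bytes/s
print(f"Outbound traffic alone: {outbound / 10**9:.0f} GB/s")
# -> ~93 GB/s of chat text, before any inbound traffic, reading,
#    or thinking: two orders of magnitude above the 1 GB/s link above.
```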
People talk about the AI making a bunch of money by starting a business or doing HFT, but HFT is a competitive field. The AI isn’t going to be able to make money without shelling out for colocation facilities, RF transmitters, etc., and even if it had 100% market share it would still only be making a billion or two a year. There’s a lot you can do with a couple billion dollars, but it’s still only about 0.002 percent of the GWP. We’re running very strongly into scope insensitivity: 1 billion and 85 trillion are both so far from our experience that people lose sight of the fact that there’s a difference of almost 5 orders of magnitude between the two quantities.
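The arithmetic, using the same figures as above:

```python
import math

hft_revenue = 2e9               # $2B/year, the generous case above
gwp = 85e12                     # gross world product, ~$85T

share = hft_revenue / gwp
gap = math.log10(gwp / 1e9)     # orders of magnitude from $1B to GWP

print(f"Share of GWP: {share:.6%}")          # -> 0.002353%
print(f"Orders of magnitude: {gap:.1f}")     # -> 4.9
```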
People talk about the AI taking over supercomputers and building botnets and stuff like that, but botnets are only useful for problems that parallelize really well. There are lots of problems I deal with in everyday life that I still wouldn’t be able to solve with all the computing power in the world. Throwing more processors at a problem can even make finding a solution slower, and throwing too many resources at a problem usually ends up looking a lot more like Twitch Plays Pokémon than a superintelligence.
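One standard way to quantify “parallelizes really well” is Amdahl’s law: if a fraction s of the work is inherently serial, no number of machines gets you past a 1/s speedup. A minimal sketch, assuming an arbitrary 5% serial fraction:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n)
# where s is the serial fraction and n the number of processors.

def speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

s = 0.05                        # assumed: 5% of the work is serial
for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} processors -> {speedup(s, n):6.2f}x")
# -> 6.90x, 19.63x, 20.00x: a million-node botnet buys you a
#    factor of 20, not a factor of a million.
```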
This is a really hard problem, but also a really good discussion to be having. I’ll try to come back to this and see if I can come up with a speculative path to power that doesn’t rely on magic, but there’s no guarantee that this is a solvable problem. An AI may not be able to boss everyone around, because humans are heavily specialized at manipulating and fighting other intelligent creatures, they don’t particularly like being bossed around, and there are 7 billion of them.
The world is really, really big: unintuitively enormous.
Let’s define “taking over the world” conservatively, and say that it’s equivalent to capturing something like 25% of the GWP. That’s roughly tantamount to buying Apple, Exxon, Walmart, G.E., Microsoft, and IBM 10 times over. And the AI needs to do that every year. Bill Gates at his wealthiest is a rounding error. We’re talking about the AI reaching a point where it directly employs 1 in every 4 humans.
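For concreteness, the arithmetic behind that comparison; the combined company value is a rough order-of-magnitude assumption for those six firms, not a quoted market-cap total:

```python
gwp = 85e12                     # gross world product, ~$85T, as above
target_share = 0.25             # the "taking over the world" threshold

target = gwp * target_share
big_six = 2.1e12                # assumed: rough combined value of Apple,
                                # Exxon, Walmart, G.E., Microsoft, IBM

print(f"25% of GWP: ${target / 1e12:.1f}T/year")       # -> $21.2T/year
print(f"Big-six multiples: {target / big_six:.0f}x")   # -> ~10x
```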