Even on serial systems, most AI problems are at least NP-hard, and NP-hard problems are strongly conjectured to require computational resources that scale not just superlinearly but superpolynomially (exponentially, as far as we know) with problem instance size.
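To make that scaling concrete, here is a minimal sketch (my own illustration, with a made-up cost model and compute budget, not anything from the comment above): if solving an instance of size n takes on the order of 2^n elementary steps, then multiplying your hardware by k only adds about log2(k) to the largest instance you can handle.

```python
import math

def largest_solvable_n(budget_steps: float) -> int:
    """Largest n with 2**n <= budget_steps, under a brute-force 2**n cost model."""
    return int(math.floor(math.log2(budget_steps)))

base_budget = 1e15  # hypothetical number of elementary steps available per day
for multiplier in (1, 2, 10, 1000):
    n = largest_solvable_n(base_budget * multiplier)
    print(f"{multiplier:>5}x compute -> largest solvable n = {n}")
# 1x -> 49, 2x -> 50, 10x -> 53, 1000x -> 59: a thousandfold increase in
# hardware grows the solvable instance size by only about 10.
```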
In many applications, typical instances of these problems have special, domain-specific structure that can be exploited to construct domain-specific algorithms and heuristics that are more efficient than the general-purpose ones; in some cases we can even get polynomial time complexity. But this requires a great deal of domain-aware engineering, and often sheer trial-and-error experimentation.
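As a hedged illustration of what exploiting structure can buy (my example, not the author's): many general scheduling problems are NP-hard, but the structured special case of selecting a maximum set of non-overlapping intervals on a single machine is solved exactly by a greedy rule in O(n log n).

```python
def max_non_overlapping(intervals):
    """Greedy 'earliest finish time' rule; provably optimal for this special case."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # sort by finish time
        if start >= last_end:            # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_non_overlapping([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)] -- three jobs, which is the optimum for this input
```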
The idea that an efficient domain-agnostic silver-bullet algorithm could arise pretty much out of nowhere, from some kind of “recursive self-improvement” process with little or no interaction with the environment, is not based on anything we know from either theoretical or empirical computer science. In fact, it is well known that meta-optimization is typically orders of magnitude more difficult than domain-level optimization.
If an AGI is ever built, it will be a huge collection of fairly domain-specific algorithms and heuristics, much like the human brain is a huge collection of fairly domain-specific modules. Such a thing will not arise in a quick “FOOM”; it will not improve quickly, and it will be limited in how much it will ever be able to improve: once you find the best algorithm for a certain problem you can’t find a better one, and certain problems are most likely going to stay hard even with the best algorithms.
The “intelligence explosion” idea seems to be based on a naive understanding of computational complexity (e.g. Good 1965) that largely predates the discovery of the main results of complexity theory, like the Cook-Levin theorem (1971) and Karp’s 21 NP-Complete problems (1972).
I agree with everything you’ve said, but, to be fair, we’re talking about different things. My claim was not about the complexity of problems, but about the scaling of hardware—which, as far as I know, scales sublinearly. This means that doubling the size of your computing cluster will allow you to solve the same exact problem less than twice as fast; and that eventually you’ll hit the point of diminishing returns where adding more machines simply isn’t worth it.
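One common way to model that sublinear hardware scaling is Amdahl's law; here is a small hedged sketch (the 5% serial fraction is an assumption picked for illustration, not a measured figure).

```python
def amdahl_speedup(n_machines: int, serial_fraction: float) -> float:
    """Ideal speedup when serial_fraction of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_machines)

for n in (1, 2, 4, 8, 100, 10_000):
    print(f"{n:>6} machines -> {amdahl_speedup(n, serial_fraction=0.05):.2f}x")
# 2 machines give ~1.90x, 100 give ~16.81x, and no amount of hardware can ever
# exceed 1/0.05 = 20x -- the diminishing-returns pattern described above.
```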
You’re saying, on the other hand, that doubling your processing power will not necessarily allow you to solve problems that are twice as interesting; in most cases, it will only allow you to add one more city to the traveling salesman’s itinerary (metaphorically speaking).
There is still room for weak super-intelligence, where the AI has human intelligence, only faster. (Example: an upload with sufficient computing power — as far as I know, brains work in a quite massively parallel fashion, and therefore so could simulations of them.)
Seriously, if I could upload myself into a botnet that would let each instance of me think 10 times faster than my meat-ware, I would probably take over the world in about 1 to 10 years. A versatile team of competent people? Less than 6 months. (Obvious path to do this: work for money, build and buy companies, then gather financial, lobbying, or military power. Better path to do this: think about it for 1 subjective year before proceeding.)
My point is, the AI doesn’t need to be vastly superhuman to take over the world very quickly. Even without the FOOM, the AGI can still be incredibly dangerous. Imagine something like the uploads above, only it can work 24⁄7 at full capacity (no sleep, no leisure time, no akrasia).
There is still room for weak super-intelligence, where the AI has human intelligence, only faster. (Example: an upload with sufficient computing power — as far as I know, brains work in a quite massively parallel fashion, and therefore so could simulations of them.)
Maybe. Today, even with our best supercomputers we can’t simulate a rat brain in real time.
Seriously, if I could upload myself into a botnet that would let each instance of me think 10 times faster than my meat-ware, I would probably take over the world in about 1 to 10 years.
You would be able to work as 10 people, maybe a little more, but probably less than 30. I don’t know how efficient you are, but I doubt that would be enough to take over the world. And why wouldn’t other people have access to the same technology?
Even if you managed to become world dictator, you would only stay in power as long as you had broad political support. Screw something up and you’ll end up hanging from your power cord.
My point is, the AI doesn’t need to be vastly superhuman to take over the world very quickly. Even without the FOOM, the AGI can still be incredibly dangerous. Imagine something like the uploads above, only it can work 24⁄7 at full capacity (no sleep, no leisure time, no akrasia).
What is it going to do? Secretly repurpose the iPhone factories in China to make Terminators?
I said botnet. That means dozens, thousands, or millions of me simultaneously working at 10 times human speed¹, and since they are instances of me, they presumably have the same goals. How would you stop that from achieving world domination, short of uploading yourself?
[1] Assuming that many personal computers are powerful enough, and can be corrupted. A slower course of action would be to buy a data-centre first, work, then buy more data-centres, and duplicate myself exponentially from that.
I said botnet. That means dozens, thousands, or millions of me simultaneously working at 10 times human speed¹, and since they are instances of me, they presumably have the same goals.
That doesn’t mean that they would necessarily cooperate, especially as they diverge. They would be more like identical twins.
How would you stop that from achieving world domination, short of uploading yourself?
Releasing a security patch? Seizing all the funds you obtained by your illegal activities? Banning use of any hardware that could host you until a way to avoid such things is found?
A slower course of action would be to buy a data-centre first, work, then buy more data-centres, and duplicate myself exponentially from that.
Assuming that using these data centers to run copies of you is the most economically productive use of them, rather than, say, running copies of other people, or cow-clicker games.
That doesn’t mean that they would necessarily cooperate, especially as they diverge. They would be more like identical twins.
Wait a minute: would you defect? Sure, there would be some divergence, but do you really think it would result in a significant divergence of goals, even if you had a plan and were an adult by the time you fork? Okay, it can happen, and it’s probably worth taking specific precautions. I don’t think this is a show-stopper, however, and I’m not sure it would render me any less dangerous.
Releasing a security patch?
That may not be enough:
I would probably man-in-the-middle automatic updates
Many people won’t erase their hard drive or otherwise patch their machine manually
I may convince some people to let me run (I could work for them for instance).
If I’m stealthy enough, it may take some time before I’m discovered at all (it happened with actual computer viruses).
If software continues the way it is now (200 million lines of code for systems that could fit in 20 thousand), security bugs won’t all be patched in advance. The reliability of our computers needs to go way up before botnets become impossible.
Seizing all the funds you obtained by your illegal activities?
Good luck with that one. Obviously, I would have many, many little bank accounts, managed separately and in parallel, under many different identities. You would have to spot my illegal activities one by one to seize the funds. Plus, I may do legal activities as well.
Banning use of any hardware that could host you until a way to avoid such things is found?
That one is excellent. We should watch out for computing overhang, however, and try and estimate how much computing power an upload would need before the software is developed.
A final note: If I really had the possibility to upload myself, one of my first moves would be to propose to SIAI and CFAR that they upload with me (now that we can duplicate Eliezer…). I trust them more than I trust myself for a Friendly Takeover. But if a Big Bad or a Well Intentioned Extremist has access to that first…
Wait a minute: would you defect? Sure, there would be some divergence, but do you really think it would result in a significant divergence of goals, even if you had a plan and were an adult by the time you fork?
Even if their goals stay substantially the same, it wouldn’t mean that they would naturally cooperate, especially when their main goal is world domination. Hell, it’s already non-trivial for a single person to coordinate with future selves, resulting in all kinds of ego-dystonic behaviors: impulsiveness, akrasia, etc. Coordinating with thousands of copies of yourself would be only marginally easier than coordinating with thousands of strangers.
We are not talking about some ideal “Prisoner’s dilemma with mind-clone” scenario. After the mind states of your copies diverge a little bit, and that would happen very quickly as you spread your copies to different machines, they become effectively different people: you wouldn’t be able to predict them and they wouldn’t be able to predict you.
I would probably man-in-the-middle automatic updates
Hacking all the routers? Good luck with that. And BTW routers can also be updated. Manually.
Many people won’t erase their hard drive or otherwise patch their machine manually
Because they are lazy and they would prefer to live under a world dictatorship.
I may convince some people to let me run (I could work for them for instance).
Then you are their employee, not their dominator.
If I’m stealthy enough, it may take some time before I’m discovered at all (it happened with actual computer viruses).
But if you are to dominate the world, you would have to eventually reveal yourself. What do you think would happen next?
If software continues the way it is now (200 million lines of code for systems that could fit in 20 thousand), security bugs won’t all be patched in advance. The reliability of our computers needs to go way up before botnets become impossible.
Botnets are certainly possible and they are indeed used for nefarious purposes, but world domination? Nope.
Good luck with that one. Obviously, I would have many, many little bank accounts, managed separately and in parallel, under many different identities. You would have to spot my illegal activities one by one to seize the funds.
As Bugmaster said, you would be able to make only small purchases, not buy a satellite or an army.
Moreover, obtaining and managing lots of fake or stolen identities, creating bank accounts without physically showing up at the bank, or using stolen bank accounts is not something that tends to go unnoticed. The more you have, the more likely you are to get caught, exponentially so.
Plus, I may do legal activities as well.
Under multiple fake identities operated from a botnet of hacked computers? Hardly so.
We should watch out for computing overhang, however, and try and estimate how much computing power an upload would need before the software is developed.
Software tends to march right behind hardware, exploiting it close to its maximum potential. Computing overhang is unlikely.
Anyway, I wasn’t proposing any Luddite advance ban. If some brain upload, AI, or whatever tries to take over the world by hacking the Internet, and other countermeasures fail, governments could always ban use of the hardware that the thing needs to run. If that also fails, the next step would be physical destruction.
But seriously, we are discussing hacking as in the plot of some bad sci-fi action flick. Computer security doesn’t work like that in the real world.
A final note: If I really had the possibility to upload myself, one of my first moves would be to propose to SIAI and CFAR that they upload with me (now that we can duplicate Eliezer…). I trust them more than I trust myself for a Friendly Takeover.
You mean the guy who would choose dust specks over torture and who claims on his OKCupid profile that he’s a sadist? Yeah, I’d totally trust him in charge of the world. Now, I’ve other matters to attend to… that EMP bomb doesn’t build itself… :D
We are not talking about some ideal “Prisoner’s dilemma with mind-clone” scenario. After the mind states of your copies diverge a little bit, and that would happen very quickly as you spread your copies to different machines, they become effectively different people: you wouldn’t be able to predict them and they wouldn’t be able to predict you.
You really think you would diverge that quickly?
You mean the guy who would choose dust specks over torture and who claims on his OKCupid profile that he’s a sadist? Yeah, I’d totally trust him in charge of the world.
Man in the middle: I just meant intercepting automatic updates at the level of the computer I’m in. Trojan todo list n°7: once installed and running, I will intercept all communications to and from this computer. I wouldn’t want Norton updating behind my back. Now, try and hack the routers in the backbone, that’s something I didn’t think about…
Employee vs dominator: I obviously intend to double cross my employers, eventually.
Revealing myself: that one needs to be carefully thought through. Hopefully, by the time I reveal myself, I will have sufficient blackmail power. Having a sufficient number of physical robots can also help.
Zillions of fake IDs, yet staying stealthy: well, I do expect a fair number of my identities to be exposed. This should pose no problem to the others, however, provided they do not visibly communicate with each other (at first).
Legal activities: my meat instance could buy a few computers, rent remote servers etc. I doubt I would be incapable of running at least a successful business from there. And from there, buy even more computing power. This could be done in parallel with the illegal activities.
Computing (no) overhang: this one is the single reason why I do agree that without a FOOM of some kind, actual world domination is unlikely: there will be multiple competing uploads, and this should end with a Hansonian scenario. Given that such a world is closer to Hell than Heaven (to me at least), that still counts as an Existential Blunder. On the bright side, we may see this coming. That said, I still do believe full blown intelligence explosion is likely.
Note that overall, your objections are actually valuable advice. And that gives me some insight about what my very first move should be: gathering such objections and trying to find counters or workarounds. And now that you’ve made it quite clear that any path to world domination is long, complicated, and therefore nearly certain to fail, I should run multiple schemes in parallel. Surely one of them will actually work?
Obviously, I would have many, many little bank accounts, managed separately and in parallel, under many different identities.
I believe that this would severely limit your financial throughput. You would be able to buy lots of little things, whose total cost is quite significant—for example, you could buy yourself a million cheap PCs, each costing $1000. But you would not be able to buy a single expensive thing (at least, not without exposing yourself to instant retribution), such as a satellite costing $1e9.
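A quick arithmetic check on that example (my numbers, simply restating the figures above): the two totals are identical, so the limit being described is the size and visibility of individual transactions rather than the overall amount of money moved.

```python
pcs_total = 1_000_000 * 1_000   # a million cheap PCs at $1000 each
satellite = 10**9               # one big $1e9 purchase
print(pcs_total, satellite, pcs_total == satellite)  # 1000000000 1000000000 True
```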
Currently, there are ways to create companies anonymously. This is preventing (or at least slowing down to a crawl) retribution right now. If all this company apparently does is buying a few satellites, it won’t be at great risk.
A versatile team of competent people? Less than 6 months.
Do you mean, competent people who are thinking 10 times faster than biological humans, or what? This seems a bit of a stretch. There currently exist tons of frighteningly competent people in all kinds of positions of power in the world, and yet, they do not control it (unless you believe in conspiracy theories).
Obvious path to do this: work for money, build and buy companies, then gather financial, lobbying, or military power. Better path to do this: think about it for 1 subjective year before proceeding.
If it was this easy, some biological human (or a team of such humans) would’ve done it already, in 10 to 50 years or however long it takes. In fact, a few humans have managed to take over individual countries in about as much time. However, as things stand now, there’s simply no clear path to world domination. Political and military power gets much more difficult to gather the more of it you have. Even superpowers such as USA or China cannot dictate terms to the rest of the world.
Furthermore, my point was that uploading yourself to 10 machines will not allow you to think 10 times as fast. With every machine you add, your speed gains would become progressively smaller. You would still think much faster than an ordinary human, of course.
Do you mean, competent people who are thinking 10 times faster than biological humans, or what? This seems a bit of a stretch.
I mean exactly that. I’d be very surprised if ultimately, neuromorphic AIs would be impossible to run significantly faster than meat-ware. Our brains are massively parallel, and current microprocessors have massively faster serial speed than neurons. Now, our brains aren’t fully parallel, so I assumed an arbitrary speed-up limit. I said 10 times, but it would probably still be incredibly dangerous at 2 or 3, or even lower.
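To put rough numbers behind the “faster serial speed” claim (order-of-magnitude figures I am assuming here, not anything stated in the thread):

```python
neuron_rate_hz = 1e3   # roughly the maximum sustained firing rate of a neuron
cpu_clock_hz = 3e9     # a typical modern CPU core
print(f"raw serial gap: ~{cpu_clock_hz / neuron_rate_hz:.0e}x")  # ~3e+06x
# The raw serial gap is around a million-fold; the 10x figure above is a
# deliberately arbitrary whole-system limit, since memory bandwidth, simulation
# overhead, and inter-machine communication would presumably eat most of the gap.
```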
Now do not forget the key word here: botnet. The team is supposed to duplicate itself many times over before trying to take over the world.
If it was this easy, some biological human (or a team of such humans) would’ve done it already, in 10 to 50 years or however long it takes.
I don’t think so, because uploads have significant advantages over meat-ware.
Low cost of living, in a world where every middle-class home can afford sufficient computing power for an upload (which is required to turn me into a botnet in the first place). Now try to beat my prices.
Being many copies of the same few original brains. It means TDT works better, and defection is less likely. This should solve most of the coordination problems raised above.
Even superpowers such as USA or China cannot dictate terms to the rest of the world.
Because once the self-duplicating team has independently taken economic control of most of the world, it is easy for the team to accept the domination of a single instance (I would certainly pre-commit to that). Now, for the rest of humanity to accept such dominance, the uploads only have to use the resources they acquired for the individual perceived benefit of the meat bags.
Yep, that would be a full-blown global conspiracy. While it’s probably forever out of the reach of meat bags, I think a small team of self-replicating uploads could pull it off quite easily.
Hansonian tactics, which can further the productivity of the team, and therefore its market power. (One has to be very motivated, or possibly crazy.)
Temporary mass duplication followed by the “termination” of every instance but one. The surviving instance can have plenty of subjective free time, while the proportion of leisure computing stays very small.
Save and reload of snapshots which are in a particularly good mood (and therefore very productive). Excellent for beating akrasia.
Training of one instance per discipline, then mass duplication.
Data-centres. The upload team can collaborate with or buy processor manufacturers, and build data-centres for more and more uploads to work on whatever is needed. This could further reduce the cost of living.
Now, I did make an unreasonable assumption: that only the original team would have those advantages. Most probably, there will be several such teams, possibly with different goals. The most likely result (without FOOM) is then a Hansonian outcome. That’s no world domination, but I think it is just as dangerous (I would hate this world).
Finally, there is also the possibility of a de novo AGI which would be just as competent as the best humans at most endeavours, though no faster. We already have an existence proof, so I think this is believable. I think such an AI would be even more dangerous than the uploaded team above.
I’d be very surprised if ultimately, neuromorphic AIs would be impossible to run significantly faster than meat-ware.
So would I. However, given our current level of technological development, I’d be very surprised if we had any kind of a neuromorphic AI at all in the near future (say, in the next 50 years). Still, I do agree with you in principle.
I said 10 times, but it would probably still be incredibly dangerous at 2 or 3, or even lower.
There are tons of biological people alive today who are able to come up with solutions to problems 2x to 3x faster than you and me. They do not rule the world. To be fair, I doubt that there are many people—if any—who think 10x faster.
Because once the self-duplicating team has independently taken economic control of most of the world...
I doubt that you will be able to achieve that; that was my whole point. In fact, I have trouble envisioning what “economic control of most of the world” even means. What does it mean to you?
In addition to the above, your botnet would face several significant threats, both external and internal:
Meatbags would strive to shut it down; not because they suspect it of being an evil conspiracy, but because they’d get tired of it sucking away their resources. Modern malware botnets suffer this fate often, though there’s always someone willing to rebuild them.
If your botnet becomes a serious threat (much worse than current real-world botnets), hardware manufacturers will implement security measures, such as SecureBoot, to prevent it from spreading. Currently, such measures are driven by the entertainment industry.
The super-fast instances of you would have to communicate with each other, and they’d only be able to do so through very slow (relatively speaking) network links (see the latency sketch after this list). Google and Amazon are solving this problem by building more and more local datacenters. Real botnets aren’t solving the problem at all because their instances don’t need to talk to each other all that much.
How would you feel, right now, if your twin pointed a gun at your head with the intent to kill you “for the greater good” ? This is how your instances will feel when you attempt to shut them down to prevent akrasia.
Why are you taking over the world in the first place? Chances are that whatever your ultimate goal is, it could be accomplished even sooner by taking over the botnet. Every instance of you will eventually realize this, with predictable results.
These are just some problems off the top of my head; the list is far from exhaustive.
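A small sketch of the communication-latency point from the list above (assumed, round numbers chosen only for illustration):

```python
speedup = 10    # subjective speed of each instance, as assumed in this thread
ping_ms = 100   # a plausible cross-Internet round-trip time
print(f"subjective wait per round trip: {ping_ms * speedup} ms")  # 1000 ms
# At 10x speed, every 100 ms round trip feels like a full second, so chatty
# coordination between far-flung instances gets subjectively slower in exact
# proportion to how much faster the instances themselves become.
```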
You mean the guy who would choose dust specks over torture and who claims on his OKCupid profile that he’s a sadist? Yeah, I’d totally trust him in charge of the world. Now, I’ve other matters to attend to… that EMP bomb doesn’t build itself… :D
I’m … not sure how those are criticisms.
Currently, there are ways to create companies anonymously. This is preventing (or at least slowing down to a crawl) retribution right now. If all this company apparently does is buying a few satellites, it won’t be at great risk.
Good work, I believe we’ve got the next James Bond movie in the bag :-)