Yes, a thorough analysis would take a long time and I am not the right person to do that. I only have the capability to improve it incrementally.
The reason why I post something like this anyway is that SIAI and people like you are exclusively stating that which speaks in favor of your worldview, without any critical analysis of your beliefs. And if someone else does it for you then you make accusations in the most hypocritical manner possible.
...why things like Goedel machines are either fallacious or irrelevant.
I can’t review the work of Jürgen Schmidhuber because I lack the mathematical background. But you knew that.
If his work was relevant in estimating risks from AI then it is up to people like you to write about it and show how his work does constitute evidence for your claims.
I did the best I could do. I even interviewed a bunch of AI researchers and asked others about his work in private.
Have you made the effort to ask actual experts? You people mainly rely on surveys that ask for time-frames and then interpret that to mean that risks from AI are nigh. Even though most AI researchers who answered that AGI will happen soon would deny the implications that you assume. Which is just another proof of your general dishonesty and confirmation bias.
Your argument is basically an argument from fiction;
...passing that black powder’s formulation is so simple and famous that even I, who prefer archery, know it: saltpeter, charcoal, and sulfur.
This wouldn’t be nearly enough.
A civilization which exists and is there for the taking.
Magical thinking.
And to do so efficiently it takes random mutation, a whole society of minds
All of which are available to a ‘simple algorithm’. Artificial life was first explored by von Neumann himself!
Yes.
Chimp brains have not improved at all, even to the point of building computers. There is an obvious disanalogy here...
Yes, what I said.
Dead-simple chess and Go algorithms routinely turn out fascinating moves. Genetic algorithms are renowned for producing results which are bizarre and inhuman and creative.
I am aware of that. Not sure what your point is, though.
What is this bullshit ‘computers can’t exhibit creativity’ doing here?
I never argued that.
Why can’t I predict the next move of my chess algorithm? Why is there no algorithm, simpler and faster than the original AI algorithm, that can predict what the AI will do?
The point was that humans can use the same technique that the AI does. I never claimed that it would be possible to predict the next move.
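(An aside to make this exchange concrete: for a deterministic search procedure, the only general “predictor” anyone knows of is a re-run of the same procedure on the same input, which is both gwern’s point that there is no cheaper shortcut and XiXiDu’s point that anyone holding the code can run it. A minimal sketch; the toy game, move set, and scoring rule are invented for the example.)

```python
# Illustrative sketch only: the toy game, moves, and scoring rule are made up.

def best_move(position, legal_moves, score):
    # Deterministic brute-force choice: return the highest-scoring legal move.
    return max(legal_moves(position), key=lambda m: score(position, m))

legal_moves = lambda pos: (-1, 0, +1)          # toy move set
score = lambda pos, m: ((pos + m) * 7) % 11    # arbitrary but deterministic scoring

def predict(position):
    # The cheapest known "prediction" of the player's next move is a re-run
    # of the player's own algorithm on the same position.
    return best_move(position, legal_moves, score)

assert predict(5) == best_move(5, legal_moves, score)
```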
Source code can be available and either the maliciousness not obvious (see the Underhanded C Contest) or not prove what you think it proves (see Reflections on Trusting Trust, just for starters). Assuming you are even inspecting all the existing code rather than a stub left behind to look like an AI.
This is just naive. We’re talking about a plan for world domination, which doesn’t just include massive amounts of code that would have to be hidden from inspection but also a massive number of actions.
people like you are exclusively stating that which speaks in favor of your worldview, without any critical analysis of your beliefs. And if someone else does it for you then you make accusations in the most hypocritical manner possible.
Which is just another proof of your general dishonesty
Come on. Maybe you disagree with gwern’s response and think he missed a bunch of your points, but this is just name-calling. I like your posts, but a comment like this makes me lose respect for you.
I can’t review the work of Jürgen Schmidhuber because I lack the mathematical background. But you knew that.
I know nothing about you.
If his work was relevant in estimating risks from AI then it is up to people like you to write about it and show how his work does constitute evidence for your claims.
His papers and those of Legg or Hutter are all online. I’ve hosted some of them myself, e.g. the recent Journal of Consciousness Studies one. The abstracts are pretty clear. They’ve been mentioned and discussed constantly on LW. You yourself have posted material on them, and material designed as an introduction for relative beginners, so hopefully you read & learned from it.
Have you made the effort to ask actual experts? You people mainly rely on surveys that ask for time-frames and then interpret that to mean that risks from AI are nigh. Even though most AI researchers who answered that AGI will happen soon would deny the implications that you assume. Which is just another proof of your general dishonesty and confirmation bias.
So unless they agree in every detail, their forecasts are useless?
So? What’s your point?
That story is an intuition pump (not one of my favorites, incidentally) - and your story is a pump with a broken-off handle.
This wouldn’t be nearly enough.
Gee, I don’t suppose you would care to enlarge on that.
Magical thinking.
Which means what, exactly? It’s magical thinking to point out that our current civilization exists and is available to any AI we might make?
I am aware of that. Not sure what your point is, though.
You said they necessarily lack creativity.
The point was that humans can use the same technique that the AI does. I never claimed that it would be possible to predict the next move.
I’ll reiterate the quote:
So if the AI can do that, why wouldn’t humans be able to use the same algorithms to predict what the initial AI is going to do?
Next:
We’re talking about a plan for world domination, which doesn’t just include massive amounts of code that would have to be hidden from inspection but also a massive number of actions.
So in other words, we would be able to detect the AI had gone bad while it was in the process of executing the massive number of actions involved in taking over the world. I agree! Unfortunately, that may not be a useful time to detect it...
Which means what, exactly? It’s magical thinking to point out that our current civilization exists and is available to any AI we might make?
I can’t speak for XiXiDu, but I myself have noticed a bit of magical thinking that is sometimes employed by proponents of AGI/FAI. It goes something like this (exaggerated for effect):
1). It’s possible to create an AI that would recursively make itself smarter
2). Therefore the AI would make itself very nearly infinitely smart
3). The AI would then use its intelligence to acquire godlike powers
As I see it, though, #2 does not necessarily follow from #1, unless one makes an implicit assumption that Moore’s Law (or something like it) is a universal and unstoppable law of nature (like the speed of light or something). And #3 does not follow from #2, for reasons that XiXiDu articulated—even if we assume that godlike powers can exist at all, which I personally doubt.
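(A toy model, purely illustrative and not a claim made by either side, of why step #2 carries so much weight: suppose each round of self-improvement adds k * I^alpha to the current capability I. Whether the loop explodes or merely creeps upward then depends entirely on the assumed returns parameter alpha, which is exactly the contested question.)

```python
# Purely illustrative toy model; the functional form and parameters are
# assumptions made up for this sketch, not anything argued in the thread.

def self_improvement(i0=1.0, k=0.1, alpha=1.0, rounds=50):
    capability = i0
    for _ in range(rounds):
        capability += k * capability ** alpha  # each round's gain depends on current capability
    return capability

print(self_improvement(alpha=1.2))  # super-linear returns: capability grows by many orders of magnitude
print(self_improvement(alpha=0.5))  # diminishing returns: roughly an order of magnitude of growth, no explosion
```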
If you took the ten smartest scientists alive in the world today, and transported them to Ancient Rome, they wouldn’t be able to build an iPhone from scratch no matter how smart they were. In addition, assuming that what we know of science today is more or less correct, we could predict with a high degree of certainty that no future scientist, no matter how superhumanly smart, would be able to build a perpetual motion device.
Edited to add: I was in the process of outlining a discussion post on this very subject, but then XiXiDu scooped me. Bah, I say!
I’d still like to see you write it, if it’s concise.
As I see it, though, #2 does not necessarily follow from #1, unless one makes an implicit assumption that Moore’s Law (or something like it) is a universal and unstoppable law of nature (like the speed of light or something). And #3 does not follow from #2, for reasons that XiXiDu articulated—even if we assume that godlike powers can exist at all, which I personally doubt.
#2 does not need to follow since we already know it’s false—infinite intelligence is not on offer by the basic laws of physics aside from Tipler’s dubious theories. If it is replaced by ‘will make itself much smarter than us’, that is enough. (Have you read Chalmers’ paper?)
And #3 does not follow from #2, for reasons that XiXiDu articulated—even if we assume that godlike powers can exist at all, which I personally doubt.
Which reasons would those be? And as I’ve pointed out, the only way to cure your doubt if the prior history of humanity is not enough would be to actually demonstrate the powers, with the obvious issues that entails.
If it is replaced by ‘will make itself much smarter than us’, that is enough.
Ok, but how much smarter? Stephen Hawking is much smarter than me, for example, but I’m not worried about his existence, and in fact see it as a very good thing, though I’m not expecting him to invent “gray goo” anytime soon (or, in fact, ever).
I realize that quantifying intelligence is a tricky proposition, so let me put it this way: can you list some feats of intelligence, currently inaccessible to us, which you would expect a dangerously smart AI to be able to achieve? And, segueing into #3, how do these feats of intelligence translate into operational capabilities?
(Have you read Chalmers’ paper?)
Probably not; which paper are you referring to?
Which reasons would those be?
The ones I alluded to in my next paragraph:
If you took the ten smartest scientists alive in the world today, and transported them to Ancient Rome, they wouldn’t be able to build an iPhone from scratch no matter how smart they were.
The problem here is that raw intelligence is not enough to achieve a tangible effect on the world. If your goal is to develop and deploy a specific technology, such as an iPhone, you need the infrastructure that would supply your raw materials and labor. This means that your technology can’t be too far ahead of what everyone else in the world is already using.
Even if you were ten times smarter than any human, you still wouldn’t be able to conjure a modern CPU (such as the one used in iPhones) out of thin air. You’d need (among other things) a factory, and a power supply to run it, and mines to extract the raw ores, and refineries to produce plastics, and the people to run them full-time, and the infrastructure to feed those people, and a government (or some other hegemony) to organize them, and so on and so forth… None of which existed in Ancient Rome (with the possible exception of the hegemony, and even that’s a stretch). Sure, you could build all of that stuff from scratch, but then you wouldn’t be going “FOOM”, you’d be going “are we there yet” for a century or so (optimistically speaking).
the only way to cure your doubt if the prior history of humanity is not enough
Are you referring to some specific historical events? If so, which ones?
Okay, you are right. I was wrong to expect you to have read my comments where I explained that I lack the most basic education (I’m currently trying to change that).
You yourself have posted material on them, and material designed as an introduction for relative beginners so hopefully you read & learned from it.
Yeah, I post a lot of stuff that I sense to be important and that I would love to be able to read. I hope to be able to do so in future.
So unless they agree in every detail, their forecasts are useless?
No. But if you are interested in risks from AI you should ask them about risks from AI and not just about when human-level AI will likely be invented.
That story is an intuition pump (not one of my favorites, incidentally) - and your story is a pump with a broken-off handle.
My story isn’t a story but a quickly written discussion post, a reply to a post where an argument in favor of risks from AI was outlined much too vaguely to be useful.
The problem I have is that it can be very misleading to just state that it is likely physically possible to invent smarter-than-human intelligence that could then be applied to its own improvement. It misses a lot of details.
Show me how that is going to work out. Or at least outline how a smarter-than-human AI is supposed to take over the world. Why is nobody doing that?
Just saying that there will be “a positive feedback loop in which an intelligence is making itself smarter” makes it look like something that couldn’t possibly fail.
Gee, I don’t suppose you would care to enlarge on that.
100 people are not enough to produce and employ any toxic gas or bombs in a way that would defeat a far-flung empire with many thousands of people.
It’s magical thinking to point out that our current civilization exists and is available to any AI we might make?
It is magical thinking because you don’t know how that could possibly work out in practice.
You said they necessarily lack creativity.
I said that there is nothing but evolution, a simple algorithm, when it comes to creativity and the discovery of unknown unknowns. I said that the full potential of evolution can only be tapped by a society of minds and its culture. I said that it is highly speculative that there exists a simple algorithm that would constitute a consequentialist AI with simple values that could achieve the same as aforementioned society of minds and therefore work better than evolution.
You just turned that into “XiXiDu believes that simple algorithms can’t exhibit creativity.”
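(For concreteness, the “simple algorithm” being invoked here, variation plus selection, really does fit in a few lines. A minimal sketch; the target string and parameters are arbitrary choices for the demo, not anything drawn from the discussion.)

```python
# Illustrative sketch only: a bare-bones "mutate and select" loop.
# The target string and all parameters are arbitrary choices for the demo.
import random

TARGET = "saltpeter charcoal sulfur"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Random point mutations, with no knowledge of the problem structure.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    offspring = [mutate(best) for _ in range(100)]
    best = max(offspring + [best], key=fitness)

print(best)  # reaches the target despite the algorithm being only a few lines long
```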
So in other words, we would be able to detect the AI had gone bad while it was in the process of executing the massive number of actions involved in taking over the world. I agree! Unfortunately, that may not be a useful time to detect it...
Show me how that is going to work out. Or at least outline how a smarter-than-human AI is supposed to take over the world. Why is nobody doing that?
People have suggested dozens of scenarios, from taking over the Internet to hacking militaries to producing nanoassemblers & eating everything. The scenarios will never be enough for critics because until they are actually executed there will always be some doubt that they would work—at which point there would be no need to discuss them any more. Just like in cryonics (if you already had the technology to revive someone, there would be no need to discuss whether it would work). This is intrinsic to any discussion of threats that have not already struck or technologies which don’t already exist.
I am reminded of the quote, “‘Should we trust models or observations?’ In reply we note that if we had observations of the future, we obviously would trust them more than models, but unfortunately observations of the future are not available at this time.”
100 people are not enough to produce and employ any toxic gas or bombs in a way that would defeat a far-flung empire with many thousands of people.
Because that’s the best way to take over...
I said that it is highly speculative that there exists a simple algorithm that would constitute a consequentialist AI with simple values that could achieve the same as aforementioned society of minds and therefore work better than evolution. You just turned that into “XiXiDu believes that simple algorithms can’t exhibit creativity.”
That is not what you said. I’ll requote it:
Complex values are the cornerstone of diversity, which in turn enables creativity and drives the exploration of various conflicting routes. A singleton with a stable utility-function lacks the feedback provided by a society of minds and its cultural evolution...An AI with simple values will simply lack the creativity, due to a lack of drives, to pursue the huge spectrum of research that a society of humans does pursue. Which will allow an AI to solve some well-defined narrow problems, but it will be unable to make use of the broad range of synergetic effects of cultural evolution. Cultural evolution is a result of the interaction of a wide range of utility-functions.
If a singleton lacks feedback from diversity and something which is the ‘cornerstone’ of diversity is something a singleton cannot have… This is actually even stronger a claim than simple algorithms, because a singleton could be a very complex algorithm. (You see how charitable I’m being towards your claims? Yet no one appreciates it.)
And that’s not even getting into your claim about spectrum of research, which seems to impute stupidity to even ultraintelligent agents.
(‘Let’s see, I’m too dumb to see that I am systematically underinvesting in research despite the high returns when I do investigate something other than X, and apparently I’m also too dumb to notice that I am underperforming compared to those oh-so-diverse humans’ research programs. Gosh, no wonder I’m failing! I wonder why I am so stupid like this, I can’t seem to find any proofs of it.’)
People have suggested dozens of scenarios, from taking over the Internet to hacking militaries to producing nanoassemblers & eating everything. The scenarios will never be enough for critics because until they are actually executed there will always be some doubt that they would work...
Speaking as one of the critics, I’ve got to say that these scenarios are “not enough” for me not because there’s “some doubt that they would work”, but because there’s massive doubt that they would work. To use an analogy, I look both ways before crossing the street because I’m afraid of being hit by a car; but I don’t look up all the time, despite the fact that a meteorite could, theoretically, drop out of the sky and squash me flat. Cars are likely; meteorites are not.
I could elaborate regarding the reasons why I doubt some of these world takeover scenarios (including “hacking the Internet” and “eating everything”), if you’re interested.
I could elaborate regarding the reasons why I doubt some of these world takeover scenarios (including “hacking the Internet” and “eating everything”), if you’re interested.
Not really. Any scenario presented in any level of detail can be faulted with elaborate scenarios why it would not work. I’ve seen this at work with cryonics: no matter how detailed a future scenario is presented or how many options are presented in a disjunctive argument, no matter how many humans recovered from death or how many organs preserved and brought back, there are people who just never seem to think it has a non-zero chance of working because it has not yet worked.
For example, if I wanted to elaborate on the hacking the Internet scenario, I could ask you your probability on the possibility and then present information on Warhol worm simulations, prevalence of existing worms, number of root vulnerabilities a year, vulnerabilities exposed by static analysis tools like Coverity, the early results from fuzz testers, the size of the computer crime black market, etc. until I was blue in the face, check whether you had changed your opinion and even if you said you changed it a little, you still would not do a single thing in your life differently.
Because, after all, disagreements are not about information. There’s a lot of evidence that reasoning is only about arguing and disproving other people’s theories, and it’s increasingly clear to me that politics and theism are strongly heritable or determined by underlying cognitive properties like performance on the CRT or personality traits; why would cryonics or AI be any different?
The point of writing is to assemble useful information for those receptive, and use those not receptive to clean up errors or omissions. If someone reads my modafinil or nicotine essays and is a puritan with regard to supplements, I don’t expect them to change their minds; at most, I hope they’ll have a good citation for a negative point or mention a broken hyperlink.
Any scenario presented in any level of detail can be faulted with elaborate scenarios why it would not work.
That is a rather uncharitable interpretation of my words. I am fully willing to grant that your scenarios are possible, but are they likely? If you showed me a highly detailed plan for building a new kind of skyscraper out of steel and concrete, I might try and poke some holes in it, but I’d agree that it would probably work. On the other hand, if you showed me a highly detailed plan for building a space elevator out of candy-canes, I would conclude that it would probably fail to work. I would conclude this not merely because I’ve never seen a space elevator before, but also because I know that candy-canes make a poor construction material. Sure, you could postulate super-strong diamondoid candy-canes of some sort, but then you’d need to explain where you’re going to get them from.
there are people who just never seem to think it has a non-zero chance of working because it has not yet worked.
For the record, I believe that cryonics has a non-zero chance of working.
...until I was blue in the face, check whether you had changed your opinion and even if you said you changed it a little, you still would not do a single thing in your life differently
I think this would depend on how much my opinion had, in fact, changed. If you’re going to simply go ahead and assume that I’m a disingenuous liar, then sure, there’s no point in talking to me. Is there anything I can say or do (short of agreeing with you unconditionally) to prove my sincerity, or is the mere fact of my disagreement with you evidence enough of my dishonesty and/or stupidity?
The point of writing is to assemble useful information for those receptive, and use those not receptive to clean up errors or omissions.
And yet, de-converted atheists as well as converted theists do exist. Perhaps more importantly, the above sentence makes you sound as though you’d made up your mind on the topic, and thus nothing and no one could persuade you to change it in any way—which is kind of like what you’re accusing me of doing.
People have suggested dozens of scenarios, from taking over the Internet to hacking militaries to producing nanoassemblers & eating everything.
But none of them make any sense to me, see below.
That is not what you said. I’ll requote it:
Wait, your quote said what I said I said you said I didn’t say.
Because that’s the best way to take over...
I have no idea. You don’t have any idea either or you’d have told me by now. You are just saying that magic will happen and the world will be ours. That’s the problem with risks from AI.
Let’s see, I’m too dumb to see that I am systematically underinvesting in research despite the high returns when I do investigate something other than X, and apparently I’m also too dumb to notice that I am underperforming compared to those oh-so-diverse humans’ research programs.
See, that’s the problem. The AI can’t acquire the resources that are necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?
Taking over the Internet is no answer, because the question is how. Building nanoassemblers is no answer, because the question is how.
I have no idea. You don’t have any idea either or you’d have told me by now. You are just saying that magic will happen and the world will be ours. That’s the problem with risks from AI.
We have plenty of ideas. Yvain posted a Discussion thread filled with ideas on how. “Alternate history” is an old sub-genre dating back at least to Mark Twain (who makes many concrete suggestions about how his Connecticut Yankee would do something similar).
But what’s the point? See my reply to Bugmaster—it’s impossible or would defeat the point of the discussion to actually execute the strategies, and anything short of execution is vulnerable to ‘that’s magic!11!!1’
The AI can’t acquire the resources that are necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?
By reading the many discussions of what could go wrong and implementing whatever is easiest, like hacking computers. Oh the irony!