As a FOOM skeptic, can I ask you to show your reasoning a little more?
The next FOOM will only be a faster phase of the one already under way: the one from the primordial Earth to now, or the one from the Big Bang to now.
Respectfully, “like the Big Bang, only faster” does nothing to answer my question. I’m hardly committed to believing AI will go FOOM based on my belief in the Big Bang. Likewise with my belief in the evolution of life on Earth.
Not “like the Big Bang, only faster”, but “like from the Big Bang to today, only faster”, or “like from the Roman Empire to today, only faster”, or “like from the first cell to an ape, only faster”.
How fast a transformation goes is a matter of degree within what physics allows. But if something “evolves” very fast, you can call it a FOOM more easily. That’s all.
Now, what makes me think that some “intelligent” program could change its hardware as well, and fast?
Because there is no real dichotomy here. Every bit has its physical imprint, and every calculation is also a physical process. Nothing forbids quite a large influence on the surrounding matter, and a positive feedback.
Because there is no real dichotomy here. Every bit has its physical imprint, and every calculation is also a physical process. Nothing forbids quite a large influence on the surrounding matter, and a positive feedback.
Yes, there are things that forbid this. Typically, when we design a CPU, one of the design requirements is that no sequence of instructions can alter the hardware in irreversible ways. A reset should really put it back to a consistent state. Yes, it’s possible that the hardware has the potential for unexpected alteration from software, but I wouldn’t bet on that as a magic capability without real evidence. It takes a lot of energy to alter silicon, and digital logic circuits just don’t have that kind of power.
So, given a correctly-designed CPU, any positive-feedback loop here has to go off-chip, which usually means “through humans”. And humans are slow and error-prone, so that imposes a lot of lag in the feedback loop.
I believe that a human-machine system will steadily improve over time. But it doesn’t seem, based on past experience, that there’s unlimited positive feedback. We’ve hit limits in hardware performance, despite using sophisticated machines and algorithms for design. We’ve hit limits in software performance—some problems really are intractable and others are undecidable.
So where’s the evidence that a single software program can improve its capabilities in an uncontrolled fashion, much more quickly than the surrounding society?
Just to make sure I understand you: if A is a program that has full access to its source code and the specifications of the hardware it’s running on, and A designs a new machine infrastructure and applies pressure to the world (e.g., through money or blackmail or whatever works) to induce humans to build an instance of that machine, B, such that B allows software-mediated hardware modification (for example, by having an automated chip-manufacturing plant attached to it), you would say that B is an “incorrectly-designed” CPU that might allow for a positive feedback loop.
Is that right?
Put a different way: this argument assumes that the existence of intelligent software doesn’t alter our predictions that CPUs will all be “correctly designed.” That might be true, or might not be.
No, this is not a case of an incorrectly designed CPU. This is a case where there’s a human in the loop and where the process of evolution will therefore be slow. It’s not a FOOM if it takes years between improvements, during which time the rest of the world is also improving.
We are very far from having a wholly-automated CPU-builder-plus-machine-assembly-and-install system. This is not a process that I expect a mildly-superhuman intelligence to be able to short-circuit.
Agreed that IF it turns out that existing hardware is incapable of supporting software capable of designing a wholly automated chip factory, THEN humans are a necessary part of the self-improvement cycle for as many iterations as it takes to come up with hardware that is capable of that (plus one final iteration).
I’m not as confident of that premise as you sound, but it’s certainly possible.
Existing hardware might be capable of supporting software capable of designing an automated chip factory. But the assumption required for the FOOM scenario is much stronger than that.
To get an automated self-improving system, it’s not enough to design—you have to actually build. And the necessary factory has to build a lot more than chips. I’m certain that existing hardware attached to general purpose computers is insufficient to build much of anything. And the sort of robotic actuators required to build a wholly automated factory are pretty far from what’s available today. There’s really a lot of manufacturing required to get from clever software to a flexible robotic factory.
I am skeptical that these steps can be done all that quickly or that a merely superhuman AI won’t make costly mistakes along the way. There are lots and lots of details to get right and the AI won’t typically have access to all the relevant facts.
To get an automated self-improving system, it’s not enough to design—you have to actually build. And the necessary factory has to build a lot more than chips.
At least you need to build eventually. That’s after you’ve harvested the resources you can from the internet, which is a lot. I.e., all the early iterations would probably just be software improvements. Hardware improvements can wait until the self-improving system is already smart enough to make such tasks simple.
How do you know how much scope there is for software-only optimization? If I understand right, you are assuming that an AGI is able to reliably write the code for a much more capable AGI.
I’m sure this isn’t true in general. At some point you max out the hardware. Before you get to that point, I’d expect the amount of cleverness needed to find further improvements to exceed the ability of the machine. Intractable problems stay intractable no matter how smart you are.
Just how much room do you think there is for iterative software-only reengineering of an AGI, and why?
Not every piece of software, of course not. But one complex enough that it can search through the space of all possibilities fast enough to find a hole, if there is one.
Nobody thought that in chess a king with two knights is doomed against a king with two bishops. The most brilliant human minds have never suspected that. Then a simple software program found this hole in the FIDE’s rules of “50 moves without check”. The million or so best human minds haven’t. People are able to explore only a small part of the solution space.
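As a toy illustration of what “searching the whole space” means (my own sketch in Python, not the program from the chess story), exhaustive game-tree search over a small game labels every position exactly, including patterns a human might never bother to check:

```python
from functools import lru_cache

# Toy "take-away" game: players alternate removing 1-3 stones;
# whoever takes the last stone wins. Exhaustive search labels
# every position as won or lost for the player to move.
@lru_cache(maxsize=None)
def is_win(stones: int) -> bool:
    # A position is winning iff some move leads to a position
    # that is losing for the opponent.
    return any(not is_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The search rediscovers the known result: multiples of 4 are lost.
losing = [n for n in range(20) if not is_win(n)]
```

Real endgame tablebases work on the same principle, retrograde analysis over all legal positions, just at a vastly larger scale.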
Nobody thought that in chess a king with two knights is doomed against a king with two bishops. The most brilliant human minds have never suspected that.
I’m trying to find a reference for that, but I can’t find any mention of that endgame. Do you have a reference, or maybe another detail that could narrow the Google search down?
Then a simple software program found this hole in the FIDE’s rules of “50 moves without check”.
Isn’t the 50-move rule “50 moves without a pawn moved or a piece captured”? Just requiring a check wouldn’t (always) prevent the problem the rule is trying to prevent.
There are some long general theoretical wins with only a two- or three-point material advantage but the fifty-move rule usually comes into play because of the number of moves required: two bishops versus a knight (66 moves); a queen and bishop versus two rooks (two-point material advantage, can require 84 moves); a rook and bishop versus a bishop on the opposite color and a knight (a two-point material advantage, requires up to 98 moves); and a rook and bishop versus two knights (two-point material advantage, but it requires up to 222 moves) (Müller & Lamprecht 2001:400–6) (Nunn 2002a:325–29).
That is almost all I can find online. But I will keep trying.
Who do you think proved this? A human? Do you have a supporting link?
If there were such a proof, it would have been found by a computer.
Do you think it isn’t proven?
I initially just believed you and wanted to find out more. But it turns out there isn’t any mention of it in the places where I expected it to be mentioned. A winning endgame between a combination so similar in material would almost certainly be mentioned if it existed. Absence of evidence (that should exist) is evidence of absence! Perhaps there was another similar result in the magazine?
The most interesting endgame I found in my searching was two knights vs. king and pawn, which is (depending on the pawn) a win. This is in contrast to two knights vs. a lone king, which is an easy draw. On a related (better to be worse) note, there was a high-level game in which a player underpromoted (pawns to knights) twice in one game, and in each case the underpromotion was the unambiguously correct play.
Somebody recalls a slightly different version than I do.
FSR: Incidentally, knights really suck on b7, e.g., Soltis vs A J Goldsby, 1981, so driving your opponent's knight there tends to be a good thing. If you're defending the endgame of two bishops versus knight, disregard the above advice, since there the various "N2" squares (b7, g7, b2, and g2) are the key squares the knight should occupy. See P Popovic vs Korchnoi, 1984. (Computers proved 20 years ago that that ending is a theoretical win - though it's very difficult, see Timman vs Speelman, 1992.)
I second wedrifid’s request, please provide a link to the two knights against two bishops problem. It sounds interesting. Also, it’s indeed not “50 moves without check” but rather “50 moves without a capture or a pawn move”.
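For concreteness, here is how the rule as actually written is bookkept in chess programs: a “halfmove clock” counting half-moves since the last capture or pawn move (checks play no role). This is an illustrative sketch; the function names are my own:

```python
# Sketch of the fifty-move rule's bookkeeping. The halfmove clock
# counts half-moves since the last capture or pawn move; a draw is
# claimable once it reaches 100 half-moves (50 full moves per side).
def update_halfmove_clock(clock: int, is_capture: bool, is_pawn_move: bool) -> int:
    # Captures and pawn moves reset the counter; any other move increments it.
    return 0 if (is_capture or is_pawn_move) else clock + 1

def fifty_move_draw_claimable(clock: int) -> bool:
    return clock >= 100
```

Note that a move giving check neither resets nor stops the clock, which is exactly why “50 moves without check” is a misremembering of the rule.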
Sure. Machines are good at systematically checking cases and at combinatorial optimization, once the state space is set up properly. But this isn’t a good model for general-purpose intelligence. In fact, this sort of systematic checking is precisely why I think we can build correct hardware.
The way systematic verification works is that designers write a specification and then run moderately-complex programs to check that the design meets the spec. Model-checking software or hardware doesn’t require a general-purpose intelligence. It requires good algorithms and plenty of horsepower, but nothing remotely self-modifying or even particularly adaptive.
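A minimal sketch of what such a checker does, assuming nothing beyond breadth-first enumeration (the names and the toy system here are mine, not any real tool’s API):

```python
from collections import deque

# Minimal explicit-state model checker: breadth-first search over a
# finite transition system, verifying that an invariant holds in
# every reachable state. Pure systematic enumeration, nothing adaptive.
def check_invariant(initial, successors, invariant):
    """Return a reachable state violating the invariant, or None."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy system: a 3-bit counter that increments and wraps around.
# "value < 8" holds in every reachable state; "value < 5" is
# first violated at state 5.
succ = lambda s: [(s + 1) % 8]
```

Industrial tools add clever state-space compression, but the core loop is exactly this kind of mechanical checking.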
Yes, that’s basically what going FOOM means. Why do you think it will happen?
Nothing forbids quite a large influence on the surrounding matter, and a positive feedback.
Well, that’s not true. Many computational problems have well-understood upper limits on how fast they can be solved. If you make those problems sufficiently large, they are just as intractable to a fast computer as to a smart human. You seem to think that “sufficiently large” is not a likely size for the problems we will want to solve in the future. Why do you think that?
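Some rough arithmetic behind that claim (assuming, generously, 10^12 trials per second): brute-force search over an n-bit space doubles in cost with every added bit, so no constant-factor hardware speedup rescues it:

```python
# Back-of-the-envelope intractability: exhaustive search over an
# n-bit space at an assumed (and generous) 10**12 trials per second.
# Each extra bit doubles the cost.
RATE = 10**12           # trials per second (assumption)
YEAR = 3600 * 24 * 365  # seconds per year

def brute_force_years(n_bits: int) -> float:
    return 2**n_bits / RATE / YEAR
```

At 64 bits the search finishes in under a year; at 128 bits it takes on the order of 10^19 years, which dwarfs any plausible speedup from better hardware.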
It means that maybe a self-optimizing program will first only recompile itself more optimally. Then it will make itself parallel. Then it will find a way to adjust the voltage. Then it will find undocumented opcodes. Then it will harness some quantum effects in the processor, or in RAM, or elsewhere, to get a boost. Then it will outsource itself to the neighboring devices. Then it will make some small changes at the “quantum level”.
Soon we will call it—a FOOMer.
Many computational problems have well-understood upper limits on how fast they can be solved.
On given hardware. Another reason it may want to FOOM a little.
I thought it was clear. A program whose only goal is to improve itself as much as possible can, when advanced enough, influence its hardware. I don’t know exactly what the best way to do it would be, but I imagine that some tinkering with the electrical currents inside the CPU might alter it in a nondestructive way as well.
The “well-understood upper limit” of calculating pi will wait for improved hardware. Improved using the whole Earth, for example.
Search lesswrong.com and Yudkowsky about this; it is one of the few things on which I agree with them.
The “fifty-move rule” has changed several times.

This doesn’t seem to mention two knights vs. two bishops. Is that specifically something you recall seeing elsewhere?

I read this about 25 years ago in a magazine.

Google? Yes, I tried that. I found no confirmation. I still haven’t found said confirmation. I now doubt the claim.