Note that this “immortal robots will inevitably take over the galaxy” view is extremely recent thinking. Arguably it has been less than 10 years since neural networks became more than toys, and with them the realization that machine intelligence can definitely exceed human intelligence in every dimension, once we get the details correct.
My point is that we have constructed this world view from a large pile of assumptions. Assumptions that each seem solid today but may simply be wrong:
That the laws of physics as we know them are complete, apart from a few significant figures and some missing theory integrations
That the laws of physics are the same everywhere
That other stars and galaxies are even real
That alien species have to expand per the laws of nature as we understand them, that there is no way to cheat or exploit to get a better outcome than endless Dyson swarms
That there are no hidden rules that would end everything, like exceeding the complexity the universe allows causing a ‘chunk reset’ in an area of space.
And these are just a few of the many low-probability possibilities. Point is, yeah, when the outcome of your estimate seems to violate all prior history, it calls into question the assumptions you don’t have direct evidence for.
No. Not to take away Holden’s thunder but Von Neumann already postulated the possibility of self-replicating probes colonizing the galaxy. Indeed, I might be wrong here but it is exactly this kind of thinking that drove Fermi to pose his famous question.
Most of this stuff was fairly common in extropian/hard sci-fi Usenet circles in the ’90s, I believe.
My point was that during von Neumann’s time there was plenty of reason to think such probes might never be possible, or were far in the future. The exponential nature of certain types of improvement wasn’t yet known.
We can’t build Von Neumann probes in the real world—though we can in the digital world. What kind of significant (!) new information have we obtained about the feasibility of galaxywide colonization through Von Neumann probes?
We have made computers with billions of times as much compute and memory as those of the 1960s. Previously intractable problems, like machine perception and machine planning to resolve arbitrary failures, only really began to be solved with neural networks around 2014.
Previously they were theoretical. Now it’s just a matter of money and iterations.
See previously: a subtask for a von Neumann machine like “mine rocks from the asteroid you landed on in other tasks and haul them to the smelter” could have a near-infinite number of failure modes. With previous robotics, each failure had to be explicitly handled by a programmer who anticipated that specific failure, or by a worker on site sent to resolve it.
With machine planning algorithms you can have the machine estimate the action with the best (better than random chance, ideally close to the true global maximum) probability of scoring well on a heuristic of success. You need neural networks to even perceive the asteroid surface in arbitrary scenarios and lighting conditions, and you need realistic computer graphics and simulated physics to even model what the robot will see.
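The planning step described here can be sketched in a few lines. This is a hypothetical toy, not anyone’s actual system: `score_action` is a stand-in function where a real probe would run a learned model (a neural network trained against simulated physics), and the action encoding is invented for illustration.

```python
import random

def score_action(state, action):
    # Stand-in heuristic of success for the mining subtask: reward
    # rock delivered to the smelter, penalize energy spent. A real
    # system would replace this with a learned model's estimate.
    return action["rock_moved"] - 0.1 * action["energy_cost"]

def plan_step(state, candidate_actions, samples=100):
    # Sample candidate actions and take the best-scoring one. This
    # beats random choice, but it only approximates the true global
    # maximum, which is the point: the machine does not need a
    # programmer to have anticipated this exact situation.
    pool = random.sample(candidate_actions, min(samples, len(candidate_actions)))
    return max(pool, key=lambda a: score_action(state, a))

# Toy candidate set: every combination of rock moved and energy cost.
actions = [{"rock_moved": r, "energy_cost": e}
           for r in range(5) for e in range(5)]
best = plan_step({}, actions)
print(best)  # prints {'rock_moved': 4, 'energy_cost': 0}
```

The same skeleton covers failure recovery: when an action fails, the perceived new state changes the scores, and the next call to `plan_step` picks a different action, rather than halting on an unanticipated error.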
It’s still many generations of technology away, but we can now specify in concrete terms how to do this, and how we could iterate toward a working system if we wanted to.
Fair enough.