I’m not worried about OAI not being able to solve the rocket alignment problem in time. Risks from asteroids accidentally hitting the earth (instead of getting into a delicate low-earth orbit) are purely speculative.
You might say “but there are clear historical cases where asteroids hit the earth and caused catastrophes”, but I think geological evolution is just a really bad reference class for this type of thinking. After all, we are directing the asteroid this time, not geological evolution.
Not to mention that simple counting arguments show that the volume of the Earth is much smaller than even a rather narrow spherical shell of space around the Earth. Most asteroid trajectories that come anywhere near Earth will pass through this spherical shell rather than the Earth volume.
Remember: “Earth is not the optimized-trajectory target”! An agent like Open Asteroid Impact is merely executing asteroid business strategies that have been (financially) rewarded in the past; it in no way attempts to ‘optimize impact’.
And the lack of optimization is a killer because impacts just wouldn’t happen. The idea that asteroids will ever impact ignores the simple fact that the Solar System has chaotic dynamics—it is not just a ‘3-body problem’ but an n-body problem where n = millions. Imagine trying to predict that! And consider the simple problem of landing the same rocket you launched: as of November 2015, no one has ever succeeded in this, because everything involved is super-chaotic. (Just imagine the sheer level of chaos of a multi-ton rocket perched over exploding exhaust while descending through chaotic weather, right down to the millisecond timescale; you might as well balance a pencil on the tip of a pencil.) Grifting tech hypebros would have you believe that ‘technology improves’, sometimes rapidly, and that by now we might be landing rockets on—if you believe absurd exponential forecasts—a near-daily basis. Such claims are not even worth factchecking.
Open Asteroid Impact strongly disagrees with this line of thinking. Our theory of change relies on many asteroids filled with precious minerals hitting earth, as mining in space (even LEO) is prohibitively expensive compared to on-ground mining.
While your claims may be true for small asteroids, we strongly believe that scale is all you need. Over time, sufficiently large, and sufficiently many, asteroids can solve the problem of specific asteroids not successfully impacting Earth.
This paragraph gives me bad vibes. I feel like it’s mocking particular people in an unhelpful way.
It doesn’t feel to me like constructively poking fun at an argument and instead feels more like dunking on a strawman of the outgroup.
(I also have this complaint to some extent with the top level “Open Asteroid Impact” April Fool’s post, though I think the vibes are much less bad for the overall post. This is due to various reasons I have a hard time articulating.)
It comes from the discussion here: https://www.lesswrong.com/posts/LvKDMWQ3yLG9R3gHw/empiricism-as-anti-epistemology
which also has https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn as a related topic.
As near as I can summarize it, the argument distills to:
What intelligence in any thinking agent does:
A. Perceive the present situation. In the real world it will functionally always be unique (it will never exactly match a Q-table entry, with a few exceptions like board games).
B. Look up reference classes that are similar to the situation you are in.
C. Using those reference classes, and assuming the laws of physics will cause a similar outcome now as they did then, predict future outcomes conditional on the agent’s actions (i.e. if I do nothing, reality will cause outcome 1; if I do action A, reality will cause outcome 2; ...). D. Choose the action whose predicted future has the highest EV from the agent’s perspective (a code sketch of this loop follows below).
This will fail badly when the situation is a black swan.
For example, at one point I sold 80 (!) bitcoins for $10 each because I reasoned that they were similar to the fake experimental e-currencies that had been tried in the past.
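To make that A-D loop concrete, here is a minimal Python sketch of the reference-class procedure. Everything in it (the reference classes, the Jaccard similarity, the outcome values) is my own illustrative assumption, not something from the linked posts:

```python
# Minimal sketch of the A-D loop above. The reference classes, the Jaccard
# similarity, and the outcome values are all illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ReferenceClass:
    name: str
    features: set          # features of past situations in this class
    outcome_value: dict    # historical value of each action in this class

def similarity(situation: set, ref: ReferenceClass) -> float:
    """Step B: Jaccard overlap between the current situation and a reference class."""
    return len(situation & ref.features) / len(situation | ref.features)

def choose_action(situation: set, refs: list, actions: list) -> str:
    """Steps C and D: similarity-weight historical outcomes, pick the highest EV."""
    def predicted_ev(action: str) -> float:
        weights = [similarity(situation, r) for r in refs]
        total = sum(weights) or 1.0
        return sum(w * r.outcome_value[action] for w, r in zip(weights, refs)) / total
    return max(actions, key=predicted_ev)

# Step A: the present situation; in the real world it never exactly matches a past entry.
situation = {"novel_ecurrency", "no_backing", "small_userbase"}
refs = [
    ReferenceClass("failed experimental e-currencies",
                   {"novel_ecurrency", "no_backing", "small_userbase"},
                   {"hold": -1.0, "sell": +0.1}),
    ReferenceClass("early successful networks",
                   {"strong_growth", "network_effects"},
                   {"hold": +5.0, "sell": 0.0}),
]
print(choose_action(situation, refs, ["hold", "sell"]))  # -> "sell"
```

The black-swan failure is exactly that last line: the surface features match “failed experimental e-currencies” well, so the loop confidently says sell, and it has no way to represent the one case that breaks the class.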
How you can project the future:
So when people try to answer questions like ‘will there be a recession’ and similar, that’s how. You try to find a reference class, or a numerical indicator that ‘predicted 15 of the last 10 recessions’, and project that a recession will happen whenever the indicator fires.
The argument for “AI won’t be that bad” comes down to this reasoning:
A. A piece of software we call a transformer model is kinda like the reference classes “useful software”, “useful technology tools”, and “military-applicable technology”.
B. You then assume the laws of physics are similar, assume the tens of thousands of other things that match those classes will cause a similar outcome, and there you go: almost 0% AI doom, because none of the other technologies risked the doom of humanity (except one or two). You also hit a corollary: we know historically that countries that didn’t adopt the latest and most expensive military technology got slaughtered. Recent examples: Afghanistan invaded by the Soviets, then later by the USA; Iraq hit by the USA twice; Ukraine.
And we can see in Ukraine what even a modest edge in weapons technology donated to their side does on the battlefield; the results are dramatic. This leads to another pro-AI argument: “we didn’t really have a choice”.
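The “almost 0%” step is just base-rate arithmetic over the reference class. A quick sketch, with made-up counts (and Laplace’s rule of succession as one standard way to avoid claiming an exact zero):

```python
# Base-rate estimate of "doom" over a technology reference class.
# Both counts are invented for illustration; the point is only that a large
# class with ~0 catastrophic members yields an estimate near zero.
prior_technologies = 10_000   # assumed size of the "useful technology" reference class
doom_level_members = 2        # the "except one or two" in the argument above

naive_rate = doom_level_members / prior_technologies
# Laplace's rule of succession is one standard way to avoid claiming exactly 0%:
laplace_rate = (doom_level_members + 1) / (prior_technologies + 2)

print(f"naive base rate:  {naive_rate:.4%}")   # 0.0200%
print(f"Laplace estimate: {laplace_rate:.4%}") # ~0.0300%
```

The counterargument below doesn’t dispute this arithmetic; it disputes whether ASI belongs in that reference class at all.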
The counterargument:
The simplest counterargument to the above is to say that AI, especially ASI, doesn’t match the reference classes of “useful software”, “useful technology tools”, or “military-applicable technology”:
A. useful software counter: https://www.lesswrong.com/posts/kSq5qiafd6SqQoJWv/ by @Davidmanheim
B. “useful technology tools”: the argument here is usually that the ASI isn’t a tool, because it can betray you while a hammer can’t. Also, it’s smarter than you, so you can’t really even check its work or know when it’s betraying you.
C. “military-applicable technology”: ditto, you can’t trust a weapon that can think for itself or coordinate to betray you.
This thread:
The “in joke” is that we all know the consequences of slamming an asteroid into the Earth and causing a >1 gigaton explosion of plasma and probably an earthquake (I checked, and even a fairly small asteroid reaches 1 gigaton). It really is a bad idea, and if we need platinum or iridium or other elements common in asteroids, we’ll have to process them in place and bring them back the hard way.
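For the “(I checked ...)” aside, the check is simple kinetic-energy arithmetic; the density and impact speed below are typical assumed values rather than anything from the thread:

```python
# Rough size of a stony asteroid whose impact energy is about 1 gigaton of TNT.
# Assumed values (not from the thread): bulk density ~3000 kg/m^3, impact speed ~20 km/s.
import math

E_target = 1e9 * 4.184e9   # 1 gigaton TNT in joules (1 ton TNT = 4.184e9 J)
v = 20_000.0               # impact speed in m/s
rho = 3_000.0              # bulk density in kg/m^3

mass = 2 * E_target / v**2                        # from E = 1/2 * m * v^2
radius = (3 * (mass / rho) / (4 * math.pi)) ** (1 / 3)

print(f"mass     ~ {mass:.2e} kg")        # ~2.1e10 kg
print(f"diameter ~ {2 * radius:.0f} m")   # ~240 m, vs ~10 km for the dinosaur-killer
```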
So we’re trying to play with the “reference class” to make it seem like a good idea, among other deliberate argument faults. This one is making fun of people saying the asteroid fits the ‘reference class’ of the one that drove the dinosaurs extinct, and that most doom advocates, unlike yourself, aren’t qualified in ML.
Mine tries to say that because the reference-class data is old, we should get into the business of slamming asteroids and find out the consequences later. I also make the military-applicable-tech argument, which is a true argument: if you want to stop people deorbiting asteroids from out past the orbit of Mars, you need space warships and vehicles that can redirect asteroids on an impact course away (i.e. the same technology as the bad guys, meaning you cannot afford a ‘spacecraft building pause’).
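On the “redirect asteroids away” requirement, the reason small spacecraft can plausibly do this at all is that a tiny velocity change applied years in advance adds up. A crude lower-bound sketch (it ignores the orbital-mechanics amplification that usually makes the real miss distance larger):

```python
# Crude lower bound on how far an early nudge displaces an incoming asteroid.
# Real deflections do better, because the velocity change also shifts the orbital
# period and the offset grows with each revolution; this sketch ignores that.
SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6371.0

def along_track_shift_km(delta_v_cm_s: float, lead_time_years: float) -> float:
    """Displacement (km) from a velocity change applied lead_time_years before impact."""
    return (delta_v_cm_s / 100.0) * (lead_time_years * SECONDS_PER_YEAR) / 1000.0

for dv_cm_s, years in [(1.0, 10.0), (2.0, 10.0), (10.0, 10.0)]:
    shift = along_track_shift_km(dv_cm_s, years)
    print(f"{dv_cm_s:4.1f} cm/s, {years:4.1f} y lead time -> "
          f"{shift:8,.0f} km ({shift / EARTH_RADIUS_KM:.1f} Earth radii)")
```

Which is why the “same technology as the bad guys” point holds: the deflector and the deorbiter are the same vehicle.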
I also made fun of Sam Altman’s double dealing.