Thank you.
Why the downvotes, and a statement that I am wrong because I misunderstood?
This is a mean-spirited reaction when I led with an admission that I could not follow the argument. I offered a concrete example and stated that I could not follow the original thesis as applied to that example. No one took me up on this.
Are you too advanced to stoop to my level of understanding and help me figure out how this abstract reasoning applies to a particular example? Is the shutdown mechanism suggested by Yudkowsky too simple?
I tried to follow with a particular shutdown mechanism in mind, and the whole argument is too abstract for me to see how it applies.
Yudkowsky gave us a shutdown mechanism in his Time magazine article: he said we could bomb* the data centers. Can you show how these theorems cast doubt on this shutdown proposal?
*“destroy a rogue datacenter by airstrike.”—https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Compute is not the limiting factor for mammalian intelligence. Mammalian brains are organized to maximize communication. The gray matter, where most of the compute is done, is mostly on the surface, while the white matter, which dominates long-range communication, fills the interior, communicating in the third dimension.
If you plot the volume of white matter vs. gray matter across mammalian brains, you find that white matter volume grows superlinearly with gray matter volume: https://www.pnas.org/doi/10.1073/pnas.1716956116
As brains get larger, you need a higher ratio of communication to compute.
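Here is a minimal sketch of how one would test that superlinearity; the volumes below are made-up placeholders, not the data from the PNAS paper. A power law V_white = a * V_gray^b appears as a straight line of slope b on log-log axes, and b > 1 means superlinear growth.

```python
# A power law V_white = a * V_gray^b is a straight line of slope b in
# log-log space. The volumes below are invented placeholders, not PNAS data.
import numpy as np

gray = np.array([1.0, 10.0, 100.0, 1000.0])    # gray matter volume (arbitrary units)
white = np.array([0.3, 6.0, 120.0, 2400.0])    # white matter volume (placeholders)

b, log_a = np.polyfit(np.log(gray), np.log(white), 1)
print(f"fitted exponent b = {b:.2f}")          # b > 1 means superlinear growth
```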
Your calculations, and Cotra's as well, focus on FLOPs, but the intelligence is created by communication.
dy/dt = f(y) = m*y, whose solution is the compound-interest exponential y = e^(m*t).
Why not estimate m?
“An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely.”
This blog post contains a false dichotomy. In the equation, m can take any value; there is no special keyhole value, and there is no line between fast and slow.
The description in the subsequent discussion is a distraction. The posted equation is meaningful only if we have an estimate of the growth rate.
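To make this concrete, here is a minimal sketch of my point, with arbitrary values of m: the doubling time ln(2)/m varies smoothly with m, and no particular value separates “fast” from “slow.”

```python
# dy/dt = m*y  ->  y(t) = y0 * exp(m*t), so the doubling time is ln(2)/m.
# m is a continuous knob; nothing special happens at any particular value.
import math

for m in (0.1, 0.5, 1.0, 2.0, 5.0):            # hypothetical growth rates, per year
    print(f"m = {m:4.1f}/yr -> doubling time = {math.log(2) / m:6.2f} yr")
```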
We don’t have to tell it about the off switch!
You said nothing about positive contributions. When you throw away the positives, everything is negative.
Why didn’t you also compute the expected contribution of this project toward human flourishing?
If you only count the negative contributions, you will find that the expectation value of everything is negative.
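A toy calculation makes the point; the probabilities and values below are invented. The expectation E = sum of p_i * v_i over all outcomes can easily be positive, but a sum restricted to the negative terms is guaranteed to be non-positive, whatever the project is.

```python
# (probability, value) pairs for hypothetical outcomes of the project.
outcomes = [(0.70, +5.0),    # invented benefit
            (0.25, -1.0),    # invented minor harm
            (0.05, -20.0)]   # invented catastrophe

e_all = sum(p * v for p, v in outcomes)            # = +2.25, can be positive
e_neg = sum(p * v for p, v in outcomes if v < 0)   # = -1.25, always <= 0
print(f"E[all outcomes]   = {e_all:+.2f}")
print(f"E[negatives only] = {e_neg:+.2f}")
```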
The ML engineer is developing an automation technology for coding and is aware of AI risks. The engineer's polite acknowledgment of the concerns is met with your long derivation of how many current and future people she will kill with this.
Automating an aspect of coding is part of a long history of using computers to help design better computers, starting with Carver Mead’s realization that you don’t need humans to cut rubylith film to form each transistor.
You haven’t shown an argument that this project will accelerate the scenario you describe. Perhaps the engineer is brushing you off because your reasoning is broad enough to apply to all improvements in computing technology. You will get more traction if you can show more specifically how this project is “bad for the world”.
“The missile gap was a lie by the US Air Force to justify building more nukes, by falsely claiming that the Soviet Union had more nukes than the US.”
This statement is not supported by the link used as a reference. Was it a lie? The reference speaks to failed intelligence and to political manipulation using the perceived gap. The phrasing above suggests a conspiracy.
Why doesn’t an “off switch” protect us?
You have more than an order of magnitude of scatter in your plot, yet you quote your calculated doubling period to 3 significant figures. Is this precision of value?
Also, your black data appears to have something different going on prior to 2008. It would be worthwhile doing a separate fit to the post-2008 data. Eyeballing it, the doubling time there is longer than 4 years.
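As a sketch of what I mean, with invented numbers since I don't have your data: fit log2 of the values against the year; the reciprocal of the slope is the doubling time, and the covariance of the fit gives an error bar that tells you how many figures are actually justified.

```python
# Fit log2(value) vs. year; 1/slope is the doubling time. The data are
# invented stand-ins with roughly order-of-magnitude scatter.
import numpy as np

years = np.array([2008.0, 2010, 2012, 2014, 2016, 2018, 2020])
vals = np.array([1.0, 5.0, 8.0, 30.0, 400.0, 900.0, 8000.0])

coeffs, cov = np.polyfit(years, np.log2(vals), 1, cov=True)
slope, slope_err = coeffs[0], np.sqrt(cov[0, 0])
t2, t2_err = 1 / slope, slope_err / slope**2    # propagate the error to 1/slope
print(f"doubling time = {t2:.1f} +/- {t2_err:.1f} years")  # one figure, not three
```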
AI is dependent on humans. It gets power and data from humans, and it cannot go on without humans. We don't trade with it; we dictate terms.
Do we fear a world where we have turned over mining, production, and powering everything to the AI? Getting there would take a lot more than a self-amplifying feedback loop of a machine rewriting its own code.
When I was doing runs in the dozens of miles, I found it better to cache water ahead of time at the ten-mile points. On a hot day, you need more water than you can comfortably carry.
OK, I could be that someone. Here goes. You and the paper's author suggest a heat engine. That needs a hot side and a cold side. We build a heat engine where the hot side is kept hot by the incoming energy as described in this paper. The cold side is a surface we keep in radiative communication with the 3 kelvin temperature of deep space. To keep the cold side from melting, we need to hold it below a few thousand degrees, so we have to make it really large so that it can still radiate the energy.
From here, we can use the Stefan–Boltzmann law to show that we need to build a radiator much bigger than a billion times the surface area of Mercury. The required area goes as the fourth power of the ratio of temperatures in our heat engine.
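Here is the back-of-envelope version, with my own assumed numbers: if the hot side absorbs a billion times the sun's surface flux, its effective blackbody temperature is T_sun * (10^9)^(1/4), about a million kelvin, and the Stefan–Boltzmann law P = sigma * A * T^4 then forces a cold side held at a survivable few thousand kelvin to be larger in area by (T_hot/T_cold)^4, on the order of 10^10.

```python
# Stefan-Boltzmann back-of-envelope. T_COLD is my assumption for the hottest
# radiator surface that survives; the rest follows from the paper's stated flux.
T_SUN = 5772.0          # K, effective temperature of the solar surface
FLUX_RATIO = 1e9        # hot side absorbs a billion times solar surface flux
T_COLD = 3000.0         # K, assumed survivable radiator temperature

t_hot = T_SUN * FLUX_RATIO ** 0.25          # flux ~ T^4, so T ~ flux^(1/4)
area_ratio = (t_hot / T_COLD) ** 4          # radiator area / collector area
print(f"effective hot-side temperature ~ {t_hot:.3g} K")
print(f"required radiator/collector area ratio ~ {area_ratio:.3g}")  # ~1.4e10
```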
The paper’s contribution is the suggestion of a self-replicating factory with exponential growth. That is cool. But the problem with all exponentials is that, in real life, they fail to grow indefinitely. Extrapolating an exponential over a dozen orders of magnitude, without entertaining such limits, is just silly.
A billion times the energy flux from the surface of the sun, over any extended area, is a lot to deal with. It is hard to take this proposal seriously.
For the record, I find that scientists make such errors routinely. In public conferences, when optical scientists propose systems that violate the constant radiance theorem, I have no trouble standing up and saying so. It happens often enough that when I see a scientist propose such a system, it does not diminish my opinion of that scientist. I have fallen into this trap myself at times. Making this error should not be a source of embarrassment.
“either way, you are claiming Sandberg, a physicist who works with thermodynamic stuff all the time, made a trivial error of physics;”
I did not expect this to revert to credentialism. If you were to find out that my credentials exceed this other guy’s, would you change your position? If not, why appeal to credentials in your argument?
I stand corrected. Please forgive me.
Last summer I went on a week-long backpacking trip where we had to carry out all our used toilet paper.
This year, I got this bidet for Christmas: https://www.garagegrowngear.com/products/portable-bidet-by-culoclean
You could carry one with you so you are no longer reliant on having them provided for you.
Ukraine gave up its nuclear weapons in exchange for our assurance of its territorial integrity.
The Budapest Memorandum on Security Assurances commits us to help defend Ukraine.
What “non-appeasing de-escalation strategies” are you proposing? We could remind Russia that it is a signatory to the Budapest Memorandum. Do you think that will move them to withdraw?
This letter offers nothing.