I think the more general problem is violation of Hume’s guillotine. You can’t take a fact about natural selection (or really about anything) and go from that to moral reasoning without some pre-existing morals.
However, the Thermodynamic God seems to be post-hoc reasoning rather than actual reasoning. Some people simply want to accelerate, and then invent philosophical justifications for what they already believe. It's important to be careful to criticize the actual reasoning, not the post-hoc rationalization. The Thermodynamic God was not invented first, with accelerationism then built to fulfill it; it was precisely the other way around. One should not critique the made-up stuff (beyond noting that it is made up), because that is not charitable (I'm very uncertain on this). Instead, one should find the actual motivation to accelerate and criticize that (or find flaws in it).
The "thermodynamic god" is a very weak force, as evidenced by the approximate age of the universe and the absence of any AI foom in Sol or within reach of our telescopes. The claim may be technically correct, but who's to say it won't take another 140 billion years for AI foom to arrive?
It’s a terrible argument.
What bothers me is that if you look at competing human groups, whether at the individual, company, country, or superpower-bloc level, all the arrows point toward acceleration.
(0) Individual level: nature sabotaged your genes. You can hope for AI advances leading to biotech advances and substantial life extension for yourself or your direct family (children and grandchildren, humans you will directly live to see). Death is otherwise your fate.
(1) Company level: accelerate AI (whether as an AI lab or as an end-user adopter) and collect mountains of investment capital and the money you save via AI tooling, or go broke.
(2) Country level: get strapped with AI weapons (like drones with onboard intelligence, manufactured by intelligent robots), or your enemies can annihilate you at low cost on the battlefield.
(3) Power-bloc level: fall behind enough, and your or your allies' nuclear weapons may no longer be a sufficient deterrent. MAD ends if one side uses AI-driven robots to manufacture anti-ballistic-missile and air-defense weapons in the quantities needed to win a nuclear war.
These forces seem shockingly strong, and recent financial activity around Nvidia stock shows trillions of dollars weighing in favor of acceleration.
Thermodynamics is by comparison negligible.
I currently suspect that, due to (0) through (3), we are locked into a race for AI and have no alternatives, but it's really strange that e/acc makes such an overtly bad argument when they are likely correct overall.