Maybe we should buy a really nice MacBook right before we expect chips to become ~2x more expensive and/or Taiwan manufacturing to be disrupted?
Especially if you think those same years will be an important time to do work or have a good computer.
BTW—this video is quite fun. Seems relevant re: Paperclip Maximizer and nanobots.
xAI’s newly planned scale-up follows one more step along the training-compute timeline from Situational Awareness (among other projections, I imagine).
From the ControlAI newsletter:
“xAI has announced plans to expand its Memphis supercomputer to house at least 1,000,000 GPUs. The supercomputer, called Colossus, already has 100,000 GPUs, making it the largest AI supercomputer in the world, according to Nvidia.”
Unsure if it will be built by 2026, but it seems plausible based on a quick search.
https://www.reuters.com/technology/artificial-intelligence/musks-xai-plans-massive-expansion-ai-supercomputer-memphis-2024-12-04/
A quick OpenAI o1-preview BOTEC for the additional emissions from a sort-of-Leopold scenario around 2030, assuming the energy is mostly provided by natural gas, since I was kinda curious. Not much time spent on this, and I took the results at face value. I (of course?) buy that emissions don’t matter in short term, in a world where R&D is increasingly automated and scaled.
Phib:
Say an additional 20% of US electricity was added to our power usage (e.g. for AI) over the next 6 years, and it was mostly natural gas.
Also, say AI inference is used at an increasing rate, growing from today’s level to the equivalent of 100 million H100s over the next 6 years, with more or less the same energy breakdown as today, though with more natural gas over time.
What do the emissions from such a build out of natural gas look like?
[Reasoning available here: https://chatgpt.com/share/672ed183-7c3c-800d-ab0e-d08020d072c7]
**Answer:**
The emissions from this build-out of natural gas over the next six years would be significant. Specifically:
- **Total Additional Emissions Over Six Years:** Approximately **1.136 billion metric tons of CO₂**.
- **Average Annual Emissions Increase:** About **189 million metric tons of CO₂ per year**.
- **Percentage of Current U.S. Emissions:** This represents an approximate **2.9% increase** over the current annual U.S. greenhouse gas emissions.
These emissions result from the increased natural gas usage required to meet the additional electricity demand—both from the general 20% increase in electricity consumption and the substantial growth in AI inference workloads powered by the equivalent of 100 million NVIDIA H100 GPUs.
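For anyone who wants to poke at the numbers, here’s a minimal Python sketch of the same BOTEC. Every constant is a placeholder assumption of mine (US generation ~4,000 TWh/yr, ~0.4 tCO₂/MWh for combined-cycle gas, ~700 W per H100, a linear ramp at full utilization), not necessarily what o1 used; its reasoning is in the linked chat.

```python
# Rough BOTEC for the scenario above. Every constant is a placeholder
# assumption of mine, not something o1 reported; tweak and re-run.
US_ELECTRICITY_TWH = 4000      # ~annual US generation, TWh
EXTRA_DEMAND_FRACTION = 0.20   # +20% demand by year 6
H100_COUNT_FINAL = 100e6       # 100M H100-equivalents by year 6
H100_AVG_KW = 0.7              # ~700 W per GPU, ignoring cooling overhead
GAS_TCO2_PER_MWH = 0.4         # combined-cycle natural gas emissions factor
YEARS = 6

total_tco2 = 0.0
for year in range(1, YEARS + 1):
    ramp = year / YEARS  # linear ramp up to the full buildout
    grid_twh = US_ELECTRICITY_TWH * EXTRA_DEMAND_FRACTION * ramp
    gpu_twh = H100_COUNT_FINAL * ramp * H100_AVG_KW * 8760 / 1e9  # kWh -> TWh
    total_tco2 += (grid_twh + gpu_twh) * 1e6 * GAS_TCO2_PER_MWH   # TWh -> MWh

print(f"~{total_tco2 / 1e9:.2f} billion tCO2 over {YEARS} years")
```

With these assumptions it prints ~1.98 billion tCO₂, higher than o1’s 1.136; if the GPU load is instead treated as part of the 20% rather than added on top, the same script gives ~1.12 billion, which is close. Mostly this shows how sensitive the BOTEC is to overlap and utilization assumptions.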
Phib:
Imagine there is a similar buildout in China, if not larger. How do the emissions from all of the above (within some confidence interval) place us environmentally within 2 years after 2030 of such a buildout? Within 10 years? Assume a more or less constant rate of emissions thereafter for each.
**Conclusion**

The combined emissions from the U.S. and China’s buildouts would significantly impact the global carbon budget and environmental goals:

**By 2032 (Two Years After 2030):**

- **Additional Emissions:** Approximately 2.44 billion metric tons of CO₂.
- **Impact:** Uses about 0.42% of the remaining carbon budget for limiting warming to 1.5°C.

**By 2040 (Ten Years After 2030):**

- **Additional Emissions:** Approximately 12.21 billion metric tons of CO₂.
- **Impact:** Consumes about 2.1% of the carbon budget.

**Cumulative Impact by 2040:**

- **Total Additional Emissions:** Around 19.54 billion metric tons of CO₂.
- **Percentage of Carbon Budget:** Approximately 3.37%.

**Environmental Implications:**

- **Increased Difficulty in Meeting Climate Targets:** The additional emissions make it more challenging to stay within the carbon budgets necessary to limit global warming to 1.5°C or even 2°C.
- **Accelerated Climate Change Effects:** Higher emissions could lead to more severe climate impacts, such as extreme weather events, sea-level rise, and ecosystem disruptions.

**Final Note:**

These estimates are subject to uncertainties due to potential changes in technology, policy, and economic factors. Proactive measures, such as investing in renewable energy and improving energy efficiency, could mitigate some of these emissions. However, without significant efforts to reduce reliance on fossil fuels, the environmental impact will be substantial and pose a serious challenge to global climate goals.
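One sanity check on those percentages (my arithmetic, not part of o1’s output): all three figures imply the same remaining 1.5°C carbon budget of roughly 580 GtCO₂, so they are at least internally consistent, and that denominator is in the same few-hundred-GtCO₂ ballpark as published remaining-budget estimates.

```python
# Back out the carbon budget implied by each of o1's (emissions, percent)
# pairs; they should all agree if the numbers are internally consistent.
pairs_gt_pct = [(2.44, 0.42), (12.21, 2.1), (19.54, 3.37)]
for emissions_gt, pct in pairs_gt_pct:
    implied_budget_gt = emissions_gt / (pct / 100)
    print(f"{emissions_gt:>6.2f} GtCO2 at {pct}% -> budget ~{implied_budget_gt:.0f} GtCO2")
```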
Final Phib note: inference energy costs may be far greater than assumed above. I can’t imagine a GPT-5 or GPT-6 that justifies further investment not also being adopted by a much larger proportion of the population (maybe 1 billion or 2 billion people, instead of 100 million).
> I (of course?) buy that emissions don’t matter in short term

Emissions don’t matter in the long term; ASI can reshape the climate (if Earth is not disassembled outright). They might matter before ASI, especially if there is an AI Pause, which I think is still a non-negligible possibility if there is a recoverable scare at some point; probably not otherwise. It might be enforceable by international treaty through hobbling semiconductor manufacturing, if the AI of that time still needs significant compute to adapt and advance.
Yeah oops, meant long
Benchmarks are weird; imagine comparing a human only along their ability to take a test. Like saying: how do we measure Einstein? By his ability to take a test. Someone else who completes that test therefore IS Einstein? Not necessarily at all; you can game tests in ways that aren’t ‘cheating’, just by studying the relevant material (all the online content ever).
An LLM’s ability to properly guide someone through procedures is actually the correct way to evaluate language models. Not written descriptions or solutions, but step-by-step guiding someone through something impressive. Can the model help me make a…
Or even without a human, step by step completing a task.
Kinda commenting on stuff like “Please don’t throw your mind away” or any advice not to fully defer judgment to others (and not intending to just straw man these! They’re nuanced and valuable, just meaning to next step it).
In my circumstance, and I imagine that of many others who are young and trying to learn and get a job, I think you have to defer to your seniors/superiors/program to a great extent, or at least to the extent that you accept or act on things (perform research, support ops) that you’re quite uncertain about.
Idk, there’s a lot more nuance to this conversation, as with any, of course. Maybe nobody is certain of anything, and they’re just staking a claim so that they can be proven right or wrong and experiment in this way, producing value through their overconfidence. But I do get a sense that young/new people coming into a field that is even slightly established are required, to some extent, to defer to others for their own sake.
LLMs as a new benchmark for human labor: using ChatGPT as a control group versus my own efforts, to see if my efforts are worth more than the (new) default.