Some thoughts about e/acc that weren’t worthy of a post:
E/acc is similar to early-2010s social justice in that it’s little more than a war machine: its members decided that injustice was bad, that they were therefore going to fight it, and that anyone who criticized them was either weakening the coalition or opposing it.
Likewise, e/acc decided that acceleration was good, that anyone opposing them was an evil Luddite, and that you had to use the full extent of your brain to “win” each confrontation as often as possible.
E/acc people like Beff Jezos collided with AI safety gradually, so they encountered a worldview that is logically developed and arguably correct, but they encountered it piece by piece.
As a result, they improvised justifications each time they encountered AI safety’s logic debunking one of their takes.
Over time, they progressively twisted themselves into a pro-extinction, pro-paperclip shape, and layered a war machine on top of that stance, one that is now sufficient to defend against genuine scissor statements from AI safety (e.g. “85% of e/acc themselves don’t want human extinction, but your leaders do”).
The crazy thing is that e/acc, meme cult that it is, may offer the more realistic view of the world.
Assume there’s no way you can dissuade others from building AI: this includes wealthy corporations that can lobby with lots of money and demand the right to buy as many GPUs as they want, nuclear-armed smaller powers, and China. What do you do?
Imagine two simple scenarios.
World A: they built AI. Someone let it get out of hand. You have only pre-AI technology to defend yourself.
World B: you kept up in the arms race but did a better job on model security and quality. Some of the low-hanging fruit for AI includes things like self-replicating factories and gigascale surveillance.
Against a hostile superintelligence you may ultimately lose, but do you want the ability to surveil and interpret a vast battlespace for enemy activity, and to coordinate and manufacture millions or billions of automated weapons, or not?
You absolutely can lose, but in a future world of escalating threats your odds are better if your country is strapped with the latest weapons.
Do you agree or disagree? I am not saying e/acc is right, just that historically no arms-control agreement has ever really been successful: SALT wasn’t disarmament, and the treaty has effectively ended. Were Russia wealthier, it would be back to another nuclear arsenal buildup.
What probability do you estimate for a global multilateral AI pause? On the frequentist view that such an event has never been seen in history, should it rationally be 0, or under 1 percent? (Note: this last question isn’t my opinion; imagine you are a robot running an algorithm. What would the current evidence support? If you think my claim, that an international agreement to ban a promising strategic technology and all equivalent alternatives has never happened, is false, how do you know?)
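One standard way to formalize “how likely is an event we have never observed?” without collapsing to exactly zero is Laplace’s rule of succession, which estimates the probability as (s + 1) / (n + 2) for s successes in n trials. The sketch below applies it to the pause question; the trial count of 50 historical opportunities for a strategic-technology ban is an assumed, purely illustrative number, not a figure from the post.

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimate P(success on the next trial)
    as (s + 1) / (n + 2), so a never-observed event gets a small but
    nonzero probability instead of exactly 0."""
    return (successes + 1) / (trials + 2)

# Hypothetical: 0 successful bans across 50 assumed historical opportunities.
p_pause = rule_of_succession(0, 50)
print(f"{p_pause:.3f}")  # roughly 0.019, i.e. under 2%
```

Under this toy model, the answer to “0 or under 1 percent?” depends heavily on how many “trials” you think history contains: with 50 assumed opportunities the estimate lands just under 2%, and it only drops below 1% once you count more than about 100 failed opportunities.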