Regarding the tariffs, I have taken to saying “It’s not the end of the world, and it’s not even the end of world trade.” In the modern world, every decade sees a few global economic upheavals, and in my opinion that’s all this is: a strong player within the world trade system (China and the EU being the other strong players) deciding to do things differently. Among other things, it’s an attempt to do something about America’s trade deficits, and to make the country into a net producer rather than a net consumer. Those are huge changes, but now that they are being attempted, I don’t see any going back. The old situation was tolerated because it was too hard to do anything about, and the upper class was still living comfortably. I think a reasonable prediction is that world trade avoiding the US will increase, US national income may not grow as fast, but the US will re-industrialize (and de-financialize). Possibly there’s some interaction with the US dollar’s status as reserve currency too, but I don’t know what that would be.
Mitchell_Porter
Humans didn’t always speak in 50-word sentences. If you want to figure out how we came to be trending away from that, you should try to figure out how, when, and why that became normal in the first place.
I only skimmed this to get the basics, I guess I’ll read it more carefully and responsibly later. But my immediate impressions: The narrative presents a near future history of AI agents, which largely recapitulates the recent past experience with our current AIs. Then we linger on the threshold of superintelligence, as one super-AI designs another which designs another which… It seemed artificially drawn out. Then superintelligence arrives, and one of two things happens: We get a world in which human beings are still living human lives, but surrounded by abundance and space travel, and superintelligent AIs are in the background doing philosophy at a thousand times human speed or something. Or, the AIs put all organic life into indefinite data storage, and set out to conquer the universe themselves.
I find this choice of scenarios unsatisfactory. For one thing, I think the idea of explosive conquest of the universe once a certain threshold is passed (whether or not humans are in the loop) has too strong a hold on people’s imaginations. I understand the logic of it, but it’s a stereotyped scenario now.
Also, I just don’t buy this idea of “life goes on, but with robots and space colonies”. Somewhere I noticed a passage about superintelligence being released to the public, as if it were an app. Even if you managed to create this Culture-like scenario, in which anyone can ask for anything from a ubiquitous superintelligence but it makes sure not to fulfil wishes that are damaging in some way… you are then definitely in a world in which superintelligence is running things. I don’t believe in an elite human minority who have superintelligence in a bottle and then get to dole it out. Once you create superintelligence, it’s in charge. Even if it’s benevolent, humans and human life are not likely to go on unchanged; there is too much that humans can hope for that would change them and their world beyond recognition.
Anyway, that’s my impulsive first reaction, eventually I’ll do a more sober and studied response…
I don’t follow the economics of AI at all, but my model is that Google (Gemini) has oceans of money and would therefore be less vulnerable in a crash, and that OpenAI and Anthropic have rich patrons (Microsoft and Amazon respectively) who would have the power to bail them out. xAI is probably safe for the same reason, the patron being Elon Musk. China is a similar story, with the AI contenders either being their biggest tech companies (e.g. Baidu) or sponsored by them (Alibaba and Tencent being big investors in “AI 2.0”).
Feedback (contains spoilers):
Impression based on a quick skim because that’s all I have time for: It belongs to the genre “AI lab makes an AI, lab members interact with it as it advances, eventually it gets loose and takes over the world”. This is not a genre in which one expects normal literary virtues like character development; the real story is in the cognitive development of the AI. There’s no logical barrier to such a story having the virtues of conventional literature, but if the real point of the story is to describe a thought experiment or singularity scenario, one may as well embrace the minimalism. From that perspective, what I saw seemed logical, not surprising since you actually work in the field and know the concepts, jargon, debates, and issues… The ending I consider unlikely, because I think it’s very unlikely that a ubiquitous superintelligent agent responsive to human need and desire would leave the world going through familiar cycles. This world is just too evil and destructive from the perspective of human values, and if godlike power exists and can be harnessed in the service of human desire, things should change in a big way. (How a poet once expressed this thought.)
During the next few days, I do not have time to study exactly how you manage to tie together second-order logic, the symbol grounding problem, and qualia as Gödel sentences (or whatever that connection is). I am reminded of Hofstadter’s theory that consciousness has something to do with indirect self-reference in formal systems, so maybe you’re a kind of Hofstadterian eliminativist.
However, in response to this --
EN predicts that you will say that
-- I can tell you how a believer in the reality of intentional states would go about explaining you and EN. The first step is to understand what the key propositions of EN are; the next step is to hypothesize about the cognitive process whereby the propositions of EN arose from more commonplace propositions; the final step is to conceive of that cognitive process in an intentional-realist way, i.e. as a series of thoughts that occurred in a mind, rather than just as a series of representational states in a brain.
You mention Penrose. Penrose had the idea that the human mind can reason about the semantics of higher-order logic because brain dynamics is governed by highly noncomputable physics (highly noncomputable in the sense of Turing degrees, I guess). It’s a very imaginative idea, and it’s intriguing that quantum gravity may actually contain a highly noncomputable component (because of the undecidability of many properties of 4-manifolds, that may appear in the gravitational path integral).
Nonetheless, it seems an avoidable hypothesis. A thinking system can derive the truth of Gödel sentences so long as it can reason about the semantics of the initial axioms; all you need is a capacity for semantic reflection (I believe Feferman has a formal theory of this under the name “logical reflection”). Penrose doesn’t address this because he doesn’t even tackle the question of how anything physical has intentionality; he sticks purely to mathematics, physics, and logic.
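The reflection idea can be made precise. Here is the textbook schema (my sketch in standard notation; I am not certain it matches Feferman’s exact formulation):

```latex
% Uniform reflection over a trusted theory T (e.g. PA):
% for every formula \varphi, whatever T proves is true.
\mathrm{RFN}(T):\quad
  \forall n\,\bigl(\mathrm{Prov}_T(\ulcorner \varphi(\dot{n}) \urcorner)
  \rightarrow \varphi(n)\bigr)

% The G\"odel sentence of T asserts its own unprovability:
G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)

% Even the weakest reflection, a consistency statement, suffices:
T + \mathrm{Con}(T) \;\vdash\; G_T
```

So a system that semantically trusts its own axioms can climb to $G_T$ by iterating reflection, with no noncomputable physics required; the question this leaves open, which Penrose does not address, is where that semantic trust comes from in a physical system.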
My approach to this is Husserlian realism about the mind. You don’t start with mindless matter and hope to see how mental ontology is implicit in it or emerges from it. You start with the phenomenological datum that the mind is real, and you build on that. At some point, you may wish to model mental dynamics purely as a state machine, neglecting semantics and qualia; and then you can look for relationships between that state machine, and the state machines that physics and biology tell you about.
But you should never forget the distinctive ontology of the mental, that supplies the actual “substance” of that mental state machine. You’re free to consider panpsychism and other identity theories, interactionism, even pure metaphysical idealism; but total eliminativism contradicts the most elementary facts we know, as Descartes and Rand could testify. Even you say that you feel the qualia, it’s just that you think “from a rational perspective, it must be otherwise”.
“existence” itself may be a category error—not because nothing is real
If something is real, then something exists, yes? Or is there a difference between “existing” and “being real”?
Do you take any particular attitude towards what is real? For example, you might believe that something exists, but you might be fundamentally agnostic about the details of what exists. Or you might claim that the real is ineffable or a continuum, and so any existence claim about individual things is necessarily wrong.
qualia … necessary for our self-models, but not grounded in any formal or observable system
See, from my perspective, qualia are the empirical. I would consider the opposite view to be “direct realism”—experience consists of direct awareness of an external world. That would mean e.g. that when someone dreams or hallucinates, the perceived object is actually there.
What qualic realism and direct realism have in common is that they also assume the reality of awareness: a conscious subject aware of phenomenal objects. I assume your own philosophy denies this as well. There is no actual awareness; there are only material systems evolved to behave as if they are aware and as if there are such things as qualia.
It is curious that the eliminativist scenario can be elaborated that far. Nonetheless, I really do know that something exists and that “I”, whatever I may be, am aware of it; whether or not I am capable of convincing you of this. And my own assumption is that you too are actually aware, but have somehow arrived at a philosophy which denies it.
Descartes’s cogito is the famous expression of this, but I actually think a formulation due to Ayn Rand is superior. We know that consciousness exists, just as surely as we know that existence exists; and furthermore, to be is to be something (“existence is identity”), to be aware is to know something (“consciousness is identification”).
What we actually know by virtue of existing and being conscious probably goes considerably beyond even that; but negating either of those already means that you’re drifting away from reality.
This is an interesting demonstration of what’s possible in philosophy, and maybe I’ll want to engage in detail with it at some point. But for now I’ll just say, I see no need to be an eliminativist or to consider eliminativism, any more than I feel a need to consider “air eliminativism”, the theory that there is no air, or any other eliminativism aimed at something that obviously exists.
Interest in eliminativism arises entirely from the belief that the world is made of nothing but physics, and that physics doesn’t contain qualia, intentionality, consciousness, selves, and so forth. Current physical theory certainly contains no such things. But did you ever try making a theory that contains them?
What’s up with incredibly successful geniuses having embarrassing & confusing public meltdowns? What’s up with them getting into naziism in particular?
Does this refer to anyone other than Elon?
But maybe the real question intended, is why any part of the tech world would side with Trumpian populism? You could start by noting that every modern authoritarian state (that has at least an industrial level of technology) has had a technical and managerial elite who support the regime. Nazi Germany, Soviet Russia, and Imperial Japan all had industrial enterprises, and the people who ran them participated in the ruling ideology. So did those in the British empire and the American republic.
Our current era is one in which an American liberal world order, with free trade and democracy as universal norms, is splintering back into one of multiple great powers and civilizational regions. Liberalism no longer had the will and the power to govern the world; the power vacuum was filled by nationalist strongmen overseas; and now in America too, one has stepped into the gap left by the weak late-liberal leadership, and is creating a new regime governed by different principles (balanced trade instead of free trade, spheres of influence rather than universal democracy, etc).
Trump and Musk are the two pillars of this new American order, and represent different parts of a coalition. Trump is the figurehead of a populist movement, Musk is foremost among the tech oligarchs. Trump is destroying old structures of authority and creating new ones around himself, Musk and his peers are reorganizing the entire economy around the technologies of the “fourth industrial revolution” (as they call it in Davos).
That’s the big picture according to me. Now, you talk about “public meltdowns” and “getting into naziism”. Again I’ll assume that this is referring to Elon Musk (I can’t think of anyone else). The only “meltdowns” I see from Musk are tweets or soundbites that are defensive or accusatory, and achieve 15 minutes of fame. None of it seems very meaningful to me. He feuds with someone, he makes a political statement, his fans and his haters take what they want, and none of it changes anything about the larger transformations occurring. It may be odd to see a near-trillionaire with a social media profile more like a bad-boy celebrity who can’t stay out of trouble, but it’s not necessarily an unsustainable persona.
As for “getting into naziism”, let’s try to say something about what his politics or ideology really are. Noah Smith just wrote an essay on “Understanding America’s New Right” which might be helpful. What does Elon actually say about his political agenda? First it was defeating the “woke mind virus”, then it was meddling in European politics, now it’s about DOGE and the combative politics of Trump 2.0.
I interpret all of these as episodes in the power struggle whereby a new American nationalism is displacing the remnants of the cosmopolitan globalism of the previous regime. The new America is still pretty cosmopolitan, but it does emphasize its European and Christian origins, rather than repressing them in favor of a secular progressivism that is intended to embrace the entire world.
In all this, there are echoes of the fascist opposition to communism in the 20th century, but in a farcical and comparatively peaceful form. Communism was a utopian secular movement that replaced capitalism and nationalism with a new kind of one-party dictatorship that could take root in any industrialized society. Fascism was a nationalist and traditionalist imitation of this political form, in which ethnicity rather than class was the decisive identity. They fought a war in which tens of millions died.
MAGA versus Woke, by comparison, is a culture war of salesmen versus hippies. Serious issues of war and peace, law and order, humanitarianism and national survival are interwoven with this struggle, because this is real life, but this has been a meme war more than anything, in which fascism and communism are just historical props.
Via David Gerard’s forum, I learned of a recent article called “The questions ChatGPT shouldn’t answer”. It’s a study of how ChatGPT replies to ethical dilemmas, written with an eye on OpenAI’s recent Model Spec, and the author’s conclusion is that AI shouldn’t answer ethical questions at all, because (my paraphrase) ethical intelligence is acquired by learning how to live, and of course that’s not how current AI acquires its ethical opinions.
Incidentally, don’t read this article expecting scholarship; it’s basically a sarcastic op-ed. I was inspired to see if GPT-4o could reproduce the author’s own moral framework. It tried, but its imitations of her tone stood out more. My experiment was even less scientific and systematic than hers, and yet I found her article, and 4o’s imitation, tickling my intuition in a way I wish I had time to overthink.
To begin with, it would be good to understand better what is going on when our AIs produce ethical discourse or adopt a style of writing, so that we really understand how it differs from the way that humans do it. The humanist critics of AI are right enough when they point out that AI lacks almost everything that humans draw upon. But their favorite explanation of the mechanism that AI does employ is just “autocomplete”. Eventually they’ll have to develop a more sophisticated account, perhaps drawing upon some of the work in AI interpretability. But is interpretability research anywhere near explaining an AI’s metaethics or its literary style?
Thirty years ago Bruce Sterling gave a speech in which he said that he wouldn’t want to talk to an AI about its “bogus humanity”, he would want the machine to be honest with him about its mechanism, its “social interaction engine”. But that was the era of old-fashioned rule-based AI. Now we have AIs which can talk about their supposed mechanism, as glibly as they can pretend to have a family, a job, and a life. But the talk about the mechanism is no more honest than the human impersonation, there’s no sense in which it brings the user closer to the reality of how the AI works; it’s just another mask that we know how to induce the AI to wear.
Looking at things from another angle, the idea that authentic ethical thinking arises in human beings from a process of living, learning, and reflecting, reminds me of how Coherent Extrapolated Volition is supposed to work. It’s far from identical; in particular CEV is supposed to arrive at the human-ideal decision procedure without much empirical input beyond a knowledge of the human brain’s cognitive architecture. Instead, what I see is an opportunity for taxonomy; comparative studies in decision theory that encompass both human and AI, and which pay attention to how the development and use of the decision procedure is embedded in the life cycle (or product cycle) of the entity.
This is something that can be studied computationally, but there are conceptual and ontological issues too. Ethical decision-making is only one kind of normative decision-making (for example, there are also norms for aesthetics, rationality, lawfulness); normative decision-making is only one kind of action-determining process (some of which involve causality passing through the self, while others don’t). Some forms of “decision procedure” intrinsically involve consciousness, others are purely computational. And ideally one would want to be clear about all this before launching a superintelligence. :-)
I consider myself broadly aligned with rationalism, though with a strong preference for skeptical consequentialism than overconfident utilitarianism
OK, thanks for the information! By the way, I would say that most people active on Less Wrong, disagree with some of the propositions that are considered to be characteristic of the Less Wrong brand of rationalism. Disagreement doesn’t have to be a problem. What set off my alarms was your adversarial debut—the rationalists are being irrational! Anyway, my opinion on that doesn’t matter since I have no authority here, I’m just another commenter.
The rationalist community is extremely influential in both AI development and AI policy. Do you disagree?
It was. It still has influence, but e/acc is in charge now. That’s my take.
If you couldn’t forecast the Republicans would be in favor of less regulation
If they actually saw AI as the creation of a rival to the human race, they might have a different attitude. Then again, it’s not as if that’s why the Democrats favored regulation, either.
Qwen … Manus
I feel like Qwen is being hyped. And isn’t Manus just Claude in a wrapper? But fine, maybe I should put Alibaba next to DeepSeek in my growing list of contenders to create superintelligence, which is the thing I really care about.
But back to the actual topic. If Gwern or Zvi or Connor Leahy want to comment on why they said what they did, or how their thinking has evolved, that would have some interest. It would also be of interest to know where certain specific framings, like “China doesn’t want to race, so it’s up to America to stop and make a deal”, came from. I guess it might have come from politically minded EAs, rather than from rationalism per se, but that’s just a guess. It might even come from somewhere entirely outside the EA/LW nexus.
I figured this was part of a 19th-century trend in Trump’s thought—mercantilism, territorial expansion, the world system as a game of great powers rather than a parliament of nations. The USA will be greater if it extends throughout the whole of North America, and so Canada must be absorbed.
It hadn’t occurred to me that the hunger for resources to train AI might be part of this. But I would think that even if it is part of it, it’s just a part.
What do YOU think?
My first thought is, it’s not clear why you care about this. This is your first post ever, and your profile has zero information about you. Do you consider yourself a Less Wrong rationalist? Are you counting on the rationality community to provide crucial clarity and leadership regarding AI and AI policy?
My second thought is, if a big rethink is needed, it should also include the fact that in Trump 2.0, the US elected a revolutionary regime whose policies include AI accelerationism. I don’t think anyone saw that coming either, and I think that’s more consequential than DeepSeek-r1. Maybe a Chinese startup briefly got ahead of its American rivals in the domain of reasoning LLMs; but most of the contenders are still within American borders, and US AI policy is now ostensibly in the hands of a crypto VC who is a long-time buddy of Elon’s.
Musk has just been on Ted Cruz’s podcast, and gave his take on everything from the purpose of DOGE to where AI and robotics will be ten years from now (AI smarter than the smartest human, humanoid robots everywhere, all goods and services essentially free). He sounded about as sane as a risk-taking tech CEO who managed to become the main character on the eve of singularity, could be.
I’ve just noticed in the main post, the reference to “high-functioning” bipolar individuals. I hadn’t even realized that is an allowed concept, I had assumed that bipolar implies dysfunctional… I feel like these psychological speculations are just a way of expressing alienation with who he has become. It’s bad enough that his takes are so mid and his humor is so cringe, but now he’s literally allied with Trump and boosting similar movements worldwide.
If someone finds that an alien headspace to contemplate, it might be more comforting to believe that he’s going crazy. But I think that in reality, like most members of today’s right wing, he’s totally serious about trying to undo 2010s thinking on race, gender, and nation. That’s part of his vision for the future, along with the high technology. When I think of him like that, everything clicks into place for me.
we are likely to end up appendages to something with the intelligence of a toxoplasma parasite, long before a realistic chance of being wiped out by a lightcone-consuming alien robointelligence of our own creation.
All kinds of human-AI relationship are possible (even a complete replacement of humanity, so that there is nothing but AIs); but unless they mysteriously coordinate to stop the research, the technical side of AI is going to keep advancing. If anything, AI whisperers on net seem likely to encourage humanity to keep going in that direction.
Since then I’ve come to conclude that string theory is probably a dead end, albeit an astonishingly alluring one for a particular type of person.
The more you know about particle physics and quantum field theory, the more inevitable string theory seems. There are just too many connections. However, identifying the specific form of string theory that corresponds to our universe is more of a challenge, and not just because of the fabled 10^500 vacua (though it could be one of those). We don’t actually know either all the possible forms of string theory, or the right way to think about the physics that we can see. The LHC, with its “unnaturally” light Higgs boson, already mortally wounded a particular paradigm for particle physics (naturalness) which in turn was guiding string phenomenology (i.e. the part of string theory that tries to be empirically relevant). So along with the numerical problem of being able to calculate the properties of a given string vacuum, the conceptual side of string theory and string phenomenology is still wide open for discovery.
I get AUD$1500 per month, which is one-hundredth or less of what you’re now getting. I accomplish only a very small fraction of what I would like to be able to do (e.g. just identifying many worthy actions rather than getting to carry them out), it’s been that way for many years, and living environment is a huge factor in that.
So if I had your resources, the first thing I would be doing is change my working environment. I’d probably move from Australia to a particular location in North America, rent a room there for six months to begin with, and set myself up to actually get things done. (At that point I still would have used less than 1% of available resources.)
The most important thing I could be doing is working directly on “superalignment”, in the specific sense of equipping an autonomous superintelligence with values sufficient to boot up a (trans)human-friendly civilization from nothing. I also work to keep track of the overall situation and to understand other paradigms, but my usual assumption (as described in recent posts) is that we are now extremely close to the creation of superintelligence and the resulting decisive loss of human control over our destiny; that the forces accelerating AI are overwhelmingly more powerful than those which would pause it or ban it; and that the best hope for achieving a positive outcome by design rather than by sheer good luck is public-domain work on superalignment in the sense that I defined, which then has a chance of being picked up by the private labs that are rushing us over the edge.
As I have intimated, I already have a number of concrete investigations I could carry out. My most recent checklist for what superalignment in this sense requires, is in the last paragraph here: “problem-solving superintelligence… sufficiently correct ‘value system’… model of metaphilosophical cognition”. Last month I expressed interest in revisiting June Ku’s CEV-like proposal from the perspective of Joshua Clymer’s ideas. It’s important to be able to exhibit concrete proposals, but for me the fundamental thing is to get into a situation that is better for thinking in general. Presumably there are many others in the same situation.
If an LLM had feelings, by what causal process would they end up being expressed?
Are there any examples yet, of homology or cohomology being applied to cognition, whether human or AI?
Inspired by critical remarks from @Laura-2 about “bio/acc”, my question is, when and how does something like this give rise to causal explanation and actual cures? Maybe GWAS is a precedent. You end up with evidence that a particular gene or allele is correlated with a particular trait, but you have no idea why. That lets you (and/or society) know some risks, but it doesn’t actually eliminate disease, unless you think you can get there by editing out risky alleles, or just screening embryos. Otherwise this just seems to lead (optimistically) to better risk management, and (pessimistically) to a “Gattaca” society in which DNA is destiny, even more than it is now.
I’m no biologist. I’m hoping someone who is, can give me an idea of how far this GWAS-like study of genotype-phenotype correlations, actually gets us towards new explanations and new cures. What’s the methodology for closing that gap? What extra steps are needed? How much have we benefited from GWAS so far?