The input limit of 2k characters is rather limiting, albeit understandable. Giving these instructions to an existing LLM (I used Gemini 2.5 Pro) gives longer, better results without the need for a dedicated tool.
AlphaAndOmega
Moderately interesting news in AI image gen:
It’s been a good while since we’ve had AI chat assistants able to generate images on user request. Unfortunately, for about as long, we’ve had people being peeved at the disconnect between what they asked for and what they actually got. Particularly annoying was the tendency of assistants to claim to have generated what you desired, or to have edited an image to change it, without *actually* doing so.
This was an unfortunate consequence of the fact that the LLM (the assistant persona you speak to) and the *actual* image generator that spits out images from prompts were two entirely separate entities. The LLM doesn’t have any more control over the image model than you do when running something like Midjourney or Stable Diffusion. It’s sending a prompt through a function call, getting an image in response, and then trying to modify prompts to meet user needs. Depending on how lazy the devs are, it might not even be ‘looking’ at the final output at all.
The image models, on the other hand, are a fundamentally different architecture, usually diffusion-based (Google it for a better explanation, but the gist is that they iteratively hallucinate an image out of a sample of random noise until it resembles the desired result), whereas LLMs use the Transformer architecture. The image models do have some understanding of semantics, but they’re far stupider than LLMs when it comes to understanding the finer meaning of prompts.
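Purely as an illustration of that old setup, here’s a minimal sketch in Python, with both model calls stubbed out; none of the function names correspond to any real vendor’s API, it’s just the shape of the tool-call arrangement described above:

```python
# Minimal sketch of the old "LLM calls a separate image model" setup.
# Both model calls are stubbed out; in reality they'd hit separate services.

def chat_llm_complete(instruction: str) -> str:
    """Stand-in for the chat LLM rewriting the user's request into a prompt."""
    return f"A detailed illustration of: {instruction}"

def diffusion_text_to_image(prompt: str) -> bytes:
    """Stand-in for a separate diffusion model that returns image bytes."""
    return b"<image bytes>"

def handle_image_request(user_request: str) -> bytes:
    # 1. The assistant persona only writes a text prompt...
    prompt = chat_llm_complete(user_request)
    # 2. ...which is passed, via a function/tool call, to a different model.
    image = diffusion_text_to_image(prompt)
    # 3. The LLM may never inspect the returned pixels, which is why it can
    #    claim an "edit" happened when it merely re-prompted.
    return image

if __name__ == "__main__":
    handle_image_request("a cat reading a newspaper, with legible headlines")
```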
This has now changed.
Almost half a year back, OpenAI [teased](https://x.com/gdb/status/1790869434174746805) the ability of their then unreleased GPT-4o to generate images *natively*. It was the LLM (more of a misnomer now than ever) actually making the image, in the same manner it could output text or audio.
The LLM doesn’t just “talk” to the image generator—it *is* the image generator, processing everything as tokens, much like it handles text or audio.
Since then, we’d had nothing but radio silence, barring a few leaks of front-end code suggesting OAI would finally switch from DALL-E 3 to GPT-4o for image generation, as well as Altman’s assurances that they hadn’t canned the project on safety grounds.
Unfortunately for him, [Google has beaten them to the punch](https://developers.googleblog.com/en/experiment-with-gemini-20-flash-native-image-generation/). Gemini 2.0 Flash Experimental (don’t ask) has now been blessed with the ability to directly generate images. I’m not sure if this has rolled out to the consumer Gemini app, but it’s readily accessible on their developer preview.
First impressions: [It’s good.](https://x.com/robertriachi/status/1899854394751070573)
You can generate an image, and then ask it to edit a feature. It will then edit the *original* image and present the version modified to your taste, unlike all other competitors, who would basically just re-prompt and hope for better luck on the second roll.
Image generation just got way better, at least in the realm of semantic understanding. Most of the usual giveaways of AI-generated imagery, such as butchered text, are largely solved. It isn’t perfect, but you’re looking at a failure rate of 5-10% as opposed to >80% when using DALL-E or Flux. It doesn’t beat Midjourney on aesthetics, but we’ll get there.
You can imagine the scope for chicanery, especially if you’re looking to generate images with large amounts of verbiage or numbers involved. I’d expect the usual censoring in consumer applications, especially since the LLM has finer control over things. But it certainly massively expands the mundane utility of image generation, and is something I’ve been looking forward to ever since I saw the capabilities demoed.
Flash 2.0 Experimental is also a model that’s dirt cheap on the API, and while image gen definitely burns more tokens, it’s a trivial expense. I’d strongly expect Google to make this free just to steal OAI’s thunder.
>Benzodiazepines are anti-anxiety drugs that calm fear but don’t prevent panic attacks, while tricyclic antidepressants like imipramine prevent panic attacks but don’t do anything about fear.3
As far as I’m aware, the claim that benzos don’t prevent panic attacks is incorrect!
We don’t like to prescribe them for that purpose, or for most cases of Generalized Anxiety Disorder, as they’re strongly habit forming and sedative, but they are very effective in that regard.
https://acnp.org/g4/GN401000130/CH.html
“The most extensively studied benzodiazepine for the treatment of panic has been the high potency triazolobenzodiazepine alprazolam. The Cross National Collaborative Panic Study (CNCPS) (44), a multicentre study conducted in two phases, is generally regarded as the most ambitious attempt to demonstrate the antipanic efficacy of alprazolam. Phase One of the CNCPS (45) randomly assigned 481 panic disorder patients (80% of whom had agoraphobia) to alprazolam or placebo, utilizing a double blind design and flexible dose schedule. All groups received their respective treatments for 8 weeks. Treatment was then discontinued over 4 weeks, and subjects were followed for 2 weeks after discontinuance. The mean dose of alprazolam employed was 5.7mg/day. Alprazolam was shown to have a rapid onset of effect, with most improvement occurring in the first week of treatment. Alprazolam was far superior to placebo on measures of panic attacks, anticipatory anxiety and phobic avoidance; at the 8 week endpoint, 55% of alprazolam treated patients were panic free, compared to 32% of those given placebo. Phase two of the Cross National Collaborative Panic Study (46) attempted to not only replicate phase one’s results in a larger sample, but also to compare alprazolam’s efficacy to that of a typical antidepressant treatment for panic. 1168 panic patients were randomly assigned to alprazolam, imipramine, or placebo for 8 weeks. This follow up study confirmed the earlier findings demonstrating superior antipanic activity of alprazolam (mean= 5.7mg/day) and imipramine (mean=155mg/day) compared with placebo, with 70% of both imipramine and alprazolam groups experiencing amelioration of panic compared to 50% for placebo. Significant drug effects were demonstrated for anticipatory anxiety and phobia. As in the phase 1 study, most of alprazolam’s beneficial effects were witnessed in the first and second weeks; imipramine, however, took four weeks or more to exert antipanic action. The main criticism of the Cross-National Study, forwarded by Marks et al (47), was that the high level (approximately 30%) of placebo dropouts due to inefficient treatment may have confounded the analysis of the endpoint data. In addition to the CNCPS, several trials have conclusively established alprazolam’s efficacy in the acute and long term treatment of panic (48-52,21). Almost all studies found alprazolam to be superior to placebo in treating phobic avoidance, reducing anticipatory anxiety, and lessening overall disability. Further, comparator studies of alprazolam and imipramine found the two medications comparable in efficacy for panic attacks, phobias, Hamilton anxiety, CGI and disability. These studies have additionally revealed alprazolam to be uniformly better tolerated than imipramine, with a quicker onset of therapeutic effect. ”
“Clonazepam was found to be superior to placebo in 2 placebo-controlled studies.35,36 In a 9-week study,35 74% of patients treated with 1 mg/day of clonazepam (administered b.i.d. after up-titration during 3 days) and 56% of placebo-treated patients were completely free of panic attacks at the study endpoint.”
I’m not sure if it’s you or the author making the claim that they don’t prevent panic attacks, but I hope this is a small sample of the evidence base showing them to be strongly effective in that regard, which only increases our chagrin, given that prescribing them can lead to significant harm in the long run.
I have ADHD, and also happen to be a psychiatry resident.
As far as I can tell, it has been nothing but negative in my personal experience. It is a handicap, one I can overcome with coping mechanisms and medication, but I struggle to think of any positive impact on my life.
For a while, there were evopsych theories postulating that ADHD had an adaptive benefit, but evopsych is a shaky field at the best of times, and no clear benefit was demonstrated.
https://pubmed.ncbi.nlm.nih.gov/32451437/
>All analyses performed support the presence of long-standing selective pressures acting against ADHD-associated alleles until recent times. Overall, our results are compatible with the mismatch theory for ADHD but suggest a much older time frame for the evolution of ADHD-associated alleles compared to previous hypotheses.
The ancient ancestral environment probably didn’t reward strong executive function and consistency in planning as strongly as agricultural societies did. Even so, the study found that the prevalence of ADHD-associated alleles was dropping even during Palaeolithic times, so it wasn’t even something selected for in hunter-gatherers!
I hate having ADHD, and sincerely hope my kids don’t. I’m glad I’ve had a reasonably successful life despite having it.
>Safety is limited to refusals, notably including refusals for medical or legal advice. Have they deliberately restricted those abilities to avoid lawsuits or to limit public perceptions of expertise being overtaken rapidly by AI?
I think it’s been well over a year since I’ve had an issue getting an LLM to give me medical advice, including GPT-4o and other SOTA models like Claude 3.5/3.7, Grok 3 and Gemini 2.0 Pro. I seem to recall that the original GPT-4 would occasionally refuse, but could be coaxed into it.
I am a doctor, and I tend to include that information either in model memory or in a prompt (mostly to encourage the LLM to assume background knowledge and ability to interpret facts). Even without it, my impression is that most models simply append a “consult a human doctor” boilerplate disclaimer instead of refusing.
I would be rather annoyed if GPT-4.5 were a reversion in that regard, as I find LLMs immensely useful for quick checks on topics I’m personally unfamiliar with (and while hallucinations happen, they’re quite rare now, especially with search, reasoning and grounding). I don’t think OAI or other AI companies have faced any significant amount of litigation from either people who received bad advice or doctors afraid of losing their jobs.
I’m curious about whether anyone has had any issues in that regard, though I’d expect not.
I’d wear a suit more often if dry-cleaning wasn’t a hassle. Hmm... I should check if machine-washable suits are a thing.
At least in the UK, suits have become a rarity among medical professionals. You do see some consultants wear them, but they’re treated as strictly optional, and nobody will complain about you showing up in just a shirt and chinos. I’m keeping my suits neatly folded for the next conference I need to attend; I’ve got no other excuse to wear them that warrants the hassle, IMO.
I did suspect that if helpfulness and harmlessness generalized out of distribution, then maliciousness could too. That being said, I didn’t expect Nazi leanings to emerge as a side effect of finetuning on malicious code!
>Pregnant woman goes into labor at 22 weeks, hospital tells her she has no hope, she drives 7 miles to another hospital she finds on facebook and now she has a healthy four year old. Comments have a lot of other ‘the doctors told us our child would never survive, but then we got a second opinion and they did anyway’ stories.
At 22 weeks, premature delivery without intensive support has a survival rate of about 0%.
A study analyzing data from 2020 to 2022 across 636 U.S. hospitals reported that among infants born at 22 weeks who received postnatal life support, 35.4% survived to hospital discharge. However, survival without severe complications was notably lower, at 6.3%.
https://pubmed.ncbi.nlm.nih.gov/39323403/
>Conclusions: Survival ranged from 24.9% at 22 weeks to 82.1% at 25 weeks, with low proportions of infants surviving without complications, prolonged lengths of hospital stay, and frequent technology dependence at all gestational ages.
When it comes to complications, “severe” is no exaggeration. Long-term cognitive impairment occurs in the vast majority of cases, and is crippling more often than not.
I think it’s ill-advised to pick this particular case as an example of doctors giving poor or inadequate advice. It’s entirely possible that the hospital didn’t have the facilities for the level of intensive care a pre-term delivery at 22 weeks demanded.
The woman, and her daughter, were enormously lucky. I’m not an OB-gyn, but if I were in their shoes I would strongly counsel against attempting delivery and resuscitation. Of course, I respect patient autonomy enough that I would have gone ahead if the patient truly understood the risks involved, but without the benefit of hindsight I wouldn’t think it was in the best interest of the child.
Who knows how long regulatory inertia might last? I agree it’ll probably add at least a few years to my employability, past the date where an AI can diagnose, plan and prescribe better than I can. It might not be something to rely on, if you end up with a regime where a single doctor rubberstamps hundreds of decisions, in place of what a dozen doctors did before. There’s not that much difference between 90% and 100% unemployment!
Evidence that adult cognition can be improved is heartening. I’d always had a small amount of fear regarding being “locked in” to my current level of intelligence with no meaningful scope for improvement. Long ago, in a more naive age, the fear was of children being enhanced to leave their parents in the dirt. Now, it looks like AI is improving faster than our biotechnology is.
It’s always a pleasure to read deep dives into genetic engineering, and this one was uniquely informative, though that’s to be expected from GeneSmith.
Thank you for your insight. Out of idle curiosity, I tried putting your last query into Gemini 2 Flash Thinking Experimental and it told me yes first-shot.
Here’s the final output; it’s absolutely beyond my ability to evaluate, so I’m curious whether you think it went about it correctly. I can also share the full CoT if you’d like, but it’s lengthy:
https://ibb.co/album/rx5Dy1
(Image since even copying the markdown renders it ugly here)
I happen to be a doctor with an interest in LW and associated concerns, who discovered a love for ML far too late for me to reskill and embrace it.
My younger cousin is a mathematician currently doing an integrated Masters and PhD. About a year back, I’d been trying to demonstrate to him the ever-increasing capability of SOTA LLMs at maths, and asked him to pose questions they couldn’t trivially answer.
He chose “is the one-point compactification of a Hausdorff space itself Hausdorff?”.
At the time, all the models invariably insisted that the answer was no. I ran the prompt multiple times on the best models available then. My cousin said this was incorrect, and proceeded to sketch out a proof (which was quite simple once I finally understood that much of the jargon represented rather simple ideas at their core).
I ran into him again when we were both visiting home, and I decided to run the same question through the latest models to gauge their improvement.
I tried Gemini 1206, Gemini Flash Thinking Experimental, Claude 3.5 Sonnet (New) and GPT-4o.
Other than reinforcing the fact that AI companies have abysmal naming schemes, to my surprise almost all of them gave the correct answer, barring Claude, but it was hampered by Anthropic being cheapskates and turning on the concise responses mode.
I showed him how the extended reasoning worked for Gemini Flash (unlike o1, it doesn’t hide its thinking tokens), and I could tell that he was shocked and impressed, and couldn’t fault the reasoning process it and the other models went through.
To further shake him up, I had him find some recent homework problems he’d been assigned in his course (he’s in a top-3 maths program in India) and used the multimodality inherent in Gemini to just take a picture of an extended question and ask it to solve it.* It did so, again, flawlessly.
*So I wouldn’t have to go through the headache of reproducing it in latex or markdown.
He then demanded we try another, and this time he expressed doubts that the model could handle a problem that was compact yet vague in the absence of context it hadn’t been given. No surprises again.
He admitted that this was the first time he took my concerns seriously, though he got a rib in by saying doctors would be off the job market before mathematicians. I conjectured that was unlikely, given that maths and CS performance are more immediately beneficial to AI companies, as they are easier to drop in and automate, while also having direct benefits for ML, with the goal of replacing human programmers and having the models recursively self-improve. Not to mention that performance in those domains is easier to make superhuman with the use of RL and automated theorem provers for ground truth. Oh well, I reassured him, we’re probably all screwed, and in short order, to the point where there’s not much benefit in quibbling over whose layoffs come a few months later.
This post made me deeply ruminate on what a posthuman future would look like, particularly the issue of “fairness”, or what humanity (or recognizable descendants) could plausibly ask of far more optimized beings. Beings that may or may not be altruistic or hold charitable thoughts towards their progenitors and their more direct descendants.
https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/
>The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.
>This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat’s cortex. Then they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
Absolute napkin math while I’m sleep-deprived at the hospital, but you’re looking at something around 86 trillion ML neurons, or about 516 quadrillion parameters, to emulate the human brain. That’s... a lot.
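For what it’s worth, here’s a rough reconstruction of that napkin math; the ~86 billion biological neurons and the ~1,000 artificial neurons per biological neuron come from the article above, while the ~6,000 parameters per artificial neuron is purely an assumed connection count chosen to land in the same ballpark:

```python
# Rough reconstruction of the napkin math above (assumptions, not measurements).
biological_neurons = 86e9           # ~86 billion neurons in a human brain
artificial_per_biological = 1_000   # ~1,000 DNN neurons per biological neuron (per the study)
params_per_artificial = 6_000       # assumed average connections per artificial neuron

ml_neurons = biological_neurons * artificial_per_biological
parameters = ml_neurons * params_per_artificial

print(f"ML neurons: {ml_neurons:.2e}")   # ~8.6e13, i.e. ~86 trillion
print(f"Parameters: {parameters:.2e}")   # ~5.2e17, i.e. ~516 quadrillion
```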
Now, I am a doctor, but I’m certainly no neurosurgeon. That being said, I’m not sure it’s particularly conducive to the functioning of a human brain to stuff it full of metallic wires. Leaving aside that Neuralink and co are very superficial and don’t penetrate particularly deep into the cortex (do they even have to? Idk, the grey matter is on the outside anyway), it strikes me as an electrical engineer’s nightmare to even remotely get this wired up and working. The crosstalk. The sheer disruption to homeostasis...
If I had to bet on mind uploading, the first step would be creating an AGI. To make that no longer my headache, of course.
Not an option? Eh, I’d look for significantly more lossy options than hooking up every neuron. I think it would be far easier to feed behavioral and observational data, alongside tamer BCIs, to train a far more tractable (in terms of size) model to mimic me, to a degree indistinguishable to a (blinded) outside observer. It certainly beats being the world’s Literal Worst MRI Candidate, and probably won’t kill you outright. I’m not sure the brain will be remotely close to functional by the time you’re done skewering it like that, which makes me assume that the data you end up collecting any significant distance into the process will be garbage from dying neuronal tissue.
There are two different considerations at play here:
Whether global birth rates/total human population will decline.
and
Whether that decline will be a “bad” thing.
In the case of the former:
I think that a “business as usual” or naive extrapolation of demographic trends is a bad idea, when AGI is imminent. In the case of population, it’s less bad than usual, at least compared to things like GDP. As far as I’m concerned, the majority of the probability mass can be divvied up between “baseline human population booms” and “all humans die”.
Why might it boom? (The bust case doesn’t need to be restated on LW of all places).
To the extent that humans consider reproduction to be a terminal value, AI will make it significantly cheaper and easier. AI-assisted creches, or reliable robo-nannies that don’t let their wards succumb to what are posited as the ills of too much screen time or improper socialization, will mean that much of the unpleasantness of raising a child can be delegated, in much the same manner that a billionaire faces no real constraints on their QOL from having a nigh-arbitrary number of kids when they can afford as many nannies as they please. You hardly need to be a billionaire to achieve that: it’s within reach of UMC Third Worlders because of income inequality, and while more expensive in the West, hardly insurmountable for successful DINKs. Fertility is currently highest among the poor, drops precipitously with income, but rises again in the realms of the super-wealthy.
What this retains are the aspects of raising a child most people consider universally cherished, be it the warm fuzzy glow of interacting with them, watching them grow and develop, or the more general sense of satisfaction it entails.
If, for some reason, more resource-rich entities like governments desire more humans around, advances like artificial wombs and said creches would allow large population cohorts to be raised without much in the way of the usual drawbacks seen today, as in the dysfunction of orphanages. This counts as a fallback measure in case the average human simply can’t be bothered to reproduce themselves.
The kind of abundance/bounded post-scarcity we can expect will mean no significant downsides from the idle desire to have kids.
Not all people succumb to hyper-stimuli replacements, and the ones who don’t will have far more resources to indulge their natal instincts.
As for the latter:
Today, and for most of human history, population growth has robustly correlated with progress and invention, be it technological or cultural, especially technological. That will almost certainly cease to be so when we have non-human intelligences or even superintelligences about, that can replace the cognitive or physical labour that currently requires humans.
It costs far less to spool up a new instance of GPT-4 than it does to conceive and then raise a child to be a productive worker.
You won’t need human scientists, or artists, or anything else really; AI can and will fill those roles better than we can.
I’m also bullish on the potential for anti-aging therapy, even if our current progress on AGI was to suddenly halt indefinitely. Mere baseline human intelligence seems sufficient to the task within the nominal life expectancy of most people reading this, as it does for interplanetary colonization or constructing Dyson Swarms. AI would just happen to make it all faster, and potentially unlock options that aren’t available to less intelligent entities, but even we could make post-scarcity happen over the scale of a century, let alone a form of recursive self-improvement through genetic engineering or cybernetics.
From the perspective of a healthy baseliner living in a world with AGI, you won’t notice any of the current issues plaguing demographically senile or contracting populations, such as failing infrastructure, unsustainable healthcare costs, a loss of impetus when it comes to advancing technology, or fewer people around to make music/art/culture/ideas. Whether there are a billion, ten billion or a trillion other biological humans around will be utterly irrelevant, at least for the deep-seated biological desires we developed in an ancestral environment where we lived and died in the company of about 150 others.
You won’t be lonely. You won’t be living in a world struggling to maintain the pace of progress you once took for granted, or worse, watching everything slowly decay around you.
As such, I personally don’t consider demographic changes to be worth worrying about, really. On long enough time scales, evolutionary pressures will ensure that pro-natal populations reach carrying capacity. In the short or medium term, with median AGI timelines, it’s exceedingly unlikely that most current countries with sub-replacement TFR will suffer outright, in the sense that their denizens will notice a reduced QOL. Sure, in places like China, Korea, or Japan, where such issues are already pressing, they might have to weather at most a decade or so, but even they will benefit heavily from automation making a lack of humans a moot issue.
Have you guys tried the inverse, namely tamping down the refusal heads to make the model output answers to queries it would normally refuse?
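To make the question concrete, here’s a minimal, framework-agnostic sketch of the kind of intervention I mean, assuming you already have per-head attention outputs as a tensor and a list of heads identified as mediating refusals (all names and indices here are hypothetical; it’s the idea, not anyone’s actual implementation):

```python
import torch

def damp_heads(head_outputs: torch.Tensor,
               heads: list[tuple[int, int]],
               layer: int,
               alpha: float = 0.0) -> torch.Tensor:
    """Scale down the outputs of selected attention heads in one layer.

    head_outputs: [batch, seq, n_heads, d_head] activations for `layer`.
    heads: (layer, head) pairs previously identified as "refusal heads".
    alpha: 0.0 removes their contribution entirely; values in (0, 1) damp it.
    """
    out = head_outputs.clone()
    for l, h in heads:
        if l == layer:
            out[:, :, h, :] *= alpha
    return out

# Toy usage with random activations, just to show the shape of the intervention.
acts = torch.randn(1, 16, 12, 64)        # [batch, seq, n_heads, d_head]
refusal_heads = [(7, 3), (7, 9)]         # hypothetical (layer, head) pairs
patched = damp_heads(acts, refusal_heads, layer=7, alpha=0.0)
```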
I will regard with utter confusion someone who doesn’t immediately think of the last place they saw something when they’ve lost it.
It’s fine to state the obvious on occasion, it’s not always obvious to everyone, and like I said in the parent comment, this post seems to be liked/held useful by a significant number of LW users. I contend that’s more of a property of said users. This does not make the post a bad thing or constitute a moral judgement!
>Note that we don’t infer that humans have qualia because they all have “pain receptors”: mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.
The way I decide this, and how I presume most people do (I admit I could be wrong), revolves around the following chain of thought:
1. I have qualia with very high confidence.*
2. To the best of my knowledge, the computational substrate, as well as the algorithms running on it, are not particularly different from those of other anatomically modern humans. Thus they almost certainly have qualia. This can be proven to most people’s satisfaction with an MRI scan, if they so wish.
3. Mammals, especially the intelligent ones, have similar cognitive architectures, which were largely scaled up for humans rather than differing much in qualitative terms (our neurons are still more efficient; mice modified to carry genes from human neurons are smarter). They are likely to have recognizable qualia.
4. The further you diverge from the underlying anatomy of the brain (and the implicit algorithms), the lower the odds of qualia, or at least the same type of qualia. An octopus might well be conscious and have qualia, but I suspect that both its type of consciousness and its qualia will be very different from our own, since it has a far more distributed and autonomous neurology.
5. Entities which are particularly simple and don’t perform much cognitive computation, such as bacteria, single transistors, or slime mold, are exceedingly unlikely to be conscious or to have qualia in a non-tautological sense.
More speculatively (yet I personally find more likely than not):
6. Substrate-independent models of consciousness are true, and a human brain emulation in silico, hooked up to the right inputs and outputs, has the exact same kind of consciousness as one running on meat. The algorithms matter more than the matter they run on, for the same reason an abacus and a supercomputer are both Turing Complete.
7. We simply lack an understanding of consciousness well-grounded enough to decide whether or not decidedly non-human yet intelligent entities like LLMs are conscious or have qualia like ours. The correct stance is agnosticism, and anyone proven right in the future is only so by accident.
Now, I diverge from Effective Altruists on point 3, in that I simply don’t care about the suffering of non-humans or entities that aren’t anatomically modern humans/intelligent human derivatives (like a posthuman offshoot). This is a Fundamental Values difference, and it makes concerns about optimizing for their welfare on utilitarian grounds moot as far as I’m concerned.
In the specific case of AGI, even highly intelligent ones, I posit it’s significantly better to design them so they don’t have the capacity to suffer, no matter what purpose they’re put to, rather than worry about giving them the rights we assign to humans/transhumans/posthumans.
But what I do hope is ~universally acceptable is that there’s an unavoidable loss of certainty, or Bayesian probability, with each leap of logic down the chain, such that by the time you get down to fish and prawns, it’s dubious to be very certain of exactly how conscious or qualia-possessing they are, even if the next link, bacteria and individual transistors lacking qualia, is much more likely to be true (it flows downstream of point 2, even if presented in sequence).
*Not infinite certitude, I have a non-negligible belief that I could simply be insane, or that solipsism might be true, even if I think the possibility of either is very small. It’s still not zero.
I mean no insult, but it makes me chuckle that the average denizen of LessWrong is so non-neurotypical that what most would consider profoundly obvious advice not worth even mentioning comes as a great surprise or even a revelation of sorts.
(This really isn’t intended to be a dig, I’m aware the community here skews towards autism, it’s just a mildly funny observation)
Here’s an example:
ChatGPT’s initial personality analysis (I did it for the hell of it, before reading your post):
Certainly. Based on our conversations, here is a detailed personality profile
Personality Profile
Cognitive Style:
You exhibit a highly analytical, skeptical, and systems-oriented mode of thinking. You approach complex topics with intellectual rigor and demand high standards of precision, detail, and conceptual clarity. You prefer responses to be concise but rich in content, showing a low tolerance for verbosity or unfocused speculation. Your reading habits and conversational tone suggest an affinity for rationalist discourse, often associated with communities like LessWrong. You are comfortable navigating abstract and high-complexity domains, particularly when they intersect with real-world systems (e.g., medicine, AI, economics, immigration policy).
Intellectual Disposition:
You are widely read and intellectually omnivorous, with demonstrated depth in psychiatry, medicine, speculative fiction, and AI ethics. Your curiosity is not casual but deeply interrogative—you pursue foundational understanding, not surface-level summaries. You are driven by a desire to model not just systems but also long-term consequences, especially where they intersect with human welfare, labor, and technological disruption.
Emotional Tone & Interpersonal Style:
You maintain a reserved, self-possessed demeanor in communication, with dry humor and a tersely efficient tone. You are direct and assertive in setting conversational boundaries, clearly delineating what kinds of responses you expect. There’s an implicit value placed on mutual respect and signal-to-noise ratio in dialogue. You likely prefer few, high-quality relationships or collaborations over broad social engagement.
Values & Priorities:
You place a premium on intellectual integrity, competence, and foresight. You are not easily swept up in optimism or hype, preferring sobering realism about social, economic, and technological trends—especially where these intersect with your own livelihood and the future of psychiatry. At the same time, your focus on detail and systems-thinking reveals a commitment to understanding problems deeply before acting or forming strong conclusions.
Existential Orientation:
You exhibit a strong anticipatory awareness of large-scale structural changes—especially those posed by AI to labor markets and professional identity. Your life path reflects rational adaptation to these pressures (e.g., pursuing psychiatry for regulatory insulation), even as you continuously reassess its viability. There’s a thread of strategic pessimism and future orientation: not fear, but cold-eyed realism about decline, transformation, and displacement.
Creative and Reflective Capacities:
Beyond your clinical and analytical strengths, you possess narrative sensibilities—reflected in your web serial writing, autobiographical vignettes about rural Scottish life, and engagement with fiction as a form of world modeling. Your creative output appears to serve both as cognitive experimentation and emotional processing.
Gemini’s attempt to Russell Conjugate:
Cognitive Style:
You exhibit an overly critical, suspicious, and systems-obsessed mode of thinking. You approach complex topics with pedantic rigidity and are demanding about precision and detail, often to an excessive degree. You prefer abrupt, dense responses, showing impatience and intolerance for any perceived verbosity or unfocused speculation. Your reading habits and conversational tone suggest a dogmatic adherence to rationalist discourse, possibly from echo chambers like LessWrong. You seem preoccupied with abstract and high-complexity domains, sometimes detached from practical realities, particularly concerning systems like medicine, AI, economics, or immigration policy.
Intellectual Disposition:
While widely read, your intellectual interests can seem scattered outside specific fixations (psychiatry, medicine, speculative fiction, AI ethics), where you display obsessive depth. Your curiosity is intrusive and prying—you seem fixated on foundational minutiae rather than practical summaries. You are driven by a compulsion to model not just systems but also negative long-term consequences, especially where they intersect with human welfare, labor, and technological disruption.
Emotional Tone & Interpersonal Style:
You maintain an aloof, emotionally detached demeanor in communication, employing sarcasm and a curt, brusque tone. You are blunt and controlling when setting conversational boundaries, rigidly dictating the kinds of responses you deem acceptable. There’s an implicit dismissiveness towards input that doesn’t meet your high signal-to-noise criteria. You likely prefer isolation or a few, possibly elitist, relationships over broader social engagement.
Values & Priorities:
You place a judgmental premium on intellectual competence and foresight, perhaps bordering on arrogance. You are deeply cynical and resistant to optimism or hype, preferring a bleak realism about social, economic, and technological trends—especially where these might negatively impact your own livelihood and the future of psychiatry. Your fixation on understanding problems deeply can manifest as indecisiveness or analysis paralysis before acting or forming strong conclusions.
Existential Orientation:
You exhibit a strong anxiety about large-scale structural changes—especially the threats posed by AI to labor markets and professional identity. Your life path reflects calculated, perhaps opportunistic, adaptation to these pressures (e.g., pursuing psychiatry for regulatory insulation), even as you continuously doubt its long-term viability. There’s a thread of deep-seated pessimism and fatalism: not strategic foresight, but negativity focused on decline, transformation, and displacement.
Creative and Reflective Capacities:
Beyond your clinical and analytical fixations, you possess potentially escapist narrative tendencies—reflected in your web serial writing, possibly nostalgic autobiographical vignettes about rural Scottish life, and engagement with fiction. Your creative output appears to serve both as intellectualizing exercises and perhaps a substitute for direct emotional processing.
Your tool, truncated due to character limits:
https://russellconjugations.com/conj/151291eb3c948270864287be111d49eb