Why do trans women attracted to women score significantly higher on IQ tests than trans women attracted to men? And why is their average +1.5 SD above the general population? In this paper, the averages were 121.7 (SD = 17) and 107.5 (SD = 14.3) respectively. I genuinely have no clue why these averages are what they are. I’m sure there are selection effects, in that these were women driven enough to seek out GAC (which at the time of this study likely required being at least a little bit of a self-starter), but the first average is still ridiculous. It implies that roughly one in 50 trans women who are attracted to women and meet whatever selection criteria were at play in this study would have an IQ of 156. For reference, among the general white population, only about one in ~10,000 people would have that IQ.
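A quick sanity check of those tail figures, assuming the scores are normally distributed, using the means/SDs reported above and a general-population norm of 100 (SD 15):

```python
# Sanity check of the tail probabilities quoted above, assuming normality.
# Numbers come from the comment (group mean 121.7, SD 17; population 100, SD 15).
from scipy.stats import norm

p_group = norm.sf(156, loc=121.7, scale=17)  # fraction of the sampled group above IQ 156
p_gen = norm.sf(156, loc=100, scale=15)      # fraction of the general population above IQ 156

print(f"group:   1 in {1 / p_group:.0f}")    # ~1 in 46, i.e. roughly 1 in 50
print(f"general: 1 in {1 / p_gen:.0f}")      # ~1 in 10,000
```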
I think short timelines just don’t square with the way intelligence agencies are behaving. The NSA took Y2K more seriously than it currently seems to be taking near-term AGI. You can make the argument that intelligence agencies are less competent than they used to be, but I don’t buy that they aren’t at least extremely paranoid and moderately competent: that seems like their job.
What makes you confident that AI progress has stagnated at OpenAI? If you don’t have the time to explain why, I understand, but what metrics have stagnated over the past year?
What if Trump is channeling his inner Doctor Strange and is crashing the economy in order to slow AI progress and buy time for alignment? Eliezer calls for an AI pause; Trump MAKES an AI pause. I rest my case that Trump is the most important figure in the history of AI alignment.
This is an uncharitable interpretation, but “good at increasingly long tasks which require no real cleverness” seems economically valuable, yet it doesn’t seem to be leading to what I think of as superintelligence.
How does this account for the difficulty of the tasks? AFAIK even reasoning models still struggle with matrix reasoning. And most matrix puzzles are something you can do in 15-30 seconds, with sufficiently challenging ones occasionally taking 4-5 minutes. But even in those cases you usually figure out what to look for in the first 30-60 seconds and then spend the rest of the time on drudge work.
So current agents might be capable of the one-minute task “write a hello world program,” while not being capable of the one-minute task “solve the final puzzle on Mensa DK.”
And if that’s the case, then agents might be capable of routine long-horizon tasks in the future (whatever that means), while still being incapable of more OOD achievements like “write Attention Is All You Need.” What am I missing?
Oh, I was actually hoping you’d reply! I may have hallucinated the exact quote I mentioned, but here is something from Ulam: “Ulam on physical intuition and visualization,” on Steve Hsu’s blog. And I might have hallucinated the thing about Poincaré being tested by Binet; that might just be an urban legend I didn’t verify. You can find Poincaré’s struggles with coordination and dexterity in “Men of Mathematics,” but that’s a lot less extreme than the story I passed on. I am confident in Tao’s preference for analysis over visualization; if you have the time, look up “Terence Tao” on Gwern’s website.
I’m not very familiar with the field of neuroscience, but it seems to me that we’re probably pretty far from being able to provide a satisfactory answer to these questions. Is that true from your understanding of where the field is at? What sorts of techniques/technology would we need to develop in order for us to start answering these questions?
From what I understand, JVN, Poincaré, and Terence Tao all had/have issues with perceptual intuition/mental visualization. JVN had “the physical intuition of a doorknob,” Poincaré was tested by Binet and had extremely poor perceptual abilities, and Tao (at least as a child) mentioned finding mental rotation tasks “hard.”
I also fit a (much less extreme) version of this pattern, which is why I’m interested in this in the first place. I am (relatively) good at visual pattern recognition and math, but I have aphantasia and an average visual working memory. I felt insecure about this for a while, but seeing that much more intelligent people than me had a similar (but more extreme) cognitive profile made me feel better.
Does anybody have a satisfactory explanation for this profile beyond a simplistic “tradeoffs” explanation?
Edit: Some claims about JVN/Poincaré may have been hallucinated, but they are based at least somewhat on reality. See my reply to Steven.
This is why I don’t really buy anybody who claims an IQ >160. Effectively all tested IQs over 160 likely came from a childhood test or a test normed with SD 20, and there is an extremely high probability that the person with said tested IQ has substantially regressed to the mean. And even for a test like the WAIS that claims to measure up to 160 with SD 15, the norms start to look really questionable once you go much past 140.
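A rough illustration of both effects under textbook classical-test-theory assumptions (the 0.9 reliability below is an illustrative number I’m assuming, not a published figure):

```python
# Rough illustration of the two points above, under textbook assumptions:
# (1) an SD-20 score re-expressed on the usual SD-15 metric, and
# (2) classical regression toward the mean, with an assumed (illustrative)
#     test-retest reliability of 0.9.

def rescale(score, old_sd=20, new_sd=15, mean=100):
    # Same percentile, expressed on a different SD scale.
    return mean + (score - mean) * new_sd / old_sd

def expected_retest(score, reliability=0.9, mean=100):
    # Classical-test-theory expectation for a later retest score.
    return mean + reliability * (score - mean)

print(rescale(160))          # 145.0 -- a "160" on an SD-20 test is 145 on the SD-15 metric
print(expected_retest(160))  # 154.0 -- expected retest of an SD-15 160, given reliability 0.9
```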
I think I know one person who tested at 152 on the WISC when he was ~11, and one person who ceilinged the WAIS-III at 155 when he was 21. And they were both high-achieving, but they weren’t exceptionally high-achieving. Someone fixated on IQ might call this cope, but they really were pretty normal people who didn’t seem to be on a higher plane of existence. The biggest functional difference between them and people with more average IQs was that they had better job prospects. But they both had a lot of emotional problems and didn’t seem particularly happy.
This just boils down to “humans aren’t aligned,” and that fact is why this would never work, but I still think it’s worth bringing up. Why are you required to get a license to drive, but not to have children? I don’t mean this literally; I’m just referring to how casually much of society treats the decision to have children. Bringing someone into existence is vastly higher stakes than driving a car.
I’m sure this isn’t implementable, but parents should at least be screened for personality disorders before they’re allowed to have children. Sure, that’s a slippery slope, and sure, many of the most powerful people just want workers to furnish their own quality of life regardless of the workers’ QOL. But bringing a child into the world who you can’t properly care for can lead to a lifetime of avoidable suffering.
I was just reading about “genomic liberty,” and the idea that parents would choose to make their kids’ IQ lower than possible, and that some would even choose for their children to have disabilities like their own, is completely ridiculous. It just made me think “those people shouldn’t have the liberty of being parents.” Bringing another life into existence is not a casual decision like where you work or live. And the obligation should be to the children, not the parents.
How far along is the development of autonomous underwater drones in America? I’ve read statements by American military officials about wanting to turn the Taiwan Strait into a drone-infested death trap. And I read someone (not an expert) who said that China is racing against time to try and invade before autonomous underwater drones take off. Is that true? Are they on track?
MuZero doesn’t seem categorically different from AlphaZero. It has to do a little more work at the beginning, but if you don’t get any reward for breaking the rules, you will learn not to break the rules. If MuZero is continuously learning, then so is AlphaZero. Also, the games used were still computationally simple, OOMs simpler than an open-world game, let alone a true world model. AFAIK MuZero doesn’t work on open-ended, open-world games. And AlphaStar never got to superhuman performance at human speed either.
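To spell out what I mean by “a little more work at the beginning,” here is a toy sketch (my own simplification with made-up function names, not DeepMind’s code): AlphaZero searches with the game’s real rules, while MuZero runs the same kind of search inside a dynamics model it first has to learn.

```python
# Toy contrast: both agents do the same lookahead; the only difference sketched
# here is whether the "simulator" used during search is the game's real rules
# (AlphaZero) or a learned dynamics model that has to be trained first (MuZero).

def alphazero_plan(state, real_step, value_fn, actions, depth=3):
    # Lookahead with the real rules: every imagined line is legal by construction.
    if depth == 0:
        return value_fn(state)
    return max(alphazero_plan(real_step(state, a), real_step, value_fn, actions, depth - 1)
               for a in actions)

def muzero_plan(obs, encode, learned_step, value_fn, actions, depth=3):
    # Same search, but inside a learned latent model; encode/learned_step are
    # the "extra work at the beginning" that must be learned from experience.
    def search(h, d):
        if d == 0:
            return value_fn(h)
        return max(search(learned_step(h, a), d - 1) for a in actions)
    return search(encode(obs), depth)

# Toy usage on a 1-D "game" where the state is an integer and actions add -1/+1.
print(alphazero_plan(0, lambda s, a: s + a, float, actions=[-1, 1]))             # 3.0
print(muzero_plan(0, lambda o: o, lambda h, a: h + a, float, actions=[-1, 1]))   # 3.0
```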
hi, thank you! i guess i was thinking about claims that “AGI is imminent and therefore we’re doomed.” it seems like if you define AGI as “really good at STEM” then it is obviously imminent. but if you define it as “capable of continuous learning like a human or animal,” that’s not true. we don’t know how to build it and we can’t even run a fruit-fly connectome on the most powerful computers we have for more than a couple of seconds without the instance breaking down: how would we expect to run something OOMs more complex and intelligent? “being good at STEM” seems like a much, much simpler and less computationally intensive task than continuous, dynamic learning. tourist is great at codeforces, but he obviously doesn’t have the ability to take over the world (i am making the assumption that anyone with the capability to take over the world would do so). the second is a much, much fuzzier, more computationally complex task than the first.
i had just been in a deep depression for a while (it’s embarrassing, but this started with GPT-4) because i thought some AI in the near future was going to wake up, become god, and pwn humanity. but when i think about it from this perspective, that future seems much less likely. in fact, the future (at least in the near term) looks very bright. and i can actually plan for it, which feels deeply relieving to me.
Apologies in advance if this is a midwit take. Chess engines are “smarter” than humans at chess, but they aren’t automatically better at real-world strategizing as a result. They don’t take over the world. Why couldn’t the same be true for STEMlord LLM-based agents?
It doesn’t seem like any of the companies are anywhere near AI that can “learn” or generalize in real time like a human or animal. Maybe a superintelligent STEMlord could hack their way around learning, but that still doesn’t seem the same as, or as dangerous as, fooming, and it also seems much easier to monitor. Does it not seem plausible that the current paradigm drastically accelerates scientific research while remaining a set of tools? The counter is that people will just use the tools to try and figure out learning. But we don’t know how hard learning is, and the tools could also enable people to make real progress on alignment before learning is cracked.
You’re probably right, but I guess my biggest concern is the first superhuman alignment researchers being aligned/dumb enough to explain to the companies how control works. It really depends on whether self-awareness is present as well.
what is the plan for making task-alignment go well? i am much more worried about the possibility of being at the mercy of some god-emperor with a task-aligned AGI slave than I am about having my atoms repurposed by an unaligned AGI. the incentives for blackmail and power-consolidation look awful.
I honestly think the EV of superhumans is lower than the EV for AI. sadism and wills to power are baked into almost every human mind (with the exception of outliers of course). force multiplying those instincts is much worse than an AI which simply decides to repurpose the atoms in a human for something else. i think people oftentimes act like the risk ends at existential risks, which i strongly disagree with. i would argue that everyone dying is actually a pretty great ending compared to hyperexistential risks. it is effectively +inf relative utility.
with AIs we’re essentially putting them through selective pressures to promote benevolence (as a hedge by the labs in case they don’t figure out intent alignment). that seems like a massive advantage compared to the evolutionary baggage associated with humans.
with humans you’d need the will and capability to engineer in at least +5sd empathy and −10sd sadism into every superbaby. but people wouldn’t want their children to make them feel like shitty people so they would want them to “be more normal.”
I think that people don’t consider the implications of something like this. This seems to imply that the mathematical object of a malevolent superintelligence exists, and that conscious victims of said superintelligence exist as well. Is that really desirable? Do people really prefer that to some sort of teleology?
Yeah something like that, the ASI is an extension of their will.
This is a late reply, but at least from this article, it seems like Ilya Sutskever was running out of confidence, by mid-2023, that OpenAI would reach AGI. Additionally, if the rumors about GPT-5 are true, it’s mainly going to be a unification of existing models rather than something entirely new. Combined with the GPT-4.5 release, it sure seems like progress at OpenAI is slowing down rather than speeding up.
How do you know that researchers at AGI labs genuinely believe what they’re saying? Couldn’t the companies just put pressure on them to act like they believe Transformative AI is imminent? I just don’t buy that these agencies are dismissive without good reason. They’ve explored remote viewing and other ideas that are almost certainly bullshit. If they are willing to consider those possibilities, I don’t know why they wouldn’t consider the possibility of current deep learning techniques creating a national security threat. That seems like their job, and they’ve explored significantly weirder ideas.