Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, X risk. Worked on neural networks, genetic algorithms, evolutionary robotics, & autonomous agents back in the 90s.
geoffreymiller
Shutting down OpenAI entirely would be a good ‘high level change’, at this point.
Well, I’m seeing no signs whatsoever that OpenAI would ever seriously consider slowing, pausing, or stopping its quest for AGI, no matter what safety concerns get raised. Sam Altman seems determined to develop AGI at all costs, despite all risks, ASAP. I see OpenAI as betraying virtually all of its founding principles, especially since its strategic alliance with Microsoft, and given the prospect of colossal wealth for its leaders and employees.
At this point, I’d rather spend $5-7 trillion on a Butlerian Jihad to stop OpenAI’s reckless hubris.
Human intelligence augmentation is feasible over a scale of decades to generations, given iterated polygenic embryo selection.
I don’t see any feasible way that gene editing or ‘mind uploading’ could work within the next few decades. Gene editing for intelligence seems unfeasible because human intelligence is a massively polygenic trait, influenced by thousands to tens of thousands of quantitative trait loci. Gene editing can fix major mutations, to nudge IQ back up to normal levels, but we don’t know of any single genes that can boost IQ above the normal range. And ‘mind uploading’ would require extremely fine-grained brain scanning that we simply don’t have now.
Bottom line is, human intelligence augmentation would happen way too slowly to be able to compete with ASI development.
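To put rough numbers on that timescale, here’s a minimal Monte Carlo sketch under illustrative assumptions of my own (not figures from any particular study): a polygenic predictor capturing about 10% of IQ variance, 10 viable embryos per cycle, and within-family score variance of roughly half the population variance.

```python
import numpy as np

# Back-of-the-envelope estimate of the IQ gain per generation from selecting
# the top-scoring embryo by polygenic score. All parameter values are
# illustrative assumptions, not figures from any particular study.
rng = np.random.default_rng(0)

IQ_SD = 15.0                # population SD of IQ
VARIANCE_EXPLAINED = 0.10   # assumed fraction of IQ variance captured by the predictor
N_EMBRYOS = 10              # assumed number of viable embryos per cycle
N_SIM = 200_000             # Monte Carlo samples

# Sibling (embryo) polygenic scores vary around the parental mean with roughly
# half the population variance of the score (Mendelian segregation; assortative
# mating and other complications ignored).
within_family_score_sd = np.sqrt(0.5 * VARIANCE_EXPLAINED) * IQ_SD

scores = rng.normal(0.0, within_family_score_sd, size=(N_SIM, N_EMBRYOS))
gain = scores.max(axis=1).mean()
print(f"Expected gain from one round of selection: ~{gain:.1f} IQ points")
# Roughly 5 IQ points per generation under these assumptions: meaningful over
# several generations, but far too slow to keep pace with AGI development.
```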
If we want safe AI, we have to slow AI development. There’s no other way.
Tamsin—interesting points.
I think it’s important for the ‘Pause AI’ movement (which I support) to help politicians, voters, and policy wonks understand that ‘power to do good’ is not necessarily correlated with ‘power to deter harm’ or ‘power to do indiscriminate harm’. So, advocating for caution (‘OMG AI is really dangerous!’) should not be read as implying ‘power to do good’ or ‘power to deter harm’—which could incentivize gov’ts to pursue AI despite the risks.
For example, nuclear weapons can’t really do much good (except maybe for blasting incoming asteroids), offer some power to deter use of nuclear weapons by others, but also have a lot of power to do indiscriminate harm (e.g. global thermonuclear war).
Whereas engineered pandemic viruses would have virtually no power to do good, no power to deter harm, and only power to do indiscriminate harm (e.g. global pandemic).
Arguably, ASI might have a LOT more power to do indiscriminate harm than power to deter harm or power to do good.
If we can convince policy-makers that this is a reasonable viewpoint (ASI offers mostly indiscriminate harm, not good or deterrence), then it might be easier to achieve a helpful pause, and also to reduce the chance of an AI arms race.
gwern—The situation is indeed quite asymmetric, insofar as some people at Lightcone seem to have launched a poorly researched defamatory attack on another EA organization, Nonlinear, which has been suffering serious reputational harm as a result. Whereas Nonlinear did not attack Lightcone or its people, except insofar as necessary to defend themselves.
Treating Nonlinear as a disposable organization, and treating its leaders as having disposable careers, seems ethically very bad.
Naive question: why are the disgruntled ex-employees who seem to have made many serious false allegations the only ones whose ‘privacy’ is being protected here?
The people who were accused at Nonlinear aren’t able to keep their privacy.
The guy (Ben Pace) who published the allegations isn’t keeping his privacy.
But the people who are at the heart of the whole controversy, whose allegations are the whole thing we’ve been discussing at length, are protected by the forum moderators? Why?
This is a genuine question. I don’t understand the ethical or rational principles that you’re applying here.
There’s a human cognitive bias that may be relevant to this whole discussion, but that may not be widely appreciated in Rationalist circles yet: gender bias in ‘moral typecasting’.
In a 2020 paper, my U. New Mexico colleague Tania Reynolds and coauthors found a systematic bias for women to be more easily categorized as victims and men as perpetrators, in situations where harm seems to have been done. They ran six studies in four countries (total N=3,317).
(Ever since a seminal paper by Gray & Wegner (2009), there’s been a fast-growing literature on moral typecasting. Beyond this Nonlinear dispute, it’s something that Rationalists might find useful in thinking about human moral psychology.)
If this dispute over Nonlinear is framed as male Emerson Spartz (at Nonlinear) vs. the females ‘Alice’ and ‘Chloe’, people may tend to see Nonlinear as the harm perpetrator. If it’s framed as male Ben Pace (at LessWrong) vs. female Kat Woods (at Nonlinear), people may tend to see Ben as the harm-perpetrator.
This is just one of the many human cognitive biases that’s worth bearing in mind when trying to evaluate conflicting evidence in complex situations.
Maybe it’s relevant here, maybe it’s not. But the psychological evidence suggests it may be relevant more often than we realize.
(Note: this is a very slightly edited version of a comment originally posted on EA Forum here).
Whatever people think about this particular reply by Nonlinear, I hope it’s clear to most EAs that Ben Pace could have done a much better job fact-checking his allegations against Nonlinear, and in getting their side of the story.
In my comment on Ben Pace’s original post 3 months ago, I argued that EAs & Rationalists are not typically trained as investigative journalists, and we should be very careful when we try to do investigative journalism—an epistemically and ethically very complex and challenging profession, which typically requires years of training and experience—including many experiences of getting taken in by individuals and allegations that seemed credible at first, but that proved, on further investigation, to have been false, exaggerated, incoherent, and/or vengeful.
EAs pride ourselves on our skepticism and our epistemic standards when we’re identifying large-scope, neglected, tractable cause areas to support, and when we’re evaluating different policies and interventions to promote sentient well-being. But those EA skills overlap very little with the kinds of investigative journalism skills required to figure out who’s really telling the truth, in contexts involving disgruntled ex-employees versus their former managers and colleagues.
EA epistemics are well suited to the domains of science and policy. We’re often not as savvy when it comes to interpersonal relationships and human psychology—which is the relevant domain here.
In my opinion, Mr. Pace did a rather poor job in the investigative journalist role, insofar as most of the facts, claims, and perspectives posted by Kat Woods here were not even included or addressed by Ben Pace.
I think in the future, EAs making serious allegations about particular individuals or organizations should be held to a pretty high standard of doing their due diligence, fact-checking their claims with all relevant parties, showing patience and maturity before publishing their investigations, and expecting that they will be held accountable for any serious errors and omissions that they make.
(Note: this reply is cross-posted from EA Forum; my original comment is here.)
I’m actually quite confused by the content and tone of this post.
Is it a satire of the ‘AI ethics’ position?
I speculate that the downvotes might reflect other people being confused as well?
Fair enough. Thanks for replying. It’s helpful to have a little more background on Ben. (I might write more, but I’m busy with a newborn baby here...)
Jim—I didn’t claim that libel law solves all problems in holding people to higher epistemic standards.
Often, it can be helpful just to incentivize avoiding the most egregious forms of lying and bias—e.g. punishing situations when ‘the writer had actual knowledge that the claims were false, or was completely indifferent to whether they were true or false’.
Rob—you claim ‘it’s very obvious that Ben is neither deliberately asserting falsehoods, nor publishing “with reckless disregard”’.
Why do you think that’s obvious? We don’t know the facts of the matter. We don’t know what information he gathered. We don’t know the contents of the interviews he did. As far as we can tell, there was no independent editing, fact-checking, or oversight in this writing process. He’s just a guy who hasn’t been trained as an investigative journalist, who did some investigative journalism-type research, and wrote it up.
Number of hours invested in research does not necessarily correlate with objectivity of research—quite the opposite, if someone has any kind of hidden agenda.
I think it’s likely that Ben was researching and writing in good faith, and did not have a hidden agenda. But that’s based on almost nothing other than my heuristic that ‘he seems to be respected in EA/LessWrong circles, and EAs generally seem to act in good faith’.
But I’d never heard of him until yesterday. He has no established track record as an investigative journalist. And I have no idea what kind of hidden agendas he might have.
So, until we know a lot more about this case, I’ll withhold judgment about who might or might not be deliberately asserting falsehoods.
(Note: this was cross-posted to EA Forum here; I’ve corrected a couple of minor typos, and swapped out ‘EA Forum’ for ‘LessWrong’ where appropriate)
A note on LessWrong posts as (amateur) investigative journalism:

When passions are running high, it can be helpful to take a step back and assess what’s going on here a little more objectively.
There are all different kinds of LessWrong posts that we evaluate using different criteria. Some posts announce new funding opportunities; we evaluate these in terms of brevity, clarity, relevance, and useful links for applicants. Some posts introduce a new potential EA cause area; we evaluate them in terms of whether they make a good empirical case for the cause area being large-scope, neglected, and tractable. Some posts raise theoretical issues in moral philosophy; we evaluate those in terms of technical philosophical criteria such as logical coherence.

This post by Ben Pace is very unusual, in that it’s basically investigative journalism, reporting the alleged problems with one particular organization and two of its leaders. The author doesn’t explicitly frame it this way, but in his discussion of how many people he talked to, how much time he spent working on it, and how important he believes the alleged problems are, it’s clearly a sort of investigative journalism.
So, let’s assess the post by the usual standards of investigative journalism. I don’t offer any answers to the questions below, but I’d like to raise some issues that might help us evaluate how good the post is, if taken seriously as a work of investigative journalism.
Does the author have any training, experience, or accountability as an investigative journalist, so they can avoid the most common pitfalls, in terms of journalistic ethics, due diligence, appropriate degrees of skepticism about what sources say, etc.?
Did the author have any appropriate oversight, in terms of an editor ensuring that they were fair and balanced, or a fact-checking team that reached out independently to verify empirical claims, quotes, and background context? Did they ‘run it by legal’, in terms of checking for potential libel issues?
Does the author have any personal relationship to any of their key sources? Any personal or professional conflicts of interest? Any personal agenda? Was their payment of money to anonymous sources appropriate and ethical?
Were the anonymous sources credible? Did they have any personal or professional incentives to make false allegations? Are they mentally healthy, stable, and responsible? Does the author have significant experience judging the relative merits of contradictory claims by different sources with different degrees of credibility and conflicts of interest?
Did the author give the key targets of their negative coverage sufficient time and opportunity to respond to their allegations, and were their responses fully incorporated into the resulting piece, such that the overall content and tone of the coverage was fair and balanced?
Does the piece offer a coherent narrative that’s clearly organized according to a timeline of events, interactions, claims, counter-claims, and outcomes? Does the piece show ‘scope-sensitivity’ in accurately judging the relative badness of different actions by different people and organizations, in terms of which things are actually trivial, which may have been unethical but not illegal, and which would be prosecutable in a court of law?
Does the piece conform to accepted journalistic standards in terms of truth, balance, open-mindedness, context-sensitivity, newsworthiness, credibility of sources, and avoidance of libel? (Or is it a biased article that presupposed its negative conclusions, aka a ‘hit piece’, ‘takedown’, or ‘hatchet job’?)
Would this post meet the standards of investigative journalism that’s typically published in mainstream news outlets such as the New York Times, the Washington Post, or the Economist?
I don’t know the answers to some of these, although I have personal hunches about others. But that’s not what’s important here.
What’s important is that if we publish amateur investigative journalism on LessWrong, especially when there are very high stakes for the reputations of individuals and organizations, we should try to adhere, as closely as possible, to the standards of professional investigative journalism. Why? Because professional journalists have learned, from centuries of copious, bitter, hard-won experience, that it’s very hard to maintain good epistemic standards when writing these kinds of pieces, that it’s very tempting to buy into the narratives of certain sources and informants, that it’s very hard to course-correct when contradictory information comes to light, and that it’s very important to be professionally accountable for truth and balance.
A brief note on defamation law:
The whole point of having laws against defamation, whether libel (written defamation) or slander (spoken defamation), is to hold people to higher epistemic standards when they communicate very negative things about people or organizations—especially negative things that would stick in readers’ and listeners’ minds in ways that would be very hard for subsequent corrections or clarifications to counteract.
Without making any comment about the accuracy or inaccuracy of this post, I would just point out that nobody in EA should be shocked that an organization (e.g. Nonlinear) that is being libeled (in its view) would threaten a libel suit to deter the false accusations (as they see them), and to nudge the author (e.g. Ben Pace) towards making sure that their negative claims are factually correct and contextually fair.
That is the whole point and function of defamation law: to promote especially high standards of research, accuracy, and care when making severe negative comments. This helps promote better epistemics, when reputations are on the line. If we never use defamation law for its intended purpose, we’re being very naive about the profound costs of libel and slander to those who might be falsely accused.
EA Forum is a very active public forum, where accusations can have very high stakes for those who have devoted their lives to EA. We should not expect that EA Forum should be completely insulated from defamation law, or that posts here should be immune to libel suits. Again, the whole point of libel suits is to encourage very high epistemic standards when people are making career-ruining and organization-ruining claims.
(Note: I’ve also cross-posted this to EA Forum here.)
Biomimetic alignment: Alignment between animal genes and animal brains as a model for alignment between humans and AI systems
Gordon—I was also puzzled by the initial downvotes. But they happened so quickly that I figured the downvoters hadn’t actually read or digested my essay. Disappointing that this happens on LessWrong, but here we are.
Max—I think your observations are right. The ‘normies’, once they understand AI extinction risk, tend to have much clearer, more decisive, more negative moral reactions to AI than many EAs, rationalists, and technophiles tend to have. (We’ve been conditioned by our EA/Rat subcultures to think we need to ‘play nice’ with the AI industry, no matter how sociopathic it proves to be.)
Whether a moral anti-AI backlash can actually slow AI progress is the Big Question. I think so, but my confidence interval on this issue is pretty wide. As an evolutionary psychologist, my inclination is to expect that human instincts for morally stigmatizing behaviors, traits, and people perceived as ‘evil’ have evolved to be very effective in reducing those behaviors, suppressing those traits, and ostracizing those people. But whether those instincts can be organized at a global scale, across billions of people, is the open question.
Of course, we don’t need billions to become anti-AI activists. We only need a few million of the most influential, committed people to raise the alarm—and that would already vastly outnumber the people working in the AI industry or actively supporting its hubris.
Maybe. But at the moment, the US is really the only significant actor in the AGI development space. Other nations are reacting in various ways, ranging from curious concern to geopolitical horror. But if we want to minimize the risk of a nation-state AI arms race, the burden is on the US companies to Just Stop Unilaterally Driving The Arms Race.
This is really good, and it’ll be required reading for my new ‘Psychology and AI’ class that I’ll teach next year.
Students are likely to ask ‘If the blob can figure out so much about the world, and modify its strategies so radically, why does it still want sugar? Why not just decide to desire something more useful, like money, power, and influence?’