Do you have any advice for people financially exposed to capabilities progress on how not to do dumb stuff, not be targeted by political pressure, etc.?
Nisan
Yes, Dan Luu wrote about how he writes a lot because he’s a fast typist.
See also Jevons paradox.
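To spell out the Jevons connection with a toy model (my numbers are made up; the paradox only needs demand elasticity above 1): when efficiency doubles, the effective price of output halves, and sufficiently elastic demand grows so much that total resource use goes up.

```python
# Toy model of Jevons paradox (all numbers made up for illustration).
# Demand follows a constant-elasticity curve: quantity = k * price**(-elasticity).

def resource_use(efficiency: float, elasticity: float, k: float = 100.0) -> float:
    """Total resource consumed when each unit of output needs 1/efficiency resources."""
    price = 1.0 / efficiency                # effective price of one unit of output
    quantity = k * price ** (-elasticity)   # output demanded at that price
    return quantity / efficiency            # resources consumed to produce it

for eff in (1.0, 2.0):
    print(eff, resource_use(eff, elasticity=1.5))
# 1.0 -> 100.0, 2.0 -> ~141.4: with elasticity 1.5, doubling efficiency
# *increases* total resource use, because induced demand outpaces the saving.
```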
Maybe people notice that AIs are being drawn into the moral circle / a coalition, and are using that opportunity to bargain for their own coalition’s interests.
Yeah, you and I agree that people can clearly distinguish between my senses 1 and 2. I was responding to Paradiddle, who I read as conflating the two — he defines “conscious” as both “awake and aware” and as “there is something it [is] like to be us”. I could have been clearer about this.
I believe grad students and LessWrong users in these conversations are usually working with sense 2; but in fact sense 2 is multiple things, and different people mean different things by it, to the extent they mean anything at all.
Paradiddle claims to the contrary that practically everyone in these conversations is talking about the same thing and just has different intuitions about how it works. But you seem to disagree with Paradiddle? Are you saying that Critch’s subjects aren’t talking about what you mean by “conscious”?
I’m not Critch and I haven’t read much philosophy, but I am the kind of person he would have interviewed in the OP. It’s clear to me that there are at least two senses of the word “conscious”.
- There’s the mundane sense which is just a synonym for “awake and aware”, as opposed to “asleep” or “lifeless”. “Is the patient conscious yet?” (This is cluster 11 in the OP.)
- There’s the sense(s) that get brought up in the late-night bull sessions Critch is talking about. “We are subjective beings.” “There is something it is like to be us.”
I confess sense 2 doesn’t make any sense to me, but I’m linguistically competent enough to understand it’s not the same as sense 1. I know these senses are different because the correct response to “Are you conscious?” in sense 1 is “Yes, I can hear you and I’m awake now”, and a correct response to “Are you conscious?” in sense 2 is to have an hour-long conversation about what it means.
So, this claim is at odds with my experience as an English speaker:
> the obvious answer to what people mean by consciousness is the fact that it is like something to be them, i.e., they are subjective beings.
-
Libertarianism teaches that when one wants an economic outcome, one may be tempted to use government to get that outcome; but one should use private-sector tools instead, even if it means inventing a new kind of institution.
When one craves meaning and community, one’s first thought is to reach for religion. But one should look for other sources of meaning and community first, including inventing one’s own meaning and inventing new kinds of communities.
Yes, you can ask for a lot more than that :)
Yes. As a special case, if you destroy a bad old institution, you can’t count on good new institutions springing up in its place unless you specifically build them.
Ok. It’s strange, then, that Wikipedia does not say this. On the contrary, it says:
> The notion that bilateral trade deficits are per se detrimental to the respective national economies is overwhelmingly rejected by trade experts and economists.[2][3][4][5]
(This doesn’t necessarily contradict your claim, but it would be misleading for the article to say this but not mention a consensus view that trade surpluses are beneficial.)
Do you believe running a trade surplus causes a country to be wealthier? If so, how do we know that?
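For concreteness, here’s the standard national-accounts identity worked through with toy numbers (mine, not Wikipedia’s). Note it’s an accounting identity, not a causal model:

```python
# Toy national-accounts arithmetic (my numbers), using the identity
#   GDP = C + I + G + (X - M)
C, I, G, X, M = 70, 20, 18, 10, 18  # consumption, investment, government, exports, imports
gdp = C + I + G + (X - M)
print(gdp)                          # 100, despite a trade deficit of 8
# The identity is accounting, not causation: pushing X - M up by cutting
# imports doesn't mechanically raise GDP, because C, I, and G adjust too.
```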
And so, like OpenAI and Anthropic, Google DeepMind wants the United States’ AI to be stronger than China’s AI. And like OpenAI, it intends to make weapons for the US government.
One might think that in dropping its commitments not to cause net harm and not to violate international law and human rights, Google is signalling its intent to violate human rights. On the contrary, I believe it’s merely allowing itself to threaten human rights — or rather, build weapons that will enable the US government to threaten human rights in order to achieve its goals.
(That’s the purpose of a military, after all. We usually don’t spell this out because it’s ugly.)
This move is an escalation of the AI race that makes AI war more likely. And even if war is averted, it will further shift the balance of power from individuals to already-powerful institutions. And in the meantime, the AIs themselves may become autonomous actors with their own purposes.
Google’s AI principles used to say:
> In addition to the above objectives, we will not design or deploy AI in the following application areas:
>
> 1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
> 2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
> 3. Technologies that gather or use information for surveillance violating internationally accepted norms.
> 4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
>
> As our experience in this space deepens, this list may evolve.
On 2025-02-04, Google removed these four commitments. The updated principles seem consistent with making weapons, causing net harm, violating human rights, etc. As justification, James Manyika and Demis Hassabis said:
> There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.
Update: It’s even better than that. Not only will they make a lab order for you, but they will also pay for the test itself, at a steep discount to the consumer price.
I didn’t know about ownyourlabs, thanks! While patients can order a small number of tests directly from Labcorp and Quest Diagnostics, it seems ownyourlabs will sell you a lab order for many tests that you can’t get that way.
Exhibit 13 is a sort of Oppenheimer-meets-Truman email thread in which Ilya Sutskever says:
> Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we’d made a mistake.
Today, OpenAI republished that email (along with others) on its website (archived). But the above sentence is different in OpenAI’s version of the email:
> Yesterday while we were considering making our final commitment (even the non-solicit agreement), we realized we’d made a mistake.
I wonder which sentence is the one Ilya actually wrote.
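For the curious, the divergence is easy to pin down mechanically. Here’s a throwaway sketch using Python’s stdlib difflib, with the two sentences as quoted above:

```python
# Compare the two published versions of the sentence with difflib (stdlib).
import difflib

court = ("Yesterday while we were considering making our final commitment "
         "given the non-solicit agreement, we realized we'd made a mistake.")
openai = ("Yesterday while we were considering making our final commitment "
          "(even the non-solicit agreement), we realized we'd made a mistake.")

# Print only the spans where the two versions disagree.
for op, a0, a1, b0, b1 in difflib.SequenceMatcher(None, court, openai).get_opcodes():
    if op != "equal":
        print(op, repr(court[a0:a1]), "->", repr(openai[b0:b1]))
# The versions differ only around "given the non-solicit agreement,"
# vs "(even the non-solicit agreement),".
```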
Check out Exhibit 13...
Section 3.3(f)(iii):
> Within 120 days of the date of this memorandum, DOE, acting primarily through the National Nuclear Security Administration (NNSA) and in close coordination with AISI and NSA, shall seek to develop the capability to perform rapid systematic testing of AI models’ capacity to generate or exacerbate nuclear and radiological risks. This initiative shall involve the development and maintenance of infrastructure capable of running classified and unclassified tests, including using restricted data and relevant classified threat information. This initiative shall also feature the creation and regular updating of automated evaluations, the development of an interface for enabling human-led red-teaming, and the establishment of technical and legal tooling necessary for facilitating the rapid and secure transfer of United States Government, open-weight, and proprietary models to these facilities.
It sounds like the plan is for AI labs to transmit models to government datacenters for testing. I anticipate at least one government agency will quietly keep a copy for internal use.
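None of that testing infrastructure is public, so take this as a hedged sketch of what “automated evaluations” usually amount to in practice; every name and interface below is hypothetical:

```python
# Hypothetical sketch of an automated evaluation loop of the kind the memo
# describes (none of these names or interfaces are real).
from dataclasses import dataclass

@dataclass
class EvalResult:
    prompt_id: str
    flagged: bool  # did the model produce risk-relevant content?

def run_eval(model, prompts, classifier) -> list[EvalResult]:
    """Run each test prompt through the model and flag risky completions."""
    results = []
    for prompt_id, prompt in prompts:
        completion = model.generate(prompt)  # hypothetical model API
        results.append(EvalResult(prompt_id, classifier(completion)))
    return results

# The memo's human-led red-teaming interface and classified test sets would
# sit on top of a loop like this; the hard parts (secure model transfer,
# restricted-data handling) are institutional rather than algorithmic.
```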
I’ve been wondering about the case of Teresa Youngblut and Felix Bauckholt. A hotel employee called the cops on them because they were “dressed in tactical clothing and protective gear, while also being armed”. Does this pass the threshold of “too weird” in New England? Or maybe it was New England forbearance that let them get away with it for as long as they did? Or maybe it’s possible to be weird in New England, as long as one has the right kind of vibe.