Masters student in Physics at the University of Queensland.
I am interested in Quantum Computing, physical AI Safety guarantees and alignment techniques that will work beyond contemporary models.
“cannot imagine a study that would convince me that it “didn’t work” for me, in the ways that actually matter. The effects on my mind kick in sharply, scale smoothly with dose, decay right in sync with half-life in the body, and are clearly noticeable not just internally for my mood but externally in my speech patterns, reaction speeds, ability to notice things in my surroundings, short term memory, and facial expressions.”
The drug actually working would mean that your life is better after 6 years of taking the drug compared to the counterfactual where you took a placebo.
The observations you describe are explained by you simply having a chemical dependency on a drug that you have been on for 6 years.
“In an argument between a specialist and a generalist, the expert usually wins by simply (1) using unintelligible jargon, and (2) citing their specialist results, which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts both are necessary and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.”
—Richard W. Hamming, “The Art of Doing Science and Engineering”
***
(Side note:
I think there’s at least a 10% chance that a randomly selected LessWrong user thinks it was worth their time to read at least some of the chapters in this book. Significantly more users would agree that it was a good use of their time (in expectation) to skim the contents and introduction before deciding if they’re in that 10%.
That is to say, I recommend this book.)
Robin Hanson recently wrote about two dynamics that can emerge among individuals within an organisation when working as a group to reach decisions. These are the “outcome game” and the “consensus game.”
In the outcome game, individuals aim to be seen as advocating for decisions that are later proven correct. In contrast, the consensus game focuses on advocating for decisions that are most immediately popular within the organization. When most participants play the consensus game, the quality of decision-making suffers.
The incentive structure within an organization influences which game people play. When feedback on decisions is immediate and substantial, individuals are more likely to engage in the outcome game. Hanson argues that capitalism’s key strength is its ability to make outcome games more relevant.
However, if an organization is insulated from the consequences of its decisions or feedback is delayed, playing the consensus game becomes the best strategy for gaining resources and influence.
This dynamic is particularly relevant in the field of (existential) AI Safety, which needs to develop strategies to mitigate risks before AGI is developed. Currently, we have zero concrete feedback about which strategies can effectively align complex systems of equal or greater intelligence to humans.
As a result, it is unsurprising that most alignment efforts avoid tackling seemingly intractable problems. The incentive structures in the field encourage individuals to play the consensus game instead.
I’m not saying that this would necessarily be a step in the wrong direction, but I don’t think a Discord server is capable of fixing a deeply entrenched cultural problem among safety researchers.
If moderating the server takes up a few hours of John’s time per week the opportunity cost probably isn’t worth it.
Worth emphasizing that cognitive work is more than just a parallel to physical work, it is literally Work in the physical sense.
The reduction in entropy required to train a model means that there is a minimum amount of work required to do it.
I think this is a very important research direction, not merely as an avenue for communicating and understanding AI Safety concerns, but potentially as a framework for developing AI Safety techniques.
There is some minimum amount of cognitive work required to pose an existential threat, perhaps it is much higher than the amount of cognitive work required to perform economically useful tasks.
Can you expect that the applications to interpretability would apply on inputs radically outside of distribution?
My naive intuition is that by taking derivatives you are only describing local behaviour.
(I am “shooting from the hip” epistemically)
A loss of this type of (very weak) interpretability would be quite unfortunate from a practical safety perspective.
This is bad, but perhaps there is a silver lining.
If internal communication within the scaffold appears to be in plain English, it will tempt humans to assume the meaning coincides precisely with the semantic content of the message.
If the chain of thought contains seemingly nonsensical content, it will be impossible to make this assumption.
I think that overall it’s good on the margin for staff at companies risking human extinction to be sharing their perspectives on criticisms and moving towards having dialogue at all
No disagreement.
your implicit demand for Evan Hubinger to do more work here is marginally unhelpful
The community seems to be quite receptive to the opinion, it doesn’t seem unreasonable to voice an objection. If you’re saying it is primarily the way I’ve written it that makes it unhelpful, that seems fair.
I originally felt that either question I asked would be reasonably easy to answer, if time was given to evaluating the potential for harm.
However, given that Hubinger might have to run any reply by Anthropic staff, I understand that it might be negative to demand further work. This is pretty obvious, but didn’t occur to me earlier.
I will add: It’s odd to me, Stephen, that this is your line for (what I read as) disgust at Anthropic staff espousing extremely convenient positions while doing things that seem to you to be causing massive harm.
Ultimately, the original quicktake was only justifying one facet of Anthropic’s work so that’s all I’ve engaged with. It would seem less helpful to bring up my wider objections.
I wouldn’t expect them or their employees to have high standards for public defenses of far less risky behavior
I don’t expect them to have a high standard for defending Anthropic’s behavior, but I do expect the LessWrong community to have a high standard for arguments.
Highly Expected Events Provide Little Information and The Value of PR Statements
Entropy for a discrete random variable is given by $H(X) = -\sum_x p(x) \log_2 p(x)$. This quantifies the amount of information that you gain on average by observing the value of the variable.
It is maximized when every possible outcome is equally likely. It gets smaller as the variable becomes more predictable and is zero when the “random” variable is 100% guaranteed to have a specific value.
You’ve learnt 1 bit of information when you learn that the outcome of a fair coin toss was heads. But you learn 0 bits of information when you learn the outcome was heads after tossing a coin with heads on both sides.
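A quick numerical check of both coin cases (a minimal sketch; `entropy_bits` is my own helper, not something from the post):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))  # fair coin: 1.0 bit
print(entropy_bits([1.0, 0.0]))  # two-headed coin: 0.0 bits
```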
On your desk is a sealed envelope that you’ve been told contains a transcript of a speech that President Elect Trump gave on the campaign trail. You are told that it discusses the impact that his policies will have on the financial position of the average American.
How much additional information do you gain if I tell you that the statement says his policies will have a positive impact on the financial position of the average American?
The answer is very little. You know ahead of time that it is exceptionally unlikely for any politician to talk negatively about their own policies.
There is still plenty of information in the details that Trump mentions, how exactly he plans to improve the economy.
Both Altman and Amodei have recently put out personal blog posts in which they present a vision of the future after AGI is safely developed.
How much additional information do you gain from learning that they present a positive view of this future?
I would argue simply learning that they’re optimistic tells you almost zero useful information about what such a future looks like.
There is plenty of useful information, particularly in Amodei’s essay, in how they justify this optimism and what topics they choose to discuss. But their optimism alone shouldn’t be used as evidence to update your beliefs.
Edit:
Fixed pretty major terminology blunder.
(This observation is not original, and a similar idea appears in The Sequences.)
This explanation seems overly convenient.
When faced with evidence which might update your beliefs about Anthropic, you adopt a set of beliefs which, coincidentally, means you won’t risk losing your job.
How much time have you spent analyzing the positive or negative impact of US intelligence efforts prior to concluding that merely using Claude for intelligence “seemed fine”?
What future events would make you re-evaluate your position and state that the partnership was a bad thing?
Example:
-- A pro-US despot rounds up and tortures to death tens of thousands of pro-union activists and their families. Claude was used to analyse social media and mobile data, building a list of people sympathetic to the union movement, which the US then gave to their ally.
EDIT: The first two sentences were overly confrontational, but I do think either question warrants an answer.
As a highly respected community member and prominent AI safety researcher, your stated beliefs and justifications will be influential to a wide range of people.
The lack of a robust, highly general paradigm for reasoning about AGI models is the current greatest technical problem, although it is not what most people are working on.
What features of architecture of contemporary AI models will occur in future models that pose an existential risk?
What behavioral patterns of contemporary AI models will be shared with future models that pose an existential risk?
Is there a useful and general mathematical/physical framework that describes how agentic, macroscopic systems process information and interact with the environment?
Does terminology adopted by AI Safety researchers like “scheming”, “inner alignment” or “agent” carve nature at the joints?
I upvoted because I imagine more people reading this would slightly nudge group norms in a direction that is positive.
But being cynical:
I’m sure you believe that this is true, but I doubt that it is literally true.
Signalling this position is very low risk when the community is already on board.
Trying to do good may be insufficient if your work on alignment ends up being dual use.
My reply definitely missed that you were talking about tunnel densities beyond what has been historically seen.
I’m inclined to agree with your argument that there is a phase shift, but it seems like it is less to do with the fact that there are tunnels, and more to do with the geography becoming less tunnel-like and more open.
I have a couple thoughts on your model that aren’t direct refutations of anything you’ve said here:
I think the single term “density” is too crude a measure to get a good predictive model of how combat would play out. I’d expect there to be many parameters that describe a tunnel system and have a direct tactical impact. From your discussion of mines, I think “density” is referring to the number of edges in the network? I’d expect tunnel width, geometric layout, etc. would change how either side behaves.
I’m not sure about your background, but with zero hours of military combat under my belt, I doubt I can predict how modern subterranean combat plays out in tunnel systems with architectures that are beyond anything seen before in history.
I think a crucial factor that is missing from your analysis is the difficulties for the attacker wanting to maneuver within the tunnel system.
In the Vietnam war and the ongoing Israel-Hamas war, the attacking forces appear to favor destroying the tunnels rather than exploiting them to maneuver. [1]
1. The layout of the tunnels is at least partially unknown to the attackers, which mitigates their ability to outflank the defenders. Yes, there may be paths that will allow the attacker to advance safely, but it may be difficult or impossible to reliably distinguish what this route is.
2. While maps of the tunnels could be produced through modern subsurface mapping, the attackers must still contend with area denial devices (e.g. land mines, IEDs or booby traps). The confined nature of the tunnel system makes traps substantially more efficient.
3. The previous two considerations impose a substantial psychological burden on attackers advancing through the tunnels, even if they don’t encounter any resistance.
4. (Speculative)
Imagine a network so dense that in a typical 1km stretch of frontline, there are 100 separate tunnels passing beneath, such that you’d need at least 100 defensive chokepoints or else your line would have an exploitable hole.
The density and layout of the tunnels does not need to be constant throughout the network. The system of tunnels in regions the defender doesn’t expect to hold may have hundreds of entrances and intersections, being impossible for either side to defend effectively. But travel deeper into the defender’s territory requires passing through only a limited number of well-defended passageways. This favors the defenders using the peripheral, dense section of tunnel to employ hit-and-run tactics, rather than attempting to defend every passageway.
(My knowledge of subterranean warfare is based entirely on recreational reading.)
As a counterargument, the destruction of tunnels may be primarily due to the attacking force not intending on holding the territory permanently, and so there is little reason to preserve defensive structures.
I don’t think people who disagree with your political beliefs must be inherently irrational.
Can you think of real world scenarios in which “shop elsewhere” isn’t an option?
Brainteaser for anyone who doesn’t regularly think about units.
Why is it that I can multiply or divide two quantities with different units, but addition or subtraction is generally not allowed?
I think the way arithmetic is being used here is closer in meaning to “dimensional analysis”.
“Type checking” through the use of units is applicable to an extremely broad class of calculations beyond Fermi Estimates.
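To make the “type checking” analogy concrete, here is a toy sketch of dimensional analysis in code (the `Quantity` class and its unit-dictionary representation are my own illustration, not an established library): multiplication and division combine unit exponents, while addition demands that the units already match.

```python
def _combine(u1, u2, sign):
    """Merge two unit-exponent dicts, adding (sign=+1) or subtracting (sign=-1) exponents."""
    out = dict(u1)
    for unit, exp in u2.items():
        out[unit] = out.get(unit, 0) + sign * exp
        if out[unit] == 0:
            del out[unit]  # drop cancelled dimensions
    return out

class Quantity:
    """A value tagged with units, e.g. Quantity(9.8, {"m": 1, "s": -2})."""
    def __init__(self, value, units=None):
        self.value = value
        self.units = dict(units or {})

    def __mul__(self, other):
        # Multiplying quantities adds unit exponents: m * m -> m^2.
        return Quantity(self.value * other.value, _combine(self.units, other.units, +1))

    def __truediv__(self, other):
        # Dividing quantities subtracts unit exponents: m / s -> m s^-1.
        return Quantity(self.value / other.value, _combine(self.units, other.units, -1))

    def __add__(self, other):
        # Addition is only defined when the units agree exactly.
        if self.units != other.units:
            raise TypeError(f"cannot add {self.units} to {other.units}")
        return Quantity(self.value + other.value, self.units)

distance = Quantity(100.0, {"m": 1})
duration = Quantity(9.58, {"s": 1})
speed = distance / distance.__class__(9.58, {"s": 1})  # units: {"m": 1, "s": -1}
# distance + duration  # raises TypeError: incompatible units
```

This is also the brainteaser's answer in miniature: multiplication and division always produce *some* well-defined compound unit, whereas a sum of metres and seconds has no meaningful unit at all, so the operation is rejected.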
will be developed by reversible computation, since we will likely have hit the Landauer Limit for non-reversible computation by then, and in principle there is basically 0 limit to how much you can optimize for reversible computation, which leads to massive energy savings, and this lets you not have to consume as much energy as current AIs or brains today.
With respect, I believe this to be overly optimistic about the benefits of reversible computation.
Reversible computation means you aren’t erasing information, so you don’t lose energy in the form of heat (per Landauer[1][2]). But if you don’t erase information, you are faced with the issue of where to store it.
If you are performing a series of computations and only have a finite memory to work with, you will eventually need to reinitialise your registers and empty your memory, at which point you incur the energy cost that you had been trying to avoid. [3]
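For a sense of scale, the Landauer cost of an irreversible bit erasure is easy to compute directly (a minimal sketch; the function name is my own):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_kelvin):
    """Minimum energy dissipated as heat when irreversibly erasing one bit: k_B * T * ln(2)."""
    return k_B * temperature_kelvin * math.log(2)

# Erasing one bit at room temperature (300 K):
print(landauer_limit_joules(300.0))  # ~2.87e-21 J per bit
```

The per-bit cost is tiny, which is why the limit only matters at enormous computational scales; the point above is that reversible computation defers, rather than abolishes, this cost once finite memory forces re-initialisation.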
Epistemics:
I’m quite confident (95%+) that the above is true. (edit: RogerDearnaley’s comment has convinced me I was overconfident) Any substantial errors would surprise me.
I’m less confident in the footnotes.
A cute, non-rigorous intuition for Landauer’s Principle:
The process of losing track of (deleting) 1 bit of information means your uncertainty about the state of the environment has increased by 1 bit. The entropy of the environment must therefore increase by at least 1 bit’s worth.
Proof:
Rearrange the Landauer Limit to $T = \frac{E}{k_B \ln 2}$.
Now, when you add a small amount of heat $\delta Q$ to a system, the change in entropy is given by: $dS = \frac{\delta Q}{T}$
But the $E$ occurring in Landauer’s formula is not the total energy of a system, it is the small amount of energy required to delete the information. When it all ends up as heat, we can replace $\delta Q$ with it and we have: $dS = \frac{E}{T} = k_B \ln 2$
Compare this expression with the physicist’s definition of entropy. The entropy of a system is a scaling factor, $k_B$, times the logarithm of the number of micro-states that the system might be in, $S = k_B \ln \Omega$.
The choice of units obscures the meaning of the final term. $k_B \ln 2$ converted from nats to bits is just 1 bit.
Splitting hairs, some setups will allow you to delete information with a reduced or zero energy cost, but the process is essentially just “kicking the can down the road”. You will incur the full cost during the process of re-initialisation.
For details, see equation (4) and fig (1) of Sagawa, Ueda (2009).
Disagree, but I sympathise with your position.
The “System 1/2” terminology ensures that your listener understands that you are referring to a specific concept as defined by Kahneman.
Signalling that I do not like linkposts to personal blogs.