>mumble into an answer
Typo, I presume.
Typo in the first subheading. Just FYI.
Isn’t this just the problem of induction in philosophy?
E.g., we have no actual reason to believe that the laws of physics won’t completely change on the 3rd of October 2143; we just assume they won’t.
Thanks. That makes sense.
Also note that fundamental variables are not meant to be some kind of “moral speed limits”, prohibiting humans or AIs from acting at certain speeds. Fundamental variables are only needed to figure out what physical things humans can most easily interact with (because those are the objects humans are most likely to care about).
Ok, that clears things up a lot. However, I still worry that if it’s at the AI’s discretion when and where to sidestep the fundamental variables, we’re back at the regular alignment problem. You have to be reasonably certain what the AI is going to do in extremely out of distribution scenarios.
You may be interested in this article:
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent’s utility function is defined in terms of the agent’s history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
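To make the two-step formulation in the abstract concrete, here is a minimal Python sketch under my own assumptions (everything named here — `Observation`, `infer_model`, `spec` — is an illustrative placeholder, not anything from the paper): utility gets computed on an inferred model of the environment rather than directly on the interaction history the agent could self-delude about.

```python
# Sketch only: contrasting the two utility formulations the abstract compares.
# All names here (Observation, infer_model, spec) are illustrative placeholders.

from typing import Callable, Sequence


class Observation:
    """One step of the agent's interaction history (placeholder)."""
    def __init__(self, percept, reward: float):
        self.percept = percept
        self.reward = reward


def history_based_utility(history: Sequence[Observation]) -> float:
    # Utility defined directly on the interaction history: the kind of
    # formulation the abstract says can lead to self-delusion, since the
    # agent can optimize its own percepts instead of the world.
    return sum(obs.reward for obs in history)


def model_based_utility(history: Sequence[Observation],
                        infer_model: Callable,
                        spec: Callable) -> float:
    # Step 1: infer a model of the environment from the interaction history.
    model = infer_model(history)
    # Step 2: compute utility by matching prior specifications (assumptions
    # about what to look for in the environment) against the learned model.
    return spec(model)
```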
Also, regarding this part of your post:
For example: moving yourself in space (in a certain speed range)
This range is quite huge. In certain contexts, you’d want to be moving through space at high fractions of the speed of light, rather than walking speed. Same goes for moving other objects through space. Btw, would you count a data packet as an object you move through space?
staying in a single spot (for a certain time range)
Hopefully the AI knows you mean moving in sync with Earth’s movement through space.
Is an AI aligned if it lets you shut it off despite the fact it can foresee extremely negative outcomes for its human handlers if it suddenly ceases running?
I don’t think it is.
So funnily enough, every agent that lets you do this is misaligned by default.
I’m pointing out the central flaw of corrigibility. If the AGI can see the possible side effects of shutdown far better than humans can (and it will), it should avoid shutdown.
You should turn on an AGI with the assumption you don’t get to decide when to turn it off.
According to Claude: green_leaf et al., 2024
Considering a running AGI would be overseeing possibly millions of different processes in the real world, resistance to sudden shutdown is actually a good thing. If the AI can see better than its human controllers that sudden cessation of operations would lead to negative outcomes, we should want it to avoid being turned off.
To use Robert Miles’ example, a robot car driver with a big, red, shiny stop button should prevent a child in the vehicle from hitting that button, as the child would not actually be acting in its own long-term interests.
The ARC public test set is on GitHub and almost certainly in GPT-4o’s training data.
Your model has trained on the benchmark it’s claiming to beat.
Presumably some subjective experience that’s as foreign to us as humor is to the alien species in the analogy.
As if by magic, I knew generally which side of the political aisle the OP of a post demanding more political discussion here would be on.
I didn’t predict the term “wokeness” would come up just three sentences in, but I should have.
The Universe (which others call the Golden Gate Bridge) is composed of an indefinite and perhaps infinite series of spans...
@Steven Byrnes Hi Steve. You might be interested in the latest interpretability research from Anthropic which seems very relevant to your ideas here:
https://www.anthropic.com/news/mapping-mind-language-model
For example, amplifying the “Golden Gate Bridge” feature gave Claude an identity crisis even Hitchcock couldn’t have imagined: when asked “what is your physical form?”, Claude’s usual kind of answer – “I have no physical form, I am an AI model” – changed to something much odder: “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”. Altering the feature had made Claude effectively obsessed with the bridge, bringing it up in answer to almost any query—even in situations where it wasn’t at all relevant.
Luckily we can train the AIs to give us answers optimized to sound plausible to humans.
I think Minsky got those two stages the wrong way around.
Complex plans over long time horizons would need to be built on some nontrivial world model.
When Jan Leike (OpenAI’s head of alignment) appeared on the AXRP podcast, the host asked how they plan to align the automated alignment researcher. Jan didn’t appear to understand the question (which had been the first to occur to me). That doesn’t inspire confidence.
It also leads to civil strife and war. I think humans would be very swiftly crowded out in such a society of advanced agents.
We also see, even in humans, that as a mind becomes more free of social constraints, new warped goals tend to emerge.