Any easy quick way to test is to offer some free coaching in this method.
Matt Goldenberg
Can you say more about how you’ve used this personally or with clients? What approaches did you try that didn’t work, and how has this changed over time, if at all, to become more effective?
There’s a lot here that’s interesting, but it’s hard for me to tell from just your description how battle-tested this is
What would the title be?
I still don’t quite get it. We already have an Ilya Sutskever who can make type 1 and type 2 improvements, and we don’t see the sort of jumps in days you’re talking about (I mean, maybe we do, and they just look discontinuous because of the release cycles?)
Why do you imagine this? I imagine we’d get something like one Einstein from such a regime, which would maybe speed up progress relative to existing AI labs by 1.2x or something? Eventually this gain compounds, but I imagine that could be relatively slow and smooth, with the occasional discontinuous jump when something truly groundbreaking is discovered
Right, and per the second part of my comment—insofar as consciousness is a real phenomenon, there’s an empirical question of whether whatever frame-invariant definition of computation you’re using is the correct one.
Do you think wants that arise from conscious thought processes are equally valid to wants that arise from feelings? How do you think about that?
while this paradigm of ‘training a model that’s an agi, and then running it at inference’ is one way we get to transformative agi, i find myself thinking that it probably WON’T be the first transformative ai, because my guess is that there are lots of tricks using lots of compute at inference to get not-quite-transformative ai to transformative ai.
my guess is that getting to that transformative level is gonna require ALL the tricks and compute, and will therefore eke out being transformative BY utilizing all those resources.
one of those tricks may be running millions of copies of the thing in an agentic swarm, but i would expect that to be merely a form of inference time scaling, and therefore wouldn’t expect ONE of those things to be transformative agi on its own.
and i doubt that these tricks can funge against train time compute, as you seem to be assuming in your analysis. my guess is that you hit diminishing returns for various types of train compute, then diminishing returns for various types of inference compute, and that we’ll get to a point where we need to push both of them to that point to get transformative ai
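to make ‘tricks using lots of compute at inference’ concrete, here’s a toy best-of-N sketch in python. everything here — the weak stochastic “model”, the exact verifier, the arithmetic task — is invented purely for illustration, not a claim about how any real lab does it; the point is just that an unreliable system can be made much more reliable purely by spending more compute per query.

```python
import random

def weak_model(question, rng):
    """Stand-in for a noisy model: answers a sum correctly only ~30% of the time."""
    a, b = question
    return a + b if rng.random() < 0.3 else a + b + rng.randint(1, 5)

def verifier(question, answer):
    """Cheap check on a proposed answer (exact, for this toy task)."""
    a, b = question
    return answer == a + b

def best_of_n(question, n, seed=0):
    """Inference-time scaling: sample n answers, return one the verifier accepts."""
    rng = random.Random(seed)
    for _ in range(n):
        ans = weak_model(question, rng)
        if verifier(question, ans):
            return ans
    return ans  # all n samples failed verification; fall back to the last one

q = (17, 25)
# more samples per query -> higher chance of a verified-correct answer
print(best_of_n(q, n=1, seed=1), best_of_n(q, n=32, seed=1))
```

with a ~30%-accurate sampler, 32 samples push the chance of at least one verified-correct answer above 99.99% — same model, same training, just more inference compute.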
This seems arbitrary to me. I’m bringing in bits of information on multiple layers when I write a computer program to calculate the thing and then read out the result from the screen
Consider: if the transistors on the computer chip were moved around, would it still process the data in the same way and yield the correct answer?
Yes under some interpretation, but no from my perspective, because the right answer is about the relationship between what I consider computation and how I interpret the results I’m getting
But the real question for me is—under a computational perspective of consciousness, are there features of this computation that actually correlate to strength of consciousness? Does any interpretation of computation get equal weight? We could nail down a precise definition of what we mean by consciousness that we agreed on that didn’t have the issues mentioned above, but who knows whether that would be the definition that actually maps to the territory of consciousness?
For me the answer is yes. There’s some way of interpreting the colors of grains of sand on the beach as they swirl in the wind that would perfectly implement the Miller-Rabin primality test algorithm. So is the wind + sand computing the algorithm?
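For concreteness, here’s a standard sketch of the algorithm being invoked (textbook Miller-Rabin; the argument above is that, on a sufficiently permissive view, some mapping from swirling sand onto the execution trace of this program would count as running it):

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test: False means n is definitely composite,
    True means n is prime with overwhelming probability."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):  # handle small primes and their multiples directly
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(miller_rabin(97))   # → True
print(miller_rabin(100))  # → False
```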
No, people really do see it; that wispiness can be crisp and clear
I’m not the most visual person. But occasionally when I’m reading I’ll start seeing the scene. I then get jolted out of it when I realize I don’t know how I’m seeing the words as they’ve been replaced with the imagined visuals
I used to think “getting lost in your eyes” was a metaphor, until I made eye contact with a particularly beautiful woman in college and found myself losing track of where I was and what I was doing.
Tad James has a fascinating theory called Time Line Therapy. In it, he explores how different people represent their timelines, and his theory is that shifting those representations will change fundamental ways you relate to the world.
fwiw i think that your first sentence makes sense, but i don’t understand why the second one follows
i think people OBVIOUSLY have a sense of what meaning is, but it’s really hard to describe
ah that makes sense
in my mind this isn’t resources flowing to elsewhere, it’s either:
An emotional learning update
A part of you that hasn’t been getting what it wants speaking up.
this is great, thanks for sharing
in my model that happens through local updates, rather than a global system
for instance, if i used my willpower to feel my social anxiety completely (instead of the usual strategy of suppression) while socializing, i might get some small or large reconsolidation updates to the social anxiety, such that that part thinks it’s needed in fewer situations or not at all
alternatively, the part that has the strategy of going to socialize and feeling confident may gain some more internal evidence, so it wins the internal conflict slightly more (but the internal conflict is still there and causes a drain)
i think the sort of global evaluation you’re talking about is pretty rare, though something like it can happen when someone e.g. reaches a deep state of love through meditation, and is then able to access lots of their unloved parts that are downstream TRYING to get to that love, and suddenly a big shift happens to the whole system simultaneously (another type of global reevaluation can take place through reconsolidating deep internal organizing principles like fundamental ontological constraints or attachment style)
also, this ‘subconscious parts going on strike’ theory makes slightly different predictions than the ‘is it good for the whole system/live’ theory
for instance, i predict that you can have ‘dead parts’ that e.g. give people social anxiety based on past trauma, even though it’s no longer actually relevant to their current situation.
and that if you override this social anxiety using ‘live willpower’ for a while, you can get burnout, even though the willpower is in some sense ‘correct’ about what would be good for the overall flourishing of the system given the current reality.
A lot of people are looking at the implications of o1’s training process as a future scaling paradigm, but it seems to me that this implementation of applying inference-time compute to just-in-time fine-tune the model for hard questions is equally promising: it may have equally impressive results if it scales with compute, and it has equal potential in terms of low-hanging fruit to be picked to improve it.
Don’t sleep on test time training as a potential future scaling paradigm.
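To make “test time training” concrete, here is a deliberately tiny sketch of the idea — adapt the model to the current query at inference time before answering. Everything in it (the 1-D task, the linear model, the neighbor-weighting scheme, the hyperparameters) is invented for illustration; real test-time training fine-tunes a neural net, often on data retrieved or constructed from the test input, but the shape of the move is the same: spend extra compute per query on a brief, query-specific training step.

```python
import math
import random

def make_data(n=200, seed=0):
    """Toy 1-D regression task with a nonlinear target y = x^2."""
    rng = random.Random(seed)
    return [(x, x * x) for x in (rng.uniform(-2, 2) for _ in range(n))]

def fit_linear(data, w=0.0, b=0.0, steps=500, lr=0.01, weights=None):
    """Plain gradient descent on (optionally example-weighted) squared error for y ≈ w*x + b."""
    if weights is None:
        weights = [1.0] * len(data)
    total = sum(weights)
    for _ in range(steps):
        gw = gb = 0.0
        for (x, y), wt in zip(data, weights):
            err = (w * x + b) - y
            gw += wt * err * x
            gb += wt * err
        w -= lr * gw / total
        b -= lr * gb / total
    return w, b

def predict_ttt(data, x_query, w, b, tau=0.3):
    """Test-time training step: briefly re-fit the model with extra weight
    on training examples near the current query, then answer."""
    weights = [math.exp(-((x - x_query) ** 2) / tau) for x, _ in data]
    w2, b2 = fit_linear(data, w, b, steps=1000, lr=0.05, weights=weights)
    return w2 * x_query + b2

data = make_data()
w, b = fit_linear(data)                  # one global model, trained once
x = 1.5
static = w * x + b                       # frozen-model answer
adapted = predict_ttt(data, x, w, b)     # extra inference-time compute per query
print(abs(static - x * x), abs(adapted - x * x))
```

The global linear model is badly mismatched to the quadratic target, but the brief query-local re-fit recovers a much better answer — the gain comes entirely from inference-time compute, with no change to the training run.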
I think the model of “Burnout as shadow values” is quite important and load-bearing in my own model of working with many EAs/Rationalists. I don’t think I first got it from this post, but I’m glad to see it written up so clearly here.