Tenoke
If we get to the point where prediction markets actually direct policy, then yes, you need them to be very deep (which in at least some cases is expected to happen naturally, or can be subsidized). But you also want to base the decision on a deeper analysis than just the resulting percentages: depth of the market, analysis of unusually large trades, blocking bad actors, etc.
Weed smells orders of magnitude stronger than many powders, and I imagine it releases far more particles into the air. But assuming this is doable for well-packed fentanyl, is there a limit? Can you safely expose dogs to enough of, say, carfentanil to start training them initially? Lofentanil?
And if it’s detectable by dogs, surely it can’t be far beyond our capabilities to create a sensor that detects airborne particles at the same fidelity as a dog’s nose by copying the general functionality of olfactory receptors and neurons, if cost isn’t a big issue.
I think you are overrating it. The biggest concern comes from whoever trains a model that passes some threshold in the first place, not from a model that one actor has been using for a while getting leaked to another actor. The bad actor who gets access to the leak is always going to be behind in multiple ways in this scenario.
>Once again, open weights models actively get a free exception, in a way that actually undermines the safety purpose.
Open weights getting a free exception doesn’t seem that bad to me. Yes, on one hand it increases the chance of a bad actor getting a cutting-edge model, but on the other hand the financial incentives are weaker, and it brings more capability to good actors outside of the top 5 companies earlier. Those weights can also be used for testing, safety work, etc. I think what’s released openly will always be a bit behind anyway (and thus likely fairly safe), so at least everyone else can benefit.
They are not full explanations, but they are as far as I, at least, can get.
>tells you more about what exists
It’s still more satisfying, because a state of ~everything existing is more ‘stable’ than a state of a specific something existing, in exactly the same way that nothing makes more sense to me than something as a default state, which is why I’m asking the question in the first place. Nothing existing and everything existing just require less explanation than a specific something existing. That doesn’t mean they necessarily require zero explanation.
And if everything mathematically describable and consistent/computable exists, I can more easily wrap my head around it not requiring an origin, in a similar way to how I don’t require an origin for actual mathematical objects, and without it necessarily seeming like a type error (though that’s the counterargument I consider most here) the way most explanations do.
>because how can you have a “fluctuation” without something already existing, which does the fluctuating
That’s at least somewhat more satisfying to me because we already know about virtual particles and fluctuations from Quantum Mechanics, so it’s at least a recognized low-level mechanism that does cause something to exist even while the state is zero energy (nothing).
It still leaves us with nothing existing over something overall in at least one sense (zero total energy), it is already demonstrable with fields, which are at the lowest level of what we currently know about how the universe works, and it can be examined and thought about further.
The only answers to why there is something instead of nothing that I currently find appealing are:
1. MUH is true, and all universes that can be defined mathematically exist. It’s not a specific something that exists but all internally consistent somethings.
or
2. The default state is nothing, but there are small positive and negative fluctuations (either literal quantum fluctuations or something similar at a lower level), and over infinite time those fluctuations eventually result in a huge something like ours and other universes.
Also, even if 2 happens only at the level of regular quantum fluctuations, there’s a non-zero chance of a new universe emerging due to fluctuations after heat death, which over infinite time means it is bound to happen: a new universe, or a rebirth of ours from scratch, will eventually emerge (see the sketch below).
Also, 1 can happen via 2 if the fluctuations are at such a low level that any possible mathematical structure eventually emerges over infinite time.
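To spell out the ‘bound to happen over infinite time’ step, here is a minimal sketch of my own, assuming each post-heat-death epoch has some fixed, independent fluctuation probability $p > 0$ (the value of $p$ and the independence are both assumptions). The chance that no new universe emerges across $n$ epochs is

$$P(\text{no new universe in } n \text{ epochs}) = (1-p)^n \;\longrightarrow\; 0 \quad \text{as } n \to \infty,$$

so over an unbounded number of epochs the event occurs with probability 1. The caveat is that if the per-epoch probabilities $p_n$ shrink fast enough that $\sum_n p_n$ converges, the guarantee no longer holds.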
I am dominated by it, and okay, I see what you are saying. Whichever scenario results in a higher chance of human control of the light cone is the one I prefer, and these considerations are relevant only where we don’t control it.
Sure, but 1. I only put ~80% on MWI/MUH etc., and 2. I’m talking about optimizing for more positive-human-lived-seconds, not just for a binary ‘I want some humans to keep living’.
I have a preference for minds as close to mine as possible continuing to exist, assuming their lives are worth living. If it’s misaligned enough that the remaining humans don’t have good lives, then yes, it doesn’t matter, but I’d lead with that rather than just the deaths.
And if they do have lives worth living and don’t end up being the last humans, then that leaves us with a lot more positive-human-lived-seconds in the 2B death case.
Okay, then what are your actual probabilities? I’m guessing it’s not sub-20%, otherwise you wouldn’t just say “<50%”, because for me preventing, say, a 10% chance of extinction is much more important than even a 99% chance of 2B people dying. And your comment was specifically dismissing focus on full extinction due to the <50% chance.
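To make that comparison concrete under the positive-human-lived-seconds framing, here is a rough sketch with illustrative numbers of my own (none of them come from the original comment): roughly $2\times10^9$ seconds per full life, and extinction foreclosing at least $N = 10^{11}$ future lives, while the 2B-deaths scenario leaves the future intact. Counting only the foreclosed future on the extinction side (an underestimate) and full lifetimes on the 2B side (an overestimate):

$$\underbrace{0.10 \times 10^{11} \times 2\times10^{9}}_{\text{10\% chance of extinction}} = 2\times10^{19} \text{ lost seconds} \;>\; \underbrace{0.99 \times 2\times10^{9} \times 2\times10^{9}}_{\text{99\% chance of 2B deaths}} \approx 4\times10^{18} \text{ lost seconds}.$$

That is already a factor of ~5 with a deliberately conservative $N$, and the gap grows linearly with any larger estimate of the foreclosed future.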
>unlikely (<50% likely).
That’s a bizarre bar to me! 50%!? I’d be worried if it was 5%.
It’s a potentially useful data point but probably only slightly comparable. Big, older, well-established companies face stronger and different pressures than small ones and do have more to lose. For humans that’s much less the case after a point.
>”The problem is when people get old, they don’t change their minds, they just die. So, if you want to have progress in society, you got to make sure that, you know, people need to die, because they get old, they don’t change their mind.”
That’s valid today, but I am willing to bet that a big reason old people change their minds less is biological: less neuroplasticity, accumulated damage, mental fatigue, etc. If we are fixing aging and we fix those as well, it should be less of an issue.
Additionally, if we are in some post-death utopia, I have to assume we have useful, benevolent AI solving our problems, and that ideally it doesn’t matter all that much who held a lot of wealth or power before.
>He does not have a good plan for alignment, but he is far less confused about this fact than most others in similar positions.
Yes, he seems like a great guy, but he doesn’t just come across as not having a good plan; he comes across as them being completely disconnected about having a plan or doing much of anything.
>JS: If AGI came way sooner than expected we would definitely want to be careful about it.
>DP: What would being careful mean? Presumably you’re already careful, right?
And yes, aren’t they already being careful? Well, it sounds like no.
>JS: Maybe it means not training the even smarter version or being really careful when you do train it. You can make sure it’s properly sandboxed and everything. Maybe it means not deploying it at scale or being careful about what scale you deploy it at
“Maybe”? That’s a lot of maybes for just potentially doing the basics. Their whole approximation of a plan is ‘maybe not deploying it at scale’, or ‘maybe’ stopping training after that, and only theoretically considering sandboxing it? That seems like kind of a bare minimum, and it’s as if he is guessing based on having been around, not based on any real plans they have.
He then goes on to mollify: it probably won’t happen in a year, it might be a whole two or three years, and this is where they are at.
>First of all, I don’t think this is going to happen next year but it’s still useful to have the conversation. It could be two or three years instead.
It comes off as if all their talk of Safety is complete lip service, even if he agrees with the need for Safety in theory. If you were ‘pleasantly surprised and impressed’, I shudder to imagine what the responses would have had to be to leave you disappointed.
Somewhat tangential, but when you list the Safety people who have departed, I’d have preferred to see some sort of comparison group or base rate, as it always raises a red flag for me when only absolute numbers are provided.
I did a quick check with 4o by changing your prompt from ‘AGI Safety or AGI Alignment’ to ‘AGI Capabilities or AGI Advancement’ and got 60% departed (compared to your 70% for AGI Safety). I do think what we are seeing is alarming, but it’s too easy for either ‘side’ to accidentally exaggerate via framing if you don’t watch for that sort of thing.
When considering that, my thinking was that I’d expect the last day to be slightly after, but the announcement can be slightly before, since the announcement doesn’t need to fall exactly on the last day and can, and often would, come a little earlier, e.g. on the first day of his last week.
The 21st, when Altman was reinstated, is a logical date for the resignation, and it is now within a week of 6 months ago, which is why a notice period, an agreement to wait ~half a year, or something similar was the first thing I thought of, since the ultimate reason he is quitting is obviously rooted in what happened around then.
>Is there a particular reason to think that he would have had an exactly 6-month notice?
You are right, there isn’t, but 1, 3, or 6 months is where I would have put the highest probability a priori.
>Sora & GPT-4o were out.
Sora isn’t out out, or at least not in the way 4o is out, and Ilya isn’t listed as a contributor on it in any form (compared to being an ‘additional contributor’ for GPT-4 or ‘additional leadership’ for GPT-4o), and in general I doubt it had much to do with the timing.
GPT-4o, of course, makes a lot of sense timing-wise (it’s literally the next day!), and he is listed on it (though not as one of the many contributors or leads). But if he wasn’t in the office during that time (or is that just a rumor?), it’s just not clear to me whether he was actually participating in getting it out as his final project (which, yes, is very plausible) or whether he was just asked not to announce his departure until after the release, given that in that case the two happen to be so close in time.
>Reasons are unclear
This is happening exactly 6 months after the November fiasco (the vote to remove Altman was on Nov 17th), which is likely what his notice period was, especially if he hasn’t been in the office since then.
Are the reasons really that unclear? The specifics of why he wanted Altman out might be, but he is ultimately clearly leaving because he didn’t think Altman should be in charge, while Altman thinks otherwise.
I own only ~5 physical books now (I prefer digital), and 2 of them are Thinking, Fast and Slow. Despite his not being on the site, I’ve always thought of him as something of a founding grandfather of LessWrong.
Survival is obviously much better because 1. you can lose jobs but eventually still have a good life (think UBI at minimum), and 2. if you don’t like it, you can always kill yourself and end up in the same spot as the non-survival case anyway.