What are your timelines like? How long do YOU think we have left?
I know several CEOs of small AGI startups who seem to have gone crazy and told me that they are self inserts into this world, which is a simulation of their original self’s creation. However, none of them talk about each other, and presumably at most one of them can be meaningfully right?
One AGI CEO hasn’t gone THAT crazy (yet), but is quite sure that the November 2024 election will be meaningless because pivotal acts will have already occurred that make nation state elections visibly pointless.
Also I know many normies who can’t really think probabilistically and mostly aren’t worried at all about any of this… but one normie who can calculate is pretty sure that we have AT LEAST 12 years (possibly because his retirement plans won’t be finalized until then). He also thinks that even systems as “mere” as TikTok will be banned before the November 2024 election because “elites aren’t stupid”.
I think I’m more likely to be better calibrated than any of these opinions, because most of them don’t seem to focus very much on “hedging” or “thoughtful doubting”, whereas my event space assigns non-zero probability to ensembles that contain such features of possible futures (including these specific scenarios).
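(To make “assigns non-zero probability to ensembles” concrete, here is a minimal sketch of what such a hedged event space could look like. The scenario buckets echo the ones above, but every number is invented purely for illustration and is not anyone’s actual credence.)

```python
# Minimal sketch of "assigning non-zero probability to ensembles of scenarios".
# The buckets echo the scenarios above; all numbers are invented for illustration.

scenarios = {
    "a simulation-running-original-self story is basically right": 0.01,
    "a pivotal act lands before the November 2024 election": 0.02,
    "at least 12 more years (the calculating normie's floor)": 0.25,
    "something else / none of the above": 0.72,
}

# Hedging here just means no bucket gets exactly zero,
# so evidence for any of them can still move the posterior.
assert abs(sum(scenarios.values()) - 1.0) < 1e-9

for name, p in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{p:5.2f}  {name}")
```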
Wondering why this has so many disagreement votes. Perhaps people don’t like to see the serious topic of “how much time do we have left” alongside evidence that there’s a population of AI entrepreneurs who are so far removed from consensus reality that they now think they’re living in a simulation.
(edit: The disagreement for @JenniferRM’s comment was at something like −7. Two days later, it’s at −2)
For most of my comments, I’d almost be offended if I didn’t say something surprising enough to get a “high interestingness, low agreement” voting response. Excluding speech acts, why even say things if your interlocutor or full audience can predict what you’ll say?
And I usually don’t offer full clean proofs in direct words. Anyone still pondering the text at the end, properly, shouldn’t “vote to agree”, right? So from my perspective… it’s fine and sorta even working as intended <3
However, also, this is currently the top-voted response to me, and if William_S himself reads it I hope he answers here, if not with text then (hopefully? even better?) with a link to a response elsewhere?
((EDIT: Re-reading everything above this point, I notice that I totally left out the “basic take” that might go roughly like “Kurzweil, Altman, and Zuckerberg are right about compute hardware (not software or philosophy) being central, and there’s a compute bottleneck rather than a compute overhang, so the speed of history will KEEP being about datacenter budgets and chip designs, and those happen on 6-to-18-month OODA loops that could actually fluctuate based on economic decisions, and therefore it’s maybe 2026, or 2028, or 2030, or even 2032 before things pop, depending on how and when billionaires and governments decide to spend money”.))
Pulling honest posteriors from people who’ve “seen things we wouldn’t believe” gives excellent material for trying to perform aumancy… work backwards from their posteriors to possible observations, and then forwards again, toward what might actually be true :-)
It could just be because it reaches a strong conclusion on anecdotal/clustered evidence (e.g. it might say more about her friend group than anything else), along with claims to being better calibrated for weak reasons, which could be true, but seems not very epistemically humble.
Full disclosure: I downvoted karma because I don’t think it should be the top reply, but I did not agree or disagree.
But Jen seems cool, I like weird takes, and downvotes are not a big deal—just a part of a healthy contentious discussion.
However, none of them talk about each other, and presumably at most one of them can be meaningfully right?
Why can at most one of them be meaningfully right?
Would not a simulation typically be “a multi-player game”?
(But yes, if they assume that their “original self” was the sole creator (?), then they would all be some kind of “clones” of that particular “original self”. Which would surely increase the overall weirdness.)
These are valid concerns! I presume that if “in the real timeline” there was a consortium of AGI CEOs who agreed to share costs on one run, and fiddled with their self-inserts, then they… would have coordinated more? (Or maybe they’re trying to settle a bet on how the Singularity might counterfactually have happened in the event of this or that person experiencing this or that coincidence? But in that case I don’t think the self-inserts would be allowed to say they’re self-inserts.)
Like why not re-roll the PRNG, to censor out the counterfactually simulable timelines that included me hearing from any of the REAL “self-inserts of the consortium of AGI CEOs” (and so I only hear from “metaphysically spurious” CEOs)??
Or maybe the game engine itself would have contacted me somehow to ask me to “stop sticking causal quines in their simulation” and somehow I would have been induced by such contact to not publish this?
Mostly I presume AGAINST “coordinated AGI CEO stuff in the real timeline” along any of these lines because, as a type, they often “don’t play well with others”. Fucking oligarchs… maaaaaan.
It seems like a pretty normal thing, to me, for a person to naturally keep track of simulation concerns as a philosophic possibility (it’s kinda basic “high school theology” right?)… which might become one’s “one track reality narrative” as a sort of “stress-induced psychotic break away from a properly metaphysically agnostic mental posture”?
That’s my current working psychological hypothesis, basically.
But to the degree that it happens more and more, I can’t entirely shake the feeling that my probability distribution over “the time T of a pivotal act occurring” (distinct from when I anticipate I’ll learn that it happened, which of course must be LATER than both T and now) shouldn’t just include times in the past, but should actually be a distribution over complex numbers or something...
...but I don’t even know how to do that math? At best I can sorta see how to fit it into exotic grammars where it “can have happened counterfactually” or so that it “will have counterfactually happened in a way that caused this factually possible recurrence” or whatever. Fucking “plausible SUBJECTIVE time travel”, fucking shit up. It is so annoying.
Like… maybe every damn crazy AGI CEO’s claims are all true except the ones that are mathematically false?
How the hell should I know? I haven’t seen any not-plausibly-deniable miracles yet. (And all of the miracle reports I’ve heard were things I was pretty sure the Amazing Randi could have duplicated.)
All of this is to say, Hume hasn’t fully betrayed me yet!
Mostly I’ll hold off on performing normal updates until I see for myself, and hold off on performing logical updates until (again!) I see a valid proof for myself <3
I know several CEOs of small AGI startups who seem to have gone crazy and told me that they are self inserts into this world, which is a simulation of their original self’s creation
Do you know if the origin of this idea for them was a psychedelic or dissociative trip? I’d give it at least even odds, with most of the remaining chances being meditation or Eastern religions...
Wait, you know smart people who have NOT, at some point in their life: (1) taken a psychedelic, NOR (2) meditated, NOR (3) thought about any of buddhism, jainism, hinduism, taoism, confucianism, etc???
To be clear to naive readers: psychedelics are, in fact, non-trivially dangerous.
I personally worry I already have “an arguably-unfair and a probably-too-high share” of “shaman genes” and I don’t feel I need exogenous sources of weirdness at this point.
But in the SF bay area (and places on the internet memetically downstream from IRL communities there) a lot of that is going around, memetically (in stories about it) and perhaps mimetically (via monkey see, monkey do).
The first time you use a serious one you’re likely getting a permanent modification to your personality (+0.5 stddev to your Openness?) and arguably/sorta each time you do a new one, or do a higher dose, or whatever, you’ve committed “1% of a personality suicide” by disrupting some of your most neurologically complex commitments.
To a first approximation my advice is simply “don’t do it”.
HOWEVER: this latter consideration actually suggests: anyone seriously and truly considering suicide should perhaps take a low dose psychedelic FIRST (with at least two loving tripsitters and due care) since it is also maybe/sorta “suicide” but it leaves a body behind that most people will think is still the same person and so they won’t cry very much and so on?
To calibrate this perspective a bit, I also expect that even if cryonics works, it will also cause an unusually large amount of personality shift. A tolerable amount. An amount that leaves behind a personality that is similar-enough-to-the-current-one-to-not-have-triggered-a-ship-of-theseus-violation-in-one-modification-cycle. Much more than a stressful day and then bad nightmares and a feeling of regret the next day, but weirder. With cryonics, you might wake up to some effects that are roughly equivalent to “having taken a potion of youthful rejuvenation, and not having the same birthmarks, and also learning that you’re separated-by-disjoint-subjective-deaths from LOTS of people you loved when you experienced your first natural death” for example. This is a MUCH BIGGER CHANGE than just having a nightmare and waking up with a change of heart (and most people don’t have nightmares and changes of heart every night (at least: I don’t and neither do most people I’ve asked)).
Remember, every improvement is a change, though not every change is an improvement. A good “epistemological practice” is sort of an idealized formal praxis for making yourself robust to “learning any true fact” and changing only in GOOD ways from such facts.

A good “axiological practice” (which I don’t know of anyone working on except me (and I’m only doing it a tiny bit, not with my full mental budget)) is sort of an idealized formal praxis for making yourself robust to “humanely heartful emotional changes”(?) and changing only in <PROPERTY-NAME-TBD> ways from such events.
(Edited to add: Current best candidate name for this property is: “WISE” but maybe “healthy” works? (It depends on whether the Stoics or Nietzsche were “more objectively correct” maybe? The Stoics, after all, were erased and replaced by Platonism-For-The-Masses (AKA “Christianity”) so if you think that “staying implemented in physics forever” is critically important then maybe “GRACEFUL” is the right word? (If someone says “vibe-alicious” or “flowful” or “active” or “strong” or “proud” (focusing on low latency unity achieved via subordination to simply and only power) then they are probably downstream of Heidegger and you should always be ready for them to change sides and submit to metaphorical Nazis, just as Heidegger subordinated himself to actual Nazis without really violating his philosophy at all.)))
I don’t think that psychedelics fit neatly into EITHER category. Drugs in general are akin to wireheading, except wireheading is when something reaches into your brain to overload one or more of your positive-value-tracking-modules (as a trivially semantically invalid shortcut to achieving positive value “out there” in the state-of-affairs that your tracking modules are trying to track), but actual humans have LOTS of <thing>-tracking-modules and culture and science barely have any RIGOROUS vocabulary for any of them.
Note that many of these neurological <thing>-tracking-modules were evolved.
Also, many of them will probably be “like hands” in terms of AI’s ability to model them.
This is part of why AIs should be existentially terrifying to anyone who is spiritually adept.
AI that sees the full set of causal paths to modifying human minds will be “like psychedelic drugs with coherent persistent agendas”. Humans have basically zero cognitive security systems. Almost all security systems are culturally mediated, and then (absent complex interventions) lots of the brain stuff freezes in place around the age of puberty, and then other stuff freezes around 25, and so on. This is why we protect children from even TALKING to untrusted adults: they are too plastic and not savvy enough. (A good heuristic for the lowest level of “infohazard” is “anything you wouldn’t talk about in front of a six year old”.)
Humans are sorta like a bunch of unpatchable computers, exposing “ports” to the “internet”, where each of our port numbers is simply a lightly salted semantic hash of an address into some random memory location that stores everything, including our operating system.
Your word for “drugs” and my word for “drugs” don’t point to the same memory addresses in the computers implementing our souls. Also our souls themselves don’t even have the same nearby set of “documents” (because we just have different memories n’stuff)… but the word “drugs” is not just one of the ports… it is a port that deserves a LOT of security hardening.
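(A toy illustration of the “lightly salted semantic hash” metaphor, with the salt standing in for one person’s accumulated memories; this is purely illustrative and not a claim about how brains actually index anything.)

```python
import hashlib

def port_for(word: str, personal_salt: str, n_ports: int = 65536) -> int:
    """Toy model: a word, salted by one person's history, hashes to a 'port' number."""
    digest = hashlib.sha256(f"{personal_salt}:{word}".encode()).hexdigest()
    return int(digest, 16) % n_ports

# The same word lands on different "ports" for people with different salts/memories,
# which is the sense in which your "drugs" and my "drugs" address different places.
print(port_for("drugs", "alice-memories"))
print(port_for("drugs", "bob-memories"))
```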
The bible said ~”thou shalt not suffer a ‘pharmakeia’ to live” for REASONS.
I assume timelines are fairly long or this isn’t safety related. I don’t see a point in keeping PPUs, or even caring about NDA lawsuits which may or may not happen and would take years, in a short-timeline or doomed world.
I think having a probability distribution over timelines is the correct approach. Like, in the comment above:
I think I’m more likely to be better calibrated than any of these opinions, because most of them don’t seem to focus very much on “hedging” or “thoughtful doubting”, whereas my event space assigns non-zero probability to ensembles that contain such features of possible futures (including these specific scenarios).
Even in probabilistic terms, the evidence of OpenAI members respecting their NDAs makes it more likely that this was some sort of political infighting (EA-related) than sub-year takeoff timelines. I would be open to a 1-year takeoff, I just don’t see it happening given the evidence. OpenAI wouldn’t need to talk about raising trillions of dollars, companies wouldn’t be trying to commoditize their products, and the employees who quit OpenAI would speak up.
Political infighting is in general just more likely than very short timelines, which would run counter to most prediction markets on the matter. Not to mention, given it’s already happened with the firing of Sam Altman, it’s far more likely to have happened again.
If there were a probability distribution over timelines, current events would indicate that sub-3-year ones have negligible odds. If I am wrong about this, I implore the OpenAI employees to speak up. I don’t think normies misunderstand probability distributions; they just usually tend not to care about unlikely events.
No, OpenAI (assuming that it is a well-defined entity) also uses a probability distribution over timelines.
(In reality, every member of its leadership has their own probability distribution, and this translates to OpenAI having a policy and behavior formulated approximately as if there is some resulting single probability distribution).
The important thing is, they are uncertain about timelines themselves (in part, because no one knows how perplexity translates to capabilities; in part, because there might be differences in capabilities even at the same perplexity if the underlying architectures are different (e.g. in-context learning might depend on architecture even at fixed perplexity, and we do see a stream of potentially very interesting architectural innovations recently); in part, because it’s not clear how big the potential of “harness”/“scaffolding” is; and so on).
This does not mean there is no political infighting. But it happens against the background of them being correctly uncertain about true timelines...
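(A minimal sketch of what “some resulting single probability distribution” could look like if you model it as a simple weighted opinion pool over individual timeline credences; the buckets, weights, and numbers are all invented for illustration.)

```python
# Sketch: a linear opinion pool over individual timeline distributions.
# Buckets, weights, and numbers are all invented for illustration.

import numpy as np

buckets = ["<3y", "3-10y", ">10y"]

# Each row: one leader's personal distribution over the buckets (rows sum to 1).
individual = np.array([
    [0.10, 0.50, 0.40],
    [0.30, 0.50, 0.20],
    [0.05, 0.35, 0.60],
])

weights = np.array([0.4, 0.3, 0.3])   # how much each view shapes org behavior
pooled = weights @ individual         # the "resulting single distribution"

for bucket, p in zip(buckets, pooled):
    print(f"P(transformative AI in {bucket}) = {p:.2f}")
```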
Compute-wise, inference demands are huge and growing with the popularity of the models (look how much Facebook did to make Llama 3 more inference-efficient).
So if they expect models to become useful enough for almost everyone to want to use them, they should worry about compute, assuming they do want to serve people like they say they do (I am not sure how this looks for very strong AI systems; they will probably be gradually expanding access, and the speed of expansion might depend).
better calibrated than any of these opinions, because most of them don’t seem to focus very much on “hedging” or “thoughtful doubting”
new observations > new thoughts when it comes to calibrating yourself.
The best-calibrated people are people who get lots of interaction with the real world, not those who think a lot or have a complicated inner model. Tetlock’s superforecasters were gamblers and weathermen.