I agree that it sounds somewhat premature to write off Larry Page based on attitudes he had a long time ago, when AGI seemed more abstract and far away, and then never to seek out communication with him again later on. If that were Musk’s true and only reason for founding OpenAI, then I agree that this was a communication fuckup.
However, my best guess is that this story about Page was interchangeable with a number of alternative plausible criticisms of competitors building AGI that Musk would likely have come up with in nearby worlds. People like Musk (and Altman too) tend to have a desire to do the most important thing and the belief that they can do this thing a lot better than anyone else. On that assumption, it’s not too surprising that Musk found a reason for having to step in and build AGI himself. In fact, on this view, we should expect to see surprisingly little sincere exploration of “joining someone else’s project to improve it” solutions.
I don’t think this is necessarily a bad attitude. Sometimes people who think this way are right in the specific situation. It just means that we see the following patterns a lot:
Ambitious people start their own thing rather than join some existing thing.
Ambitious people have falling-outs with each other after starting a project together, when the question of “who eventually gets de facto ultimate control” wasn’t totally specified from the start.
(Edited away a last paragraph that used to be here, 50 minutes after posting. I wanted to express something like “Sometimes communication only prolongs the inevitable,” but that sounds maybe a bit too negative, because even if you’re going to fall out eventually, good communication can probably make it less bad.)
I totally agree. And I also think that all involved are quite serious when they say they care about the outcomes for all of humanity. So I think in this case history turned on a knife’s edge; had he and Page thought and communicated even a little more clearly, Musk might at least not have done this much harm.
But I do agree that there’s some motivated reasoning happening there, too. In support of your point that Musk might find an excuse to do what he emotionally wanted to anyway (become humanity’s savior and perhaps emperor for eternity): Musk did also express concern about DeepMind making Hassabis the effective emperor of humanity, which seems much stranger—Hassabis’ values appear to be quite standard humanist ones, so you’d think having him in charge of a project with the clear lead would be a best-case scenario for anything other than being in charge yourself. So yes, I do think Musk, Altman, and people like them also have some powerful emotional drives toward doing grand things themselves.
It’s a mix of motivations, noble and selfish, conscious and unconscious. That’s true of all of us all the time, but it becomes particularly salient and worth analyzing when the future hangs in the balance.
From my observations and experiences, I don’t see sincere ethics motivations anymore.
I see Elon gaslighting about the LLM-powered bot problem on the platform he bought.
Side note: bots interacting with humans and collecting real-time novel human data over time is hugely valuable from a training perspective. Having machines simulate all future data is fundamentally not plausible, because it can’t account for actual human evolution over generations.
X has also taken maximally profit-motivated actions, at users’ expense. For instance: allowing anyone to get a blue check by buying one. This literally does nothing. X’s spin that it helps is deliberate deception, because they aren’t actually dumb guys. With the amount of stolen financial data and PII for sale on the black market, it adds literally zero friction for scammers.
Scammers happen to have the highest profit margins, so what they’ve done is actually made it harder for ethics to prevail. Over time, I, an ethical entrepreneur or artist, must constantly compromise and adopt less ethical tactics to stay competitive with the ever-accelerating crop of crooks who keep scaling their LLM botnets against each other. It’s a forcing function.
Why would X do this? Profit. As that scenario scales, so do their earnings.
(Plus they get to train models on all that data, which only they own.)
Is anything fundamentally flawed in that logic? 👆
Let’s look at OpenAI, and importantly, their chosen “experimental partnership programs” (meaning: companies they give access to unreleased models the public can’t get).
Just about every major YC or Silicon Valley venture-backed player that has emerged over the past two years has had this “partnership” privilege. Meanwhile, all the LLM bots being deployed untraceably are pushing a heavy message of “Build In Public.” (A message all the top execs at funds and frontier labs also espouse.)
So… billionaires and giant corporations get to collude to keep the most capable inference power among themselves, while encouraging small businesses and inventors to publish all their tech?
That’s what I did. I never got a penny, despite inventing an objectively best-in-class tech stack, all of which is on the timeline and easily verifiable.
But I can’t compete. And guess who emerged mere months after I had my breakthrough? I document it here: https://youtu.be/03S8QqNP3-4?si=chgiBocUkDn-U5E6
Anyway. I’m not here to cause any trouble or sell a sob story. But I think you’re wrong in your characterization of the endemic capitalistic players. And mind you, they’re all collectively in control of AGI, while actively making these choices and taking these actions against actual humans (me, and I’m sure others I can’t connect with, because the algorithms don’t favor me).
I think you’re assuming a sharp line between sincere ethics motivations and self-interest. In my view, that line doesn’t usually exist. People are prone to believe things that suit their self-interest. That motivated reasoning is the biggest problem with public discourse. People aren’t lying, they’re just confused. I think Musk definitely, and probably even Altman, believes he’s doing the best thing for humanity—they’re just confused and not making the effort to get un-confused.
I’m really sorry all of that happened to you. Capitalism is a harsh system, and humans are harsh beings when we’re competing. And confused beings, even when we’re trying not to be harsh. I didn’t have time to go through your whole story, but I fully believe you were wronged.
I think most villains are the heroes of their own stories. Some of us are more genuinely altruistic than others—but we’re all confused in our own favor to one degree or another.
So reducing confusion while playing to everyone’s desire to be a hero is one route to survival.
Musk did also express concern about DeepMind making Hassabis the effective emperor of humanity, which seems much stranger—Hassabis’ values appear to be quite standard humanist ones, so you’d think having him in charge of a project with the clear lead would be a best-case scenario for anything other than being in charge yourself.
It seems the concern was that DeepMind would create a singleton, whereas their vision was for many people (potentially with different values) to have access to it. I don’t think that’s strange at all—it’s only strange if you assume that Musk and Altman would believe that a singleton is inevitable.
Musk:
If they win, it will be really bad news with their one mind to rule the world philosophy.
Altman:
The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest.
That makes sense under certain assumptions—I find them so foreign I wasn’t thinking in those terms. I find this move strange if you worry about either alignment or misuse. If you hand AGI to a bunch of people, one of them is prone to either screw up and release a misaligned AGI, or deliberately use their AGI to self-improve and either take over or cause mayhem.
To me these problems both seem highly likely. That’s why the move of responding to concern over AGI by making more AGIs makes no sense to me. I think a singleton in responsible hands is our best chance at survival.
If you think alignment is so easy nobody will screw it up, or if you strongly believe that an offense-defense balance will strongly hold so that many good AGIs safely counter a few misaligned/misused ones, then sure. I just don’t think either of those are very plausible views once you’ve thought back and forth through things.
Cruxes of disagreement on alignment difficulty explains why I think anybody who’s sure alignment is super easy is overconfident (as is anyone who’s sure it’s really, really hard); we just haven’t done enough analysis or experimentation yet.
If we solve alignment, do we die anyway? addresses why I think the offense-defense balance is almost guaranteed to shift toward offense with self-improving AGI, so a massively multipolar scenario dooms us to misuse.
My best guess is that people who think open-sourcing AGI is a good idea either are thinking only of weak “AGI” and not the next step to autonomously self-improving AGI, or they’ve taken an optimistic guess at the offense-defense balance with many human-controlled real AGIs.