It’s all relative. “Are extremely human, not alien at all” --> Are you seriously saying that e.g. if and when we one day encounter aliens on another planet, the kind of aliens smart enough to build an industrial civilization, they’ll be more alien than LLMs? (Well, obviously they won’t have been trained on the human Internet. So let’s imagine we took a whole bunch of them as children and imported them to Earth and raised them in some crazy orphanage where they were forced to watch TV and read the internet and play various video games all day.)
Because I instead say that all your arguments about similar learning algorithms, similar cognitive biases, etc. will apply even more strongly (in expectation) to these hypothetical aliens capable of building industrial civilization. So the basic relationship of humans<aliens<LLMs will still hold; LLMs will still be more alien than aliens.
Yes! Obviously more alien than our LLMs. LLMs are distillations of aggregated human linguistic cortices. Anytime you train one network on the output of others, you clone/distill the original(s)! The algorithmic content of NNs is determined by the training data, and the data in question here is human thought.
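(To be concrete about what I mean by "distill": the standard ML sense is training a student network to match a teacher's output distribution. Here is a minimal sketch, assuming a Hinton-style soft-target setup in PyTorch; the function name and temperature are illustrative, not anyone's actual training code:)

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # The teacher's softened output distribution is the training target.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence pulls the student toward the teacher's distribution;
    # the T**2 factor keeps gradient scale comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * (T ** 2)
```

Training an LLM on next-token prediction over human text is this same move, with humanity as the teacher ensemble.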
This was always the way it was going to be, this was all predicted long in advance by the systems/cybernetics futurists like Moravec—AI was/will be our mind children.
EY misled many people here with the bad “human mindspace is narrow” meme; I mostly agree with Quintin’s recent takedown, but I of course also objected way back when.
Nice to see us getting down to cruxes.
I really don’t buy this. To be clear: Your answer is Yes, including in the variant case I proposed in parentheses, where the aliens were taken as children and raised in a crazy Earth orphanage?
I didn’t notice the part in parentheses at all until just now—was it added in an edit? The parenthetical doesn’t really fit the original question, to my reading.
If you took alien children and raised them as earthlings you’d get mostly earthlings in alien bodies—given some assumptions: that they had roughly similar-sized brains and reasonably parallel evolution. Something like this has happened historically—when uncontacted tribal children are raised in a distant advanced civ, for example. Western culture—WEIRD—has so pervasively colonized and conquered much of the memetic landscape that we have forgotten how diverse human mindspace can be (in some sense it could be WEIRD that was the alien invasion).
Also, more locally on Earth: Japanese culture is somewhat alien compared to Western English/American culture. I expect actual alien culture to be more alien.
I’m pretty sure I didn’t edit it, I think that was there from the beginning.
OK, cool. So then you agree that LLMs will be more alien than aliens-who-were-raised-on-Earth-in-a-crazy-internet-text-pretraining-orphanage?
I don’t necessarily agree—as I don’t consider either to be very alien. Minds are software/memetic constructs, so you are just comparing human software running on GPUs vs human software running on alien brains. How different that is, and which is further from human software running on ape brains, depends on many cumbersome details.