It is not on my radar because either I don’t understand intelligence properly or I disagree about something fundamental about it. Basically my brain threw in the towel when Eliezer wrote that nonsentient superintelligence is possible. The what? How can you have intelligence without sentience? Intelligence, among humans, means to talk intelligently, to hold conversations about deep topics. It is social. (Optimization as such is not necessarily intelligence; Solomonoff induction shows you could just optimize things by trial and error.) How could anything non-sentient talk at all? What would its talk be like? Would it be unable to talk about itself, about how it feels? Or could it talk about that, but, being nonsentient, only as a manufactured lie?
But fine. Let’s taboo the word intelligence and just focus on optimization. Can you have a nonsentient optimizer? Sure, we already have them: we use genetic algorithms to evolve solutions to the Travelling Salesman Problem, and this is actually used to route trucks to shops.
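To make the point concrete, here is a minimal sketch of that kind of nonsentient optimizer: a toy genetic algorithm for the Travelling Salesman Problem. The city coordinates, population size, and mutation rate are all made up for illustration; real routing systems use far more sophisticated operators and heuristics.

```python
import random

# Made-up city coordinates for illustration.
CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 8)]

def tour_length(tour):
    """Total Euclidean length of a closed tour over city indices."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = CITIES[tour[i]]
        x2, y2 = CITIES[tour[(i + 1) % len(tour)]]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def crossover(a, b):
    """Order crossover: copy a random slice from parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [city for city in b if city not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(tour, rate=0.2):
    """With some probability, swap two cities in the tour."""
    tour = tour[:]
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def evolve(generations=200, pop_size=30):
    """Evolve a population of random tours, keeping the shortest each generation."""
    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        survivors = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=tour_length)

best = evolve()
print(best, tour_length(best))
```

Note what this optimizer does and does not do: it blindly shortens a number through selection and recombination, and at no point does it need to talk, model a mind, or understand what a “truck” or a “shop” is. That is exactly the kind of optimization conceded in this paragraph.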
But to make the jump beyond that and optimize for abstract values like human health in general, you need something that is social through and through, something that talks, something that understands a value in a conversation. Something indistinguishable from something sentient.
And this is why it matters when I say intelligence is not mere optimization but more of a social, communicative ability, like conversing about deep topics. Imagine the whole of humankind as a super-brain, with individual brains as its neurons, communicating through words: speech and writing. Science, for example, is the output of this collective brain, with its memories (libraries), its replacement of old neurons with new ones (education), and so on. In this model, intelligence is the ability of a neuron-brain to handle complicated communications within the collective brain. When you measure someone’s intelligence by their ability to understand a difficult idea of yours, you are measuring the ability of two neurons to communicate in the collective brain.
Cognition is not an individual thing. It is highly collective. Nobody ever invented anything alone; it is the collective brain of humankind that invents, with the one neuron-brain we credit for the invention simply delivering the last push in a long chain of pushes. And in this sense, intelligence is communication.
My point is, there is a huge gap between the genetic-algorithm, graph-optimizing kind of optimization and the kind of optimization that can act on values described in words. For an AI to do the latter, it has to have sentience and the kind of communicative intelligence humans have. And if it does, it becomes one more neuron in the collective brain.
Imagine an AI trying to make MealSquares. It has to understand highly unspecific human concepts like optimal nutrition, i.e. optimal body function: why we prefer states called health over states called illness, and so on. To understand all this it needs a form of communicative intelligence, a high level of empathy with human minds, to figure out that we want protein because we want muscle mass, for a bunch of reasons such as having fun playing sports, and why exactly that is fun, and… very, very few of its ideas will be its own. By the time it understands any value, it is programmed through and through with a huge set of human values, because there is no individual intelligence, just the ability to participate in the cognitive web of humankind.
TL;DR: skeptical, because I don’t believe in cognitive individualism, i.e. the idea that heroic individuals invent things on their own.