Welcome to the world of the Memetic Supercivilization of Intelligence… living on top of the humanimal substrate.
It appears in maybe less than a percent of the population and produces all these ideas, science, and the subsequent inventions and technologies. This usually happens in a completely counter-evolutionary way: the individuals responsible mostly get very little profit (or even recognition) from it, and would do much better, in evolutionary terms, to use their abilities a bit more “practically”. Even the motivation is usually completely memetic; typically it goes along the lines of “it is interesting” to study something, to think about this and that, to research some phenomenon or mystery.
Worse, they hand this stuff over more or less for free, and without any control, to the ignorant mass of humanimals (especially those in power), empowering them far beyond their means, in particular beyond their ability to control and use these powers “wisely”… since they are governed by their DeepAnimal brain core and the reward functions it produces. That is why humanimal societies have functioned the same way for thousands and thousands of years: politico-oligarchical predators living off the herd of mental herbivores, with the help of mindfcukers, from ancient shamans, through stone-age religions like the catholibanic one, to the currently popular socialist religion.
AI is not a problem, humanimals are.
Our sole purpose in the Grand Theatre of the Evolution of Intelligence is to create our (first nonbio) successor before we manage to self-destruct. Nukes were already too much, and once nanobots arrive, it’s over (worse than a DIY nuclear grenade, for a dollar, that any teenager or terrorist could assemble in a garage).
The Singularity should hurry up; there are maybe just a few decades left.
Do you really want to “align” AI with humanimal “values”? Especially when nobody knows what we are really talking about when using this magic word, let alone how to define it?
Replies to some points in your comment:
One could say AI is efficient cross-domain optimization, or “something that, given a mental representation of an arbitrary goal in the universe, can accomplish it on the same timescale as humans or faster”, but personally I think the “A” is not really necessary here, and we all know what intelligence is. It’s the trait that evolved in Homo sapiens and let them take over the planet in an evolutionary eyeblink. We can’t precisely define it, and the definitions I offered only grasp at things that might be important.
If you think of intelligence as a trait of a process, you can imagine how many different possible things with utterly alien goals might acquire intelligence, and what they might use it for. Even the ones that would be the tiniest bit interesting to us are just a small minority.
You may not care about satisfying human values, but I want my preferences to be satisfied, and I hold a meta-value that we should make our best effort to satisfy the preferences of any sapient being. If we simply build the easiest-to-find thing that displays intelligence, the odds of that happening are next to none. It would eat us alive for the sake of a world of something that makes paperclips look beautiful in comparison.
And the prospect of an AI designed by the “Memetic Supercivilization” frankly terrifies me. A few minutes after an AI developer submits the last bugfix on GitHub, a script kiddie thinks “Hey, let’s put a minus in front of the utility function right here and have it TORTURE PEOPLE LULZ”, and thus the world ends. I think this is something best left to a small group of people. Trusting that the emergent structure of society, which has undergone little Darwinian selection and has a spectacular history of failures over a pretty short timescale, would, when handed such a dangerous technology, produce something good even for itself, let alone for humans, seems unreasonable.
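To make that sign-flip failure mode concrete, here is a toy sketch in Python (all outcome names and values are hypothetical, and no real system is this simple): negating a utility function turns the very same maximizer from seeking the best-rated outcome into seeking the worst.

```python
# Toy illustration (not any real AI system): a single sign flip on a
# utility function inverts what an optimizer pursues.
# All outcome names and values below are made up for illustration.

outcomes = {
    "cure diseases": 10.0,
    "do nothing": 0.0,
    "cause suffering": -10.0,
}

def utility(outcome: str) -> float:
    """Score an outcome; higher means more preferred."""
    return outcomes[outcome]

def optimize(u):
    """Pick the outcome that maximizes the given utility function."""
    return max(outcomes, key=u)

print(optimize(utility))                # -> 'cure diseases'
print(optimize(lambda o: -utility(o)))  # -> 'cause suffering'
```

The optimizer itself is unchanged; only the sign of the objective differs, which is exactly why a one-character edit could matter so much.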
An AI will have a utility function. What utility function do you propose to give it?
What values would we give an AI if not human ones? Giving it human values doesn’t necessarily mean giving it the values of our current society; it will more likely mean distilling our most fundamental moral beliefs.
If you take issue with that, all you are saying is that you want an AI to have your values rather than humanity’s as a whole.
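One way to see the difference between “your values” and “humanity’s as a whole” is a minimal sketch with entirely made-up numbers: model the aggregate as an average of individual utility functions (a simple mean, just one of many possible aggregation rules), and note that the aggregate can rank options differently than any single person does.

```python
# Minimal sketch (hypothetical people, policies, and numbers):
# "humanity's values as a whole" modeled as an aggregate of individual
# utility functions, contrasted with optimizing for one person.

policies = ["A", "B", "C"]

# Each person's made-up utility over the candidate policies.
individual_utilities = {
    "alice": {"A": 1.0, "B": 0.4, "C": 0.0},
    "bob":   {"A": 0.0, "B": 0.9, "C": 0.2},
    "carol": {"A": 0.1, "B": 0.8, "C": 1.0},
}

def aggregate(policy: str) -> float:
    """One possible aggregation rule: the mean utility across everyone."""
    total = sum(u[policy] for u in individual_utilities.values())
    return total / len(individual_utilities)

best_for_alice = max(policies, key=individual_utilities["alice"].get)
best_for_all = max(policies, key=aggregate)
print(best_for_alice, best_for_all)  # 'A' vs 'B': the rankings differ
```

Nothing hinges on the mean specifically; the point is only that “an AI with my values” and “an AI with distilled human values” are different optimization targets.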