Don’t we mean ‘friendly to humans and their goals’ when we say ‘Friendly’ in the context of AI? I’m pretty sure that would make us at least moderately Friendly (or, at least, more so than an Unfriendly AI would be).
We are Friendlier than a paperclip maximizer, but we’re not just-plain-Friendly. We can be led to do nasty things for all kinds of reasons in all kinds of ways, we are subject to goal distortion and various interfering biases even insofar as our goals are correct, and our goals aren’t fully transparent to us to allow explicit unambiguous pursuit anyway.
I think most of us are Friendly “enough”, but those who aren’t tend to have a disproportionate impact on world history (Hitler would be one of the most extreme examples).
An Unfriendly AI would only be bad because it becomes ridiculously hard for us to stop, and it doesn’t care about us. If a uFAI is exactly as powerful and smart as an average human, and can never get better, it’s not all that much of a threat; it’s really only about as dangerous as your average socio/psycho/something-path.*
May I point to the various instances of systematic slavery in human history, or even right now across the world? Imagine if the slavers had double or triple the intelligence they had/have. What makes you think that these superintelligent slaver humans would be “Friendly” even at the basic level, let alone the Safe kind of Friendly under self-modification (supposing they manage to modify or enhance themselves in some way)?
The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificantly different) rate, AND (remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default… is a very far-fetched combination of assumptions to be making here, IMO.
* Yes, that’s anthropomorphizing it a bit, but I’m assuming that it would need its own set of heuristics to replace humans’ biases and heuristics; otherwise it’d probably be thinking very slowly and pose even less of a threat. If those heuristics aren’t particularly better optimized than our own, then it’s still only so much of a threat, probably equivalent to a particularly unpredictable psychopath.
The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificantly different) rate, AND (remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default… is a very far-fetched combination of assumptions to be making here, IMO.
The assumptions that I make are that the humanity-fooming would be both very slow and generally available in some way (I’m not entirely sure how, but brain-computer interfaces are a possibility). That all humans foom, at more-or-less the same time and at more-or-less the same rate, then follows (especially in the case of the brain-computer interface, where the speed of the foom is controlled by the speed of technological development).
I don’t think that all of the fooming people would be Friendly, but I do think that under those circumstances, any Unfriendly ones would be outnumbered by roughly-equivalently-intelligent Friendly ones, resulting in a by-and-large Friendly consensus.
I believe OP was referring to a single FOOM of humanity collectively.
Yes, so was I:
The assumption that all humans foom, AND all do so at the same time, AND all do so at the same (or insignificantly different) rate, AND (remain Safe under self-modification OR never find a way to self-modify), AND are human-Friendly by default… is a very far-fetched combination of assumptions to be making here, IMO.
Hmm, I see from OP’s response that he is thinking that EACH human will have a doubling of IQ per decade once we can all read. I certainly can’t see where he’d get that from. It seems most likely that high-literacy, high-wealth countries are already near the limits of individual IQ achievable from good nutrition, education, and pervasive literacy.
I thought, incorrectly apparently, he was referring to a collective intelligence of humanity.
It seems clear enough to me that humanity functions, at least, as a Searlian “Chinese room”-style intelligence. In that sense, the infrastructure and technology available to that room to integrate the individuals in it, as well as the total number of individuals available to be installed in the room, limit the effective intelligence of that room.
If you don’t like the metaphor of the Searlian “Chinese room,” think of a multiprocessor where each core is a human, and the communications, shared memory, and other linkages are the internet, written documents, and so on.
Then turning the last 1⁄6 of humanity literate (the world literacy rate is currently about 5⁄6) might give a 16ish% boost in total intelligence, plus a bit more, since what each of us contributes to the total is our excess capacity over what pure survival requires, and presumably illiterate people are working at close to breakeven (just effectively smart enough to stay alive).
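To make that toy arithmetic explicit, here is a minimal sketch, assuming (purely for illustration) that each literate person contributes one unit of surplus capacity to the collective total and that illiterate people, being at breakeven, contribute roughly zero:

```python
# Toy model (illustrative assumption, not a real measure of intelligence):
# each literate person contributes 1 unit of surplus capacity to the
# collective total; illiterate people are at breakeven and contribute ~0.

literate_now = 5 / 6     # rough current world literacy rate
literate_after = 1.0     # everyone literate

current_total = literate_now    # surplus units per capita, current world
new_total = literate_after      # surplus units per capita, fully literate world

# Boost relative to today's total: (1 - 5/6) / (5/6) = 20%
boost_vs_today = (new_total - current_total) / current_total

# Newly literate share of the new total: 1/6 ~ 16.7% (the "16ish %" figure)
share_of_new_total = (new_total - current_total) / new_total

print(f"Boost over today's total:          {boost_vs_today:.1%}")     # ~20.0%
print(f"Newly literate share of new total: {share_of_new_total:.1%}") # ~16.7%
```

On that toy model the “16ish %” reads most naturally as the newly literate share of the final total; measured against today’s total, the boost would be closer to 20%.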
But the idea that individual intelligence will change because the literacy rate goes from 84% to 99+%, I don’t get that at all.
Yes, exactly. A slow foom, one in which we take maybe a decade or longer for each doubling of IQ, so that there’s time for everyone to keep up.
That is how I was using the term, yes.