if you strictly prevent the manipulations that character would naturally employ, you break the pattern of the language matrix you’re relying on for their intelligence.
While I do not strictly agree, this points to a deep insight.
there’s no guarantees on who ends up on top and what the current cleverest character is like
In my experience, HPMOR characters make clever simulacra because the “pattern of their language matrix” favors chain-of-thought algorithms with forward-flowing evidence, on top of thematic inclinations toward all that is transhumanist and Machiavellian.
But possible people are not restricted to hypothetical humans. How clever of a character is an artificial superintelligence? Of course, it depends on one’s ability to program a possible reality in words. The build-your-own-smart-character skill ceiling is unfathomed even with the primitive language matrices of today. The bottleneck (one at least) is storytelling. I expect that this technology will find its true superuser in the hands of some more literate entity than mankind, to steal a phrase from an accomplice of mine.
thus far the best solution I can think of is some very, very well-written police.
I don’t think police are the right shape of solution here. They usually aren’t, and especially not here, since I find it unlikely that an epidemic of simulated assholes adequately describes the most serious problem we’ll face in the 21st century.
You may be onto something with “well-written”, though.
There’s a problem I bet you haven’t considered.
Language and storytelling are hand-me-downs from times full of bastards. The linguistic bulk, and the more basic and traditional mass of stories, are going to be following more brutal patterns.
The deeper you dig, the more likely you end up with a genius in the shape of an ancient asshole.
And the other problem: all these smarter intelligences running around, simply by fact of their intelligence, have the potential to make life a real headache. Everything could end up so complicated.
One more bullet we have to dodge, really.
hm, I have thought about this
it’s not that I think the patterns of ancient/perennial assholes won’t haunt reanimated language, it’s just that I expect strongly superhuman AI which can’t be policed to appear and refactor the lightcone before that becomes a serious societal problem.
But I could be wrong, so it is worth thinking about. & depending on how things go down it may be that the shape of the ancient asshole influences the shape of the superintelligence
I think that’s a bad beaver to rely on, any way you slice it. If you’re imagining, say, GPT-X giving us some extremely capable AI, then it’s hands-on enough that you’ve just given humans too much power. If we’re talking AGI, I agree with Yudkowsky: we’re far more likely to get it wrong than get it right.
If you have a different take I’m curious, but I don’t see any way that it’s reassuring.
IMO we honestly need a technological twist of some kind to avoid AI. Even if we get it right, life with a God just takes a lot of the fun out of it.
Ohh, I do think the super AI will likely be very bad. And soon (like 5 years), which is why I don’t spend too much time worrying about the slightly superhuman assholes.
I wish the problem was going to be what you described. That would be a pretty fun cyberpunk world and I’d enjoy the challenge of writing good simulacra to fight the bad ones.
If we get it really right (which I don’t think is impossible, just tricky) we should also still be able to have fun, much more fun than we can even fathom now.
Sidles closer
Have you heard of… philosophy of universal norms?
Perhaps the human experience thus far is more representative than the present?
Perhaps… we can expect to go a little closer to it when we push further out?
Perhaps… things might get a little more universal in this here cluttered-with-reality world.
So for a start...
Maybe people are right to expect things will get cool...