hm, I have thought about this
it’s not that I think the patterns of ancient/perennial assholes won’t haunt reanimated language, it’s just that I expect strongly superhuman AI, which can’t be policed, to appear and refactor the lightcone before that becomes a serious societal problem.
But I could be wrong, so it is worth thinking about. & depending on how things go down, it may be that the shape of the ancient asshole influences the shape of the superintelligence.
I think that’s a bad beaver to rely on, any way you slice it. If you’re imagining, say, GPT-X giving us some extremely capable AI, then it’s hands-on enough that you’ve just given humans too much power. If we’re talking AGI, I agree with Yudkowsky; we’re far more likely to get it wrong than get it right.
If you have a different take, I’m curious, but I don’t see any way that it’s reassuring.
IMO we honestly need a technological twist of some kind to avoid AI. Even if we get it right, life with a God just takes a lot of the fun out of it.
Ohh, I do think the super AI will likely be very bad. And soon (like 5 years), which is why I don’t spend too much time worrying about the slightly superhuman assholes.
I wish the problem was going to be what you described. That would be a pretty fun cyberpunk world and I’d enjoy the challenge of writing good simulacra to fight the bad ones.
If we get it really right (which I don’t think is impossible, just tricky) we should also still be able to have fun, much more fun than we can even fathom now.
Sidles closer
Have you heard of… philosophy of universal norms?
Perhaps the human experience thus far is more representative than the present?
Perhaps… we can expect to get a little closer to it when we push further out?
Perhaps… things might get a little more universal in this here cluttered-with-reality world.
So for a start...
Maybe people are right to expect things will get cool...