Most people on this website are unaligned.
A lot of the top AI people are very unaligned.
Any chance you'd be willing to go into more detail? It sounds like you're saying unaligned relative to the human baseline. I don't think I actually disagree a priori; people who seek to have high agency do tend to end up misaligned with those around them and harming them, for basically the same reasons as any AI that seeks to have high agency. It's not consistent, though, as far as I can tell: some people successfully decide (reach internal consensus) to have high agency toward creating moral good, and then successfully apply that decision to the world.

The only way to know whether this is happening is to do it oneself, of course, and that's not always easy. Nobody can force you to be moral, so it's up to you to do it, and there are a lot of ways to mess it up, notably by accepting instructions or a worldview that sound moral but aren't. And claims that someone knows what's moral often come packaged with claims of exactly the type you and I are making here; "you're wrong and should change", after all, is a key way people get other people to do things.
Do you dislike sports and think they’re dumb? If so, you are in at least one small way unaligned with most of the human race. Just one example.
If anyone on this website had a decent chance of gaining capabilities that would rival or exceed those of the global superpowers, then spending lots of money/effort on a research program to align them would be warranted.
We sort of know this already; people are corruptible, etc. There are lots of things individual humans want that would be bad if a superintelligence wanted them too.