FYI I think this post is getting few upvotes because it doesn’t contribute anything new to the alignment discussion. This point has already been written about many times before.
As long as we have it under some level of control, we can just say, “Hey, act ethically, OK?”
Yes, but the whole alignment problem is to get an ASI under some level of control.
Thank you for the feedback! I haven’t yet figured out the “secret sauce” of what people seem to appreciate on LW, so this is helpful. And, admittedly, although I’ve read a bunch, I haven’t read everything on this site so I don’t know all of what has come before. After I posted, I thought about changing the title to something like: “Why we should have an ‘ethics module’ ready to go before AGI/ASI comes online.” In a sense, that was the real point of the post: I’m developing an “ethics calculator” (a logic-based machine ethics system), and sometimes I ask myself if an ASI won’t just figure out ethics for itself far better than I ever could. Btw, if you have any thoughts on why my initial ethics calculator post was so poorly voted, I’d greatly appreciate them as I’m planning an update in the next few weeks. Thanks!