YES.
You are a gentleman and a scholar for taking the time on this. I wish I could’ve explained it more clearly from the outset.
Glad to help! And hey, clarifying our ideas is half of what discussion is for!
I’d love to see a top-level post on ideas for making this happen, since I think you’re right, even though safety in current AI systems is very different from the problems we would face with AGI-level systems.
Does this remind you of what I’m trying to get at? Because it sure does, to me:
https://twitter.com/ESYudkowsky/status/1537842203543801856?s=20&t=5THtjV5sUU1a7Ge1-venUw
But I’m probably going to stay in the “dumb questions” area and not comment :)
i.e. “the feeling I have when someone tries to teach me that human-safety is orthogonal to AI capability; in a real implementation, they’d be correlated in some way”