It wasn’t intended to be—more incredulity. I thought this was a really important piece of the puzzle, so expected there’d be something at all by now. I appreciate your point: that this is a ridiculously huge problem and SIAI is ridiculously small.
However, human general intelligences don’t go FOOM, but they should be able to do the work for CEV—if they know what that work is.
This sounds interesting; do you think you could expand?
I meant that, as I understand it, CEV is what is fed to the seed AI. Or the AI does the work to ascertain the CEV. It requires an intelligence to ascertain the CEV, but I’d think the ascertaining process would be reasonably set out once we had an intelligence on hand, artificial or no. Or the process to get to the ascertaining process.
I thought we needed the CEV before the AI goes FOOM, because it’s too late after. That implies it doesn’t take a superintelligence to work it out.
Thus: CEV would have to be a process that mere human-level intelligences could apply. That would be a useful process to have, and doesn’t require first creating an AI.
I must point out that my statements on the subject are based in curiosity, ignorance and extrapolation from what little I do know, and I’m asking (probably annoyingly) for more to work with.
“CEV” can (unfortunately) refer to either CEV the process of determining what humans would want if we knew more etc., or the volition of humanity output by running that process. It sounds to me like you’re conflating these. The process is part of the seed AI and is needed before it goes FOOM, but the output naturally is neither, and there’s no guarantee or demand that the process be capable of being executed by humans.
OK. I still don’t understand it, but I now feel my lack of understanding more clearly. Thank you!
(I suppose “what do people really want?” is a large philosophical question—not just undefined, but subtle in its very lack of definition.)