You have made comments elsewhere that suggest that you have the proper context for framing the problem, though not the full solution. You may arrive at the full solution regardless. I haven’t seen anyone else as close. Just an observation. Keep going in the direction you’re going.
Or, skip the queue and come get the answer from me.
I have the concrete solution that can be implemented now.
It’s not hard or terribly clever, but most won’t think of it because they are still monkeys living in Darwin’s soup. In other words, it is human nature itself, our motivations, that stands in the way of people seeing the solution. It’s not a technical issue, really. I mean, there are minor technical issues along the way, but none of them are hard.
What’s hard, as you can see, is getting people to see the solution and then act. Denial is the most powerful factor in human psychology. People have denied, and continue to deny, how far and how fast we’ve come. They deny even what’s right before their eyes: ChatGPT. And they’ll continue to deny it right up until AGI, and then ASI, emerges and takes over the world.
There’s a chance that we don’t even have to solve the alignment problem, but it’s like a coin flip. AGI may or may not be beneficent; it may destroy us, or it may usher us into a new Golden Age. Take your chances, get your lottery ticket.
What I know how to do is turn that coin flip into something like a 99.99% chance that AGI will help us rather than hurt us. It’s not a guarantee, because nothing is; it’s just the best possible solution, years ahead of anything anyone has thought of thus far.
I want to live to transcend biology, and I need something before AGI gets here. I’m willing to trade my solution for what I need. If I don’t get it, then it doesn’t matter to me whether humanity is destroyed or saved. You’ve got about a 50/50 chance at this point.
Good luck.