I think I understand X, and it seems like a legitimate problem, but the comment I think you’re referring to here seems to contain (nearly) all of X and not just half of it. So I’m confused and think I don’t completely understand X.
Edit: I think I found the missing part of X. Ouch.
Yeah. The idea came to me when I was lying in a hammock, half asleep after dinner; it really woke me up :-) Now I wonder what approach could overcome such problems, even in principle.
If the basilisk is correct,* it seems any indirect approach is doomed, but I don’t see how it prevents a direct approach. That has its own set of probably insurmountable problems, I’d wager.
* I remain highly uncertain about that, but it’s not something I can claim to have a good grasp on or to have thought a lot about.
My position on the basilisk: if someone comes to me worrying about it, I can probably convince them not to worry (I’ve done that several times), but if someone comes up with an AI idea that seems to suffer from basilisks, I hope that AI doesn’t get built. Unfortunately we don’t know very much. IMO open discussion would help.
You probably understand it correctly. I say “half” because Paul didn’t consider the old version a serious problem, since we hadn’t yet made the connection with basilisks.