Thank you for the honest engagement. I can only add that our disagreement is precisely about the likelihood of that plan. I don’t find the example you present likely, and that’s OK! We could go through the points one by one and analyse why they would or wouldn’t work, but I suggest we don’t go there: I would probably continue thinking it is too complex (you would need to do all of that without raising any alarms) and you would probably continue arguing that an AGI would manage it. I know that perfect Bayesian reasoners cannot agree to disagree, and that’s fine; we are probably not perfect Bayesian reasoners. Please don’t take my answer here the wrong way, I don’t want to sound dismissive, and I do appreciate your scenario; it is simply that I think we already understand each other’s positions. If I haven’t changed your mind by now, I probably won’t do it by repeating the same arguments!