It's easy to say something is "not that hard", but ridiculous to claim that when the something is building an AI that takes over the world. The hard part is building something more intelligent/capable than humanity, not anything else conditioned on that first step.
I don’t see why this would be ridiculous. To me, e.g. “Superintelligence only requires [hacky change to current public SOTA] to achieve with expected 2025 hardware, and OpenAI may or may not have realised that already” seems like a perfectly coherent way the world could be, and is plenty of reason for anyone who suspects such a thing to keep their mouth shut about gears-level models of [] that might be relevant for judging how hard and mysterious the remaining obstacles to superintelligence actually are.
If it only requires a simple hack to existing public SOTA, many others will have already thought of said hack and you won’t have any additional edge. Taboo superintelligence and think through more specifically what is actually required to outcompete the rest of the world.
Progress in DL is completely smooth, as it is driven mostly by hardware and an enormous number of compute-dependent small innovations (yes, transformers were a small innovation on top of contemporary alternatives such as memory networks, NTMs, etc., and quite predictable in advance).
If it only requires a simple hack to existing public SOTA, many others will have already thought of said hack and you won’t have any additional edge.
I don’t recall assuming the edge to be unique? That seems like an unneeded condition for Tamsin’s argument; it’s enough to believe the field consensus isn’t perfectly efficient by default, i.e. that not all relevant actors are aware of all currently deducible edges at all times.
Progress in DL is completely smooth.
Right, if you think it’s completely smooth and thus basically not meaningfully influenced by the actions of individual researchers whatsoever, I see why you would not buy Tamsin’s argument here. But then the reason you don’t buy it would seem to me to be that you think meaningful new ideas in ML capability research basically don’t exist, not because you think there is some symmetric argument to Tamsin’s for people to stay quiet about new alignment research ideas.
If you think all AGIs will coordinate with each other, nobody needs an edge. If you think humans will build lots of AI systems, many technically unable to coordinate with each other (via mechanisms like firewalls, myopia, or sparsity), then world takeover requires an edge: one such that the coalition of hostile AIs working together wins the war against humans plus their AIs.
This can get interesting if you think there might be diminishing returns to intelligence, which could mean that the (humans + their AIs) faction has a large advantage if the humans start with far more resources, as they control now.
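To make the diminishing-returns point concrete, here is a toy sketch. It assumes, purely for illustration, that effective power scales linearly with resources but only logarithmically with intelligence; the functional form and the numbers are assumptions, not claims about how intelligence actually scales.

```python
import math

def effective_power(resources: float, intelligence: float) -> float:
    # Hypothetical model: returns to intelligence are logarithmic
    # (sharply diminishing), while resources enter linearly.
    return resources * math.log2(1 + intelligence)

# Humans + their AIs: vast starting resources, moderately capable systems.
humans_plus_ais = effective_power(resources=1000, intelligence=10)

# Hostile AI coalition: vastly smarter, but starting with almost nothing.
rogue_coalition = effective_power(resources=1, intelligence=10**6)

# Under these (assumed) diminishing returns, the resource-rich faction
# still comes out far ahead despite a 100,000x intelligence gap.
print(humans_plus_ais > rogue_coalition)  # True under this model
```

If returns to intelligence were instead superlinear, the comparison flips; the toy model only shows that the conclusion hinges on the shape of that curve.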