I’m thinking one or two years from now is a plausible lower bound on when a (technological) plan would need to be enacted to still affect what eventually happens; otherwise, in four years (from now) a killeveryone arrives (again, as an arguable lower bound, not as a median forecast).
Unless it turns out fine by default, on its own, for reasons nobody reliably understands in advance, not because anyone had a plan. I think there is a good chance this is true, but betting the future of humanity on it is insane. Also, even if the first AGIs don’t killeveryone, they might fail to establish coordination strong enough to prevent other misaligned AGIs from getting built, which then do killeveryone, the first AGIs included.
I think it’s probably more like six and eight years, respectively, but even that is not a lot of time to come up with a plan that depends on fundamental science that hasn’t yet been developed.
It would be best to slow down AI development in sensitive fields until we have a clearer understanding of its capabilities.