Humanity is doomed in this scenario. The Lotuseaters are smarter, and the gap is widening; there's no chance humans can defeat them militarily now or at any point in the future. As galactic colonization continues exponentially, the two species will eventually meet again, perhaps in the far future, but the Lotusfolk will be even stronger relatively by then. The only way humans can compete is by developing an even faster strong AI, which carries a large chance of ending humanity on its own.
So the choices are:
- Accept the Lotusfolk offer now.
- Blow up the starline and continue expanding as normal, delaying the inevitable.
- Blow up the starline and gamble on strong AI, hoping to power up human civilization to the point where it can destroy the Lotusfolk when they meet again.
This choice set is based on the assumption that the decider values humanity for its own sake. I value raw intelligence, the chassis notwithstanding, so the only way I would not choose option 1 is if I thought that the Lotusfolk, while currently smarter, were disinclined to develop strong AI and go exponential, and thus that with humanity under their dominion, no one would. If humans could be coaxed into building strong AI to counter the looming threat of Lotusfolk assimilation, and thereby create something smarter than any of the three species combined, then I would choose option 3.