Unclear to me how “serious” this really is. The US government has its hands in lots of things and spends money on lots of stuff. It’s more serious than it was before, but this seems pretty close to the least they could do without being seen as ignoring AI in ways that could be used against them in the next election cycle.
Here are the details on the NSF institutes… They sound mostly irrelevant to AInotkilleveryoneism. Some seem likely to produce minor good things for the world, like perhaps the education- and agriculture-focused programs. Others seem potentially harmfully accelerationist, like the Neural & Cognitive Foundations program. Cybersecurity might be good; we could certainly do with better cybersecurity. The Trustworthy AI institute just sounds like social-justice AI concerns, not relevant to AInotkilleveryoneism.
Trustworthy AI
NSF Institute for Trustworthy AI in Law & Society (TRAILS)
Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights, and support for communities whose voices have been marginalized into mainstream AI. TRAILS will be the first Institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.
Intelligent Agents for Next-Generation Cybersecurity
AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)
Led by the University of California, Santa Barbara, this Institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyber-defense life cycle to jointly improve the resilience of security of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.
Climate Smart Agriculture and Forestry
AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)
Led by the University of Minnesota Twin Cities, this Institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision-making. The Institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.
Neural and Cognitive Foundations of Artificial Intelligence
AI Institute for Artificial and Natural Intelligence (ARNI)
Led by Columbia University, this Institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science, and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and OUSD (R&E).
AI for Decision Making
AI-Institute for Societal Decision Making (AI-SDM)
Led by Carnegie Mellon University, this Institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic, and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers, and the public to make decisions that are data driven, robust, agile, resource efficient, and trustworthy. The vision of AI-SDM will be realized via development of AI theory and methods, translational research, training, and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries, and high schools.
AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes
AI Institute for Inclusive Intelligent Technologies for Education (INVITE)
Led by the University of Illinois, Urbana-Champaign, this Institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience, and collaboration. The Institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.
AI Institute for Exceptional Education (AI4ExceptionalEd)
Led by the University at Buffalo, this Institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to the individual needs of students. The Institute will serve children in need of ability-based speech and language services, advance foundational AI technologies, and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd Institutes are funded by a partnership between NSF and ED-IES.
What would you consider a reasonable standard of action? Genuinely asking.
From my point of view, I’d love to have the US and UK governments classify cutting-edge AI tech as dangerous weapons technology, and start applying military rules of discipline around it like they do for high-tech weapons R&D: security clearances, export controls, significant government oversight, cybersecurity requirements, etc. I think that’s a reasonable step at this point.