I think it would be fun and productive to “wargame” the emergence of AGI in broader society in some specific scenario—my choice (of course) would be “we reverse-engineer the neocortex”. Different people could be different interest-groups / perspectives, e.g. industry researchers, academic researchers, people who have made friends with the new AIs, free-marketers, tech-utopians, people concerned about job losses and inequality, people who think the AIs are conscious and deserve rights, people who think the AIs are definitely not conscious and don’t deserve rights (maybe for religious reasons?), militaries, large companies, etc.
I don’t know how these “wargame”-type exercises actually work—honestly, I haven’t even played D&D :-P Just a thought. I personally have some vague opinions about brain-like AGI development paths and what systems might look like at different stages, but when I try to think about how this could play out with all the different actors involved, it kinda makes my head spin. :-)
The goal, of course, is to open conversations about what might plausibly happen, not to figure out what will happen—which is probably impossible.