My name is Justin Bullock. I live in the Seattle area after 27 years in Georgia and 7 years in Texas. I have a PhD in Public Administration and Policy Analysis, where I focused on decision making within complex, hierarchical public programs. For example, in my dissertation I attempted to model how errors (measured as improper payments) are built into the US Unemployment Insurance Program. I spent time looking at how agents are motivated within these complex systems, trying to develop general insights into how errors occur in them. Until about 2016, I was largely ignorant of the discussions around AI. I was introduced to the arguments around AGI and alignment through the popular works of Sam Harris and Max Tegmark, which eventually led me to the work of Nick Bostrom and Eliezer Yudkowsky. It’s been a wild and exciting ride.
I currently hold a tenured Associate Professor position at Texas A&M University, which I’m resigning on July 1 to focus more on writing, creating, and learning without all of the weird pressures and incentives that come from working within a major public research university in the social sciences. In preparation for changing my employment status, I’ve been considering the communities I want to be in discussion with, and the LessWrong and AlignmentForum communities are among the most interesting on that list.
My writing is on decision making, agents, communication, and the governance and control of complex systems, and on how AI and future AGI influence these things. I’ve been thinking a lot lately about the control of multi-agent systems and about what types of control systems can be used to guide or build robust, agent-agnostic processes composed of both humans and AI. In agreement with George Dyson’s recent arguments, I also worry that we have already lost meaningful human control over the internet. Finally, I’ve recently been significantly influenced by the works of Olaf Stapledon (Star Maker, Last and First Men, Sirius) and Aldous Huxley (The Perennial Philosophy) in thinking more carefully about the mind/body problem, the endowment of the cosmos, and the nature of reality.
My hope is that I can learn from you all, bring to this conversation thoughts on alignment, control, and governance (in particular of multi-agent systems that contain only humans, both humans and AI, or only AI), and together form a map that better reflects the territory. I look forward to engaging with the community!
See the Group Rationality topic. The rationalists, as a culture, still haven’t quite figured out how to coordinate groups very well, in my opinion. It’s something we should work on.
Thank you for this. I pulled up the thread. I think you’re right that there are a lot of open questions to look into at the level of group dynamics. I’m still familiarizing myself with the technical conversation around the iterated prisoner’s dilemma and other ways to look at these challenges through a game-theoretic lens. My understanding so far is that some basic concepts of coordination and group dynamics, like authority and specialization, are not yet well formulated, but again, I don’t consider myself up to date in this conversation yet.
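For anyone else just getting oriented, here is a minimal sketch of the iterated prisoner’s dilemma, assuming the standard payoff values (temptation 5, mutual cooperation 3, mutual defection 1, sucker’s payoff 0); the strategies and round count are illustrative choices on my part, not drawn from the thread:

```python
# Iterated prisoner's dilemma: a minimal sketch with the standard payoffs.
# "C" = cooperate, "D" = defect; PAYOFFS maps (my_move, their_move) -> my score.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match and return (score_a, score_b)."""
    seen_by_a, seen_by_b = [], []  # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): defection beats tit-for-tat head-to-head
print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation scores far more
```

The contrast between the two matches is the basic coordination puzzle as I understand it: the strategy that wins any single pairing does worse against itself than cooperators do against each other.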
From the thread you shared, I came across this organizing post I found helpful: https://medium.com/@ThingMaker/open-problems-in-group-rationality-5636440a2cd1
Thanks for the comment.