I posted about my academic research interests here. Do you know their research well enough to give input on whether my interests would be compatible? I would love to find a way to do my PhD in Europe, especially Germany.
Your post suggests that your goal is to do research that influences AI. As far as I understand the two groups, their focus is on improving human rationality.
My mental model of Falk Lieder would likely say something like: “Your background as an operations research team leader is interesting. Did you find a way to apply findings from computational game theory / cognitive science / system modeling / causal inference that you believe helps people in your organization make better decisions? If so, it would be great to study in an academically rigorous way whether those interventions lead to better outcomes.”
Ahh, I don’t think I fully thought through what “rationality enhancement” might mean; perhaps my own recent search and the AI context of Yudkowsky’s original intent skewed me a little. I was thinking of something like “understanding and applying concepts of rationality” in a way that might include “anticipating misaligned AI” or “anticipating AI-human feedback responses”.
I like the way you’ve framed what’s probably the useful question. I’ll need to think about that a bit more.
Cool, thanks for sharing.