I appreciate the detailed taxonomy in this post, and it’s an insightful way to analyze the gaps between understanding and implementing decision theory. However, I believe it would be even more compelling to explore how AI-based cognitive augmentation could help humans bridge these gaps and better navigate decision processes. Additionally, it would be interesting to examine the potential of GPT-style models to gain insight into AGI and its alignment with human values. Overall, great read!
Thanks! Glad at least one person read it; this post set a new personal record for low engagement, haha.
I think exploring ways that AIs and/or humans (through augmentation, neuroscience, etc.) could implement decision theories more faithfully is an interesting idea. I chose not to focus directly on AI in this post, since I think LW, and my own writing specifically, has been kinda saturated with AI content lately. And I wanted to keep this one shorter and lighter, in the (apparently doomed) hope that more people would read it.